This fixes https://bugs.llvm.org/show_bug.cgi?id=41712.
Diff Detail
- Repository
- rG LLVM Github Monorepo
Event Timeline
llvm/lib/DebugInfo/CodeView/Formatters.cpp:27

I know the patch you're proposing is more optimal in terms of runtime performance, but it adds an extra layer of complexity for a reader who wants to figure out why memory is read that way. Since GUIDs are not used that often in binary files, performance is perhaps less of a concern. Could we rewrite the code to be more self-documenting, something along these lines?

```cpp
void GuidAdapter::format(raw_ostream &Stream, StringRef Style) {
  assert(Item.size() == 16 && "Expected 16-byte GUID");
  struct _GUID {
    support::ulittle32_t Data1;
    support::ulittle16_t Data2;
    support::ulittle16_t Data3;
    support::ubig64_t Data4;
  };
  const _GUID *G = reinterpret_cast<const _GUID *>(Item.data());
  // A GUID string has five groups: 8-4-4-4-12 hex digits, so Data4 is
  // split into its top 16 and bottom 48 bits.
  Stream << "{"
         << format_hex_no_prefix(G->Data1, 8, /*Upper=*/true) << '-'
         << format_hex_no_prefix(G->Data2, 4, /*Upper=*/true) << '-'
         << format_hex_no_prefix(G->Data3, 4, /*Upper=*/true) << '-'
         << format_hex_no_prefix(static_cast<uint64_t>(G->Data4) >> 48, 4,
                                 /*Upper=*/true)
         << '-'
         << format_hex_no_prefix(static_cast<uint64_t>(G->Data4) &
                                     ((1ULL << 48) - 1),
                                 12, /*Upper=*/true)
         << "}";
}
```
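To make the suggested struct-overlay approach concrete, here is a minimal standalone sketch in plain C++ (no LLVM headers; `formatGuid` is a hypothetical helper, not an LLVM API). It assumes the same mixed-endian CodeView layout as the fields above: `Data1`–`Data3` little-endian, the final 8 bytes big-endian, with `Data4` split into the 4-digit and 12-digit groups of the canonical GUID string.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Format a 16-byte CodeView GUID as {XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}.
// Bytes 0-7 hold Data1/Data2/Data3 little-endian; bytes 8-15 are big-endian.
std::string formatGuid(const uint8_t *B) {
  uint32_t D1 = B[0] | (B[1] << 8) | (B[2] << 16) | ((uint32_t)B[3] << 24);
  uint16_t D2 = (uint16_t)(B[4] | (B[5] << 8));
  uint16_t D3 = (uint16_t)(B[6] | (B[7] << 8));
  uint64_t D4 = 0;
  for (int I = 8; I < 16; ++I) // big-endian read of the last 8 bytes
    D4 = (D4 << 8) | B[I];
  char Buf[39]; // 32 digits + 4 hyphens + 2 braces + NUL
  std::snprintf(Buf, sizeof(Buf), "{%08X-%04X-%04X-%04X-%012llX}", D1,
                (unsigned)D2, (unsigned)D3, (unsigned)(D4 >> 48),
                (unsigned long long)(D4 & ((1ULL << 48) - 1)));
  return Buf;
}
```

Note that a canonical GUID string has five groups (8-4-4-4-12 digits), which is why the 64-bit tail is printed as two groups rather than one.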
llvm/lib/DebugInfo/CodeView/Formatters.cpp:27

+1
Thanks for integrating the comments!
llvm/lib/ObjectYAML/CodeViewYAMLTypes.cpp:166

In order to avoid any magical offsets into the string, can you possibly keep a loop, but add a counter and successively write into Data1, Data2, Data3, etc.? Something like:

```cpp
unsigned Index = 0;
uint64_t D41{}, D42{};
bool GotHex = true;
auto IsHex = [](char C) { return std::isxdigit((unsigned char)C) != 0; };
for (auto Iter = Scalar.begin(); Iter != Scalar.end();) {
  // Skip delimiters to the start of the next hex group.
  Iter = std::find_if(Iter, Scalar.end(), IsHex);
  if (Iter == Scalar.end())
    break;
  // The group runs until the next non-hex character.
  auto Next = std::find_if_not(Iter, Scalar.end(), IsHex);
  StringRef Group(Iter, Next - Iter);
  if (Index == 0)
    GotHex = to_integer(Group, G.Data1, 16);
  else if (Index == 1)
    GotHex &= to_integer(Group, G.Data2, 16);
  else if (Index == 2)
    GotHex &= to_integer(Group, G.Data3, 16);
  else if (Index == 3)
    GotHex &= to_integer(Group, D41, 16);
  else if (Index == 4)
    GotHex &= to_integer(Group, D42, 16);
  ++Index;
  Iter = Next;
}
G.Data4 = (D41 << 48) | D42;
```
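The counter-based scanning scheme above can be sketched as a self-contained example in plain C++ (no LLVM types; `parseGuid` is a hypothetical helper, not part of the patch). It splits the string into runs of hex digits and reassembles the numeric fields, with no fixed offsets into the string.

```cpp
#include <algorithm>
#include <cctype>
#include <cstdint>
#include <cstdlib>
#include <string>
#include <vector>

// Parse a GUID string such as {XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}
// by collecting its hex-digit groups, independent of the delimiters used.
bool parseGuid(const std::string &Scalar, uint32_t &Data1, uint16_t &Data2,
               uint16_t &Data3, uint64_t &Data4) {
  auto IsHex = [](char C) { return std::isxdigit((unsigned char)C) != 0; };
  std::vector<std::string> Groups;
  for (auto It = Scalar.begin(); It != Scalar.end();) {
    It = std::find_if(It, Scalar.end(), IsHex); // start of next hex group
    if (It == Scalar.end())
      break;
    auto End = std::find_if_not(It, Scalar.end(), IsHex); // end of the group
    Groups.emplace_back(It, End);
    It = End;
  }
  if (Groups.size() != 5) // canonical GUIDs have exactly five groups
    return false;
  Data1 = (uint32_t)std::strtoul(Groups[0].c_str(), nullptr, 16);
  Data2 = (uint16_t)std::strtoul(Groups[1].c_str(), nullptr, 16);
  Data3 = (uint16_t)std::strtoul(Groups[2].c_str(), nullptr, 16);
  uint64_t D41 = std::strtoull(Groups[3].c_str(), nullptr, 16);
  uint64_t D42 = std::strtoull(Groups[4].c_str(), nullptr, 16);
  Data4 = (D41 << 48) | D42; // recombine the 16-bit and 48-bit tail groups
  return true;
}
```

This keeps the loop the comment asks for while leaving the per-field widths implicit in the group order rather than in string offsets.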
llvm/lib/ObjectYAML/CodeViewYAMLTypes.cpp:166

No magical offsets, no loops. But I kept the maximal validation.