Started refactoring the Triple class to convert it into a data class
Repository: rG LLVM Github Monorepo
Event Timeline
Is there some separate discussion thread or proposal about this refactoring and its motivations?
This discussion was the main motivation for this change.
https://discourse.llvm.org/t/rfc-refactor-triple-related-classes/70410/11
As a result, Triple should become a data class, and its dependencies, such as Architecture, Operating System, etc., should be represented as interfaces that can be implemented for the necessary instances.
FWIW, I still don't see the advantage of switching to the new llvm::TripleUtils::isArch32Bit style. How is it better than the current way?
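For readers following along, the difference in question is roughly the one sketched below. This is a simplified stand-in, not the real llvm::Triple class or the actual patch; the types and namespace here are illustrative only.

```cpp
#include <cassert>

// Simplified stand-in for llvm::Triple (NOT the real class).
struct Triple {
  enum ArchType { x86, x86_64, aarch64 };
  ArchType Arch;

  // Current style: the 32-bit query is a member function of Triple.
  bool isArch32Bit() const { return Arch == x86; }
};

// Style proposed by this patch (names hypothetical): the query moves
// to a free function in a TripleUtils namespace, so that Triple itself
// can shrink toward a plain data class.
namespace TripleUtils {
inline bool isArch32Bit(const Triple &T) {
  return T.Arch == Triple::x86;
}
} // namespace TripleUtils
```

Both call sites answer the same question; the patch only changes where the logic lives, e.g. `T.isArch32Bit()` versus `TripleUtils::isArch32Bit(T)`.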
+1
What advantages does this approach bring? I re-read the RFC in the discourse link you posted and I'm still not sure why this approach is desirable. Happy to discuss further either on this PR or in the discourse thread.
At the moment, the TargetParser architecture is not easily extensible. This complicates the addition of new architectures, operating systems, and so on.
In the main discussion, it was proposed to redesign the architecture of this module to make it easier to add new architectures, etc.
The main idea of the rework was to turn the Triple entity into a data class and to extract a number of interfaces for the components that make up a Triple; implementing those interfaces would make it possible to represent the various data bindings between the components. At the moment, Triple is overflowing with methods that do not fit OOP principles well.
Since the TargetParser module is quite large and has many dependencies throughout llvm-project, the first step was to move these methods out of Triple, since they would not correspond to OOP principles.
This would help to gradually rid Triple of unnecessary dependencies and gradually change the architecture inside Triple without breaking other LLVM developers' code.
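To make the proposal concrete, a rough sketch of the intended end state might look like the following. All names here are hypothetical illustrations of the idea described above, not code from the patch or from LLVM.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical interface for the "architecture" component of a triple.
struct IArchitecture {
  virtual ~IArchitecture() = default;
  virtual std::string name() const = 0;
  virtual unsigned pointerWidth() const = 0;
};

// Concrete architectures implement the interface.
struct Arm32Arch : IArchitecture {
  std::string name() const override { return "arm"; }
  unsigned pointerWidth() const override { return 32; }
};

struct AArch64Arch : IArchitecture {
  std::string name() const override { return "aarch64"; }
  unsigned pointerWidth() const override { return 64; }
};

// Triple reduced to a data class: it only stores its components
// and carries no query logic of its own.
struct TripleData {
  std::shared_ptr<IArchitecture> Arch;
  std::string Vendor;
  std::string OS;
};

// Queries become free functions over the data class.
inline bool isArch32Bit(const TripleData &T) {
  return T.Arch && T.Arch->pointerWidth() == 32;
}
```

Under this sketch, adding a new architecture means adding one new IArchitecture implementation rather than touching Triple itself; whether that trade-off is worth the extra indirection is exactly what the reviewers below are questioning.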
This change looks strictly worse in isolation; the proposal on Discourse did not reach consensus, and looking at the diagram in https://discourse.llvm.org/t/rfc-refactor-triple-related-classes/70410/11, there is zero chance that it will ever be accepted.
I'm still not sure how this would make things simpler. I'll be as specific as possible about what does not click for me.
There is an inherent complexity in parsing triples. Architectures can have variants and subvariants, compilers and other tools do different things for the same architecture and OS when there are different vendors, the environment can subtly change things like ABI, the list goes on. I don't think you're going to be able to wholesale reduce complexity here. The proposal you've laid out here is certainly very OOP-like (in some sense of the phrase "OOP") but you present your ideas under the assumption that this style of OOP is the ideal to strive for. Why is that? Why is this better than what exists today? Is it easier to debug? Is it more performant? Is it easier to maintain? I personally do not think it will be better than what already exists in any of those ways.
In the original discourse thread, you also made the argument that the potential performance overhead and binary size increase should be unnoticeable and that with modern machines we do not need to fight for each microsecond. Without any numbers for performance or size, this is not an argument I can accept. Knowingly adding a performance/size regression to your build tools without an appropriate tradeoff does not make sense to do.
If you want to do this, please provide a concrete argument for what benefit this brings.