This is an archive of the discontinued LLVM Phabricator instance.

[lldb] Generic support for caching register set reads/writes
Needs Revision · Public

Authored by mgorny on Oct 13 2020, 11:06 AM.

Details

Summary

Introduce a new class for caching register set reads and writes
in the generic NativeRegisterContextRegisterInfo class. Use it
in the NetBSD plugin for initial testing.

The implementation provides generic caching versions of ReadRegister()
and WriteRegister() that call virtual methods implemented in the actual
NativeRegisterContext* classes. It supports both cached reads
and delayed writes. The cache is flushed whenever the thread resumes
execution or a new read follows writes.

The hook API consists of five methods that need to be implemented
in NativeRegisterContext* classes (a rough sketch follows the list):

  • GetRegisterSetForRegNum() to map a register number to a register set number (the latter being opaque to NativeRegisterContextRegisterInfo).
  • ReadRegisterSet() and WriteRegisterSet() to read a regset into the cache and write it back, respectively. The actual cache storage is provided by the final classes (m_gpr, m_fpr...).
  • GetRegisterFromCache() and SetRegisterInCache() to get and update individual register values inside the cache, respectively. Both methods can assume that ReadRegisterSet() has been called to fill the cache, and must not call ReadRegisterSet() or WriteRegisterSet() themselves.
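
Roughly, and with invented signatures (the summary does not pin down parameter or return types; RegisterInfo and RegisterValue stand for the usual LLDB types), the backend interface could look like:

#include "llvm/ADT/Optional.h"
#include "llvm/Support/Error.h"

// Illustrative only -- not the actual patch. The method names follow the
// summary; everything else is a guess.
class RegisterCacheBackend {
public:
  virtual ~RegisterCacheBackend() = default;

  // Map an LLDB register number to an opaque register-set index.
  virtual llvm::Optional<uint32_t>
  GetRegisterSetForRegNum(uint32_t reg_num) const = 0;

  // Read the whole register set into the plugin-owned storage
  // (m_gpr, m_fpr, ...) and write it back, respectively.
  virtual llvm::Error ReadRegisterSet(uint32_t set) = 0;
  virtual llvm::Error WriteRegisterSet(uint32_t set) = 0;

  // Copy a single register value out of / into the already-filled cache.
  // Neither method may call ReadRegisterSet() or WriteRegisterSet().
  virtual llvm::Error GetRegisterFromCache(const RegisterInfo &reg_info,
                                           RegisterValue &value) const = 0;
  virtual llvm::Error SetRegisterInCache(const RegisterInfo &reg_info,
                                         const RegisterValue &value) = 0;
};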

The NativeRegisterContextRegisterInfo class maintains a per-regset
cache state map, and issues the appropriate ReadRegisterSet()
and WriteRegisterSet() calls to maintain it. It also provides
a FlushRegisterSets() method that needs to be called whenever the thread
is about to resume execution.
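
A minimal sketch of such a caching front-end, assuming the backend interface sketched earlier (the class, enum and member names here are invented; only the hook names and FlushRegisterSets() come from the summary):

#include <map>

// Illustrative sketch only, not the patch itself.
class RegisterCache {
public:
  RegisterCache(RegisterCacheBackend &backend) : m_backend(backend) {}

  llvm::Error ReadRegister(const RegisterInfo &reg_info,
                           RegisterValue &value) {
    llvm::Optional<uint32_t> set = GetSet(reg_info);
    if (!set)
      return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                     "register not in any register set");
    // A read following buffered writes flushes the cache first.
    if (m_dirty)
      if (llvm::Error err = FlushRegisterSets())
        return err;
    if (llvm::Error err = FillSet(*set))
      return err;
    return m_backend.GetRegisterFromCache(reg_info, value);
  }

  llvm::Error WriteRegister(const RegisterInfo &reg_info,
                            const RegisterValue &value) {
    llvm::Optional<uint32_t> set = GetSet(reg_info);
    if (!set)
      return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                     "register not in any register set");
    // Populate the rest of the set before modifying one register in it.
    if (llvm::Error err = FillSet(*set))
      return err;
    if (llvm::Error err = m_backend.SetRegisterInCache(reg_info, value))
      return err;
    m_state[*set] = State::Dirty;
    m_dirty = true;
    return llvm::Error::success();
  }

  // Called before the thread resumes (and on read-after-write): write back
  // every dirty set and invalidate the whole cache.
  llvm::Error FlushRegisterSets() {
    for (auto &kv : m_state) {
      if (kv.second == State::Dirty)
        if (llvm::Error err = m_backend.WriteRegisterSet(kv.first))
          return err;
      kv.second = State::Invalid;
    }
    m_dirty = false;
    return llvm::Error::success();
  }

private:
  enum class State { Invalid, Clean, Dirty };

  llvm::Optional<uint32_t> GetSet(const RegisterInfo &reg_info) {
    return m_backend.GetRegisterSetForRegNum(
        reg_info.kinds[lldb::eRegisterKindLLDB]);
  }

  llvm::Error FillSet(uint32_t set) {
    if (m_state[set] != State::Invalid)
      return llvm::Error::success();
    if (llvm::Error err = m_backend.ReadRegisterSet(set))
      return err;
    m_state[set] = State::Clean;
    return llvm::Error::success();
  }

  RegisterCacheBackend &m_backend;
  std::map<uint32_t, State> m_state;
  bool m_dirty = false;
};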

Diff Detail

Event Timeline

mgorny created this revision.Oct 13 2020, 11:06 AM
mgorny requested review of this revision.Oct 13 2020, 11:06 AM
mgorny planned changes to this revision.Oct 14 2020, 4:25 AM

OK, this is going to have to wait. I've found a hard bug in NetBSD's xstate support and don't have the time and resources to figure it out at the moment.

I'm wondering if this could be implemented as some kind of a separate class which you "mix" into your register context when you want to do caching. Possibly something like this:

class NRCLinux_MyArch : public NRCLinux, private RCCache::Backend {
  RCCache m_cache;

  NRCLinux_MyArch(...) : m_cache(this) {}

  // NRC interface
  ReadRegister(...) override { return m_cache.ReadRegister(...); }
  ...

  // Backend interface
  ReadReally(...) override { /* action happens here */ }
};

That would enable individual register contexts to explicitly opt into caching, which might be cleaner in the long term, as I anticipate it will take a very long time to convert all register contexts to the new way of doing things.

lldb/source/Plugins/Process/Utility/NativeRegisterContextRegisterInfo.h
Line 25 (On Diff #297917)

I think using Optional<integral> would be better, although I'm not sure what the cases are where a register does not belong to any register set.

Lines 64–83 (On Diff #297917)

I think new interfaces should use llvm::Error/Expected.
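
Concretely, that would mean returning llvm::Error from the new hooks (or llvm::Expected<T> where a value is produced) and converting back to Status only at the boundary with the existing NativeRegisterContext interface. A minimal sketch, using the reviewer's shorthand for the class name and assuming a Status(llvm::Error) conversion is available:

// Hedged sketch: the new cache/backend methods return llvm::Error; the old
// NativeRegisterContext entry point converts it to Status at the boundary.
Status NRCNetBSD_x86_64::ReadRegister(const RegisterInfo *reg_info,
                                      RegisterValue &value) {
  if (llvm::Error err = m_cache.ReadRegister(*reg_info, value))
    return Status(std::move(err)); // assumes the Status(llvm::Error) ctor
  return Status();
}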

mgorny updated this revision to Diff 300506.Oct 24 2020, 2:53 PM

Moved the code into a mixin-style class. Changing the return types is next on the list.

mgorny updated this revision to Diff 300529.Oct 25 2020, 2:11 AM
mgorny marked an inline comment as done.

Switched GetRegisterSetForRegNum to llvm::Optional. Also noticed that NativeRegisterContextNetBSD_x86_64::GetRegisterSetCount() is entirely meaningless.

mgorny updated this revision to Diff 300532.Oct 25 2020, 2:52 AM

Rework error handling to use llvm::Error in the new methods.

mgorny updated this revision to Diff 300533.Oct 25 2020, 3:52 AM
mgorny retitled this revision from [lldb] Generic support for caching register set reads/writes [WIP] to [lldb] Generic support for caching register set reads/writes.
mgorny edited the summary of this revision. (Show Details)

Now with actual caching support.

labath requested changes to this revision.Nov 26 2021, 3:00 AM

I guess this slipped off my radar.... What's the state of this? Are you still interested in the patch? If yes, we can try again after rebasing.

This revision now requires changes to proceed.Nov 26 2021, 3:00 AM

I don't think I really want to work on this right now. Even if it still works reliably, I'm not sure if the gain is really worth the added complexity (and risks).