When tracing in per-core mode, we trace all processes. That means that after hitting a breakpoint, our target process stops running (and thus produces no more tracing data) while other processes keep writing to our trace buffers, which causes significant data loss in the trace.
To remediate this, I'm adding logic that pauses and unpauses tracing based on the target's state. The earlier we trigger the pause the better; however, for the simplicity of this diff, I'm not adding the trigger at the earliest possible point. We can improve that part later.
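For context, here is a minimal sketch of the pause/unpause mechanism this diff relies on, assuming one perf event file descriptor per traced core. The helper name is hypothetical and not the actual LLDB API; it only shows the underlying ioctl the kernel exposes:

```cpp
#include <linux/perf_event.h>
#include <sys/ioctl.h>

// Hypothetical helper: pause or resume collection on one per-core perf
// event. The patch drives this from the target's state transitions; the
// kernel stops (or resumes) writing to the event's trace buffer.
static bool ChangeCollectionEnabled(int perf_event_fd, bool enabled) {
  unsigned long request =
      enabled ? PERF_EVENT_IOC_ENABLE : PERF_EVENT_IOC_DISABLE;
  return ioctl(perf_event_fd, request, /*arg=*/0) == 0;
}
```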
Event Timeline
lldb/source/Plugins/Process/Linux/IntelPTSingleBufferTrace.cpp
- Line 271: What's the purpose of this new flush flag? When would you want to call this method with it set to false? I can only think of cases where you'd want to flush the buffer when trying to read its data.
lldb/source/Plugins/Process/Linux/Perf.h
- Lines 221–227: Nice, happy to see we are extending the mini perf API (:
- remove the flush parameter
- use perf_event_attr.disabled to set the initial state of the collection (see the sketch below)
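As a rough illustration of the second point: setting perf_event_attr.disabled makes the kernel create the event already paused, so nothing is written until the event is explicitly enabled. This sketch is illustrative only; a real Intel PT event would also set attr.type (read from /sys/bus/event_source/devices/intel_pt/type) and attr.config, which are omitted here:

```cpp
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cstring>

// Illustrative only: open a perf event whose collection starts paused.
// attr.disabled = 1 means no data is collected until a later
// PERF_EVENT_IOC_ENABLE ioctl resumes it.
static int OpenPausedEvent(pid_t pid, int cpu) {
  perf_event_attr attr;
  std::memset(&attr, 0, sizeof(attr));
  attr.size = sizeof(attr);
  attr.disabled = 1; // start in the paused state
  return static_cast<int>(syscall(SYS_perf_event_open, &attr, pid, cpu,
                                  /*group_fd=*/-1, /*flags=*/0));
}
```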
lldb/source/Plugins/Process/Linux/IntelPTSingleBufferTrace.cpp
- Line 211: nit: from the "Set" name, my first impression was that this method simply sets m_collection_state, but in reality it does some nontrivial work, namely changing the state of the perf event via ioctl. Consider a name other than "Set" to avoid giving the impression of a trivial operation.
- Line 309: nice
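To make the naming suggestion concrete, one hypothetical alternative (the names below are invented for illustration, not taken from the patch):

```cpp
// "SetCollectionState" reads like a trivial member write:
//   m_collection_state = state;
// A verb such as "Change" better signals the ioctl side effect:
//   llvm::Error SetCollectionState(CollectionState state);     // sounds trivial
//   llvm::Error ChangeCollectionState(CollectionState state);  // signals real work
```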
nit: from the "Set" name, my first impression was this method was simply setting m_collection_state, but in reality it's doing some nontrivial operations, namely changing the state of the perf event via ioctl. Consider changing the name from "Set" to avoid giving the impression of a trivial operation