- User Since: Jan 7 2020, 2:24 AM
Feb 3 2020
I'm more than happy to take a look at this use case and see what can be done to reduce the peak commit when using rpmalloc. If you file an issue on the rpmalloc GitHub project, we can take it from there.
Jan 11 2020
I think you will find that the Heap API has subpar performance, since it too uses locks to ensure thread safety (unless you can guarantee that memory blocks never traverse thread boundaries).
Jan 9 2020
Regarding the license: if you believe in the public domain, then rpmalloc is in the public domain. If not, it's released under MIT, which should be compatible with most projects. If you still have issues with this, let me know and I can probably accommodate any needs you might have.
Jan 7 2020
I would say it's mostly down to application usage patterns. The worst case is probably a gc-like usage pattern where one thread does all allocation and another does all deallocation, as this causes every block to cross between thread caches via an atomic pointer CAS, and eventually causes larger spans of blocks to traverse the thread caches via the global cache. However, even this scenario will probably be significantly faster than the standard runtime allocator.