block-flow.hpp File Reference


Description

Memory management scheme for activities and parameter data passed through the Scheduler within the Lumiera render engine.

While the intended render operations are conceptually described as connected activity terms, sent as messages through the scheduler, the actual implementation requires a fixed descriptor record sitting at a stable memory location while the computation is underway. Moreover, activities can spawn further activities, implying that activity descriptor records for various deadlines need to be accommodated, and the duration for which those descriptors must remain valid is not known in advance. On the other hand, ongoing rendering produces a constant flow of further activities, necessitating timely clean-up of obsolete descriptors. Used memory should be recycled, calling for an arrangement of pooled allocation tiles, with the underlying block allocation extended as throughput increases.

Implementation technique

The usage within the Scheduler can be arranged in such a way as to avoid concurrency issues altogether: while allocations are not always performed by the same thread, it can be ensured that at any given time only a single Worker carries out the Scheduler's administrative tasks (queue management and allocation); a read/write barrier is issued whenever a Worker enters this management mode.
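
A hedged sketch of this single-manager convention follows (the token mechanism and all names are assumptions for illustration; the actual hand-over of the management role is implemented in the Scheduler, not in this header):

    #include <atomic>

    // Illustration only: at most one Worker at a time may hold the »management« role;
    // the acquire/release ordering on the token acts as the read/write barrier,
    // so the queue management and allocation code needs no further locking.
    std::atomic<bool> managementToken{false};

    bool tryEnterManagementMode()
    {
        bool expect = false;
        return managementToken.compare_exchange_strong (expect, true,
                                                        std::memory_order_acq_rel);
    }

    void leaveManagementMode()
    {
        managementToken.store (false, std::memory_order_release);
    }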

Memory is allocated in larger extents, which are then used to place individual fixed-size allocations. These are not managed further, assuming that the storage is used for POD data records and the destructors need not be invoked at all. This arrangement is achieved by interpreting the storage extents as temporal Epochs. Each Epoch holds an Epoch::EpochGate to define a deadline and to allow blocking this Epoch by pending IO operations (with the help of a count-down latch). The rationale rests on the observation that any render activity towards late and obsolete goals is pointless and can simply be side-stepped. Once the scheduling has passed a defined deadline (and no further pending IO operations are around), the Epoch can be abandoned as a whole and the storage extent can be re-used.
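
The recycling rule can be condensed into a small sketch (simplified and self-contained; it merely restates the condition given above and does not reflect the actual EpochGate members):

    #include "lib/time/timevalue.hpp"

    using lib::time::Time;

    // Simplified illustration of the recycling rule, not the actual EpochGate API:
    // an Epoch may be abandoned as a whole once its deadline has passed and the
    // count-down latch indicates no pending IO operation still blocking it.
    bool
    canRecycleEpoch (Time now, Time epochDeadline, int pendingIO)
    {
        return epochDeadline < now    // every goal within this Epoch is already late
           and pendingIO == 0;        // no IO callback may still write into this extent
    }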

Warning
This implies that any access after the deadline is undefined behaviour, including any further use of an AllocatorHandle obtained for that deadline.
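
To make this lifetime rule tangible, a usage sketch follows; the accessor names theFlow.until() and AllocatorHandle::create() are assumptions for illustration, see BlockFlow_test for authoritative usage:

    // Illustration only -- until(), create() and someFutureTime() are assumed names.
    BlockFlow<blockFlow::RenderConfig> theFlow;
    Time deadline = someFutureTime();            // hypothetical helper yielding a future time

    auto handle = theFlow.until (deadline);      // allocator handle bound to this deadline
    Activity& act = handle.create();             // Activity placed into the matching Epoch
    // ... pass &act on to the Scheduler ...

    // Once `deadline` has passed, the Epoch may be recycled at any point:
    // any further access to `act` or `handle` is undefined behaviour.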

Dynamic adjustments are necessary to keep this scheme running efficiently. Ideally, the temporal stepping between subsequent Epochs should be chosen so as to accommodate all render activities with deadlines falling into an Epoch, without wasting much space on unused storage slots. But the throughput, and thus the allocation pressure on the scheduler, can change intermittently, making it necessary to handle excess allocations by shifting them into the next Epoch. These overflow events are registered, and on clean-up the actual usage ratio of each Epoch is determined, leading to exponentially damped adjustments of the Epoch duration. The capacity increase on overflow and the exponential targeting of an optimal fill factor counteract each other, typically converging after a few »duty cycles«.
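
The damping step can be pictured as an exponential moving average pulling the Epoch duration towards the value that would have produced the desired fill factor; the smoothing factor and target fill below are illustrative stand-ins, while the real tuning values are defined in blockFlow::DefaultConfig:

    #include <algorithm>

    // Illustrative values only -- not the actual parameters from DefaultConfig.
    const double DAMP        = 0.2;    // smoothing factor of the moving average
    const double TARGET_FILL = 0.9;    // desired usage ratio per Epoch

    double
    adjustEpochDuration (double currDuration, double observedFill)
    {
        observedFill = std::max (observedFill, 0.01);        // guard against division by zero
        // duration which would have reached the target fill under the observed load
        double idealDuration = currDuration * TARGET_FILL / observedFill;
        // exponentially damped step towards that ideal value
        return (1.0-DAMP)*currDuration + DAMP*idealDuration;
    }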

Remarks
7/2023 this implementation explicates the intended memory management pattern, yet considerably more measurements and observations with real-world load patterns seem indicated. The characteristic parameters in blockFlow::DefaultConfig expose the most effective tuning points. In its current form, the underlying ExtentFamily allocates the Extents directly from the default heap allocator, which does not appear to be performance-relevant, since the pool of Extents, once allocated, is re-used cyclically.
See also
BlockFlow_test
SchedulerUsage_test
extent-family.hpp underlying allocation scheme

Definition in file block-flow.hpp.

#include "vault/common.hpp"
#include "vault/gear/activity.hpp"
#include "vault/mem/extent-family.hpp"
#include "lib/time/timevalue.hpp"
#include "lib/iter-explorer.hpp"
#include "lib/format-util.hpp"
#include "lib/nocopy.hpp"
#include "lib/util.hpp"
#include <utility>

Classes

class  BlockFlow< CONF >::AllocatorHandle
 Local handle to allow allocating a collection of Activities, all sharing a common deadline. More...
 
class  BlockFlow< CONF >
 Allocation scheme for the Scheduler, based on Epoch(s). More...
 
struct  DefaultConfig
 Lightweight yet safe parametrisation of memory management. More...
 
class  Epoch< ALO >
 Allocation Extent holding scheduler Activities to be performed altogether before a common deadline. Other than the underlying raw Extent, the Epoch maintains a deadline time and keeps track of storage slots already claimed. More...
 
struct  Epoch< ALO >::EpochGate
 Specifically rigged GATE Activity, used for managing Epoch metadata. More...
 
class  FlowDiagnostic< CONF >
 
struct  RenderConfig
 Parametrisation tuned for Render Engine performance. More...
 
class  BlockFlow< CONF >::StorageAdaptor
 Adapts access to the raw storage to present the Extents as Epochs; also caches the address resolution for performance reasons (+20%). More...
 
struct  Strategy< CONF >
 Policy template to mix into the BlockFlow allocator, providing the parametrisation for self-regulation. More...
 

Functions

template<class CONF >
FlowDiagnostic< CONF > watch (BlockFlow< CONF > &theFlow)
 

Variables

const size_t BLOCK_EXPAND_SAFETY_LIMIT = 3000
 Parametrisation of Scheduler memory management scheme. More...
 

Namespaces

 vault
 Vault-Layer implementation namespace root.
 
 vault::gear
 Active working gear and plumbing.
 

Variable Documentation

◆ BLOCK_EXPAND_SAFETY_LIMIT

const size_t BLOCK_EXPAND_SAFETY_LIMIT = 3000

Parametrisation of Scheduler memory management scheme.

Limit for the maximum number of blocks allowed in Epoch expansion.

Note
Scheduler::sanityCheck() defines a similar limit, but there the same reasoning is translated into a hard limit of < 20 sec for deadlines, while the limit defined here is only triggered once the current block duration has been lowered to the OVERLOAD_LIMIT.
See also
scheduler-commutator.hpp

Definition at line 114 of file block-flow.hpp.