Lumiera  0.pre.03
»edit your freedom«
TestChainLoad< maxFan >::ScheduleCtx Class Reference

#include "test-chain-load.hpp"

Description

template<size_t maxFan = DEFAULT_FAN>
class vault::gear::test::TestChainLoad< maxFan >::ScheduleCtx

Setup and wiring for a test run to schedule a computation structure as defined by this TestChainLoad instance.

This context is linked to a concrete TestChainLoad and Scheduler instance and holds a memory block with the actual schedules, which are dispatched in batches into the Scheduler. It is crucial to keep this object alive for the complete test run, which is achieved by a blocking wait on a callback triggered after the last batch of calculation jobs has been dispatched. The process itself is meant for test usage and is not thread-safe (while obviously the actual scheduling and processing happens in the worker threads). The instance can nevertheless be re-used to dispatch further test runs.

Definition at line 1703 of file test-chain-load.hpp.
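
The following sketch shows one plausible way to drive such a test run, assuming `testLoad` is a TestChainLoad<> instance with its graph already generated and `scheduler` is the Scheduler instance to dispatch into; the chained setters and the concrete time values are illustrative only, not taken from this page:

    using namespace std::chrono_literals;

    // Hypothetical setup: `testLoad` and `scheduler` must outlive the test run.
    TestChainLoad<>::ScheduleCtx ctx{testLoad, scheduler};

    double runtime_us = ctx.withLoadTimeBase(500us)   // weight basis of the synthetic per-node load
                           .withJobDeadline(30ms)     // deadline applied to each calculation job
                           .withChunkSize(32)         // nodes planned per dispatched batch
                           .launch_and_wait();        // blocks until the last batch has completed

Since the builder-style setters return ScheduleCtx&&, the whole configuration can be chained in a single expression, while the blocking launch_and_wait() call at the end keeps the context alive for the duration of the run.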

Public Member Functions

 ScheduleCtx (TestChainLoad &mother, Scheduler &scheduler)
 
double calcRuntimeReference ()
 
ScheduleCtx && deactivateLoad ()
 
double determineEmpiricFormFactor (uint concurrency=0)
 
double getExpectedEndTime ()
 
auto getInvocationStatistic ()
 
auto getScheduleSeq ()
 
double getStressFac ()
 
double launch_and_wait ()
 dispose one complete run of the graph into the scheduler
 
ScheduleCtx && withAdaptedSchedule (double stressFac=1.0, uint concurrency=0, double formFac=1.0)
 Establish a differentiated schedule per level, taking node weights into account.
 
ScheduleCtx && withAnnouncedLoadFactor (uint factor_on_levelSpeed)
 
ScheduleCtx && withBaseExpense (microseconds fixedTime_per_node)
 
ScheduleCtx && withChunkSize (size_t nodes_per_chunk)
 
ScheduleCtx && withInstrumentation (bool doWatch=true)
 
ScheduleCtx && withJobDeadline (microseconds deadline_after_start)
 
ScheduleCtx && withLevelDuration (microseconds fixedTime_per_level)
 
ScheduleCtx && withLoadMem (size_t sizeBase=LOAD_DEFAULT_MEM_SIZE)
 
ScheduleCtx && withLoadTimeBase (microseconds timeBase=LOAD_DEFAULT_TIME)
 
ScheduleCtx && withManifestation (ManifestationID manID)
 
ScheduleCtx && withPlanningStep (microseconds planningTime_per_node)
 
ScheduleCtx && withPreRoll (microseconds planning_headstart)
 
ScheduleCtx && withSchedDepends (bool explicitly)
 
ScheduleCtx && withSchedNotify (bool doSetTime=true)
 
ScheduleCtx && withUpfrontPlanning ()
 

Private Member Functions

Time anchorSchedule ()
 
std::future< void > attachNewCompletionSignal ()
 push away any existing wait state and attach new clean state
 
void awaitBlocking (std::future< void > signal)
 
Job calcJob (size_t idx, size_t level)
 
FrameRate calcLoadHint ()
 
size_t calcNextChunkEnd (size_t lastNodeIDX)
 
Time calcPlanScheduleTime (size_t lastNodeIDX)
 
void continuation (size_t chunkStart, size_t lastNodeIDX, size_t levelDone, bool work_left)
 continue planning: schedule follow-up planning job
 
void disposeStep (size_t idx, size_t level)
 Callback: place a single job into the scheduler.
 
void fillAdaptedSchedule (double stressFact, uint concurrency)
 
void fillDefaultSchedule ()
 
microseconds guessPlanningPreroll ()
 
Time jobStartTime (size_t level, size_t nodeIDX=0)
 
auto lastExitNodes (size_t lastChunkStartIDX)
 
std::future< void > performRun ()
 
Job planningJob (size_t endNodeIDX)
 
void setDependency (Node *pred, Node *succ)
 Callback: define a dependency between scheduled jobs.
 
Job wakeUpJob ()
 
- Private Member Functions inherited from MoveOnly
 MoveOnly (MoveOnly &&)=default
 
 MoveOnly (MoveOnly const &)=delete
 
MoveOnly & operator= (MoveOnly &&)=delete
 
MoveOnly & operator= (MoveOnly const &)=delete
 

Private Attributes

uint blockLoadFac_ {2}
 
std::unique_ptr< RandomChainCalcFunctor< maxFan > > calcFunctor_
 
TestChainLoad & chainLoad_
 
size_t chunkSize_ {DEFAULT_CHUNKSIZE}
 
std::unique_ptr< ComputationalLoad > compuLoad_
 
microseconds deadline_ {STANDARD_DEADLINE}
 
FrameRate levelSpeed_ {1, SCHEDULE_LEVEL_STEP}
 
ManifestationID manID_ {}
 
TimeVar nodeExpense_ {SCHEDULE_NODE_STEP}
 
std::unique_ptr< RandomChainPlanFunctor< maxFan > > planFunctor_
 
FrameRate planSpeed_ {1, SCHEDULE_PLAN_STEP}
 
microseconds preRoll_ {guessPlanningPreroll()}
 
bool schedDepends_ {SCHED_DEPENDS}
 
bool schedNotify_ {SCHED_NOTIFY}
 
lib::UninitialisedDynBlock< ScheduleSpec > schedule_
 
Scheduler & scheduler_
 
std::promise< void > signalDone_ {}
 
TimeVar startTime_ {Time::ANYTIME}
 
std::vector< TimeVarstartTimes_ {}
 
double stressFac_ {1.0}
 
std::unique_ptr< lib::IncidenceCount > watchInvocations_
 

Member Function Documentation

◆ launch_and_wait()

double launch_and_wait ( )
inline

dispose one complete run of the graph into the scheduler

Returns
observed runtime in µs

Definition at line 1821 of file test-chain-load.hpp.
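
As a usage sketch (not taken from this page), the returned runtime could be related to a baseline; the assumption here is that calcRuntimeReference() yields a comparable single-threaded reference time in µs, and `ctx` is a prepared ScheduleCtx:

    double refTime_us = ctx.calcRuntimeReference();  // assumed single-threaded reference (µs)
    double runTime_us = ctx.launch_and_wait();       // observed runtime of the scheduled run (µs)
    double speedUp    = refTime_us / runTime_us;     // > 1 indicates effective concurrency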

◆ withAdaptedSchedule()

ScheduleCtx&& withAdaptedSchedule (double stressFac = 1.0, uint concurrency = 0, double formFac = 1.0)
inline

Establish a differentiated schedule per level, taking node weights into account.

Parameters
    stressFac    further proportional tightening of the schedule times
    concurrency  the nominally available concurrency, applied per level
    formFac      further expenses to take into account (reducing the stressFac)

Definition at line 1953 of file test-chain-load.hpp.
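
A hedged sketch combining these parameters; feeding the result of determineEmpiricFormFactor() back in as formFac is an illustrative assumption (it would typically follow a prior instrumented run), and the concrete numbers are arbitrary:

    double formFac = ctx.determineEmpiricFormFactor(4);        // empiric form factor for 4 workers (assumed usage)
    double runTime = ctx.withAdaptedSchedule(1.2, 4, formFac)  // 20% tighter schedule, 4-way concurrency
                        .launch_and_wait();                    // run again with the adapted schedule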


The documentation for this class was generated from the following file: test-chain-load.hpp