Scheduler resource usage coordination.
Operating the render activities in the engine involves several low-level support systems, which must be actively guided to remain within sustainable limits. While all parts of the engine are tuned towards typical expected scenarios, a wide array of load patterns may be encountered, which defeats any one-size-fits-all performance optimisation. Instead, the participating components are designed to withstand short-term imbalance, with the expectation that the general engine parametrisation is adjusted based on moving averages.
Scheduling and dispatch of Activities are driven by active workers invoking the Scheduler-Service to retrieve the next piece of work. While this scheme ensures that the scarce resource (computation or IO capacity) is directed towards the most urgent task, smooth operation of the engine without wasted capacity additionally requires controlling the request cycles of the workers and possibly removing excess capacity. Whenever a worker pulls the next task, the timing situation is assessed and the worker is placed into some partition of the overall available capacity, reflecting current load and demand. Workers are thus moved between the segments of capacity; new work is preferably assigned to workers already in the active segment, which allows idle workers to be shut down after some time.
The key element for classifying a worker is the current scheduling situation: are some Activities overdue? Does the next Activity under consideration lie far in the future? If work is immediately imminent, capacity is kept around; otherwise the capacity can be considered in excess for now. A worker not required right now can be sent into a targeted sleep delay, shifting its capacity into a time zone where it will more likely be required. It is essential to apply some randomisation to these capacity shifts, in order to achieve an even distribution of free capacity and to avoid contention between workers asking for new assignments.
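For illustration, such a classification might look like the following sketch, which reuses the horizon constants documented in the Variables section below. The enum, the function names and the exact zone boundaries are assumptions made for this example, not the actual LoadController interface.

    #include <chrono>
    #include <random>

    using std::chrono::microseconds;
    using namespace std::chrono_literals;

    // Horizon constants as documented in the Variables section below,
    // re-declared in plain chrono units to keep this sketch self-contained.
    constexpr microseconds NEAR_HORIZON  = 50us;
    constexpr microseconds WORK_HORIZON  =  5ms;
    constexpr microseconds SLEEP_HORIZON = 20ms;

    enum class Capacity
      { SPINTIME    // next Activity imminent: spin-wait
      , WORKTIME    // within the scope of current work: keep worker available
      , SHORTSLEEP  // targeted sleep towards the next approaching task
      , IDLEWAIT    // far out: going idle is justified
      };

    // Classify available capacity by the offset to the next known Activity;
    // zero or negative offsets mean work is overdue and is dispatched immediately.
    Capacity
    classifyCapacity (microseconds offsetToNext)
    {
        if (offsetToNext < NEAR_HORIZON)  return Capacity::SPINTIME;
        if (offsetToNext < WORK_HORIZON)  return Capacity::WORKTIME;
        if (offsetToNext < SLEEP_HORIZON) return Capacity::SHORTSLEEP;
        return Capacity::IDLEWAIT;
    }

    // Randomised targeted delay: spread workers evenly over the work horizon
    // to avoid contention when several of them would wake up simultaneously.
    microseconds
    scatteredDelay (microseconds offsetToNext, std::mt19937& rng)
    {
        std::uniform_int_distribution<microseconds::rep> spread{0, WORK_HORIZON.count()};
        return offsetToNext + microseconds{spread(rng)};
    }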
When a worker becomes available but is not needed at the moment, the first thing to check is the time of the next approaching Activity; this worker can then be directed close to that next task, which is thereby »tended for« and can be marked accordingly. Any further worker appearing in the meantime can then be directed into the time zone after the next approaching task. Workers returning directly from active work are always preferred when assigning new tasks, while workers returning from idle state are typically sent back to idle, unless there is immediate need for more capacity.
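Marking the next task as »tended for« can be done lock-free. The following is a minimal sketch under the assumption that head times are represented as integral micro-ticks; class and member names are hypothetical, not taken from the actual implementation.

    #include <atomic>
    #include <cstdint>

    // Hypothetical helper illustrating the »tended« mark;
    // not the actual LoadController code.
    class TendedHead
    {
        std::atomic<int64_t> tended_{INT64_MIN};

    public:
        // true if the Activity at headTime still needs a worker directed at it
        bool
        needsTending (int64_t headTime)  const
        {
            return tended_.load (std::memory_order_acquire) != headTime;
        }

        // mark the head Activity as tended for; a benign race remains:
        // in the worst case two workers are directed at the same task.
        void
        markTended (int64_t headTime)
        {
            tended_.store (headTime, std::memory_order_release);
        }
    };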
Several operational values are fused into a heuristic indicator of the current scheduler load; these values can be retrieved with low overhead.
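For instance, scheduling lag can be folded into such an indicator through an exponential moving average, with LAG_SAMPLE_DAMPING (documented below) as the smoothing factor. This is a minimal sketch: the update formula and the atomic accumulator are assumptions for illustration, not the actual implementation.

    #include <atomic>

    const double LAG_SAMPLE_DAMPING = 2;   // smoothing factor, as documented below

    // Hypothetical accumulator illustrating the averaging; thread-safe
    // through a compare-exchange loop on an atomic double.
    class LagAverage
    {
        std::atomic<double> sma_{0.0};

    public:
        // fold a new lag sample (µs, positive = behind schedule) into the average
        void
        sample (double lagMicros)
        {
            double prev = sma_.load (std::memory_order_relaxed);
            double next;
            do next = (lagMicros + (LAG_SAMPLE_DAMPING-1) * prev) / LAG_SAMPLE_DAMPING;
            while (not sma_.compare_exchange_weak (prev, next, std::memory_order_relaxed));
        }

        double
        averageLag()  const
        {
            return sma_.load (std::memory_order_relaxed);
        }
    };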
Definition in file load-controller.hpp.
#include "lib/error.hpp"
#include "lib/time/timevalue.hpp"
#include "lib/nocopy.hpp"
#include "lib/util.hpp"
#include "lib/format-cout.hpp"
#include <cmath>
#include <atomic>
#include <chrono>
#include <utility>
#include <functional>
Classes

    class   LoadController
            Controller to coordinate resource usage related to the Scheduler.
    struct  LoadController::Wiring
Functions

    TimeValue  _uTicks (std::chrono::microseconds us)
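Judging from its declaration, _uTicks converts a std::chrono::microseconds value into the engine's internal TimeValue tick representation, allowing the tuning constants below to be spelled out with chrono literals; for example (taken from the Variables section):

    using namespace std::chrono_literals;

    Duration NEAR_HORIZON  {_uTicks (50us)};   // 50µs spin-wait range
    Duration SLEEP_HORIZON {_uTicks (20ms)};   // beyond this, going idle is justified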
Variables

    const double  LAG_SAMPLE_DAMPING = 2
                  smoothing factor for exponential moving average of lag
    Duration  NEAR_HORIZON {_uTicks (50us)}
              what counts as "imminent" (e.g. for spin-waiting)
    Duration  SLEEP_HORIZON {_uTicks (20ms)}
              schedules beyond that horizon justify going idle
    Duration  STANDARD_LAG {_uTicks (200us)}
              experience shows that on average scheduling happens with 200µs delay
    Duration  WORK_HORIZON {_uTicks (5ms)}
              the scope of activity currently in the works
Namespaces

    vault
        Vault-Layer implementation namespace root.
    vault::gear
        Active working gear and plumbing.