State       | Idea
Date        | Fri Jul 23 19:34:29 2010
Proposed by | Christian Thaeter <ct@pipapo.org>
Right from the beginning of the project we planned some kind of profiling to adapt dynamically to the current workload and machine capabilities. I describe here how statistical data can be gathered in a generic way. This will later work together with other components to tune the system automatically.
Description
I just introduce some ideas about the planned profiling framework here; nothing is defined or matured yet, so this is certainly subject to further discussion and refinement.
Requirements/Evaluation

- generic:
  Profiling should be sufficiently abstracted to have a single set of data structures and algorithms working on a broad range of subjects being profiled. Moreover, the profiling core offers just unitless counters; semantics are added on top of that at a higher level.
- least possible overhead:
  Profiling itself must not cost much, it must not block, and it should avoid expensive operations. Simple integer arithmetic without divisions is suggested (a sketch illustrating this follows after this list).
- accurate:
  We may sample data in a stochastic way to reduce the overhead; nevertheless, data which gets sampled must be stored and processed accurately, without rounding losses and drifts.
- transient values:
  It is quite common that some values are far off, either towards the maximum or towards the minimum; the system should adapt to this and recover from such false alarms. Workload also changes over time, so we need some way to measure the current/recent workload; a grand total over the whole application runtime is rather uninteresting. At the same time, we must adapt slowly enough not to get into an oscillating cycle.
- active or passive system:
  Profiling can be purely passive, collecting data and leaving the analysis to some other component, or active, triggering some action when certain limits are reached. I am yet a bit undecided and keep it open for both.
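To make the "simple integer arithmetic without divisions" suggestion concrete, here is a minimal self-contained sketch; the names (stats, record_sample, recover) are made up for illustration and are not part of the proposal. Samples are accumulated exactly as sum and count (the "accurate" requirement), while running average, trend, and a recovering maximum are maintained with shifts only:

#include <stdint.h>
#include <stdio.h>

typedef int64_t profile_value;

typedef struct {
    profile_value count;        /* exact: number of samples recorded        */
    profile_value sum;          /* exact: sum of all samples                */
    profile_value run_average;  /* approximate, maintained by shifting      */
    profile_value trend;        /* approximate direction of recent change   */
    profile_value max;          /* maximum seen so far, decayed by recovery */
} stats;

/* record one sample: exact accumulation plus shift-based running figures */
static void
record_sample (stats* s, profile_value current)
{
    s->count += 1;              /* no rounding loss: average = sum/count,   */
    s->sum   += current;        /* the division happens only on demand      */

    /* trend is calculated before the new run_average;
       >> on negative values assumes arithmetic shift (true on common platforms) */
    s->trend       = (s->trend + (s->run_average - current)) >> 1;
    s->run_average = (s->run_average + current) >> 1;

    if (current > s->max)
        s->max = current;
}

/* recovery: pull the maximum slowly back towards the running average,
   so a transient outlier fades out instead of pinning the maximum */
static void
recover (stats* s)
{
    s->max -= (s->max - s->run_average) >> 4;   /* decay by 1/16 per step */
}

int
main (void)
{
    stats s = {0};
    profile_value samples[] = {100, 120, 90, 400, 110, 95};  /* 400 is a spike */

    for (unsigned i = 0; i < sizeof samples / sizeof *samples; ++i)
      {
        record_sample (&s, samples[i]);
        recover (&s);
      }
    printf ("exact average=%lld run_average=%lld trend=%lld max=%lld\n",
            (long long)(s.sum / s.count),       /* the only division */
            (long long)s.run_average,
            (long long)s.trend,
            (long long)s.max);
    return 0;
}

Note that the >>1 update weighs the newest sample with 1/2 and thus adapts quickly; a slower diverging formula, e.g. new = old + ((current - old) >> k), would trade adaptation speed for stability and could help to avoid the oscillating cycles mentioned above.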
Tasks

- develop a concept to outline what we actually want to observe …
- refine this into a model to describe the observables
- derive an assessment of the specific requirements and challenges of this measurement
- use this as the starting-point for an implementation draft …
Discussion
Pros
Cons
Alternatives
Rationale
Comments
typedef int64_t profile_value;

enum { n = 4 };                   /* n being a small number 2-5 or so */

struct profile {
    ProfileVTable vtable;

    /* Using trylock for sampling makes it never contend on the lock,
       but some samples are lost. Should be ok. */
    mutex_t lock;                 /* with trylock? */

    /* statistics / running averages */
    profile_value max[n];         /* n maximum values seen so far, decreased by recovery */
    profile_value min[n];         /* n minimum values seen so far, increased by recovery */

    /* store sum & count; average calculation implies a division
       and will only be done on demand */
    profile_value count;          /* count profile calls */
    profile_value sum;            /* sum up all calls, average = sum/count */

    /* current is the sampled value to be integrated;
       trend is calculated before the new run_average */
    profile_value trend;          /* trend = (trend + (run_average-current))>>1 */

    /* we may need some slower diverging formula for the running average */
    profile_value run_average;    /* run_average = (run_average + current)>>1 */

    /* active limits, define what's good and what's bad;
       calls back to vtable when a limit is hit */
    profile_value max_limit;
    profile_value min_limit;      /* do we want limits for trends too? */

    /* we count how often we hit limits; a hit/miss ratio will give
       a good value for optimization */
    profile_value hit_cnt;
    profile_value high_miss_cnt;
    profile_value low_miss_cnt;

    /* recovery state */
    int rec_init;
    int rec_current;
    int rec_percent;

    void* extra;
};
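How these fields could interact is sketched below with hypothetical names: profile_sample() and the limit_hit callback are made up here, pthread_mutex_t stands in for the mutex_t above, and the struct is reduced to the fields actually used. The sampling function never blocks; under contention the sample is simply dropped, and when a limit is hit it calls back through the vtable, which would be the "active" variant discussed in the requirements:

#include <stdint.h>
#include <stdbool.h>
#include <pthread.h>

typedef int64_t profile_value;

typedef struct profile profile;

typedef struct {
    /* invoked when a sample falls outside [min_limit, max_limit] */
    void (*limit_hit) (profile* self, profile_value current, bool high);
} ProfileVTable;

struct profile {                  /* reduced to the fields used here */
    ProfileVTable vtable;
    pthread_mutex_t lock;
    profile_value run_average;
    profile_value max_limit;
    profile_value min_limit;
    profile_value hit_cnt;
    profile_value high_miss_cnt;
    profile_value low_miss_cnt;
};

/* record one sample: never blocks; under contention the sample is lost,
   which should be acceptable for statistics */
static void
profile_sample (profile* p, profile_value current)
{
    if (pthread_mutex_trylock (&p->lock) != 0)
        return;                                 /* contended: drop the sample */

    p->run_average = (p->run_average + current) >> 1;

    if (current > p->max_limit)
      {
        ++p->high_miss_cnt;
        p->vtable.limit_hit (p, current, true);
      }
    else if (current < p->min_limit)
      {
        ++p->low_miss_cnt;
        p->vtable.limit_hit (p, current, false);
      }
    else
        ++p->hit_cnt;

    pthread_mutex_unlock (&p->lock);
}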
-- ct, 2010-07-23 20:33:13 CEST
Many years later, I’d like to confirm that we need and want some kind of profiling in the engine, for the purpose described in this RfC, which is to fine-tune our operation parameters dynamically.
We should however be careful not to settle on an implementation in our minds before having conducted a proper requirement analysis. Starting from the valid idea of what can be done with profiling and statistics, the first thing to map out is what we actually want to achieve and to which degree this can reasonably be achieved. From there we should head towards a model to describe what we want to observe, and only from there will we be able to pick suitable measurement methods.
(I have just added these steps to the “Tasks” section)
We should drop the pre-established conception that what we capture here is simple and can be squeezed into a single data representation.
If it turns out that we’re headed towards high-throughput processing, we should keep possible solutions at the architecture level in mind before focusing on implementation details. Messaging systems and lock-free data structures have made significant progress over the last decade (see the sketch below).
→ see also my remark to RfC:Budgeting
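Picking up the remark on lock-free data structures: with C11 atomics, even the lossy trylock could be avoided for the plain counters. A sketch under made-up names (lockfree_stats, lockfree_record, lockfree_average), where writers use relaxed fetch-and-add and thus never block and never drop a sample, while the reader pays the division only on demand:

#include <stdatomic.h>
#include <stdint.h>

typedef struct {
    _Atomic int64_t count;      /* number of samples recorded */
    _Atomic int64_t sum;        /* exact sum of all samples   */
} lockfree_stats;

/* writers: wait-free increment, no lock, no lost samples */
static inline void
lockfree_record (lockfree_stats* s, int64_t current)
{
    atomic_fetch_add_explicit (&s->count, 1, memory_order_relaxed);
    atomic_fetch_add_explicit (&s->sum, current, memory_order_relaxed);
}

/* reader: performs the division only when the average is demanded */
static inline int64_t
lockfree_average (lockfree_stats* s)
{
    int64_t c = atomic_load_explicit (&s->count, memory_order_relaxed);
    int64_t t = atomic_load_explicit (&s->sum, memory_order_relaxed);
    return c ? t / c : 0;
}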
-- Ichthyostega, 2025-09-17