Lumiera 0.pre.04~rc.1
»edit your freedom«
media-weaving-pattern.hpp File Reference

Construction set to assemble and operate a data processing scheme within a Render Node. More...

Go to the source code of this file.

Description

Construction set to assemble and operate a data processing scheme within a Render Node.

Together with turnout.hpp, this header provides the "glue" which holds together the typical setup of a Render Node network for processing media data. A MediaWeavingPattern implements the sequence of steps — as driven by the Turnout — to combine the invocation of media processing operations from external Libraries with the buffer- and parameter management provided by the Lumiera Render Engine. Since these operations are conducted concurrently, all invocation state has to be maintained in local storage on the stack.

Integration with media handling Libraries

A Render invocation originates from a Render Job, which first establishes a TurnoutSystem and then enters into the recursive Render Node activation by invoking Port::weave() for the »Exit Node«, as defined by the job's invocation parameters. The first step in the processing cycle, as established by the Port implementation (Turnout), is to build a »Feed instance«, from the invocation of mount(TurnoutSystem&).

Generally speaking, a Feed fulfils the role of an Invocation Adapter and a Manifold of data connections. The standard implementation, as given by MediaWeavingPattern, relies on a combination of both into a FeedManifold. This is a flexibly configured data adapter, directly combined with an embedded adapter functor to wrap the invocation of processing operations provided by an external library.
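The role of such a Feed — bundling buffer connections with an embedded adapter functor behind one invocation point — can be illustrated by a minimal sketch. All names here (`FeedSketch`, `inBuff`, `outBuff`, `process`, `invoke`) are hypothetical stand-ins, not the actual FeedManifold interface:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <functional>

// Illustrative sketch of an invocation adapter in the spirit of a Feed:
// it acts as a manifold of data connections (buffer pointer slots)
// combined with an adapter functor wrapping the library operation.
template<std::size_t INS, std::size_t OUTS>
struct FeedSketch
{
    std::array<float*, INS>  inBuff{};    // input buffers, filled by predecessor nodes
    std::array<float*, OUTS> outBuff{};   // output buffers, allocated for the results
    std::function<void(std::array<float*,INS> const&,
                       std::array<float*,OUTS> const&)> process;

    void invoke() { process(inBuff, outBuff); }   // single call site drives the operation
};
```

With this shape, the weaving code only wires buffer pointers into the slots and triggers `invoke()`, while the specifics of the external library stay encapsulated in the functor.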

Usually some kind of internal systematics is assumed and applied within such a library. Operations can be exposed as plain functions to invoke, or through some configuration and builder notion. Function arguments tend to follow a common arrangement and naming scheme, also assuming a specific arrangement and data layout for input and output data. This kind of schematism is rooted in something deeper: exposing useful operations as a library collection requires a common ground, an understanding about the order of things to be treated — at least for those kinds of things which fall into a specific domain, when tasks related to such a domain shall be supported by the Library. Such an (implicit or explicit) framework of structuring is usually designated as a Domain Ontology (in contrast to the questions pertaining to Ontology in general, which are the subject of philosophy proper). Even seemingly practical matters like processing media data rely on fundamental assumptions and basic premises regarding what is at stake and what shall be subject to treatment, and what fundamental entities and relationships to consider within the domain. Incidentally, many of these assumptions are posited rather than necessarily given — which is the root of essential incompatibilities between Libraries targeting a similar domain: due to such fundamental differences, they simply cannot fully agree upon what kinds of things to expect and where to draw the line of distinction.

The Lumiera Render Engine and media handling framework is built in a way fundamentally agnostic to the specific presuppositions of this or that media handling library. By and large, decisions, distinctions and qualifications are redirected back into the scope of the respective library, by means of a media-library adapter plug-in. Assuming that the user in fact understands the meaning of and reasoning behind employing a given library, the mere handling of the related processing can be reduced to a small set of organisational traits. For the sake of consistency, you may label these as a »Render Engine Ontology«. In all brevity,

  • We assume that the library provides distinguishable processing operations that can be structured and classified and managed as processing assets,
  • we assume that processing is applied to sources or »media« and that the result of processing is again a source that can be processed further;
  • specific operations can thus be conceptualised as processing-stages or Nodes, interconnected by media streams, which can be tagged with a stream type.
  • At implementation level, such streams can be represented in entirety as data buffers of a specific layout, filled with some »frame« or chunk of data
  • and the single processing step or operation can be completely encapsulated as a pure function (referentially transparent, without side effects);
  • all state and parametrisation can be represented either as some further data stream in/out, or as parameters-of-processing, which can be passed as a set of values to the function prior to invocation, thereby completely determining the observable behaviour.
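The last two points — a processing step as a pure function, fully determined by its buffers and a parameter set — can be sketched as follows. The names (`applyGain`, `Param`) are illustrative only; a gain operation merely serves as the simplest example of such a referentially transparent operation:

```cpp
#include <cstddef>
#include <tuple>

// Hypothetical example of a processing step as a pure function:
// the output depends solely on the input buffer and the parameter tuple,
// with no hidden state and no side effects.
using Param = std::tuple<float>;   // e.g. a single gain value

void applyGain (float const* in, float* out, std::size_t frames, Param const& par)
{
    float gain = std::get<0>(par);
    for (std::size_t i = 0; i < frames; ++i)
        out[i] = in[i] * gain;     // same inputs and parameters → same output, always
}
```

Because the function is referentially transparent, the engine is free to invoke it concurrently for different frames without any synchronisation between invocations.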

Composition of the Invocation State

By means of this positing, the attachment point to an external library can be reduced to a small number of connector links. Handling the capabilities of a library within the Session and high-level Model will require some kind of registration, which is beyond the scope of this discussion. As far as the Render Engine and the low-level Model are concerned, any usage of the external library's capabilities can be reduced to...

  • Preparing an adapter functor, designated as processing-functor. This functor takes three kinds of arguments, each packaged as a single function call argument, which may either be a single item, or possibly be structured as a tuple of heterogeneous elements, or an array of homogeneous items.
    • an output data buffer or several such buffers are always required
    • (optionally) an input buffer or several such buffers need to be supplied
    • (optionally) also a parameter value, tuple or array can be specified
  • Supplying actual parameter values (if necessary); these are drawn from the invocation of a further functor, designated as parameter-functor, and provided from within the internal framework of the Lumiera application, either to deliver fixed parameter settings configured by the user in the Session, or by evaluating Parameter Automation, or simply to supply some technically necessary context information, most notably the frame number of a source to retrieve.
  • Preparing buffers filled with input data, in a suitable format, one for each distinct item expected in the input data section of the processing-functor; filling these input buffers requires the recursive invocation of further Render Nodes...
  • Allocating buffers for output data, sized and typed accordingly, likewise one for each distinct item detailed in the output data argument of the processing-functor.
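The four steps enumerated above can be condensed into a small sketch of one invocation cycle. All names (`InvocationSketch`, `paramFunctor`, `pullInput`, `processFunctor`, `render`) are hypothetical; in particular, the recursive pull from predecessor nodes is reduced here to a single callback:

```cpp
#include <functional>
#include <vector>

using Buffer = std::vector<float>;

// Illustrative sketch of the invocation cycle: draw parameters, pull input
// recursively, allocate output, then invoke the adapted processing operation.
struct InvocationSketch
{
    std::function<long()>                             paramFunctor;    // e.g. yields the frame number
    std::function<Buffer(long)>                       pullInput;       // stands in for recursive node activation
    std::function<void(Buffer const&, Buffer&, long)> processFunctor;  // the adapted library operation

    Buffer render()
    {
        long frameNr = paramFunctor();            // (1) supply actual parameter values
        Buffer input = pullInput (frameNr);       // (2) fill input buffer via recursive pull
        Buffer output(input.size());              // (3) allocate output buffer, sized accordingly
        processFunctor (input, output, frameNr);  // (4) invoke the processing-functor
        return output;
    }
};
```

Note that all invocation state lives in local (stack) variables of `render()`, mirroring the requirement stated above that concurrent invocations must not share mutable state.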

The FeedManifold template, which (as mentioned above) is used by this standard implementation of media processing in the form of the MediaWeavingPattern, is configured specifically for each distinct signature of a processing-functor to match the implied structural requirements. If a functor is output-only, no input buffer section is present; if it expects processing parameters, storage for an appropriate data tuple is provided and a parameter-functor can be configured. A clone-copy of the processing-functor itself is stored alongside within the FeedManifold, and thus placed into stack memory, where it remains safe even during deeply nested recursive invocation sequences, while rendering in general is performed massively in parallel.
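The idea of configuring storage according to the functor's signature can be sketched with standard type traits. This is not the actual FeedManifold implementation — `ManifoldSketch`, `takesParam` and `paramSlot` are invented names — but it shows the compile-time mechanism such a template can plausibly rely on:

```cpp
#include <type_traits>
#include <variant>   // std::monostate serves as an empty placeholder

// Sketch: the parameter slot is only materialised when the processing-functor
// is actually invocable with a trailing parameter value; otherwise it
// collapses to an empty placeholder type.
template<class FUN>
struct ManifoldSketch
{
    static constexpr bool takesParam =
        std::is_invocable_v<FUN, float*, float const*, float>;

    std::conditional_t<takesParam, float, std::monostate> paramSlot{};
    FUN process;   // clone-copy of the processing-functor, held in local storage
};
```

A functor with signature `(float*, float const*)` thus yields a manifold without parameter storage, while `(float*, float const*, float)` enables the `paramSlot` member.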

In the end, the actual implementation code of the weaving pattern has to perform the connection and integration between the »recursive weaving scheme« and the invocation structure implied by the FeedManifold. It has to set off the recursive pull-invocation of predecessor ports and configure the FeedManifold with the BuffHandle entries retrieved from these recursive calls. Buffer handling in general is abstracted and codified through the Buffer Provider framework, which offers the means to allocate further buffers and configure them into the FeedManifold for the output data. The »buffer handling protocol« also requires invoking BuffHandle::emit() at the point when result data can be assumed to be placed into the buffer, and releasing buffers no longer required through a BuffHandle::release() call. Notably this applies to the input buffers at completion of the processing-functor invocation, and is also required for secondary (and in a way superfluous) result buffers, which are sometimes generated as a by-product of the processing function invocation, but are not actually passed as output up the node invocation chain.
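The ordering mandated by this »buffer handling protocol« can be made explicit with a minimal stand-in for a buffer handle. `HandleSketch` and `weaveStep` are illustrative names, not the real BuffHandle or BufferProvider API; the point is only the emit-then-release sequence around the processing step:

```cpp
#include <cassert>

// Minimal stand-in for a buffer handle: emit() marks result data as present,
// release() hands the buffer back and must happen exactly once.
struct HandleSketch
{
    bool emitted  = false;
    bool released = false;
    void emit()    { emitted = true; }
    void release() { assert(!released); released = true; }   // guard against double release
};

// One weaving step: after the processing-functor completes, the output buffer
// carries valid result data (emit), and the input buffer is no longer needed (release).
void weaveStep (HandleSketch& in, HandleSketch& out)
{
    // ...processing-functor runs here, reading `in`, writing `out`...
    out.emit();      // result data can now be assumed to be in the buffer
    in.release();    // input buffer no longer required after the invocation
}
```

Superfluous secondary result buffers would be treated like the input here: released immediately after the invocation, since they are not passed up the node chain.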

See also
feed-manifold.hpp
weaving-pattern-builder.hpp
Overview of Render Node structures
Warning
WIP as of 12/2024 first complete integration round of the Render engine ////////////////////////////TICKET #1367

Definition in file media-weaving-pattern.hpp.

Namespaces

namespace  steam
 Steam-Layer implementation namespace root.
 
namespace  steam::engine
 Lumiera's render engine core and operational control.
 

Classes

struct  MediaWeavingPattern< INVO >
 Standard implementation for a Weaving Pattern to connect the input and output data feeds (buffers) into a processing function. More...