Background: #fff
Foreground: #000
PrimaryPale: #8cf
PrimaryLight: #18f
PrimaryMid: #04b
PrimaryDark: #014
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
<!--{{{-->
<div class='toolbar' macro='toolbar [[ToolbarCommands::EditToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='editor' macro='edit title'></div>
<div macro='annotations'></div>
<div class='editor' macro='edit text'></div>
<div class='editor' macro='edit tags'></div><div class='editorFooter'><span macro='message views.editor.tagPrompt'></span><span macro='tagChooser excludeLists'></span></div>
<!--}}}-->
To get started with this blank [[TiddlyWiki]], you'll need to modify the following tiddlers:
* [[SiteTitle]] & [[SiteSubtitle]]: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)
* [[MainMenu]]: The menu (usually on the left)
* [[DefaultTiddlers]]: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened
You'll also need to enter your username for signing your edits: <<option txtUserName>>
<<importTiddlers>>
<!--{{{-->
<link rel='alternate' type='application/rss+xml' title='RSS' href='index.xml' />
<!--}}}-->
These [[InterfaceOptions]] for customising [[TiddlyWiki]] are saved in your browser

Your username for signing your edits. Write it as a [[WikiWord]] (eg [[JoeBloggs]])

<<option txtUserName>>
<<option chkSaveBackups>> [[SaveBackups]]
<<option chkAutoSave>> [[AutoSave]]
<<option chkRegExpSearch>> [[RegExpSearch]]
<<option chkCaseSensitiveSearch>> [[CaseSensitiveSearch]]
<<option chkAnimate>> [[EnableAnimations]]

----
Also see [[AdvancedOptions]]
<!--{{{-->
<div class='header' role='banner' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<div class='headerShadow'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
<div class='headerForeground'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
</div>
<div id='mainMenu' role='navigation' refresh='content' tiddler='MainMenu'></div>
<div id='sidebar'>
<div id='sidebarOptions' role='navigation' refresh='content' tiddler='SideBarOptions'></div>
<div id='sidebarTabs' role='complementary' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea' role='main'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
<!--}}}-->
/*{{{*/
body {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}

a {color:[[ColorPalette::PrimaryMid]];}
a:hover {background-color:[[ColorPalette::PrimaryMid]]; color:[[ColorPalette::Background]];}
a img {border:0;}

h1,h2,h3,h4,h5,h6 {color:[[ColorPalette::SecondaryDark]]; background:transparent;}
h1 {border-bottom:2px solid [[ColorPalette::TertiaryLight]];}
h2,h3 {border-bottom:1px solid [[ColorPalette::TertiaryLight]];}

.button {color:[[ColorPalette::PrimaryDark]]; border:1px solid [[ColorPalette::Background]];}
.button:hover {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::SecondaryLight]]; border-color:[[ColorPalette::SecondaryMid]];}
.button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::SecondaryDark]];}

.header {background:[[ColorPalette::PrimaryMid]];}
.headerShadow {color:[[ColorPalette::Foreground]];}
.headerShadow a {font-weight:normal; color:[[ColorPalette::Foreground]];}
.headerForeground {color:[[ColorPalette::Background]];}
.headerForeground a {font-weight:normal; color:[[ColorPalette::PrimaryPale]];}

.tabSelected {color:[[ColorPalette::PrimaryDark]];
	background:[[ColorPalette::TertiaryPale]];
	border-left:1px solid [[ColorPalette::TertiaryLight]];
	border-top:1px solid [[ColorPalette::TertiaryLight]];
	border-right:1px solid [[ColorPalette::TertiaryLight]];
}
.tabUnselected {color:[[ColorPalette::Background]]; background:[[ColorPalette::TertiaryMid]];}
.tabContents {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::TertiaryPale]]; border:1px solid [[ColorPalette::TertiaryLight]];}
.tabContents .button {border:0;}

#sidebar {}
#sidebarOptions input {border:1px solid [[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel {background:[[ColorPalette::PrimaryPale]];}
#sidebarOptions .sliderPanel a {border:none;color:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:hover {color:[[ColorPalette::Background]]; background:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:active {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::Background]];}

.wizard {background:[[ColorPalette::PrimaryPale]]; border:1px solid [[ColorPalette::PrimaryMid]];}
.wizard h1 {color:[[ColorPalette::PrimaryDark]]; border:none;}
.wizard h2 {color:[[ColorPalette::Foreground]]; border:none;}
.wizardStep {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];
	border:1px solid [[ColorPalette::PrimaryMid]];}
.wizardStep.wizardStepDone {background:[[ColorPalette::TertiaryLight]];}
.wizardFooter {background:[[ColorPalette::PrimaryPale]];}
.wizardFooter .status {background:[[ColorPalette::PrimaryDark]]; color:[[ColorPalette::Background]];}
.wizard .button {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryLight]]; border: 1px solid;
	border-color:[[ColorPalette::SecondaryPale]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryPale]];}
.wizard .button:hover {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Background]];}
.wizard .button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::Foreground]]; border: 1px solid;
	border-color:[[ColorPalette::PrimaryDark]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryDark]];}

.wizard .notChanged {background:transparent;}
.wizard .changedLocally {background:#80ff80;}
.wizard .changedServer {background:#8080ff;}
.wizard .changedBoth {background:#ff8080;}
.wizard .notFound {background:#ffff80;}
.wizard .putToServer {background:#ff80ff;}
.wizard .gotFromServer {background:#80ffff;}

#messageArea {border:1px solid [[ColorPalette::SecondaryMid]]; background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]];}
#messageArea .button {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::SecondaryPale]]; border:none;}

.popupTiddler {background:[[ColorPalette::TertiaryPale]]; border:2px solid [[ColorPalette::TertiaryMid]];}

.popup {background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]]; border-left:1px solid [[ColorPalette::TertiaryMid]]; border-top:1px solid [[ColorPalette::TertiaryMid]]; border-right:2px solid [[ColorPalette::TertiaryDark]]; border-bottom:2px solid [[ColorPalette::TertiaryDark]];}
.popup hr {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::PrimaryDark]]; border-bottom:1px;}
.popup li.disabled {color:[[ColorPalette::TertiaryMid]];}
.popup li a, .popup li a:visited {color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:active {background:[[ColorPalette::SecondaryPale]]; color:[[ColorPalette::Foreground]]; border: none;}
.popupHighlight {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
.listBreak div {border-bottom:1px solid [[ColorPalette::TertiaryDark]];}

.tiddler .defaultCommand {font-weight:bold;}

.shadow .title {color:[[ColorPalette::TertiaryDark]];}

.title {color:[[ColorPalette::SecondaryDark]];}
.subtitle {color:[[ColorPalette::TertiaryDark]];}

.toolbar {color:[[ColorPalette::PrimaryMid]];}
.toolbar a {color:[[ColorPalette::TertiaryLight]];}
.selected .toolbar a {color:[[ColorPalette::TertiaryMid]];}
.selected .toolbar a:hover {color:[[ColorPalette::Foreground]];}

.tagging, .tagged {border:1px solid [[ColorPalette::TertiaryPale]]; background-color:[[ColorPalette::TertiaryPale]];}
.selected .tagging, .selected .tagged {background-color:[[ColorPalette::TertiaryLight]]; border:1px solid [[ColorPalette::TertiaryMid]];}
.tagging .listTitle, .tagged .listTitle {color:[[ColorPalette::PrimaryDark]];}
.tagging .button, .tagged .button {border:none;}

.footer {color:[[ColorPalette::TertiaryLight]];}
.selected .footer {color:[[ColorPalette::TertiaryMid]];}

.error, .errorButton {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Error]];}
.warning {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryPale]];}
.lowlight {background:[[ColorPalette::TertiaryLight]];}

.zoomer {background:none; color:[[ColorPalette::TertiaryMid]]; border:3px solid [[ColorPalette::TertiaryMid]];}

.imageLink, #displayArea .imageLink {background:transparent;}

.annotation {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border:2px solid [[ColorPalette::SecondaryMid]];}

.viewer .listTitle {list-style-type:none; margin-left:-2em;}
.viewer .button {border:1px solid [[ColorPalette::SecondaryMid]];}
.viewer blockquote {border-left:3px solid [[ColorPalette::TertiaryDark]];}

.viewer table, table.twtable {border:2px solid [[ColorPalette::TertiaryDark]];}
.viewer th, .viewer thead td, .twtable th, .twtable thead td {background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::Background]];}
.viewer td, .viewer tr, .twtable td, .twtable tr {border:1px solid [[ColorPalette::TertiaryDark]];}

.viewer pre {border:1px solid [[ColorPalette::SecondaryLight]]; background:[[ColorPalette::SecondaryPale]];}
.viewer code {color:[[ColorPalette::SecondaryDark]];}
.viewer hr {border:0; border-top:dashed 1px [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::TertiaryDark]];}

.highlight, .marked {background:[[ColorPalette::SecondaryLight]];}

.editor input {border:1px solid [[ColorPalette::PrimaryMid]];}
.editor textarea {border:1px solid [[ColorPalette::PrimaryMid]]; width:100%;}
.editorFooter {color:[[ColorPalette::TertiaryMid]];}
.readOnly {background:[[ColorPalette::TertiaryPale]];}

#backstageArea {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::TertiaryMid]];}
#backstageArea a {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstageArea a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; }
#backstageArea a.backstageSelTab {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
#backstageButton a {background:none; color:[[ColorPalette::Background]]; border:none;}
#backstageButton a:hover {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstagePanel {background:[[ColorPalette::Background]]; border-color: [[ColorPalette::Background]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]];}
.backstagePanelFooter .button {border:none; color:[[ColorPalette::Background]];}
.backstagePanelFooter .button:hover {color:[[ColorPalette::Foreground]];}
#backstageCloak {background:[[ColorPalette::Foreground]]; opacity:0.6; filter:alpha(opacity=60);}
/*}}}*/
/*{{{*/
* html .tiddler {height:1%;}

body {font-size:.75em; font-family:arial,helvetica; margin:0; padding:0;}

h1,h2,h3,h4,h5,h6 {font-weight:bold; text-decoration:none;}
h1,h2,h3 {padding-bottom:1px; margin-top:1.2em;margin-bottom:0.3em;}
h4,h5,h6 {margin-top:1em;}
h1 {font-size:1.35em;}
h2 {font-size:1.25em;}
h3 {font-size:1.1em;}
h4 {font-size:1em;}
h5 {font-size:.9em;}

hr {height:1px;}

a {text-decoration:none;}

dt {font-weight:bold;}

ol {list-style-type:decimal;}
ol ol {list-style-type:lower-alpha;}
ol ol ol {list-style-type:lower-roman;}
ol ol ol ol {list-style-type:decimal;}
ol ol ol ol ol {list-style-type:lower-alpha;}
ol ol ol ol ol ol {list-style-type:lower-roman;}
ol ol ol ol ol ol ol {list-style-type:decimal;}

.txtOptionInput {width:11em;}

#contentWrapper .chkOptionInput {border:0;}

.externalLink {text-decoration:underline;}

.indent {margin-left:3em;}
.outdent {margin-left:3em; text-indent:-3em;}
code.escaped {white-space:nowrap;}

.tiddlyLinkExisting {font-weight:bold;}
.tiddlyLinkNonExisting {font-style:italic;}

/* the 'a' is required for IE, otherwise it renders the whole tiddler in bold */
a.tiddlyLinkNonExisting.shadow {font-weight:bold;}

#mainMenu .tiddlyLinkExisting,
	#mainMenu .tiddlyLinkNonExisting,
	#sidebarTabs .tiddlyLinkNonExisting {font-weight:normal; font-style:normal;}
#sidebarTabs .tiddlyLinkExisting {font-weight:bold; font-style:normal;}

.header {position:relative;}
.header a:hover {background:transparent;}
.headerShadow {position:relative; padding:4.5em 0 1em 1em; left:-1px; top:-1px;}
.headerForeground {position:absolute; padding:4.5em 0 1em 1em; left:0; top:0;}

.siteTitle {font-size:3em;}
.siteSubtitle {font-size:1.2em;}

#mainMenu {position:absolute; left:0; width:10em; text-align:right; line-height:1.6em; padding:1.5em 0.5em 0.5em 0.5em; font-size:1.1em;}

#sidebar {position:absolute; right:3px; width:16em; font-size:.9em;}
#sidebarOptions {padding-top:0.3em;}
#sidebarOptions a {margin:0 0.2em; padding:0.2em 0.3em; display:block;}
#sidebarOptions input {margin:0.4em 0.5em;}
#sidebarOptions .sliderPanel {margin-left:1em; padding:0.5em; font-size:.85em;}
#sidebarOptions .sliderPanel a {font-weight:bold; display:inline; padding:0;}
#sidebarOptions .sliderPanel input {margin:0 0 0.3em 0;}
#sidebarTabs .tabContents {width:15em; overflow:hidden;}

.wizard {padding:0.1em 1em 0 2em;}
.wizard h1 {font-size:2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizard h2 {font-size:1.2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizardStep {padding:1em 1em 1em 1em;}
.wizard .button {margin:0.5em 0 0; font-size:1.2em;}
.wizardFooter {padding:0.8em 0.4em 0.8em 0;}
.wizardFooter .status {padding:0 0.4em; margin-left:1em;}
.wizard .button {padding:0.1em 0.2em;}

#messageArea {position:fixed; top:2em; right:0; margin:0.5em; padding:0.5em; z-index:2000; _position:absolute;}
.messageToolbar {display:block; text-align:right; padding:0.2em;}
#messageArea a {text-decoration:underline;}

.tiddlerPopupButton {padding:0.2em;}
.popupTiddler {position: absolute; z-index:300; padding:1em; margin:0;}

.popup {position:absolute; z-index:300; font-size:.9em; padding:0; list-style:none; margin:0;}
.popup .popupMessage {padding:0.4em;}
.popup hr {display:block; height:1px; width:auto; padding:0; margin:0.2em 0;}
.popup li.disabled {padding:0.4em;}
.popup li a {display:block; padding:0.4em; font-weight:normal; cursor:pointer;}
.listBreak {font-size:1px; line-height:1px;}
.listBreak div {margin:2px 0;}

.tabset {padding:1em 0 0 0.5em;}
.tab {margin:0 0 0 0.25em; padding:2px;}
.tabContents {padding:0.5em;}
.tabContents ul, .tabContents ol {margin:0; padding:0;}
.txtMainTab .tabContents li {list-style:none;}
.tabContents li.listLink { margin-left:.75em;}

#contentWrapper {display:block;}
#splashScreen {display:none;}

#displayArea {margin:1em 17em 0 14em;}

.toolbar {text-align:right; font-size:.9em;}

.tiddler {padding:1em 1em 0;}

.missing .viewer,.missing .title {font-style:italic;}

.title {font-size:1.6em; font-weight:bold;}

.missing .subtitle {display:none;}
.subtitle {font-size:1.1em;}

.tiddler .button {padding:0.2em 0.4em;}

.tagging {margin:0.5em 0.5em 0.5em 0; float:left; display:none;}
.isTag .tagging {display:block;}
.tagged {margin:0.5em; float:right;}
.tagging, .tagged {font-size:0.9em; padding:0.25em;}
.tagging ul, .tagged ul {list-style:none; margin:0.25em; padding:0;}
.tagClear {clear:both;}

.footer {font-size:.9em;}
.footer li {display:inline;}

.annotation {padding:0.5em; margin:0.5em;}

* html .viewer pre {width:99%; padding:0 0 1em 0;}
.viewer {line-height:1.4em; padding-top:0.5em;}
.viewer .button {margin:0 0.25em; padding:0 0.25em;}
.viewer blockquote {line-height:1.5em; padding-left:0.8em;margin-left:2.5em;}
.viewer ul, .viewer ol {margin-left:0.5em; padding-left:1.5em;}

.viewer table, table.twtable {border-collapse:collapse; margin:0.8em 1.0em; font-size:1em;}
.viewer th, .viewer td, .viewer tr,.viewer caption,.twtable th, .twtable td, .twtable tr,.twtable caption {padding:3px;}
table.listView {font-size:0.85em; margin:0.8em 1.0em;}
table.listView th, table.listView td, table.listView tr {padding:0 3px 0 3px;}

.viewer pre {padding:0.5em; margin-left:0.5em; font-size:1.2em; line-height:1.4em; overflow:auto;}
.viewer code {font-size:1.2em; line-height:1.4em;}

.editor {font-size:1.1em;}
.editor input, .editor textarea {display:block; width:100%; box-sizing: border-box; font:inherit;}
.editorFooter {padding:0.25em 0; font-size:.9em;}
.editorFooter .button {padding-top:0; padding-bottom:0;}

.fieldsetFix {border:0; padding:0; margin:1px 0px;}

.zoomer {font-size:1.1em; position:absolute; overflow:hidden;}
.zoomer div {padding:1em;}

* html #backstage {width:99%;}
* html #backstageArea {width:99%;}
#backstageArea {display:none; position:relative; overflow: hidden; z-index:150; padding:0.3em 0.5em;}
#backstageToolbar {position:relative;}
#backstageArea a {font-weight:bold; margin-left:0.5em; padding:0.3em 0.5em;}
#backstageButton {display:none; position:absolute; z-index:175; top:0; right:0;}
#backstageButton a {padding:0.1em 0.4em; margin:0.1em;}
#backstage {position:relative; width:100%; z-index:50;}
#backstagePanel {display:none; z-index:100; position:absolute; width:90%; margin-left:3em; padding:1em;}
.backstagePanelFooter {padding-top:0.2em; float:right;}
.backstagePanelFooter a {padding:0.2em 0.4em;}
#backstageCloak {display:none; z-index:20; position:absolute; width:100%; height:100px;}

.whenBackstage {display:none;}
.backstageVisible .whenBackstage {display:block;}
/*}}}*/
/***
StyleSheet for use when a translation requires any css style changes.
This StyleSheet can be used directly by languages such as Chinese, Japanese and Korean which need larger font sizes.
***/
/*{{{*/
body {font-size:0.8em;}
#sidebarOptions {font-size:1.05em;}
#sidebarOptions a {font-style:normal;}
#sidebarOptions .sliderPanel {font-size:0.95em;}
.subtitle {font-size:0.8em;}
.viewer table.listView {font-size:0.95em;}
/*}}}*/
/*{{{*/
@media print {
#mainMenu, #sidebar, #messageArea, .toolbar, #backstageButton, #backstageArea {display: none !important;}
#displayArea {margin: 1em 1em 0em;}
noscript {display:none;} /* Fixes a feature in Firefox 1.5.0.2 where print preview displays the noscript content */
}
/*}}}*/
<!--{{{-->
<div class='toolbar' role='navigation' macro='toolbar [[ToolbarCommands::ViewToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='subtitle'><span macro='view modifier link'></span>, <span macro='view modified date'></span> (<span macro='message views.wikified.createdPrompt'></span> <span macro='view created date'></span>)</div>
<div class='tagging' macro='tagging'></div>
<div class='tagged' macro='tags'></div>
<div class='viewer' macro='view text wikified'></div>
<div class='tagClear'></div>
<!--}}}-->
//Monads are of questionable usefulness//

Monads are a concept and theoretical framework from category theory.
But the use of Monads as a programming construct is much touted in the realm of functional programming -- which unfortunately is riddled with some kind of phobia towards ''State''. And this is ill-guided in itself, since //State is not the problem// -- ''Complexity'' is. Complexity arises from far-reaching, non-local interdependencies and coupling. Some complexity is essential.

A telltale sign is that people constantly ask »What is a Monad?«
And they don't get an answer; rather, they get a thousand answers.
The term //»Monad« fails to evoke an image// when mentioned.

What remains is a set of clever technicalities. Such technicalities can be applied and executed without understanding. The same technicality can be made to work on vastly distinct subjects. And a way to organise such technicalities can be a way to rule and control subjects. The exertion of control is not in need of invoking images, once its applicability is ensured. //That// state is indifferent towards complexity, at least.

When we care for complexity, we do so while we care for matters tangible to humans. Related to software, this is the need to maintain it, to adjust it, to adapt and remould it, to keep it relevant. At the bottom of all of these lies the need to understand software. And this Understanding mandates the use of terms and notions, even patterns, which evoke meaning -- even to the uninitiated. How can „Monads“ even be helpful with that?

!To make sensible usage of Monads
Foremost, they should be kept as what they are: technicalities. For understanding, they must be subordinate to a real concept or pattern -- one with the power to reorient our view of the matters at hand. Thus we ask: //what can be said about Monads?//
!!!Formalism
Regarding the formalism, it should be mentioned
* that a Monad is a //type constructor// -- it takes a type parameter and generates a new instance of monadic kind.
* and for any such monadic type, there is a //constructor// <br>{{{
   unit: A → M<A>
   }}}
* and once we've obtained a concrete entity of type {{{M<A>}}}, we can //bind a function// to transform it into another monadic entity<br/>{{{
   M<A>::bind( A → M<B> ) → M<B>
   }}}
At this point, the ''Monad Axioms'' should also be mentioned:
;neutrality
:{{{unit}}} (the monad constructor) is neutral with respect to the {{{bind}}} operation
{{{
unit(a)::bind(f)  ≡  f(a)
M::bind(unit)  ≡  M
}}}
;composition
:we can define a //composition// of monadic functions,
:which is equivalent to consecutive binding
{{{
(M::bind(f))::bind(g)  ≡  M::bind( f ∘ g )     with f ∘ g  defined as λx → { f(x)::bind(g) }
}}}
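
To make the formalism above concrete, here is a minimal sketch in C++, using {{{std::optional}}} as a stand-in for the monadic type {{{M<A>}}}; the free functions {{{unit}}} and {{{bind}}} are illustrative names, not part of any existing library:
{{{
#include <optional>
#include <iostream>

// unit : A → M<A>      (here: wrap a value into std::optional)
template<typename A>
std::optional<A> unit(A value) { return std::optional<A>(std::move(value)); }

// bind : M<A> × (A → M<B>) → M<B>
template<typename A, typename F>
auto bind(std::optional<A> const& m, F f) -> decltype(f(*m))
{
    if (m) return f(*m);          // apply the monadic function to the contained value
    return decltype(f(*m))();     // propagate the "empty" case unchanged
}

int main()
{
    auto half = [](int i) -> std::optional<int> {   // a monadic function A → M<B>
        if (i % 2 == 0) return i / 2;
        return std::nullopt;
    };

    // neutrality:  unit(a) bound to f  behaves like  f(a)
    std::cout << bind(unit(8), half).value() << '\n';              // 4

    // composition: binding twice equals binding the composed function
    std::cout << bind(bind(unit(8), half), half).value() << '\n';  // 2
}
}}}
Running this prints 4 and 2, which also exercises the neutrality and composition axioms for this particular monad.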

!!!Distinct properties
The obvious first observation is that a Monad must be some kind of //container// -- or //object,// for that matter.

The next observation is that the {{{bind}}}-operation is notoriously hard to understand.
Why is this so? Because it //intermingles// the world of monads with the things from the value domain. You cannot just write a monadic function for use with {{{bind}}} without being aware of the world of monads. It is not possible to "lift" an operation from the world of values automatically into the world of monads. Contrast this with the ''map operation'', which is easy to understand for that very reason: if you map an operation onto a container of values, the operation itself does not need to be aware of the existence of containers as such. It just operates on values.
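
A small C++ sketch of this contrast, using {{{std::vector}}} as the container; the helpers {{{map}}} and {{{bind}}} are written out here purely for illustration: the function handed to {{{map}}} knows nothing about containers, while the function handed to {{{bind}}} must produce a container itself.
{{{
#include <vector>

// map: the given function operates purely on values, unaware of the container
template<typename A, typename F>
auto map(std::vector<A> const& values, F f) -> std::vector<decltype(f(values.front()))>
{
    std::vector<decltype(f(values.front()))> result;
    for (auto const& v : values) result.push_back(f(v));
    return result;
}

// bind: the given function must itself return a container ("aware of the monad"),
// and the nested containers get flattened
template<typename A, typename F>
auto bind(std::vector<A> const& values, F f) -> decltype(f(values.front()))
{
    decltype(f(values.front())) result;
    for (auto const& v : values)
        for (auto const& r : f(v)) result.push_back(r);
    return result;
}

int main()
{
    std::vector<int> nums{1,2,3};
    auto doubled  = map (nums, [](int i){ return 2*i; });                      // {2,4,6}
    auto repeated = bind(nums, [](int i){ return std::vector<int>(i, i); });   // {1,2,2,3,3,3}
    (void)doubled; (void)repeated;
}
}}}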

This observation can be turned into a positive use-case for monadic structures -- whenever there is an ''additional cross-cutting concern'' within the world of values, which would otherwise necessitate our constant attention all over the place. The Monad technique can then be used to re-shuffle this concern in such a way that it becomes possible to keep it //mentally separate.// Thus, while it is not possible to //remove// that concern, re-casting it as Monad creates a more regular structure, which is easier to cope with. A Monad can thus be seen as an ''augmented or „amplified“ value'' -- a value outfitted with additional capabilities. This view might be illustrated by looking at some of the most prominent applications (a small code sketch follows below the list):
;failure handling
:when we mark an expression as prone to failure by wrapping it into some //exception handling monad,//
:it becomes easy to combine several such unreliable operations into a single program unit with just one failure handler
;partially defined operations
:when an operation can only work on some part of the value domain, we can mark the result as //optional//
:and combining several tasks with optional result into one chain of operations can then be done completely generically
;multiple results
:when some tasks might possibly produce multiple results, we can mark those with a //container monad,//
:and collecting and combining of such multifold results can then be dealt with completely separately from the generating actions
;secondary state attributions
:when we have to observe, extend and communicate additional data beyond what is involved in the basic computation
:the collection and management of such additional attributions can be externalised with a //state monad//
;local technicalities
:when some algorithm or computation involves specific technicalities to get at the underlying data
:a monad can be used to encapsulate those details and expose only abstracted higher-level //operation primitives//
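
As a hedged sketch of the first two use-cases from the list above, the following C++ snippet chains several operations which may fail, using {{{std::optional}}} as the monad; the helper {{{then}}} plays the role of {{{bind}}} and is written out for illustration, it is not an existing library function:
{{{
#include <optional>
#include <string>
#include <iostream>

// "then" plays the role of bind for std::optional (illustration only)
template<typename A, typename F>
auto then(std::optional<A> const& m, F f) -> decltype(f(*m))
{
    if (m) return f(*m);
    return std::nullopt;
}

std::optional<int> parseNumber(std::string const& s)               // may fail
{   try { return std::stoi(s); } catch (...) { return std::nullopt; }   }

std::optional<int> reciprocalPercent(int n)                        // partial: undefined for n == 0
{   if (n == 0) return std::nullopt; return 100 / n;   }

std::optional<std::string> format(int n)                           // may fail in a real setting
{   return "result: " + std::to_string(n);   }

int main()
{
    // the cross-cutting concern "this step may fail" is handled once by the chaining,
    // instead of an explicit check after every single call
    auto result = then(then(parseNumber("25"), reciprocalPercent), format);
    std::cout << result.value_or("(no result)") << '\n';           // result: 4
}
}}}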

!!!Building with monads
As pointed out above, while all those disparate usages might share some abstract structural similarities, no overarching theme can be found to span them all. If we try to distil an essence from all those usages, we're bound to end up with nothing. The reason is, //we do apply// monadic techniques while coping with the problem, but there is //nothing inherently „monadic“ in the nature// of the things at hand.

Yet when we deal with such matters, other concepts, notions and patterns are better suited to guide our thinking and our actions, because they concur with the inherent nature of things:
;builder
:a builder allows assembling a very specific structure from building blocks;
:once done, the structure is self-contained and can be operated and used devoid of all the intricacies involved in building it.
:Monads are //exceptionally viable// for performing this process of assembly within the builder.
;domain specific language
:a DSL can establish a higher level of abstraction, where local handling details are stashed away.
:Monads can be useful as a supporting backbone below a DSL, especially in cases where a full language system and runtime engine would be overkill
;abstract composite value
:an abstract data type or complex value can be used to represent computations beyond simple numbers,
:prominent examples being colours with colour space or measurement values with numerical precision and unit of measurement.
:Monads can be used to build the basic combining operations on such custom types, and their interplay.
;engine
:an engine guides and enacts ongoing computations, thereby transitioning internal state
:Monads can be used as a blueprint for managing these internal state transitions, and to prevent them from leaking out into the target of processing
;context
:a (possibly global) context can be used to represent all further concerns, which need to be there as well.
:Monads can be used to create a distinct, secondary system of communication (»wormhole«) and they can be used to capture and replay events.
:beyond that, for most practical purposes, simple plain mutable variables plus nested scopes //are a solution superior to the application of Monads.//
PageTemplate
|>|SiteTitle - SiteSubtitle|
|>|MainMenu|
|DefaultTiddlers<<br>><<br>><<br>>ViewTemplate<<br>><<br>>EditTemplate|SideBarOptions|
|~|OptionsPanel|
|~|SideBarTabs|
|~|AdvancedOptions|
|~|<<tiddler Configuration.SideBarTabs>>|

''StyleSheet:'' StyleSheetColors - StyleSheetLayout - StyleSheetPrint

ColorPalette

SiteUrl
//pattern of collaboration for loosely coupled entities, to be used for various purposes within the Application...//
Expecting Advice and giving Advice &mdash; this collaboration ranges somewhere between messaging and dynamic properties, but cross-cuts the primary, often hierarchical relation of dependencies. Always happening at a certain //point of advice,// this exchange of data gains a distinct, static nature -- it is more than just a convention or a protocol. On the other hand, Advice is deliberately kept optional and received synchronously (albeit possibly within a continuation), this way allowing for loose coupling. This indirect, cross-cutting nature allows building integration points into library facilities, without enforcing a strong dependency link.

!Specification
''Definition'': Advice is an optional, mediated collaboration between entities taking on the roles of advisor and advised, thereby passing a custom piece of advice data, managed by the advice support system. The possibility of advice is created by both of the collaborators entering the system, where the advised entity exposes a point of advice, while the advising entity provides an actual advice value.
[>img[Entities for Advice collaboration|uml/fig141445.png]]


!!Collaborators
* the ''advised'' entity 
* the ''advisor''
* ''point of advice''
* ''advice system''
* the ''binding''
* the ''advice''
Usually, the ''advised'' entity opens the collaboration by requesting an advice. The ''advice'' itself is a piece of data of a custom type, which needs to be //copyable.// Obviously, both the advised and the advisor need to share knowledge about the meaning of this advice data. (In a more elaborate version we might allow the advisor to provide a subclass of the advice interface type.) The actual advice collaboration happens at a ''point of advice'', which needs to be derived first. To this end, two prerequisites have to be fulfilled (in no fixed sequence): the advised entity puts up an ''advice request'' by specifying its ''binding'', which is a pattern for matching; an entity about to give advice attaches a possible ''advice provision'', combined with an advisor binding, which similarly is a pattern. The ''advice system'' as mediator resolves both sides by matching (which in the most general case could be a unification).

This process creates an ''advice point solution'' &mdash; allowing the advisor to feed the piece of advice into a kind of communication channel (&raquo;advice channel&laquo;), causing the advice data to be placed into the point of advice. After passing a certain (implementation defined) break point, the advice leaves the influence of the advisor and gets exposed to the advised entities. Especially, this involves copying the advice data into a location managed by the advice system. In the standard case, the advised entity picks up the advice synchronously (and non-blocking). Of course, there could be a status flag to find out if there is new advice. Moreover, typically the advice data type is default constructible and thus there is always a basic form of advice available, thereby completely decoupling the advised entity from the timings related to this collaboration.
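
To illustrate how the two roles are intended to interact, here is a toy in-memory sketch; the class and member names ({{{Provision}}}, {{{Request}}}, {{{setAdvice}}}, {{{getAdvice}}}) are assumptions for the sake of the example, not the actual Lumiera interface, and the "matching" is reduced to exact binding equality instead of pattern matching:
{{{
#include <string>
#include <map>
#include <iostream>

namespace advice_sketch {

    // per-advice-type "advice system": binding string -> latest advice value
    template<typename AD>
    std::map<std::string, AD>& adviceTable()
    {   static std::map<std::string, AD> table;  return table;  }

    template<typename AD>
    class Provision {
        std::string binding_;
    public:
        explicit Provision(std::string binding) : binding_(std::move(binding)) {}
        void setAdvice(AD const& advice) { adviceTable<AD>()[binding_] = advice; }   // copied into the system
        void retractAdvice()             { adviceTable<AD>().erase(binding_); }
    };

    template<typename AD>
    class Request {
        std::string binding_;
        AD fallback_{};                                 // default constructed "no solution yet" advice
    public:
        explicit Request(std::string binding) : binding_(std::move(binding)) {}
        AD const& getAdvice() const
        {
            auto& tab = adviceTable<AD>();
            auto pos = tab.find(binding_);              // toy matching: exact binding equality only
            return pos != tab.end() ? pos->second : fallback_;
        }
    };
}

struct ProxyConfig { int resolutionDivisor = 1; };      // some custom advice data type

int main()
{
    advice_sketch::Request<ProxyConfig> request("proxyMedia(timeline1)");
    std::cout << request.getAdvice().resolutionDivisor << '\n';   // 1 : default advice

    advice_sketch::Provision<ProxyConfig> provision("proxyMedia(timeline1)");
    provision.setAdvice(ProxyConfig{4});                          // advisor feeds advice into the system

    std::cout << request.getAdvice().resolutionDivisor << '\n';   // 4 : matched advice solution
}
}}}
Note how the advice is copied into storage owned by the "system", so the advisor may go away, while the advised entity always receives at least a default constructed value.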

!!extensions
In a more elaborate scheme, the advised entity could provide a signal to be invoked either in the thread context of the advisor (being still blocked in the advice providing call), or in a completely separate thread. A third solution would be to allow the advised entity to block until receiving new advice. Both of these more elaborate schemes would also allow building an advice queue &mdash; thereby developing the advice collaboration into a kind of messaging system. Following this route seems questionable though.

&rarr; AdviceSituations
&rarr; AdviceRequirements
&rarr; AdviceImplementation
[<img[Advice solution|uml/fig141573.png]]





The advice system is //templated on the advice type// &mdash; so basically any collaboration is limited to a distinct advice type. But currently (as of 5/2010), this typed context is kept on the interface level, while the implementation is built on top of a single lookup table (which might create contention problems in the future and thus may be changed without further notice). The advice system is a system wide singleton service, but it is never addressed directly by the participants. Rather, instances of ~AdviceProvision and ~AdviceRequest act as points of access. But these aren't completely symmetric; while the ~AdviceRequest is owned by the advised entity, the ~AdviceProvision is a value object, a uniform holder used to introduce new advice into the system. ~AdviceProvision is copied into an internal buffer and managed by the advice system, as is the actual advice item, which is copied alongside.

In order to find matches and provide advice solutions, the advice system maintains an index data structure called ''~Binding-Index''. The actual binding predicates are represented by value objects stored within this index table. The matching process is triggered whenever a new possibility for an advice solution enters the system, which could be a new request, a new provision or a change in the specified bindings. A successful match causes a pointer to be set within the ~AdviceRequest, pointing to the ~AdviceProvision acting as solution. Thus, when a solution exists, the advised entity can access the advice value object by dereferencing this pointer.

A new advice solution just results in setting a different pointer, which is atomic and doesn't need to be protected by locking. But note, omitting the locking means there is no memory barrier; thus the advised entity might not see any changed advice solution, until the corresponding thread(s) refresh their CPU cache. This might or might not be acceptable, depending on the context, and thus is configurable as policy. Similarly, the handling of default advice is configurable. Usually, advice is a default constructible value object. In this case, when there isn't any advice solution (yet), a pseudo solution holding the default constructed advice value is used to satisfy any advice access by the client (advised entity). The same can be used when the actual ~AdviceProvision gets //retracted.// As an alternative, when this default solution approach doesn't work, we can provide a policy either to throw or to wait blocking &mdash; but this alternative policy is similarly implemented with a //null object// (a placeholder ~AdviceProvision).

Anyway, this implementation technique causes the advice system to collect some advice provisions, bindings and advice objects over time. It should use a pooling custom allocator in the final version. As the number of advisors is expected to be rather small, the storage occupied by these elements, which is effectively blocked until application exit, isn't considered a problem.

!organising the advice solution
This is the tricky part of the whole advice system implementation. A naive implementation will quickly degenerate in performance, as costs are of order ~AdviceProvisions * ~AdviceRequests * (average number of binding terms). But contrary to the standard solutions for rule based systems (either forward or backward chaining), in this case always complete binding sets are to be matched, which allows the effort to be reduced.

!!!solution mechanics
The binding patterns are organised by //predicate symbol, and the lists are normalised.// A simple normalisation could be lexicographic ordering of the predicate symbols. Then the resulting representation can be //hashed.// When all predicates are constant, a match can be detected by hashtable lookup; otherwise, in case some of the predicates contain variable arguments ({{red{planned extension}}}), the lookup is followed by a unification. For this to work, we'll have to include the arity into the predicate symbols used in the first matching stage. Moreover, we'll create a //matching closure// (functor object), internally holding the arguments for unification. This approach allows for //actual interpretation of the arguments.// It is conceivable that in special cases we'll get multiple instances of the same predicate, just with different arguments. The unification of these terms needs to consider each possible pairwise combination (Cartesian product) &mdash; but working out the details of the implementation can safely be deferred until we actually hit such a special situation, thanks to the implementation by a functor.

Fortunately, the calculation of these normalised patterns can be separated completely from the actual matching. Indeed, we don't even need to store the binding patterns at all within the binding index &mdash; storing the hash value is sufficient (and in case of patterns with arguments we'll attach the matching closure functor). Yet we still need to store a marker for each successful match, together with back-links, in order to handle changing and retracting of advice.
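
A small sketch of this normalisation and hashing step; the structure and function names are illustrative assumptions, and the hash combination is just a simple example:
{{{
#include <string>
#include <vector>
#include <tuple>
#include <algorithm>
#include <functional>
#include <cstddef>
#include <iostream>

struct Predicate {
    std::string symbol;
    unsigned    arity;        // arity is folded into the hash, so foo/1 and foo/2 differ
};

// normalise: sort lexicographically by symbol (and arity), so that
// equivalent conjunctions of predicates yield the same representation
std::vector<Predicate> normalise(std::vector<Predicate> binding)
{
    std::sort(binding.begin(), binding.end(),
              [](Predicate const& a, Predicate const& b)
              { return std::tie(a.symbol, a.arity) < std::tie(b.symbol, b.arity); });
    return binding;
}

// hash the normalised pattern into a single value usable for the first matching stage
std::size_t bindingHash(std::vector<Predicate> const& binding)
{
    std::size_t seed = 0;
    for (auto const& p : normalise(binding))
    {
        std::size_t h = std::hash<std::string>{}(p.symbol + '/' + std::to_string(p.arity));
        seed ^= h + 0x9e3779b9 + (seed << 6) + (seed >> 2);    // simple hash combine
    }
    return seed;
}

int main()
{
    std::vector<Predicate> b1 {{"proxyMedia",1}, {"session",1}};
    std::vector<Predicate> b2 {{"session",1}, {"proxyMedia",1}};   // same conjunction, different order
    std::cout << std::boolalpha
              << (bindingHash(b1) == bindingHash(b2)) << '\n';     // true
}
}}}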

!!!storage and registrations
We have to provide dedicated storage for the actual advice provisions and for the index entries. Mostly, these objects to be managed are attached through a single link &mdash; and moreover the advice system is considered performance critical, so it doesn't make sense to implement the management of these entries by smart-ptr. This rules out ~TypedAllocationManager and prompts us to write a dedicated storage frontend, later to be backed by Lumiera's mpool facility.
* both the advice provision and the advice requests attach to the advice system after fulfilling some prerequisites; they need to detach automatically on destruction.
* in case of the provision, there is a cascaded relation: the externally maintained provision creates an internal provision record, which in turn attaches an index entry.
* both in case of the provision and the request, the relation to the index bears some multiplicity:
** a multitude of advice provisions can attach with the same binding (which could be the same binding pattern terms, but different variable arguments). Each of them could create a separate advice solution (at least when variable arguments are involved); it would be desirable to establish a defined LIFO order for any search for possibly matching advice.
** a multitude of advice requests can attach with the same binding, and each of them needs to be visited in case a match is detected.
* in both cases, any of these entries could be removed any time on de-registration of the corresponding external entity
* we need to track existing advice solutions, because we need to be able to overwrite with new advice and to remove all solutions bound to a given pattern about to leave the system. One provision could create a large number of solutions, while each registration always holds onto exactly one solution (which could be a default/placeholder solution though)

!!!!subtle variations in semantics
While generally advice has value semantics and there is no ownership or distinguishable identity, the actual implementation technique creates the possibility for some subtle semantic variations. At the time of this writing (5/2010) no external point of reference was available to decide upon the correct implementation variant. These variations become visible when advice is //retracted.// Ideally, a new advisor would re-attach to an existing provision and supersede the contained advice information with new data. Thus, after a chain of such new provisions, all attaching with the identical binding, when finally the advice gets retracted, all advice provisions would be gone and we'd fall back onto the default solution. Thus, "retracting" would mean to void any advice given with this binding.
But there is another conceivable variation of semantics, which yields some benefits implementation-wise: Advice can be provided as "I don't care what was said, but here is new information". In this case, the mechanism of resolving and finding a match would be responsible for picking the latest addition, while the provisions would just be dumped into the system. In this case, "retracting" would mean just to cancel //one specific//&nbsp; piece of information and might cause an older advice solution to be uncovered and revived. The default (empty) solution would be used in this case only after retracting all advice provisions.

!!!!implementation variants with respect to attachment and memory management
Aside from the index, handling of the advice provisions turns out to be tricky.
* management by ref-count was ruled out due to contention and locality considerations
* the most straightforward implementation would be for the ~AdviceProvision within the advisor to keep a kind of "unofficial" link to "its" provision, allowing the advisor to modify and retract it during its lifetime. When going away without retracting (the default behaviour), the provision, as added into the system, would remain there as a dangling entry. It is still reachable via the index, but not maintained in any further way. If memory usage turns out to be a problem, we'd need to enqueue these entries for clean-up.
* but as this simple solution contradicts the general advice semantics in a subtle way (see previous paragraph), we could insist on really re-capturing and retracting previous advice automatically on each new advice provision or modification. In this case, due to the requirement of thread safety, each addition, binding modification, placing of new advice or retraction would require to do an index search to find an existing provision with equivalent binding (same binding definition, not just a matching binding pattern). As a later provision could stomp upon an existing provision without the original advisor noticing this, we can't use the internal references anymore; we really need to search each time and also need a global lock during the modification transaction.
* an attempt to reduce this considerable overhead would be to use a back-link from the provision as added to the system to the original source (the ~AdviceProvision owned by the advisor). On modification, this original source would be notified and thus detached. Of course this is tricky to implement correctly, and also requires locking.
The decision for the initial implementation is to use the first variant and just accept the slightly imprecise semantics.
When copying a Provision, the hidden link to existing advice data is //not shared.//

!!!!de-allocation of advice data
It is desirable that the dtor of each piece of advice data be called eventually. But ensuring this reliably is tricky, because advice data may be of various types and is added to the system to remain available, even after the original {{{advice::Provision}}} went out of scope. Moreover, the implementation decision was //not//&nbsp; to employ a vtable for the advice collaborators and data holders, so we're bound to invoke the dtor with the correct specific type.
There are some special cases when de-allocation happens while the original provision is still alive (new advice, changed binding, retracting). But in any other case, responsibility for de-allocation has to be taken by the ~AdviceSystem, which unfortunately can't handle the specific type information. Thus the original provision needs to provide a deleter function, and there is no way to avoid storing a function pointer to this deleter within the ~AdviceSystem, together with the advice data holder.
It seems reasonable to create this deleter right away and not to share the link to advice data when copying a provision, to keep responsibilities straight. {{red{Question: does this even work??}}} To be verified: does the address of the advice data buffer alone really determine what is found as "existing" provision?
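
The type-erased deleter idea can be sketched as follows (illustrative names, not the actual ~AdviceSystem code); the deleter is created while the concrete type is still known, and later invoked through a plain function pointer, without any vtable:
{{{
#include <new>
#include <string>
#include <iostream>

struct AdviceEntry {                 // what the advice system stores per advice datum
    void* buffer;                    // opaque storage holding the copied advice value
    void (*deleter)(void*);          // knows the concrete type and invokes the proper dtor
};

template<typename AD>
void destroyAdvice(void* p)          // instantiated while the type AD is known
{
    static_cast<AD*>(p)->~AD();
    ::operator delete(p);
}

template<typename AD>
AdviceEntry storeAdvice(AD const& advice)
{
    void* buff = ::operator new(sizeof(AD));
    new(buff) AD(advice);                              // copy advice into system-managed storage
    return AdviceEntry{buff, &destroyAdvice<AD>};
}

int main()
{
    AdviceEntry entry = storeAdvice(std::string("a piece of advice"));
    std::cout << *static_cast<std::string*>(entry.buffer) << '\n';
    entry.deleter(entry.buffer);                       // correct dtor invoked without a vtable
}
}}}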

!!!lifecycle considerations
Behind the scenes, hidden within the {{{advice.cpp}}} implementation file, the ~AdviceSystem is maintained as singleton. According to a general lifecycle policy within Lumiera, no significant logic is allowed to execute in the shutdown phase of the application, once {{{main()}}} has exited. Thus, any advice related operations might throw {{{error::Logic}}} after that point. The {{{~AdviceSystem()}}} also is a good place to free any buffers holding incorporated advice data, after having freed the index datastructure referring to this buffer storage, of course.

!!!!handling of default advice
Basically, the behaviour when requesting non-existing advice may be configured by policy. But the default policy is to return a ref to a default constructed instance of the advice type in that case. The (implementation related) problem is just that we return advice by {{{const&}}}, not by value, so we're bound to create and manage this piece of default advice during the lifetime of the ~AdviceSystem. The way the ~AdviceSystem is accessed (only through the frontend of {{{advice::Request}}} and {{{advice::Provision}}} objects), in conjunction with the desire to control this behaviour by policy, creates a tricky implementation situation.
* regarding the lifecycle (and also from the logical viewpoint) it would be desirable to handle this "default" or "no solution" case similar to accessing an existing solution. But unfortunately doing so would require a fully typed context; thus basically on inserting a new request, when returning from the index search without a dedicated solution, we'd need to fabricate a fallback solution to insert it into the provision index, while still holding the index lock. At that point it is not determined if we ever need that fallback solution. Alternatively we could consider fabricating this fallback solution on first unsuccessful advice fetch. But this seems still worse, as it turns a (possibly even lock free) ptr access into an index operation. Having a very cheap advice access seems like an asset.
* on the other hand, using some separate kind of singleton bundle just for these default advice data (e.g. a templated version of //Meyer's Singleton...//), the fallback solution can be settled independently of the ~AdviceSystem, right in the {{{advice.hpp}}} and using static memory; but the downside is that the fallback solution might now be destroyed prior to shutdown of the ~AdviceSystem, as it lives in another compilation unit.
Thus the second approach looks favourable, but we should //note the fact that it is hard to guard this possible access to an already destroyed solution,// unless we decline using the advice feature after the end of {{{main()}}}. Such a policy seems to be reasonable anyway, as the current implementation also has difficulties preventing access to an already destroyed {{{advice::Provision}}}, which is incorporated in the ~AdviceSystem, but accessed through a direct pointer in the {{{advice::Request}}}.
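
The "templated Meyer's Singleton" mentioned above could look roughly like this (a minimal sketch; the lifecycle caveat just discussed applies):
{{{
#include <iostream>

// per-type default advice, constructed lazily on first use and living in static storage
template<typename AD>
AD const& defaultAdvice()
{
    static const AD theDefault{};
    return theDefault;
}

struct ProxyConfig { int resolutionDivisor = 1; };     // some advice data type

int main()
{
    // a Request without any matching Provision would hand out a reference like this one
    std::cout << defaultAdvice<ProxyConfig>().resolutionDivisor << '\n';    // 1
}
}}}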

!!!locking and exception safety
The advice system is (hopefully) written such that it cannot be corrupted in case an exception is thrown. Adding new requests, setting advice data on a provision and any binding change might fail due to exhausted memory. The advice system remains operational in this case, but the usual reaction would be //subsystem shutdown,// because the Advice facility typically is used in a very low-level manner, assuming it //just works.// As far as I can see, the other mutation operations can't throw.

The individual operations on the interface objects are //deliberately not thread-safe.// The general assumption is that {{{advice::Request}}} and {{{advice::Provision}}} will be used in a safe environment and not be accessed or modified concurrently. A notable exception to this rule is accessing Advice: as this just includes checking and dereferencing a pointer, it might be done concurrently. But note, //the advice system does nothing to ensure visibility of the solution within a separate thread.// If this thread still has the old pointer value in its local cache, it won't pick up the new solution. In case the old solution got retracted, this might even cause access to already released objects. You have been warned. So it's probably a good idea to ensure a read barrier happens somewhere in the enclosing usage context prior to picking up a possibly changed advice solution concurrently.
''Note'': the underlying operations on the embedded global {{{advice::Index}}} obviously need to be protected by locking the whole index table on each mutation, which also ensures a memory barrier and thus propagates changed solutions. While this settles the problem for the moment, we might be forced into more fine grained locking due to contention problems later on...
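
For illustration, publishing the solution pointer with explicit release/acquire ordering would be one way to obtain such a barrier without locking; this is a sketch of the general technique, not the actual implementation:
{{{
#include <atomic>
#include <string>
#include <iostream>

struct Provision { std::string advice; };

// the solution pointer held within the AdviceRequest (stand-in)
std::atomic<Provision const*> solution{nullptr};

void publishSolution(Provision const* newSolution)
{
    // "release": everything written to *newSolution happens-before a thread
    // that observes this store with an acquire load
    solution.store(newSolution, std::memory_order_release);
}

void pickUpAdvice()
{
    // "acquire": pairs with the release store, so the advice data is fully visible
    if (Provision const* current = solution.load(std::memory_order_acquire))
        std::cout << current->advice << '\n';
}

int main()
{
    static Provision p{"some advice"};
    publishSolution(&p);
    pickUpAdvice();
}
}}}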

!!!index datastructure
It is clear by now that the implementation datastructure has to serve as a kind of //reference count.// Within this datastructure, any constructed advice solution needs to be reflected somehow, to prevent us from discarding an advice provision still accessible. Allowing lock-free access to the advice solution (planned feature) adds a special twist, because in this case we can't even tell for sure if an overwritten old solution is actually gone (or if it is still referenced from some thread's cached memory). This could be addressed by employing a transactional approach (which might be good anyway) -- but I tend to leave this special concern aside for now.

To start with, any advice matching and solution will //always happen within matching buckets of a hash based pattern organisation.// The first stage of each access involves using the correct binding pattern, and this binding pattern can be represented within the index data structure by the binding pattern's hash value. Since the advice typing can be translated at interface level into a further predicate within the binding, the use of these binding hashes as first access step also limits access to advice with proper type. All further access or mutation logic takes place within a sub structure corresponding to this top-level hash value. The binding index thus relies on two hashtables (one for the requests and one for the provisions), but using specifically crafted datastructures as buckets. The individual entries within these bucket sub structures in both cases will be comprised of a binding matcher (to determine if a match actually happens) and a back-link to the registered entity (provision or request). Given the special pattern of the advice solutions, existing solutions could be tracked within the entries at the request side.
* Advice provisions are expected to happen only in small numbers; they will be searched stack-like, starting from the newest provisions, until a match is found.
* Each advised entity basically creates an advice request, so there could be a larger number of request entries. In the typical search triggered from the provision side, each request entry will be visited and checked for match, which, if successful, causes a pointer to be set within the ~AdviceRequest object (located outside the realm of the advice system). While -- obviously -- multiple requests with similar binding match could be folded into a sub-list, we need actual timing measurements to determine the weight of these two calculation steps of matching and storing, which together comprise the handling of an advice solution.

The above considerations don't fully solve the question how to represent a computed solution within the index data structure, candidates being to use the index within the provision list, or a direct pointer to the provision or even just to re-use the pointer stored into the ~AdviceRequest. My decision is to do the latter. Besides solutions found by matching, we need //fallback solutions// holding a default constructed piece of advice of the requested type. As these defaults aren't correlated at all to the involved bindings, but only to the advice type as such, it seems reasonable to keep them completely apart, like e.g. placing them into static memory managed by the ~AdviceProvision template instantiations.

!!!interactions to be served by the index
[>img[Advice solution|draw/adviceBindingIndex1.png]]

;add request
:check existing provisions starting from top until match; use default solution in case no match is found; publish solution into the new request; finally attach the new request entry
;remove request
:just remove the request entry
;modify request
:handle as if newly added
;add provision
:push new provision entry on top; traverse all request entries and check for match with this new provision entry, publish new solution for each match
;retract provision
:remove the provision entry; traverse all request entries to find those using this provision as advice solution, treat these as if they were newly added requests
;modify provision
:add a new (copy of the) provision, followed by retracting the old one; actually these two traversals of all requests can be combined, so that a request which used the old provision but doesn't match the new one is treated like a new request
<<<
__Invariant__: each request has a valid solution pointer set (maybe pointing to a default solution). Whenever such a solution points to a registered provision, there is a match between the index entries and this is the top-most possible match to any provision entry for this request entry
<<<
Clearly, retracting advice (and consequently also the modification) is expensive. After finishing these operations, the old/retracted provision can be discarded (or put aside in case of non-locking advice access). Other operations don't cause de-allocation, as provisions remain within the system, even if the original advising entity is gone.
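
The index structure and the interactions listed above might be sketched roughly as follows; the type and function names are illustrative assumptions, and the "matching" is reduced to equality of the top-level binding hash (the real implementation additionally applies the binding matcher, respectively a unification):
{{{
#include <cstddef>
#include <unordered_map>
#include <vector>
#include <algorithm>

struct Solution { };                               // stand-in for an advice provision record

struct ProvisionEntry {
    std::size_t     bindingHash;
    Solution const* provision;
};

struct RequestEntry {
    std::size_t      bindingHash;
    Solution const** solutionSlot;                 // back-link: solution pointer inside the AdviceRequest
};

struct BindingIndex {
    // one bucket per top-level binding hash; provisions are searched newest-first (LIFO)
    std::unordered_map<std::size_t, std::vector<ProvisionEntry>> provisions;
    std::unordered_map<std::size_t, std::vector<RequestEntry>>   requests;
    Solution const* defaultSolution = nullptr;     // fallback holding default constructed advice

    void addRequest(std::size_t hash, Solution const** slot)
    {
        auto& bucket = provisions[hash];
        *slot = bucket.empty()? defaultSolution : bucket.back().provision;   // newest match or default
        requests[hash].push_back({hash, slot});
    }

    void addProvision(std::size_t hash, Solution const* prov)
    {
        provisions[hash].push_back({hash, prov});                  // push on top
        for (auto& req : requests[hash])                           // publish new solution on every match
            *req.solutionSlot = prov;
    }

    void retractProvision(std::size_t hash, Solution const* prov)
    {
        auto& bucket = provisions[hash];
        bucket.erase(std::remove_if(bucket.begin(), bucket.end(),
                         [&](ProvisionEntry const& e){ return e.provision == prov; }),
                     bucket.end());
        for (auto& req : requests[hash])                           // re-resolve requests that used it
            if (*req.solutionSlot == prov)
                *req.solutionSlot = bucket.empty()? defaultSolution
                                                  : bucket.back().provision;
    }
};

int main()
{
    BindingIndex index;
    Solution const* slot = nullptr;
    index.addRequest(42, &slot);          // no provision yet: falls back to the default solution
    Solution advice;
    index.addProvision(42, &advice);      // match published into the request's solution slot
    index.retractProvision(42, &advice);  // slot falls back to the default solution again
}
}}}
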
From analysing a number of intended AdviceSituations, some requirements for an Advice collaboration and implementation can be extracted.

* the piece of advice is //not shared// between advisor and the advised entities; rather, it is copied into storage managed by the advice system
* the piece of advice can only be exposed {{{const}}}, as any created advice point solution might be shared
* the actual mode of advice needs to be configurable by policy &mdash; signals (callback functors) might be used on both sides transparently
* the client side (the advised entity) specifies initially, if a default answer is acceptable. If not, retrieving advice might block or fail
* on both sides, the collaboration is initiated specifying an advice binding, which is a conjunction of predicates, --optionally dynamic--^^no!^^
* there is a tension between matching performance and flexibility. The top level should be entirely static (advice type)
* the analysed usage situations provide no common denominator on the preferences regarding the match implementation.
* some cases require just a match out of a small number of tokens, while generally we might get even a double dispatch
* later, possible and partial solutions could be cached, similar to the Rete algorithm. Dispatching a solution should work lock-free
* advice can be replaced by new advice, which causes all matching advice solutions to behave as being overwritten.
* when locking is left out, we can't give any guarantee as to when a given advice gets visible to the advised entity
* throughput doesn't seem to be an issue, but picking up existing advice should be as fast as possible
* we expect a small number of advisors collaborating with a larger number of advised entities.

!!questions
;when does the advice collaboration actually happen?
:when there is both a client (advised) and a server (advisor) and their advice bindings match
;can there be multiple matches?
:within the system as a whole there can be multiple solutions
:but the individual partners never see more than one connection
:each point of advice has exactly one binding and can establish one advice channel
;but when an attempt is made to transfer more information?
:both sides don't behave symmetrically, and thus the consequences are different
:on the client side, advice is just //available.// When there is a newer one, the previous advice is overwritten
:the server side doesn't //contain// advice &mdash; rather, it is placed into the system. After that, the advisor can go away
:thus, if an advisor places new advice into an existing advice provision, this effectively initiates a new collaboration
:if the new advice reaches the same destination, it overwrites; but it may as well reach a different destination this time
;can just one advice provision create multiplicity?
:yes, because of the matching process there could be multiple solutions. But neither the client nor the server is aware of that.
;can advice be changed?
:No. When inserted into the system, the advisor loses any direct connection to the piece of advice (it is copied)
:But an advisor can put up another piece of advice into the same advice provision, thereby effectively overwriting at the destination
;if advice is copied, what about ownership and identity?
:advice has //value semantics.// Thus it has no distinguishable identity beyond the binding used to attach it
:a provision does not "own" advice. It is a piece of information, and the latest information is what counts
;can the binding be modified dynamically?
:this is treated as if retracting the existing point of advice and opening a new one.
;what drives the matching?
:whenever a new point of advice is opened, search for a matching solution happens.
:thus, the actual collaboration can be initiated from both sides
:when a match happens, the corresponding advice point solution gets added into the system
;what about the lifetime of such a solution?
:it is tied to the //referral// &mdash; but there is an asymmetry between server and client
:referral is bound to the server-side / client-side point of advice still being in existence
:but the server-side point of advice is copied into the system, while the client-side one is owned by the client
:thus, when an advisor goes away without explicitly //retracting//&nbsp; the advice, any actual solution remains valid
:on the client side there is an asymmetry: actually, a new advice request can be opened, with an exactly identical binding
:in this case, existing connections will be re-used. But any differences in the binding will require searching a new solution
;is the search for an advice point solution exhaustive?
:from the server side, when a new advice provision / binding is put up, //any// possible advice channel will be searched
:contrary to this, at the client side, the first match found wins and will establish an advice channel.

!decisions
After considering the implementation possibilities, some not completely determined requirements can be narrowed down.
* we //do// support the //retracting of advice.//
* there is always an implicit //default advice solution.//
* advice //is not a messaging system// &mdash; no advice queue
* signals (continuations) are acceptable as an extension to be provided later
* retracting advice means to retract a specific solution. This might or might not uncover earlier solutions (undefined behaviour)
* we don't support any kind of dynamic re-evaluation of the binding match (this means not supporting the placement use case)
* the binding pattern is //interpreted strictly as a conjunction of logic predicates// &mdash; no partial match, but arguments are allowed
* we prepare for a later extension to //full unification of arguments,// and provide a way of accessing the created bindings as //advice parameters.//

Combining all these requirements and properties provides the foundation for the &rarr; AdviceImplementation
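
To make the intended usage tangible, here is a minimal C++ sketch of the //whiteboard// collaboration style. All names ({{{AdviceSystem}}}, {{{publish}}}, {{{retrieve}}}) are illustrative stand-ins and not the actual Lumiera advice API; the binding is reduced to a plain string match instead of a real predicate conjunction.
{{{
// Toy model of the "whiteboard": advice has value semantics, the provision
// copies it into the system, the request picks up the latest matching piece
// or falls back to a default answer.
#include <map>
#include <string>
#include <iostream>

template<class AD>
class AdviceSystem
{
    std::map<std::string, AD> solutions_;   // binding pattern -> latest advice value
  public:
    void publish (std::string const& binding, AD const& advice)
      {
        solutions_[binding] = advice;       // new advice overwrites at the destination
      }
    void retract (std::string const& binding)
      {
        solutions_.erase (binding);         // may or may not uncover earlier advice
      }
    AD retrieve (std::string const& binding, AD const& defaultAnswer)  const
      {
        auto pos = solutions_.find (binding);      // strict match on the binding
        return pos != solutions_.end()? pos->second : defaultAnswer;
      }
};

int main()
{
    AdviceSystem<int> advice;
    advice.publish ("frameGrid(pal)", 25);                        // advisor may go away afterwards
    std::cout << advice.retrieve ("frameGrid(pal)", 30) << '\n';  // -> 25
    std::cout << advice.retrieve ("frameGrid(ntsc)", 30) << '\n'; // -> default answer 30
}
}}}
Note how the decisions above show through: the advisor loses any connection to the published value, retrieval never blocks when a default answer is given, and retracting simply removes the solution.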

[[Advice]] is a pattern extracted from several otherwise unrelated constellations.
For the initial analysis in 2010, several use cases were investigated -- ironically, as of 2019, none of these initial use cases actually became relevant. Nonetheless, the AdviceImplementation turned out to be viable as a ''whiteboard system'' to exchange dynamic facts without coupling.

!Actual usages of the Advice System
* for access to time grid and timecode format definitions
* for access to custom defined UI style elements

!Historical / Theoretical use-cases

!!!Proxy media in the engine
Without rebuilding the engine network, we need the ability to reconfigure some parts to adapt to low resolution place-holder media temporarily. The collaboration required to make this happen seems to ''cross-cut'' the normal processing logic. Indeed, the nature of the adjustments is highly context dependent &mdash; not every processing node needs to be adjusted. There is a dangerous interference with the ongoing render processes, which calls for the possibility to pick up this information synchronously.
* the addressing and delivery of the advice is based on a mix of static (type) and dynamic information
* it is conceivable that the actual matching may even include a token present in the direct invocation context (but this possibility was ruled out by later decision)
* the attempt to receive and pick up advice needs to be failsafe
* locking should be avoided by design

!!!Dependency injection for testing
While inversion of control is a guiding principle on all levels, the design of the Lumiera application deliberately stays just below the level of employing a dependency injection container. Instead, common services are accessible //by type// and the builder pattern is used more explicitly at places. Interestingly, the impact on writing unit tests was far less serious than one might expect, based on the usual reasoning of D.I. proponents. But there remain some situations where sharing a common test fixture would come in handy
* here the test depending on a fixture puts up a hard requirement for the actual advice to be there.
* thus, the advice support system can be used to communicate a need for advice
* but it seems unreasonable to actually extend it to transmit a control flow

!!!properties of placement
The placement concept plays a fundamental role within Lumiera's HighLevelModel. Besides just being a way of sticking objects together and defining the common properties of //temporal position and output destination,// we try to push this approach to enable a more general, open and generic use. "Placement" is understood as locating within a grid comprised of various degrees of freedom &mdash; where locating in a specific way might create additional dimensions to be included into the placement. The standard example is an output connection creating additional adjustable parameters controlling the way the connected object is embedded into a given presentation space (consider e.g. a sound object, which &mdash; just by connection, gains the ability of being //panned// by azimuth, elevation and distance)
* in this case, obviously the collaboration is n:m, while each partner preferably should only see a single advice link.
* advice is used here to negotiate a direct collaboration, which is then handed off to another facility (wiring a control connection)
* the possibility of an advice collaboration in this case is initiated rather from the side of the advisor
* deriving an advice point solution includes some kind of negotiation or active re-evaluation
* the possible advisors have to be queried according to their placement scope relations
* this querying might even trigger a resolution process within the advising placement.
__Note__: after detailed analysis, this use case was deemed beyond the scope of the [[Advice]] core concept and idea.
//As a use case, it was dropped.// But we retain some of the properties discovered by considering this scenario, especially the n:m relation, the symmetry in terms of opening the collaboration, and the possibility to have a specially implemented predicate in the binding pattern.

&rarr; AdviceRequirements
Memory management facility for the low-level model (render nodes network). The model is organised into temporal segments, which are considered to be structurally constant and uniform. The objects within each segment are strongly interconnected, and thus each segment is being built in a single build process and is replaced or released as a whole. __~AllocationCluster__ implements memory management to support this usage pattern. It owns a number of object families of various types.[>img[draw/AllocationCluster.png]]
* [[processing nodes|ProcNode]] &mdash; probably with several subclasses (?)
* [[wiring descriptors|WiringDescriptor]]
* the input/output descriptor arrays used by the latter

For each of these families we can expect an initially undetermined (but rather large) number of individual objects, which will typically be allocated within a short timespan and which are to be released cleanly on destruction of the AllocationCluster.

''Problem of calling the dtors''
Even if the low-level memory manager(s) may use raw storage, we require that the allocated objects' destructors be called. This means keeping track at least of the number of objects allocated (without wasting too much memory for bookkeeping). Besides, as the objects are expected to be interconnected, it may be dangerous to destroy a given family of objects while another family of objects may rely on the former in its destructor. //If we happen to get into this situation,// we need to define a priority order on the types and assure the destruction sequence is respected.
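
A minimal sketch of this bookkeeping in C++, under simplifying assumptions: the hypothetical {{{AllocationCluster}}} below registers a type-erased deleter per object and destroys in reverse creation order, whereas the real facility organises storage per object family and would apply the priority order discussed above.
{{{
// Sketch only: ensure destructors are called when the cluster goes away.
#include <memory>
#include <utility>
#include <vector>

class AllocationCluster
{
    std::vector<std::unique_ptr<void, void(*)(void*)>> objects_;  // one registered dtor per object
  public:
    template<class TY, typename... ARGS>
    TY& create (ARGS&&... args)
      {
        auto* obj = new TY (std::forward<ARGS> (args)...);
        objects_.emplace_back (obj, [](void* p){ delete static_cast<TY*> (p); });
        return *obj;
      }

   ~AllocationCluster()
      {                              // destroy in reverse creation order, so objects created
        while (!objects_.empty())    // later may still rely on earlier ones in their dtors
            objects_.pop_back();
      }
};
}}}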

&rarr; see MemoryManagement
Asset management is a subsystem on its own. Assets are "things" that can be loaded into a session, like Media, Clips, Effects, Transitions. It is the "bookkeeping view", while the Objects in the Session relate to the "manipulation and process view". Some Assets can be //loaded// and a collection of Assets is saved with each Session. Besides, there is a collection of basic Assets always available by default.

The Assets are important reference points holding the information needed to access external resources. For example, a Clip asset can reference a Media asset, which in turn holds the external filename from which to get the media stream. For Effects, the situation is similar. Assets thus serve two quite distinct purposes. One is to load, list, group, search and browse them, and to provide an entry point to create new or get at existing MObject instances in the Session, while the other purpose is to provide attribute and property information to the inner parts of the engine, while at the same time isolating and decoupling them from environmental details.

We can distinguish several different Kinds of Assets, each one with specific properties. While all these Kinds of Assets implement the basic Asset interface, they in turn are the __key abstractions__ of the asset management view. Mostly, their interfaces will be used directly, because they are quite different in behaviour. Thus it is common to see asset related operations being templated on the Asset Kind. 
&rarr; see also [[Creating and registering Assets|AssetCreation]]
[img[Asset Classes|uml/fig130309.png]]

!Media Asset
Some piece of Media Data accessible at some external Location and able to be processed by Lumiera. A Media File on Harddisk can be considered as the most basic form of Media Asset, with some important derived flavours, like a Placeholder for a currently unavailable Source, or Media available in different Resolutions or Formats.
* __outward interface operations__ include querying properties, creating a Clip MObject, controlling processing policy (low res proxy placeholders, interlacing and other generic pre- and postprocessing)
* __inward interface operations__ include querying filename, codec, offset and any other information necessary for creating a source render node, getting additional processing policy decisions (handling of interlacing, aspect ratio).
&rarr; MediaAsset

!Processing Asset
Some software component able to work on media data in the Lumiera Render engine Framework. This includes all sorts of loadable effects, as well as some of the standard, internal facilities (Mask, Projector). Note that Processing Assets typically provide some attachment Point or means of communication with GUI facilities.
* __outward interface operations__ include getting name and description, investigating the media types the processor is able to handle, causing the underlying module to be actually loaded...
* __inward interface operations__ include resolving the actual processing function.
&rarr; ProcAsset

!Structural Asset
Some of the building blocks providing the framework for the objects placed into the current Session. Notable examples are [[processing pipes|Pipe]] within the high-level-model, Viewer attachment points, Sequences, Timelines etc.
* __outward interface operations__ include...
* __inward interface operations__ include...
&rarr; StructAsset {{red{still a bit vague...}}}

!Meta Asset
Any resources related to the //reflective recourse of the application on itself,// including parametrisation and customisation aspects and similar metadata, are categorised and tracked apart from the primary entities. Examples being types, scales and quantisation grids, decision rules, control data stores (automation data), annotations attached to labels, inventory entities, error items etc.
* __outward interface operations__ include...
* __inward interface operations__ include...
&rarr; MetaAsset {{red{just emerging as of 12/2010}}}

!!!!still to be worked out..
is how to implement the relationship between [[MObject]]s and Assets. Do we use direct pointers, or do we prefer an ID + central registry approach? And how to handle the removal of an Asset.
&rarr; see also [[analysis of mem management|ManagementAssetRelation]]

  //9/07: currently implementing it as follows: use a refcounting-ptr from Clip-~MObject to asset::Media while maintaining a dependency network between Asset objects. We'll see if this approach is viable//

{{red{NOTE 8/2018}}} there seems to be a fuzziness surrounding the distinction between StructAsset and MetaAsset.
I am suspicious this is a distinction //merely derived from first principles...// &rarr; {{red{Ticket #1156}}}
Assets are created by Factories returning smart pointers; the Asset creation is bound to specific use cases and //only available// for these specific situations. There is no generic Asset Factory.

For every Asset we generate an __Ident tuple__ and a long ID (hash) derived from this Ident tuple. The constructor of the abstract base class {{{Asset}}} takes care of this step and automatically registers the new Asset object with the AssetManager. Typically, the factory methods for concrete Asset classes provide some shortcuts supplying sensible default values for some of the Ident tuple data fields. They may take additional parameters &mdash; for example the factory method for creating {{{asset::Media}}} takes a filename (and may at some point in the future apply "magic" based on examination of the file &rarr; LoadingMedia)
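
The following C++ sketch shows the general shape of this creation path; all class layouts and signatures here are assumptions for illustration, and registration is performed by the factory rather than (as described above) by the base constructor, to keep the example short.
{{{
// Factory returns a smart pointer; an ID is hashed from the Ident tuple and
// the new asset is registered with a (simplified) AssetManager.
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>

struct Ident { std::string name, category, org; };

class Asset;

class AssetManager
{
    std::unordered_map<size_t, std::shared_ptr<Asset>> registry_;
  public:
    static AssetManager& instance() { static AssetManager self; return self; }

    void reg (size_t id, std::shared_ptr<Asset> const& asset) { registry_[id] = asset; }

    std::shared_ptr<Asset> lookup (size_t id)  const
      {
        auto pos = registry_.find (id);
        return pos != registry_.end()? pos->second : nullptr;
      }
};

class Asset
{
  protected:
    Ident  ident_;
    size_t id_;

    explicit Asset (Ident i)
      : ident_{std::move (i)}
      , id_{std::hash<std::string>{} (ident_.org +":"+ ident_.category +":"+ ident_.name)}
      { }
  public:
    virtual ~Asset() = default;
    size_t id()  const { return id_; }
};

class MediaAsset
  : public Asset
{
    std::string filename_;

    MediaAsset (Ident i, std::string file)
      : Asset{std::move (i)}, filename_{std::move (file)} { }
  public:
    static std::shared_ptr<MediaAsset>                  // the only way to create one
    create (std::string const& file)
      {
        std::shared_ptr<MediaAsset> asset {new MediaAsset {Ident{file, "media", "lumiera"}, file}};
        AssetManager::instance().reg (asset->id(), asset);   // automatic registration
        return asset;
      }
};
}}}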

Generally speaking, assets can be seen as the static part or view of the session and model. They form a global scope and are tied to the [[model root|ModelRootMO]] &mdash; which means they're going to be serialised and de-serialised alongside with this model root scope. Especially the de-serialisation triggers (re)-creation of all assets associated with the session to be loaded.
{{red{TODO:}}} //there will be a special factory mechanism for this case, details pending definition as of 2/2010 //
The Asset Manager provides an Interface to an internal Database holding all Assets in the current Session and System state. It may be a real Database at some point (and for the moment it's a Hashtable). Each [[Asset]] is registered automatically with the Asset Manager; it can be queried either by its //identification tuple// or by its unique ID.

Conceptually, assets belong to the [[global or root scope|ModelRootMO]] of the session data model. A mechanism for serialising and de-serialising all assets alongside with the session is planned as of 2/2010
Conceptually, Assets and ~MObjects represent different views onto the same entities. Assets focus on bookkeeping of the contents, while the media objects allow manipulation and EditingOperations. Usually, on the implementation side, such closely linked dual views require careful consideration.

!redundancy
Obviously there is the danger of getting each entity twice, as Asset and as ~MObject. While such dual entities could be OK in conjunction with much specialised processing, in the case of Lumiera's Steam-Layer most of the functionality is shifted to naming schemes, configuration and generic processing, leaving the actual objects almost empty and deprived of distinguishing properties. Thus, starting out from the required concepts, an attempt was made to join, reduce and straighten the design.
* type and channel configuration is concentrated to MediaAsset
* the accounting of structural elements in the model is done through StructAsset
* the object instance handling is done in a generic fashion by using placements and object references
* clips and labels appear as ~MObjects solely; on the asset side there is just a generic [[id tracking mechanism|TypedID]].
* tracks are completely deprived of processing functionality and become lightweight containers, also used as clip bins.
* timelines and sequences are implemented as façades to equivalent structures within the model
* this leaves us only with effects requiring both an object and asset implementation

[<img[Fundamental object relations used in the session|uml/fig138885.png]]

Placing an MObject relative to another object such that it should be handled as //attached//&nbsp; to the latter results in several design and implementation challenges. Actually, such an attachment creates a cluster of objects. The typical use case is that of an effect attached to a clip or processing pipe.
* attachment is not a globally fixed relation between objects, rather, it typically exists only for some limited time span (e.g. the duration of the basic clip the effect is attached to)
* the order of attachment is important and the attached placement may create a fork in the signal flow, so we need a way to specify reproducibly how the resulting wiring should turn out
* when building, we access the information in reversed direction: we have the target object and need to query for all attachments

The first step towards a solution is to isolate the problem; obviously we don't need to store the objects differently, we just need //information about attached objects//&nbsp; for some quite isolated tasks (namely for creating a GUI representation and for combining attached objects into a [[Pipe]] when building). Resorting to a query (function call) interface should turn the rest of the problem into an implementation detail (a possible shape of such an interface is sketched after the list below). Thus
* for an __attachment head__ (= {{{Placement<MObject>}}} to which other objects have been attached) get the ordered list of attachments
* for an __attached placement__ (member of the cluster) get the placement of the corresponding attachment head
* retrieve and break the attachment when //deleting.//
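
A possible shape of this query interface, sketched in C++; the names and the opaque {{{PlacementID}}} stand-in are assumptions, not the actual session API.
{{{
// The rest of the problem stays an implementation detail behind this interface.
#include <vector>

struct PlacementID { /* opaque id, resolvable through the PlacementIndex */ };

class AttachmentIndex
{
  public:
    virtual ~AttachmentIndex() = default;

    // ordered list of placements attached to the given attachment head
    virtual std::vector<PlacementID> getAttachments (PlacementID head)  const  =0;

    // for a member of the cluster: the placement acting as attachment head
    virtual PlacementID getAttachmentHead (PlacementID attached)  const  =0;

    // break the attachment relation, e.g. prior to deleting the object
    virtual void detach (PlacementID attached)  =0;
};
}}}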

!!Implementation notes
Attachment is managed within the participating placements, mostly by special [[locating pins|LocatingPin]]. Attachment doesn't necessarily nail down an attached object to a specific position, rather the behaviour depends on the type of the object and the locating pins actually involved, especially on their order and priority. For example, if a {{{Placement<Effect>}}} doesn't contain any locating pin defining a temporal position, then the attachment will result in the placement inheriting the temporal placement of the //attachment head// (i.e. the clip this effect has been attached to). But, if on the contrary the effect in question //does// have an additional locating pin, for example relative to another object or even to a fixed time position, this one will "win" and determine the start position of the effect &mdash; it may even move the effect out of the time interval covered by the clip, in which case the attachment has no effect on the clip's processing pipe.
The attachment relation is hierarchical and has a clearly defined //active// and //passive// side: The attachment head is the parent node in a tree, but plays the role of the passive partner, to which the child nodes attach. But note, this does not mean we are limited to a single attachment head. Actually, each placement has a list of locating pins and thus can attach to several other placements. For example, a transition attaches to at least two local pipes (clips). {{red{TODO: unresolved design problem; seems to contradict the PlacementScope}}}

!!!!Relation to memory management
Attachment in itself does //not// keep an object alive. Rather, it's implemented by an opaque ID entry (&rarr; PlacementRef), which can be resolved by the PlacementIndex. The existence of attachments should be taken into account when deleting an object, preferably removing any dangling attachments to prevent an exception from being thrown later on. On the other hand, contrary to the elements of the HighLevelModel, processing nodes in the render engine never depend on placements &mdash; they always refer directly to the MObject instance or even the underlying asset. In the case of MObject instances, the pointer from within the engine will //share ownership// with the placement (remember: both are derived from {{{boost::shared_ptr}}}).
Automation is treated as a function over time. It is always tied to a specific Parameter (which can thus be variable over the course of the timeline). All details //how// this function is defined are completely abstracted away. The Parameter uses a ParamProvider to get the value for a given Time (point). Typically, this will use linear or bezier interpolation over a set of keyframes internally. Parameters can be configured to have different value ranges and distribution types (on-off, stepped, continuous, bounded)

[img[how to implement Automation|uml/fig129669.png]]
While generally automation is treated as a function over time, defining and providing such a function requires some //Automation Data.// The actual layout and meaning of this data is deemed an implementation detail of the [[parameter provider|ParamProvider]] used, but nevertheless an automation data set has object characteristics within the session (high-level-model), allowing it to be attached, moved and [[placed|Placement]] by the user.
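
A C++ sketch of this relation, with hypothetical names and a numeric value type; linear keyframe interpolation stands in for whatever the actual [[parameter provider|ParamProvider]] implements.
{{{
// Automation as a function over time: a Parameter asks its ParamProvider for
// the value at a given time point; this provider interpolates keyframes.
#include <iterator>
#include <map>

using Time = double;                     // stand-in for the real time type

template<typename VAL>
class ParamProvider
{
  public:
    virtual ~ParamProvider() = default;
    virtual VAL getValue (Time)  const  =0;
};

template<typename VAL>
class KeyframeInterpolator
  : public ParamProvider<VAL>
{
    std::map<Time,VAL> keys_;
  public:
    void addKey (Time t, VAL v) { keys_[t] = v; }

    VAL getValue (Time t)  const override
      {
        if (keys_.empty()) return VAL{};                        // no automation data
        auto hi = keys_.lower_bound (t);
        if (hi == keys_.begin()) return hi->second;             // before first keyframe
        if (hi == keys_.end())   return std::prev(hi)->second;  // after last keyframe
        auto lo = std::prev (hi);                               // linear interpolation
        double f = (t - lo->first) / (hi->first - lo->first);
        return VAL (lo->second + f * (hi->second - lo->second));
      }
};
}}}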
Starting out from the concepts of Objects, Placement to Tracks, render Pipes and connection properties (&rarr; see [[here|TrackPipeSequence]]) within the session, we can identify the elementary operations occurring within the Builder. Overall, the Builder is organized as application of //visiting tools// to a collection of objects, so finally we have to consider some object kind appearing in the working function of the given builder tool, which holds at this moment some //context//. The job now is to organize this context such as to create a predictable build process from this //event driven// approach.
&rarr;see also: BuilderPrimitives for the elementary situations used to carry out the building operations

!Builder working Situations
# any ''Clip'' (which at this point has been reduced already to a part of a simple elementary media stream &rarr; see [[Fixture]])
## yields a source reading node
## which needs to be augmented by the underlying media's [[processing pattern|ProcPatt]]
##* thus inserting codec(s) and source transformations
##* effectively this is an application of effects
## at this point we have to process (and maybe generate on-the-fly) the [[source port of this clip|ClipSourcePort]]
##* the output of the source reading and preprocessing defined thus far is delivered as input to this port, which is done by a ~WiringRequest (see below)
##* as every port, it is the entry point to a [[processing pipe|Pipe]], thus the source port has a processing pattern, typically inserting the camera (transformation effect) at this point
## followed by the application of effects
##* separately for every effect chain rooted (placed) directly onto the clip
##* and regarding the chaining order
## next we have to assess the [[pipes|Pipe]] to which the clip has been placed
## producing a [[wiring request|WiringRequest]] for every pair {{{(chainEndpoint, pipe)}}}
# [>img[draw/Proc.builder1.png]]   attaching an ''Effect'' is actually always an //insertion operation// which is done by //prepending// to the previously built nodes. Effects may be placed as attached to clips and pipes, which causes them to be included in the processing chain at the given location. Effects may as well be placed at an absolute time, which means they are to be applied to every clip that happens to be at this time &mdash; but this usecase will be resolved when creating the Fixture, causing the effect to be attached to the clips in question. The same holds true for Effects put on tracks.
# treating a ''wiring request'' means
## detecting possible and impossible connections
## deriving additional possible "placement dimensions" generated by executing such a connection (e.g. connecting a mono source to a spatial sound system bus creates panning possibilities)
##* deriving parameter sources for these additional degrees of freedom
##* fire off insertion of the necessary effects to satisfy this connection request and implement the additional "placement dimensions" (pan, layer order, overlay mode, MIDI channel selection...)
# processing the effects and further placements ''attached to a Pipe'' is handled identically to the processing done with all attachments to individual clips.
# ''Transitions'' are to be handled differently according to their placement (&rarr; more on [[Transitions|TransitionsHandling]])
#* when placed normally to two (or N) clips, they are inserted at the exit node of the clip's complete effect chain.
#* otherwise, when placed to the source port(s) or when placed to some other pipes they are inserted at the exit side of those pipe's effect chains. (Note: this puts additional requirements on the transition processor, so not every transition can be placed this way)
After consuming all input objects and satisfying all wiring requests, the result is a set of [[exit nodes|ExitNode]] ready for pulling data. We call the network reachable from such an exit node a [[Processor]]; taken together, all processors of all segments and output data types comprise the render engine.

!!!dependencies
Pipes need to be there first, as everything else will be plugged (placed) to a pipe at some point. But, on the other hand, for the model as such, pipes are optional: We could create sequences with ~MObjects without configuring pipes (but won't be able then to build any render processor of course). Similarly, there is no direct relation between tracks and pipes. Each sequence is comprised of at least one root track, but this has no implications regarding any output pipe.

Effects can be attached only to already existing pipelines, starting out at some pipe's entry port or the source port of some clip. Besides that, all further parts can be built in any order and independent of each other. This is made possible by using [[wiring requests|WiringRequest]], which can be resolved later on. So, as long as we start out with the tracks (to resolve any pipe they are placed to), and further, if we manage to get any effect placed to some clip-MO //after// setting up and treating the clip, we are fine and can do the building quasi event-driven.

!!!building and resolving
Building the network for the individual objects thus creates a queue of wiring requests. Some of them may be immediately resolvable, but detecting this correctly can be nontrivial, and so it seems better to group all wiring requests based on the pipe and treat them groupwise, because &mdash; in the most general case &mdash; connecting includes the use of transforming and joining nodes, which can create additional wiring requests (e.g. for automation parameter data connections). Finally, when the network is complete, we could perform [[optimisations|RenderNetworkOptimisation]]
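
A small C++ sketch of this grouping step (names hypothetical); the actual resolution per pipe, including any follow-up wiring requests created while connecting, is omitted.
{{{
// Queue wiring requests while visiting objects, then treat them pipe by pipe.
#include <map>
#include <string>
#include <vector>

struct WiringRequest { std::string chainEndpoint, pipeID; };

std::map<std::string, std::vector<WiringRequest>>
groupByPipe (std::vector<WiringRequest> const& queue)
{
    std::map<std::string, std::vector<WiringRequest>> groups;
    for (auto const& request : queue)
        groups[request.pipeID].push_back (request);   // all requests for one pipe together
    return groups;
}
}}}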
/***
|Name|BetterTimelineMacro|
|Created by|SaqImtiaz|
|Location|http://tw.lewcid.org/#BetterTimelineMacro|
|Version|0.5 beta|
|Requires|~TW2.x|
!!!Description:
A replacement for the core timeline macro that offers more features:
*list tiddlers with only a specific tag
*exclude tiddlers with a particular tag
*limit entries to any number of days, for example one week
*specify a start date for the timeline, only tiddlers after that date will be listed.

!!!Installation:
Copy the contents of this tiddler to your TW, tag with systemConfig, save and reload your TW.

!!!Syntax:
{{{<<timeline better:true>>}}}
''the param better:true enables the advanced features, without it you will get the old timeline behaviour.''

additional params:
(use only the ones you want)
{{{<<timeline better:true  onlyTag:Tag1 excludeTag:Tag2 sortBy:modified/created firstDay:YYYYMMDD maxDays:7 maxEntries:30>>}}}

''explanation of syntax:''
onlyTag: only tiddlers with this tag will be listed. Default is to list all tiddlers.
excludeTag: tiddlers with this tag will not be listed.
sortBy: sort tiddlers by date modified or date created. Possible values are modified or created.
firstDay: useful for starting timeline from a specific date. Example: 20060701 for 1st of July, 2006
maxDays: limits timeline to include only tiddlers from the specified number of days. If you use a value of 7 for example, only tiddlers from the last 7 days will be listed.
maxEntries: limit the total number of entries in the timeline.


!!!History:
*28-07-06: ver 0.5 beta, first release

!!!Code
***/
//{{{
// Return the tiddlers as a sorted array
TiddlyWiki.prototype.getTiddlers = function(field,excludeTag,includeTag)
{
          var results = [];
          this.forEachTiddler(function(title,tiddler)
          {
          if(excludeTag == undefined || tiddler.tags.find(excludeTag) == null)
                        if(includeTag == undefined || tiddler.tags.find(includeTag)!=null)
                                      results.push(tiddler);
          });
          if(field)
                   results.sort(function (a,b) {if(a[field] == b[field]) return(0); else return (a[field] < b[field]) ? -1 : +1; });
          return results;
}



//this function by Udo
function getParam(params, name, defaultValue)
{
          if (!params)
          return defaultValue;
          var p = params[0][name];
          return p ? p[0] : defaultValue;
}

window.old_timeline_handler= config.macros.timeline.handler;
config.macros.timeline.handler = function(place,macroName,params,wikifier,paramString,tiddler)
{
          var args = paramString.parseParams("list",null,true);
          var betterMode = getParam(args, "better", "false");
          if (betterMode == 'true')
          {
          var sortBy = getParam(args,"sortBy","modified");
          var excludeTag = getParam(args,"excludeTag",undefined);
          var includeTag = getParam(args,"onlyTag",undefined);
          var tiddlers = store.getTiddlers(sortBy,excludeTag,includeTag);
          var firstDayParam = getParam(args,"firstDay",undefined);
          var firstDay = (firstDayParam!=undefined)? firstDayParam: "00010101";
          var lastDay = "";
          var field= sortBy;
          var maxDaysParam = getParam(args,"maxDays",undefined);
          var maxDays = (maxDaysParam!=undefined)? maxDaysParam*24*60*60*1000: (new Date()).getTime() ;
          var maxEntries = getParam(args,"maxEntries",undefined);
          var last = (maxEntries!=undefined) ? tiddlers.length-Math.min(tiddlers.length,parseInt(maxEntries)) : 0;
          for(var t=tiddlers.length-1; t>=last; t--)
                  {
                  var tiddler = tiddlers[t];
                  var theDay = tiddler[field].convertToLocalYYYYMMDDHHMM().substr(0,8);
                  if ((theDay>=firstDay)&& (tiddler[field].getTime()> (new Date()).getTime() - maxDays))
                     {
                     if(theDay != lastDay)
                               {
                               var theDateList = document.createElement("ul");
                               place.appendChild(theDateList);
                               createTiddlyElement(theDateList,"li",null,"listTitle",tiddler[field].formatString(this.dateFormat));
                               lastDay = theDay;
                               }
                  var theDateListItem = createTiddlyElement(theDateList,"li",null,"listLink",null);
                  theDateListItem.appendChild(createTiddlyLink(place,tiddler.title,true));
                  }
                  }
          }

          else
              {
              window.old_timeline_handler.apply(this,arguments);
              }
}
//}}}
Binding-~MObjects are used to associate two entities within the high-level model.
More specifically, such a binding serves 
* to outfit any top-level [[Timeline]] with real content, which is contained within a [[Sequence]]
* to build a VirtualClip, that is to link a complete sequence into another sequence, where it appears like a new virtual media or clip.

!Properties of a Binding
Binding is a relation entity, maintaining a link between parts of the session. Actually this link is achieved somewhat indirectly: The binding itself is an MObject, but it points to a [[sequence asset|Sequence]]. Moreover, in case of the (top-level) timelines, there is a timeline asset acting as a frontend for the ~BindingMO.
* the binding exposes special functions needed to implement the timeline {{red{planned as of 11/10}}}
* similarly, the binding exposes functions allowing the bound sequence to be wrapped up as VirtualMedia (when acting as VirtualClip).
* the Binding holds an OutputMapping -- allowing to specify, resolve and remember [[output designations|OutputDesignation]]

Note: there are other binding-like entities within the model, which are deliberately not subsumed below this specification, but rather implemented stand alone.
&rarr; see also SessionInterface



!Implementation
On the implementation side, we use a special kind of MObject, acting as an anchor and providing a unique identity. As with any ~MObject, it is actually a placement which establishes the connection and the scope, typically constituting a nested scope (e.g. the scope of all objects //within// the sequence to be bound into a timeline)

Binding can be considered an implementation object, rarely to be created directly. Yet it is part of the high-level model
{{red{WIP 11/10}}}: it is likely that -- in case of creating a VirtualClip -- BindingMO will be hooked up behind another façade asset, acting as ''virtual media''

!!!channel / output mapping {{red{WIP 11/10}}}
The Binding-~MObject stores an OutputMapping. Basically this, together with OutputDesignation, implements the mapping behaviour
* during the build process, output designation(s) will be retrieved for each pipe.
* indirect and relative designations are to be resolved; the relative ones are forwarded to the next enclosing binding
* in any case, the result is a direct WiringPlug, which can then be matched up and wired
* but in case of implementing a virtual clip, in addition to the direct wiring...
** a relative output designation (the N^^th^^ channel of this kind) is carried over to the target scope to be re-resolved there.
** any output designation specification yields a summation pipe at the binding, i.e. a position corresponding to the global pipes when using the same sequence as timeline.
** The output of these summation pipes is treated like a media channel
** but for each of those channels, an OutputDesignation is //carried over//&nbsp; into the target (virtual clip)
*** now, if a distinct output designation can be determined at this placement of the virtual clip, it will be used for the further routing
*** otherwise, we try to re-evaluate the original output designation carried over. <br/>Thus, routing will be done as if the original output designation was given at this placement of the virtual clip.
There is some flexibility in the HighLevelModel, allowing to attach the same [[Sequence]] onto multiple [[timelines|Timeline]] or even into a [[meta-clip|VirtualClip]]. Thus, while there is always a containment relation which can be used to define the current PlacementScope, we can't always establish a unique path from any given location up to the model root. In the most general case, we have to deal with a DAG, not a tree.

!solution idea
Transform the DAG into a tree by //focussing//&nbsp; on the current situation and context. Have a state containing the //current path.//  &rarr; QueryFocus

Incidentally, this problem is quite similar to file system navigation involving ''symlinks''.
* under which circumstances shall discovery follow symlinks?
* where do you get by {{{cd ..}}} &mdash; especially when you went down following a symlink?
This leads us to building our solution here to match a similar behaviour pattern, according to the principle of least surprise. That is, discovery shall follow the special [[bindings|BindingMO]] only when requested explicitly, but the current "shell" (QueryFocus) should maintain a virtual/effective path to root scope.

!!detail decisions
!!!!how to represent scoping
We use a 2-layer approach: initially, containment is implemented through a registration in the PlacementIndex. Building on that, scope is layered on top as an abstraction, which uses the registered containment relation, but takes the current access path into account at the critical points (where a [[binding|BindingMO]] comes into play)

!!!!the basic containment tree
each Placement (with the exception of the root) gets registered as contained in yet another Placement. This registration is entirely a tree, and thus differs from the real scope nesting at the Sequence level: The scopes constituting Sequences and Timelines are registered as siblings, immediately below the root. This has some consequences
# Sequences as well as Timelines can be discovered as contents of the model root
# ScopePath diverges at Sequence level from the basic containment tree

!!!!locating a placement
Constituting the effective logical position of a placement poses a sort of chicken-and-egg problem: we already need a logical position to start with. In practice, this is done by relying on the QueryFocus, which is a stack-like state and automatically follows the access or query operations on the session. //Locating a placement//&nbsp; is done by //navigating the current query focus.//

!!!!navigating a scope path location
As the current query focus stack top always holds a ScopePath, the ''navigating operation'' on ScopePath is the key for managing this logical view onto the "current" location.
* first, try to locate the new scope in the same sequence as the current scope, resulting in a common path prefix
* in case the new scope belongs to a different sequence, this sequence might be connected to the current one as a meta-clip, again resulting in a common prefix
* otherwise use the first possible binding according to the ordering of timelines as a prefix
* use the basic containment path as a fallback if no binding exists

!!{{red{WIP 9/10}}}Deficiencies
To buy us some time, we skipped analysing and implementing the gory details of scope path navigation and meta-clips for now.
Please note the shortcomings and logical contradictions in the solution currently in code:
* the {{{ScopePath::navigate()}}}-function was chosen as the location to implement the translation logic. //But actually this translation logic is missing.//
* so basically we're just using the raw paths of the basic containment tree; more specifically, the BindingMO (=Timeline) isn't part of the derived ScopePath
* this will result in problems even way before we implement meta-clips (because the Timeline is assumed to provide output routing information to the Placements)
* QueryFocus, with the help of ScopeLocator, exposes the query services of the PlacementIndex. So actually it's up to the client code to pick the right functions. This might get confusing
* The current design rather places the implementation according to the roles of the involved entities, which causes some ping-pong on the implementation level. Especially the ScopeLocator singleton can be accessed multiple times. This is the usual clarity vs. performance tradeoff. Scope resolution is assumed rather to be //not performance critical.//
All rendering, transformations and output of media data requires using ''data buffers'' -- but the actual layout and handling of these buffers is closely related to the actual implementation of these operations. As we're relying heavily on external libraries and plug-ins for performing these operations, there is no hope getting away with just one single {{{Buffer}}} data type definition. Thus, we need to confine ourselves to a common denominator of basic operations regarding data buffers and abstract the access to these operations through a BufferProvider entity. Beyond these basic operations, mostly we just need to assure that //a buffer exists as a distinguishable element// -- which in practice boils down to pushing around {{{void*}}} variables.

Obviously, overloading a pointer with semantic meaning isn't exactly a brilliant idea -- and the usual answer is to embed this pointer into a smart handle, which also yields the nice side-effect of explaining this design to the reader. Thus a buffer handle
* can only be obtained from a BufferProvider
* can be used to identify a buffer
* can be dereferenced
* can be copied 

!design quest: buffer type information
To perform anything useful with such a buffer handle, the client code needs some additional information, which can be generalised into a //type information:// Either the client needs to know the size and kind of data to expect in the buffer, maybe just assuming to get a specific buffer with suitable dimensions, or the client needs to know which buffer provider to contact for any management operations on that buffer (handle). And, at some point there needs to be a mechanism to verify the validity of a handle. But all of this doesn't mean that it's necessary to encode or embed this information directly into the handle -- it might also be stored into a registration table (which has the downside of creating contention), or it might just be attached implicitly to the invocation context.

Just linking this type information to the context is certainly the most elegant solution, but also by far the most difficult to achieve -- not to mention the implicit dependency on a very specific invocation situation. So for now (9/2011) it seems best to stick to the simple and explicit implementation, just keeping that structural optimisation in mind. And the link to this buffer type information should be made explicit within the definition anyway, even if we choose to employ another design tradeoff later.
* thus the conclusion is: we introduce a ''descriptor object'', which will be stored within the handle
* each BufferProvider exposes a ''descriptor prototype''; it can be specialised and used to [[organise implementation details|BufferMetadata]] (see the sketch below)
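
A sketch of the handle layout implied by this conclusion; the member names are illustrative and not the actual {{{BuffHandle}}} definition.
{{{
// Smart handle: a raw buffer pointer plus a descriptor identifying the
// responsible provider and the buffer type.
#include <cstddef>

struct BuffDescr
  {
    class BufferProvider* provider;   // who manages this buffer
    size_t typeID;                    // encodes size / kind within that provider
  };

class BuffHandle
  {
    void*     buffer_;
    BuffDescr descriptor_;
  public:
    BuffHandle (void* buff, BuffDescr descr)
      : buffer_{buff}, descriptor_{descr} { }

    template<typename T>
    T& accessAs()  const { return *static_cast<T*> (buffer_); }   // client knows the layout

    BuffDescr const& descriptor()  const { return descriptor_; }
    explicit operator bool()  const { return buffer_ != nullptr; }
  };
}}}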


!sanity checks
there are only limited sanity checks, and they can be expected to be optimised away for production builds.
Basically the client is responsible for sane buffer access.
Buffers are used to hold the media data for processing and output. Within the Lumiera RenderEngine and [[Player]] subsystem, we use some common concepts to handle the access and allocation of working buffers. Yet this doesn't imply having only one central authority in charge of every buffer -- such an approach wouldn't be possible (due to collaboration with external systems) and wouldn't be desirable either. Rather, there are some common basic usage //patterns// -- and there are some core interfaces used throughout the organisation of the rendering process.

Mostly, the //client code,// i.e. code in need of using buffers, can access some BufferProvider, thereby delegating the actual buffer management. This binds the client to adhere to a kind of //buffer access protocol,// comprised of the ''announcing'', ''locking'', optionally ''attaching'' and finally the ''releasing'' steps. Here, the actual buffer management within the provider is a question of implementation and will be configured during build-up of the scope in question.

!usage situations
;rendering
:any calculations and transformations of media data typically require an input- and output buffer. To a large extent, these operations will be performed by specialised libraries, resulting in a call to some plain-C function receiving pointers to the required working buffers. Our invocation code has the liability to prepare and provide those pointers, relying on a BufferProvider in turn.
;output
:almost all of the existing libraries for handling external output require the client to adhere to some specific protocol. Often, this involves some kind of callback invoked at the external library's discretion, thus forcing our engine to prepare data within an intermediary buffer. Alternatively, the output system might provide some mechanism to gain limited direct access to the output buffers, and such an access can again be exposed to our internal client code through the BufferProvider abstraction.

!primary implementations
;memory pool
:in all those situations, where we just need a working buffer for some time, we can rely on our internal custom memory allocator.
:{{red{~Not-Yet-Implemented as of 9/11}}} -- as a fallback we just rely on heap allocations through the language runtime
;frame cache
:whenever a calculated result may be of further interest, beyond the immediate need triggering the calculation, it might be eligible for caching.
:The Lumiera ''frame cache'' is a special BufferProvider, maintaining a larger pool of buffers which can be pinned and kept around for some time,
:accommodating limited resources and current demand for fresh result buffers.

the generic BufferProvider implementation exposes a service to attach and maintain additional metadata with individual buffers. Using this service is not mandatory -- a concrete buffer provider implementation may choose to maintain more specific metadata right on the implementation level, especially if more elaborate management is necessary within the implementation anyway (e.g. the frame index). We can expect most buffer provider implementations to utilise at least the generic buffer type id service though.

!buffer types and descriptors
Client code accesses buffers through [[smart buffer handles|BuffHandle]], including some kind of buffer type information, encoded into a type ID within the ''buffer descriptor''. These descriptors are used like prototypes, relating the type-~IDs hierarchically. Obviously, the most fundamental distinction is the BufferProvider in charge of that specific buffer. Below that, the next mandatory level of distinction is the ''buffer size''. In some cases, additional distinctions can be necessary. Each BufferProvider exposes a service to yield unique type ~IDs governed by such a hierarchical scheme.
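
One conceivable way to derive such hierarchical type-~IDs, sketched in C++; the hash-chaining scheme is purely illustrative and not necessarily how the actual implementation works.
{{{
// Each specialisation step mixes a further distinguishing key into the type-ID,
// yielding the hierarchy: provider prototype -> buffer size -> further distinctions.
#include <cstddef>
#include <functional>

struct TypeDescriptor
  {
    size_t typeID;

    TypeDescriptor specialise (size_t key)  const
      {
        size_t derived = typeID;                  // hash-combine the additional key
        derived ^= std::hash<size_t>{}(key) + 0x9e3779b9u + (derived<<6) + (derived>>2);
        return TypeDescriptor{derived};
      }
  };

// usage: providerPrototype.specialise(bufferSize).specialise(furtherKey)
}}}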

!state and metadata for individual buffers
Beyond that, it can be necessary to associate at least a state flag with //individual buffers.// Doing so requires the buffer to be in //locked state,// otherwise it wouldn't be distinguishable as a separate entity (a client is able to access the buffer memory address only after "locking" this buffer). Especially when using a buffer provider in conjunction with an OutputSlot, these states and transitions are crucial for performing an orderly handover of generated data from the producer (render engine) to the consumer (external output sink).

__Note__: while the API to access this service is uniform, conceptually there is a difference between just using the (shared) type information and associating individual metadata, like the buffer state. Type-~IDs, once allocated, will never be discarded (within the lifetime of a Lumiera application instance -- buffer associations aren't persistent). On the contrary, individual metadata //will be discarded// when releasing the corresponding buffer. According to the ''prototype pattern'', individual metadata is treated as a one-way-off specialisation.
It turns out that -- throughout the render engine implementation -- we never need direct access to the buffers holding actual media data. Buffers are just some entity to be //managed,// i.e. "allocated", "locked" and "released"; the //actual meaning of these operations can be left to the implementation.// The code within the render engine just pushes around ''smart-ptr like handles''. These [[buffer handles|BuffHandle]] act as a front-end, being created by and linked to a buffer provider implementation. There is no need to manage the lifecycle of buffers automatically, because the use of buffers is embedded into the render calculation cycle, which follows a rather strict protocol anyway. Relying on the [[capabilities of the scheduler|SchedulerRequirements]], the sequence of individual jobs in the engine ensures...
* that the availability of a buffer was ensured prior to planning a job ("buffer allocation")
* that a buffer handle was obtained ("locked") prior to any operation requiring a buffer
* that buffers are marked as free ("released") after doing the actual calculations.

!operations
While BufferProvider is an interface meant to be backed by diverse kinds of buffer and memory management approaches, there is a common set of operations to be supported by any of them; a client-side sketch follows the list below.
;announcing
:client code may announce beforehand that it expects to get a certain amount of buffers. Usually this causes some allocations to happen right away, or it might trigger similar mechanisms to ensure availability; the BufferProvider will then return the actual number of buffers guaranteed to be available. This announcing step is optional and can happen any time before or even after using the buffers, and it can be repeated with different values to adjust to changing requirements. Thus the announced amount of buffers always denotes //additional buffers,// on top of what is actively used at the moment. This safety margin of available buffers usually is accounted separately for each distinct kind of buffer (buffer type). There is no tracking as to which specific client requested buffers, beyond the buffer type.
;locking
:this operation actually makes a buffer available for a specific client and returns a [[buffer handle|BuffHandle]]. The corresponding buffer is marked as used and can't be locked again unless released. If necessary, at that point the BufferProvider might allocate memory to accommodate the request (especially when the buffers weren't announced beforehand). The locking may fail and raise an exception. Such a failure will be unlikely when buffers have been //announced beforehand.// To support additional sanity checks, the client may provide a token-ID with the lock-operation. This token may be retrieved later and it may be used to ensure the buffer is actually locked for //this token.//
;attaching
:optionally the client may attach an object to a locked buffer. This object is placement-constructed into the buffer and will be destroyed automatically when releasing the buffer. Alternatively, the client may provide a pair of constructor- / destructor-functors, to be invoked in a similar way. This allows e.g. to install descriptor structures within the buffer, as required by an external media handling library.
;emitting
:the client //may optionally mark a state transition// -- whose precise meaning remains implicit and implementation dependent. From the client's perspective, emitting and releasing may seem equivalent, since the buffer content should not be altered after that point. However, conceivably there are usages where it matters for //the consumer// to be sure an expected result was actually achieved, since the producer may well acquire the buffer and then fail to complete the required work, prompting some clean-up safety mechanism to merely release the resources.
;releasing
:buffers need to be released explicitly by the client code. This renders the corresponding BuffHandle invalid, (optionally) invokes a destructor function of an attached object and maybe reclaims the buffer memory
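
The client-side view of this protocol, as a C++ sketch; the interface shown is an assumption for illustration (the optional attaching step is omitted), not the actual BufferProvider API.
{{{
// announce -> lock -> (work) -> emit -> release
#include <cstddef>

struct BuffHandle { void* buffer; };             // simplified stand-in

class BufferProvider
{
  public:
    virtual ~BufferProvider() = default;
    virtual size_t     announce (size_t typeID, size_t count)  =0;   // returns guaranteed count
    virtual BuffHandle lock     (size_t typeID)                =0;   // may fail / throw
    virtual void       emit     (BuffHandle const&)            =0;   // optional state transition
    virtual void       release  (BuffHandle const&)            =0;
};

void
renderStep (BufferProvider& provider, size_t frameType)
{
    provider.announce (frameType, 2);            // optional: ensure availability beforehand
    BuffHandle in  = provider.lock (frameType);  // buffer now usable by this client
    BuffHandle out = provider.lock (frameType);

    // ... invoke the processing function on in.buffer / out.buffer ...

    provider.emit (out);                         // mark the result as achieved
    provider.release (in);                       // handles become invalid after release
    provider.release (out);
}
}}}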

!!type metadata service
In addition to the basic operations, clients may associate BufferMetadata with individual buffers;
in the basic form, this means just maintaining a type tag describing the kind of buffer, while optionally this service might be extended to e.g. associating a state flag.

__see also__
&rarr; OutputSlot relying on a buffer provider to deal with frame output buffers
&rarr; more about BufferManagement within the RenderEngine and [[Player]] subsystem
&rarr; RenderMechanics for details on the buffer management within the node invocation for a single render step
{{red{⚠ In-depth rework underway as of 7/2024...}}}
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
The invocation of individual [[render nodes|ProcNode]] uses an internal helper data structure, the ''buffer table'', to encapsulate technical details of the allocation, use, re-use and freeing of data buffers for the media calculations. Here, the management of the physical data buffers is delegated through a BufferProvider, which typically is implemented relying on the ''frame cache'' in the Vault. Yet some partially quite involved technical details need to be settled for each invocation: We need input buffers, maybe provided as external input, while in other cases to be filled by a recursive call. We need storage to prepare the (possibly automated) parameters, and finally we need a set of output buffers. All of these buffers and parameters need to be rearranged for invoking the (external) processing function, followed by releasing the input buffers and committing the output buffers to be used as result.

Because there are several flavours of node wiring, the building blocks comprising such a node invocation will be combined depending on the circumstances. Performing all these various steps is indeed the core concern of the render node -- with the help of BufferTable to deal with the repetitive, tedious and technical details.

!requirements
The layout of the buffer table will be planned beforehand for each invocation, alongside with planning the individual invocation jobs for the scheduler. At that point, a generic JobTicket for the whole timeline segment is available, describing the necessary operations in an abstract way, as determined by the preceding planning phase. Jobs are prepared chunk-wise, some time in advance (but not all jobs at once). Jobs will be executed concurrently. Thus, buffer tables need to be created repeatedly and placed into a memory block accessed and owned exclusively by the individual job.
* within the buffer table, we need a working area for the output handles, the input handles and the parameter descriptors
* actually, these can be seen as pools holding handle objects which might even be re-used, especially for a chain of effects calculated in-place.
* each of these pools is characterised by a common //buffer type,// represented as buffer descriptor
* we need some way to integrate with the StateProxy, because some of the buffers need to be marked especially, e.g. as result
* there should be convenience functions to release all pending buffers, forwarding the release operation to the individual handles
//Building the fixture is actually at the core of the [[builder's operation|Builder]]//
{{red{WIP as of 11/10}}} &rarr; see also the [[planning page|PlanningBuildFixture]]

;Resolving the DAG[>img[Steps towards creating a Segmentation|draw/SegmentationSteps1.png]]
Because of the possibility of binding a Sequence multiple times, and maybe even nested as virtual clip, the [[high-level model|HighLevelModel]] actually constitutes a DAG, not a tree. This leads to quite some tricky problems, which we try to resolve by //rectifying the DAG into N virtual trees.// (&rarr; BindingScopeProblem)

Relying on this transformation, each Timeline spans a sub-tree virtually separated from all other timelines; the BuildProcess is driven by [[visiting|VisitorUse]] all the //tangible// objects within this subtree. In the example shown to the right, Sequence-β is bound both as VirtualClip into Sequence-α and independently as top-level sequence into Timeline-2. Thus it will be visited twice, but the QueryFocus mechanism ensures that each visitation »sees« the proper context.

;Explicit Placements
Each tangible object placement (relevant for rendering), which is encountered during that visitation, gets //resolved// into an [[explicit placement|ExplicitPlacement]]. If we see [[Placement]] as a positioning within a multi dimensional configuration space, then the resolution into an explicit placement is like the creation of an ''orthogonal base'': Within the explicit placement, each LocatingPin corresponds exactly to one degree of freedom and can be considered independently from all other locating pins. This resolution step removes any fancy dynamic behaviour and all scoping and indirect references. Indeed, an explicit placement is a mere //value object;// it isn't part of the session core (PlacementIndex), isn't typed and can't be referred indirectly.

;Segmentation of Time axis
This simple and explicit positioning thus allows to arrange all objects as time intervals on a single axis. Any change and especially any overlap is likely to create a different wiring configuration. Thus, for each such configuration change, we fork off a new //segment// and //copy over// all partially touched placements. The resulting seamless sequence of non-overlapping time intervals provides the backbone of the datastructure called [[Fixture]].
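
A toy C++ sketch of deriving such a segmentation: every start or end point of an explicitly placed object becomes a potential cut; associating the partially touched placements with each segment is omitted here.
{{{
// Collect all change points, then form the seamless sequence of intervals between them.
#include <set>
#include <vector>

struct Interval { double start, end; };

std::vector<Interval>
segment (std::vector<Interval> const& placements)
{
    std::set<double> cuts;                       // positions where the configuration may change
    for (auto const& p : placements)
      {
        cuts.insert (p.start);
        cuts.insert (p.end);
      }
    std::vector<Interval> segments;
    double prev = 0;
    bool first = true;
    for (double t : cuts)
      {
        if (!first) segments.push_back ({prev, t});
        prev = t;
        first = false;
      }
    return segments;                             // non-overlapping, seamless intervals
}
}}}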

;Building the Network
From this backbone, the actual [[building mechanism|BuilderMechanics]] proceeds as an ongoing visitation and resolution, resulting in the growth of a network of [[render nodes|ProcNode]] starting out from the source reading nodes and proceeding up through the local pipes, the transitions and the global pipes. When this build process is exhausted, besides the actual network, the result is a //residuum of nodes not connected any further.// Any of these [[exit nodes|ExitNode]] can be associated to a ~Pipe-ID in the high-level model. Within each segment, there should be one exit node per pipe-ID at max. These are the [[model ports|ModelPort]] resulting from the build process, keyed by their corresponding ~Pipe-ID.
&rarr; see [[Structure of the Fixture|Fixture]]
All decisions on //how // the RenderProcess has to be carried out are concentrated in this rather complicated Builder Subsystem. The benefit of this approach is, besides decoupling of subsystems, to keep the actual performance-intensive video processing code as simple and transparent as possible. The price, in terms of increased complexity &mdash; to pay in the Builder &mdash; can be handled by making the Build Process generic to a large degree. Using a Design By Contract approach we can decompose the various decisions into small decision modules without having to trace the actual workings of the Build Process as a whole.

[>img[Outline of the Build Process|uml/fig129413.png]]
The building itself will be broken down into several small tool application steps. Each of these steps has to be mapped to the MObjects found on the [[Timeline]]. Remember: the idea is that the so called "[[Fixture]]" contains only [[ExplicitPlacement]]s which in turn link to MObjects like Clips, Effects and [[Automation]]. So it is sufficient to traverse this list and map the build tools to the elements. Each of these build tools has its own state, which serves to build up the resulting Render Engine. So far I see two steps to be necessary:
* find the "Segments", i.e. the locations where the overall configuration changes
* for each segment: generate a ProcNode for each found MObject and wire them accordingly
Note, //we still have to work out how exactly building, rendering and playback work// together with the Vault-design. The build process as such doesn't overly depend on these decisions. It is easy to reconfigure this process. For example, it would be possible as well to build for each frame separately (as Cinelerra2 does), or to build one segment covering the whole timeline (and handle everything via [[Automation]]).

&rarr;see also: [[Builder Overview|Builder]]
&rarr;see also: BasicBuildingOperations
&rarr;see also: BuilderStructures
&rarr;see also: BuilderMechanics
&rarr;see also: PlanningBuildFixture
&rarr;see also: PlanningSegementationTool
&rarr;see also: PlanningNodeCreatorTool

[img[Collaborations in the Build Process|uml/fig128517.png]]
Actually setting up and wiring a [[processing node|ProcNode]] involves several issues and is carried out at the lowest level of the build process.
It is closely related to &rarr; [[the way nodes are operated|NodeOperationProtocol]] and the &rarr; [[mechanics of the render process|RenderMechanics]]

!!!object creation
The Nodes are small polymorphic objects, carrying configuration data, but no state. They are [[specially allocated|ManagementRenderNodes]], and object creation is accessible solely by means of the NodeFactory. They //must not be deallocated manually.// The decision of what concrete node type to create depends on the actual build situation and is worked out by the combination of [[mould|BuilderMould]] and [[processing pattern|ProcPatt]] at the current OperationPoint, issuing a call to one of NodeFactory's {{{operator()}}} overloads.

!!!node, plugin and processing function
It's a good idea to distinguish clearly between these concepts. A plugin is a piece of (possibly external) code we use to carry out operations. We have to //discover its properties and capabilities.// We don't have to discover anything regarding nodes, because we (Lumiera builder and renderengine) are creating, configuring and wiring them to fit the specific purpose. Both are to be distinguished from processing functions, which do the actual calculations on the media data. Every node typically encompasses at least one processing function, which may be an internal function in the node object, a library function from Lumiera or GAVL, or external code loaded from a plugin.

!!!node interfaces
As a consequence of these distinctions, in conjunction with a processing node, we have to deal with three different interfaces:
* the __build interface__ is used by the builder to set up and wire the nodes. It can be full blown C++ (including templates)
* the __operation interface__ is used to run the calculations, which happens in cooperation of Steam-Layer and Vault-Layer. So a function-style interface is preferable.
* the __inward interface__ is accessed by the processing function in the course of the calculations to get at the necessary context, including in/out buffers and param values.

!!!wiring data connections
A node //knows its predecessors, but not its successors.// When being //pulled//&nbsp; in operation, it can expect to get a frame provider for accessing the in/out buffer locations (some processing functions may be "in-place capable", but that's only a special case of the former). At this point, the ''pull principle'' comes into play: the node may request input frames from the frame provider, passing its predecessors as a ''continuation''.
With regard to the build process, the wiring of data connections translates into providing the node with its predecessors and preconfiguring the possible continuations. While in the common case, a node has just one input/output and pulls from its predecessor a frame for the same timeline position, the general case can be more contrived. A node may process N buffers in parallel and may require several different time positions for its input, even at a differing framerate. So the actual source specification is (predNode,time,frameType). The objective of the wiring done in the build process is to factor out the parts known in advance, while in the render process only the variable part needs to be filled in. Or to put it differently: wiring builds a higher-order function (time)->(continuation), where continuation can be invoked to get the desired input frame.
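As a rough illustration of this //higher-order function// view (a sketch with assumed names, not the actual node interface): the build-time wiring fixes the predecessor and frame type, while the render-time pull supplies just the time:
{{{
// Sketch only: wiring as partial application; the builder fixes the parts known
// in advance (predecessor node, frame type), rendering supplies only the time.
// All names are illustrative assumptions.
#include <functional>

struct Frame {};                                      // stand-in for a data frame
struct FrameType { int channels; };                   // stand-in for the required frame kind

struct ProcNode {                                     // predecessor node being pulled
    Frame pull (long /*time*/, FrameType) { return Frame{}; }
};

using Continuation = std::function<Frame()>;          // invoke to obtain the desired input frame
using Wiring       = std::function<Continuation(long)>;   // (time) -> (continuation)

// build process: fix the parts known in advance
Wiring wireInput (ProcNode* pred, FrameType type)
{
    return [pred,type](long time) -> Continuation
           {
               return [pred,type,time]{ return pred->pull (time, type); };
           };
}

// render process: only the variable part (the time) is filled in
Frame render (Wiring const& input, long time)
{
    Continuation pullInput = input (time);
    return pullInput();
}
}}}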

!!!wiring control connections
In many cases, the parameter values provided by these connections aren't frame based data, rather, the processing function needs a call interface to get the current value (value for a given time), which is provided by the parameter object. Here, the wiring needs to link to the suitable parameter instance, which is located within the high-level model (!). As an additional complication, calculating the actual parameter value may require a context data frame (typically for caching purposes to speed up the interpolation). While these parameter context data frames are completely opaque for the render node, they have to be passed in and out similar to the state needed by the node itself, and the wiring has to prepare for accessing these frames too.
The Builder takes some MObject/[[Placement]] information (called Timeline) and generates from this a Render Engine configuration able to render these objects. It makes all the decisions and retrieves the current configuration of all objects and plugins, so the Render Engine can just process them straightforwardly.

The Builder is the central part of the [[Builder Pattern|http://en.wikipedia.org/wiki/Builder_pattern]]
<br/>
As the builder [[has to create a render node network|BuilderModelRelation]] implementing most of the features and wiring possible with the various MObject kinds and placement types, it is a rather complicated piece of software. In order to keep it manageable, it is broken down into several specialized sub components:
* clients access builder functionality via the BuilderFacade
* the [[Steam-Layer-Controller|SteamDispatcher]] initiates the BuildProcess and does the overall coordination of scheduling edit operations, rebuilding the fixture and triggering the Builder
* to carry out the building, we use several primary tools (SegmentationTool, NodeCreatorTool,...),  together with a BuilderToolKit to be supplied by the [[tool factory|BuilderToolFactory]]
* //operating the Builder// can be viewed from two different angles, either emphasizing the [[basic building operations|BasicBuildingOperations]] employed to assemble the render node network, or focussing rather on the [[mechanics|BuilderMechanics]] of cooperating parts while processing.
* besides, we can identify a small set of elementary situations we call [[builder primitives|BuilderPrimitives]], to be covered by the mentioned BuilderToolKit; by virtue of [[processing patterns|ProcPatt]] they form an [[interface to the rule based configuration|BuilderRulesInterface]].
* the actual building (i.e. the application of tools to the timeline) is done by the [[Assembler|BuilderAssembler]], which is basically a collection of functions (but has a small amount of global configuration state)
* any non-trivial wiring of render nodes, forks, pipes and [[automation|Automation]] is done by the services of the [[connection manager|ConManager]]
The cooperation of several components creates a context of operation for the primary builder working tool, the [[node creator|PlanningNodeCreatorTool]]:
* the BuilderToolFactory acts as the "builder for the builder tools", i.e. we can assume to be able to retrieve all needed primary tools and elementary tools from this factory, completely configured and ready to use.
* the [[Assembler|BuilderAssembler]] has the ability to consume objects from the high-level model and feed them to the node creator (which translates into a dispatch of individual operations suited to the objects to be treated). This involves some sort of scheduling or ordering of the operations, which is the only means to direct the overall process such as to create a sensible and usable result. //This is a fundamental design decision:// the actual working tools have no hard-wired knowledge of the "right process", which makes the whole Builder highly configurable ("open").
* the [[connection manager|ConManager]] on the contrary is a passive service provider. Fed with [[wiring requests|WiringRequest]], it can determine if a desired connection is possible, and what steps to take to implement it; the latter recursively creates further building requests to be satisfied by the assembler, and possibly new wiring requests.

!!pattern of operation
The working pattern of this builder mechanics can be described as triggering, enqueuing, prioritising, recursing and exhausting. Without the prioritising part, it would be a depth-first call graph without any context state, forcing us to have all cross-reference information available at every node or element to be treated. We prefer to avoid this overhead by ordering the operations into several phases and within these phases into correlated entities with the help of a ''weighting function'' and scheduling with a ''priority queue''.
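A minimal sketch of this scheduling idea, assuming illustrative names like {{{BuildOperation}}} and a simple phase-based weighting function; the actual builder's weighting is of course more involved:
{{{
// Sketch only: ordering build operations with a weighting function and a
// priority queue, instead of a plain depth-first call graph. Names are
// illustrative assumptions, not the actual builder interface.
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct BuildOperation {
    int phase;                                        // e.g. 0 = segmentation, 1 = node creation, ...
    int correlation;                                  // groups related entities within a phase
    std::function<void()> perform;
};

// weighting function: earlier phases first, correlated entities kept together
inline long weight (BuildOperation const& op) { return 1000L * op.phase + op.correlation; }

struct ByWeight {
    bool operator() (BuildOperation const& a, BuildOperation const& b) const
    { return weight(a) > weight(b); }                 // smallest weight served first
};

void runBuild (std::vector<BuildOperation> initial)
{
    std::priority_queue<BuildOperation, std::vector<BuildOperation>, ByWeight> queue;
    for (auto& op : initial) queue.push (std::move(op));   // triggering & enqueuing

    while (!queue.empty())                            // ...recursing until exhausted
    {
        BuildOperation next = queue.top(); queue.pop();
        next.perform();                               // in the real builder, this may give rise
    }                                                 // to further operations, enqueued in turn
}
}}}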

!!call chain
After preparing the tools with the context state of this build process, the assembler drives the visitation process in the right order. The functions embedded within the visitor (NodeCreatorTool) for treating specific kinds of objects in turn use the toolkit (=the fully configured tool factory) to get the mould(s) for the individual steps they need to carry out. This involves preparing the mould (with the high-level object currently in-the-works, a suitable processing pattern and additional references), followed by operating the mould. The latter "plays" the processing pattern in the context of the mould, which, especially with the help of the operation point, carries out the actual building and/or connecting step. While doing so, the node factory will be invoked, which in turn invokes the wiring factory and thus pre-determines the node's prospective mode of operation when later called for rendering.
[>img[Builder creating the Model|uml/fig132868.png]]
The [[Builder]] uses different kinds of tools for creating a network of render nodes from a given high-level model. When breaking down this (necessarily complex) process into small manageable chunks, we arrive at [[elementary building situations|BuilderPrimitives]]. For each of these there is a specialized tool. We denote these tools as "moulds" because they are a rather passive holder for the objects to be attached and wired up. They are shaped according to the basic form the connections have to follow for each of these basic situations:
* attaching an effect to a pipe
* combining pipes via a transition
* starting out a pipe from a source reader
* general connections from the exit node of a pipe to the port of another pipe
In all those cases, the active part is provided by [[processing patterns|ProcPatt]] &mdash; sort of micro programs executed within the context of a given mould: the processing pattern defines the steps to take (in the standard/basic case this is just "attach"), while the mould holds and provides the location where these steps will operate. Actually, this location is represented as an OperationPoint, provided by the mould and abstracting the details of making multi-channel connections.
While assembling and building up the render engines node network, a small number of primitive building situations is encountered repeatedly. The BuilderToolKit provides a "[[mould|BuilderMould]]" for each of these situations, typically involving parametrisation and the application of a [[processing pattern|ProcPatt]].

The ''Lifecycle'' of such a mould starts out by arming it with the object references involved into the next building step. After conducting this building step, the resulting render nodes can be found &mdash; depending on the situation &mdash; attached either to the same mould, or to another kind of mould, but in any case ready to be included in the next building step. Thus, //effectively//&nbsp; the moulds are //used to handle the nodes being built,// due to the fact that the low-level model (nodes to be built) and the high-level model (objects directing what is to be built) are //never connected directly.//

!List of elementary building situations
!!!inserting an Effect or Plugin
[>img[draw/builder-primitives1.png]]
The __~PipeMould__ is used to chain up the effects attached to a clip (=local pipe) or global pipe (=bus)
* participating: a Pipe and an Effect
* point of reference: current exit node of the pipe
* result: Effect appended at the pipe's exit node
* returns: ~PipeMould holding onto the new exit node

@@clear(right):display(block):@@


!!!attaching a transition 
[>img[draw/builder-primitives2.png]]
After having completed N pipe's node chains, a __~CombiningMould__ can be used to join them into a [[transition|TransitionsHandling]]
* participating: N pipe's exit nodes, transition
* point of reference: N exit nodes corresponding to (completed) pipes
* result: transition has been attached with the pipe's exit nodes, new wiring requests created attached to the transition's exit node(s)
* returns: ~WiringMould, connected with the created wiring request
Using this mould implicitly "closes" the involved pipes, which means that we give up any reference to the exit node and can't build any further effect attached to these pipes. Generally speaking, "exit node" isn't a special kind of node, rather it's a node we are currently holding onto. Similarly, there is nothing directly correlated to a pipe within the render node network after we are done with building the part of the network corresponding to the pipe; the latter serves rather as a blueprint for building, but isn't an entity in the resulting low-level model.
Actually, there is {{red{planned}}} a more general (and complicated) kind of transition, which can be inserted into N data connections without joining them together into one single output, as the standard transitions do. The ~CombiningMould can handle this case too by just returning N wiring moulds as a result.

@@clear(right):display(block):@@

!!!building a source connection
[>img[draw/builder-primitives3.png]]
The __~SourceChainMould__ is used as a starting point for any further building, as it results in a local pipe (=clip) rooted at the clip source port. This reflects the fact that the source readers (=media access points) are the //leaf nodes// in the node graph we are about to build.
* participating: source port of a clip, media access point, [[processing pattern|ProcPatt]]
* point of reference: //none//
* result: processing pattern has been //executed//, resulting in a chain of nodes from the source reader to the clip source port
* returns: ~PipeMould holding onto the new exit node (of a yet-empty pipe)

@@clear(right):display(block):@@

!!!wiring a general connection
Any wiring (outside the chain of effects within a pipe) is always done from exit nodes to the port of another pipe, requiring a [[wiring request|WiringRequest]] already checked and deemed resolvable. Within the __~WiringMould__ the actual wiring is conducted, possibly adding a summation node (called "overlayer" in case of video) and typically a fader element (the specific setup to be used is subject to configuration by processing patterns)
* participating: already verified connection request, providing a Pipe and an exit node; a processing pattern and a Placement
* points of reference: exit node and (optionally) starting point of a pipe's chain (in case there are already other connections)
* result: summation node prepended to the port of the pipe, processing pattern has been //executed// for building the connection from the exit node to the pipe's port, ParamProvider has been setup in [[accordance|PlacementDerivedDimension]] to the Placement.
* returns: ~PipeMould holding onto the destination pipe's exit node, ~WiringMould holding onto the port side of the same pipe, i.e. the destination where further connections will insert summation nodes. {{red{TODO how to handle the //empty//-case?}}}
[>img[draw/builder-primitives4.png]]

@@clear(right):display(block):@@
* the MObjects implement //Buildable//
* each Buildable can "receive" a Tool object and apply it
* the different Tool objects are iterated/mapped onto the list of MObjects in the [[Timeline]]
* __Rationale__
** the MObject class hierarchy is rather fixed (it is unlikely that we will be adding many new MObject subclasses)
** so this design makes it easy to add new Tool subclasses, and within each Tool subclass, all operations on the different MObject classes are grouped together, so it is easy to see what is going on.
** a given Tool instance can carry state while being iterated, so we don't need any global (or object-global) variables to hold the result of the build process

This programming technique is often referred to as [["double dispatch" or "visitor"|VisitorUse]]. We use a specialized library implementation of this pattern &mdash; heavily inspired by the [[Loki library|http://loki-lib.sourceforge.net/]]. We use this approach not only for the builder, but also for carrying out operations on the objects in the session in a typesafe manner. 
It is the low level foundation of the actual [[building operations|BasicBuildingOperations]] necessary to create render nodes starting from the given high level model. 
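For illustration, a stripped-down sketch of this double dispatch; Lumiera's actual visitor library is more elaborate and typesafe, and the names used here are assumptions:
{{{
// Sketch only: the "Buildable receives a Tool" double dispatch in its plainest
// form. The real library implementation (inspired by Loki) differs in detail;
// names here are illustrative assumptions.
#include <iostream>

struct Clip; struct Effect;

struct Tool {                                         // base class of all build tools
    virtual ~Tool() = default;
    virtual void treat (Clip&)   = 0;
    virtual void treat (Effect&) = 0;
};

struct Buildable {                                    // implemented by the MObject hierarchy
    virtual ~Buildable() = default;
    virtual void apply (Tool& tool) = 0;              // "receive" a tool, dispatch on the own type
};

struct Clip   : Buildable { void apply (Tool& t) override { t.treat (*this); } };
struct Effect : Buildable { void apply (Tool& t) override { t.treat (*this); } };

struct NodeCreatorTool : Tool {                       // carries state while mapped over the objects
    int nodesCreated = 0;
    void treat (Clip&)   override { ++nodesCreated; std::cout << "build source chain\n"; }
    void treat (Effect&) override { ++nodesCreated; std::cout << "append effect node\n"; }
};
}}}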
[img[Entities cooperating in the Builder|uml/fig129285.png]]

!Collaborations

While building, the application of such a visiting tool (especially the [[NodeCreatorTool|PlanningNodeCreatorTool]]) is embedded into an execution context formed by the BuilderToolFactory providing our BuilderToolKit, the [[Assembler|BuilderAssembler]] and [[connection manager|ConManager]]. The collaboration of these parts can be seen as the [[mechanics of the builder|BuilderMechanics]] &mdash; sort of the //outward view//, contrary to the //inward aspects//&nbsp; visible when focussing on how the nodes are put together.
    
[img[Collaborations in the Build Process|uml/fig128517.png]]

Besides the primary working tool within the builder (namely the [[Node Creator Tool|PlanningNodeCreatorTool]]), on a lower level, we encounter several [[elementary building situations|BuilderPrimitives]] &mdash; and for each of these elementary situations we can retrieve a suitable "fitting tool" or [[mould|BuilderMould]]. The palette of these moulds is called the ''tool kit'' of the builder. It is subject to configuration by rules.


!!addressing a mould
All mould instances are owned and managed by the [[tool factory|BuilderToolFactory]], and can be referred to by their type (PipeMould, CombiningMould, SourceChainMould, WiringMould) and a concrete object instance (of suitable type). The returned mould (instance) acts as a handle to stick together the given object instance (from the high-level model) with the corresponding point in the low-level node network under construction. As a consequence of this approach, the tool factory instance holds a snapshot of the current building state, including all the active spots in the build process. As the latter is driven by objects from the high-level model appearing (in a sensible order &rarr; see BuilderMechanics) within the NodeCreatorTool, new moulds will be created and fitted as necessary, and existing moulds will be exhausted when finished, until the render node network is complete.

!!configuring a mould
As each mould kind is different, it has a {{{prepare(...)}}} function with suitably typed parameters. The rest is intended to be self-configuring (for example, a ~CombiningMould will detect the actual kind of Transition and select the internal mode of operation), so that it's sufficient to just call {{{operate()}}}.

!!sequence of operations
When {{{operate()}}} doesn't throw, the result is a list of //successor moulds// &mdash; you shouldn't use the original mould after triggering its operation, because it may have been retracted as a result and reused for another purpose by the tool factory. It is not necessary to store these resulting moulds either (as they can be retrieved as described above), but they can be used right away for the next building step if applicable. In the state they are returned from a successful building step (mould operation = execution of a contained [[processing pattern|ProcPatt]]), they are usually already holding a reference to the part of the network just created and need to be configured only with the next high-level object (effect, placement, pipe, processing pattern or similar, depending on the concrete situation) in order to carry out the next step.
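The intended usage cycle might be sketched roughly as follows; types and signatures are assumptions for illustration only, not the real mould interface:
{{{
// Sketch only: intended usage cycle of a builder mould -- prepare with objects
// from the high-level model, operate, then continue with the successor moulds.
// All names and signatures are illustrative assumptions.
#include <vector>

struct Effect {};                                     // stand-in for a high-level model object
struct ProcNode {};                                   // stand-in for a render node

struct Mould {
    ProcNode* exitNode = nullptr;                     // part of the network this mould holds onto
    virtual ~Mould() = default;
};

struct PipeMould : Mould {
    Effect* nextEffect = nullptr;
    void prepare (Effect& e) { nextEffect = &e; }     // arm the mould for the next building step

    std::vector<Mould*> operate()                     // conduct the step; yields successor moulds
    {
        // ...execute the configured processing pattern, append a node for nextEffect...
        return { this };                              // here: same mould, now holding the new exit node
    }
};

void buildEffectChain (PipeMould& mould, std::vector<Effect>& effects)
{
    Mould* current = &mould;
    for (auto& e : effects)
    {
        auto* pm = static_cast<PipeMould*>(current);  // in this simplistic sketch always a PipeMould
        pm->prepare (e);
        current = pm->operate().front();              // don't reuse the old mould after operate()
    }
}
}}}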

!!single connection step
at the lowest level within the builder there is the step of building a //connection.// This step is executed by the processing pattern with the help of the mould. Actually, making such a connection is more complicated, because in the standard case it will connect N media streams simultaneously (N=2 for stereo sound or 3D video, N=6 for 5.1 Surround, N=9 for 2nd order Ambisonics). These details are encapsulated within the OperationPoint, which is provided by the mould and exhibits a common interface for the processing pattern to express the connecting operation.

&rarr;see also: BuilderPrimitives for the elementary working situations corresponding to each of these [[builder moulds|BuilderMould]]
''Bus-~MObjects'' create a scope and act as attachment point for building up [[global pipes|GlobalPipe]] within each timeline. While [[Sequence]] is a frontend -- actually implemented by attaching a [[Fork]]-root object (»root track«) -- for //each global pipe// a BusMO is attached as child scope of the [[binding object|BindingMO]], which in turn actually implements either a timeline or a [[meta-clip|VirtualClip]].
* each global pipe corresponds to a bus object, which thus refers to the respective ~Pipe-ID
* bus objects may be nested, forming a //subgroup//
* the placement of a bus holds a WiringClaim, denoting that this bus //claims to be the corresponding pipe.//
* by default, a timeline is outfitted with one video and one sound master bus
A calculation stream is an organisational unit used at the interface level of the Lumiera engine.
Representing a //stream of calculations,// to deliver generated data within //timing constraints,// it is used
*by the [[play process(es)|PlayProcess]] to define and control properties of the output generation
*at the engine backbone to feed the [[Scheduler]] with individual [[render jobs|RenderJob]] to implement this stream of calculations
Calculation stream objects are stateless, constant chunks of definition -- any altering of playback or rendering parameters just causes the respective descriptors to be superseded. The presence of a CalcStream (being alive within the denoted time span) implies that using any of the associated jobs, dispatcher tables, node and wiring descriptors is safe.

!lifecycle
Calculation stream descriptors can be default constructed, representing a //void calculation.// You can't do anything with these.
Any really interesting calculation stream needs to be retrieved from the EngineFaçade. Additionally, an existing calculation stream can be chained up or superseded, yielding a new CalcStream based on the parameters of the existing one, possibly with some alterations.

!purpose
When a calculation stream is retrieved from the EngineFaçade it is already registered and attached there and represents an ongoing activity. Under the hood, several further collaborators will hold a copy of that calculation stream descriptor. While, as such, a CalcStream has no explicit state, at any time it //represents a current state.// In case the running time span of that stream is limited, it becomes superseded automatically, just by the passing of time.

Each calculation stream refers to a relevant [[frame dispatcher table|FrameDispatcher]]. Thus, for the engine (interface level), the calculation stream allows producing the individual [[render jobs|RenderJob]] to be enqueued with the [[Scheduler]]. This translation step is what links and relates nominal time with running wall clock time, thereby obeying the [[timing constraints|Timings]] established initially together with the calculation stream.

Additionally, each calculation stream knows how to access a //render environment closure,// allowing to re-schedule and re-adjust the setup of this stream. Basically, this closure is comprised of several functors (callbacks), which could be invoked to perform management tasks later on. Amongst others, this allows the calculation stream to redefine, supersede or "cancel itself", without the need to access a central registration table at the engine interface level.
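Such a closure might be imagined roughly as a bundle of functors, as in the following sketch; all names are illustrative assumptions:
{{{
// Sketch only: a CalcStream as stateless descriptor plus a "render environment
// closure" of callbacks, allowing the stream to supersede or cancel itself
// without a central registry lookup. All names are illustrative assumptions.
#include <functional>

struct Timings { long start, end; };                  // nominal timing constraints

struct RenderEnvironmentClosure {
    std::function<void()>        cancel;              // detach and stop this stream
    std::function<void(Timings)> reschedule;          // re-adjust the setup of this stream
};

struct CalcStream {
    Timings                  timings;                 // constant chunk of definition
    RenderEnvironmentClosure env;                     // management callbacks, invoked later on

    CalcStream superseded (Timings newTimings) const  // yields a follow-up stream based on this one
    {
        if (env.cancel) env.cancel();                 // the old stream retires itself
        return CalcStream{newTimings, env};
    }
};
}}}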

&rarr; NodeOperationProtocol
//A clip represents some segment of media, which is arranged to appear at some time point within the edit.//
Lumiera agrees to this common understanding (of most film editing and sound handling applications), to the degree that a clip within Lumiera is largely an abstract entity, avoiding implicit or explicit further assumptions. A clip has a //temporal extension,// (start point and a duration) and we assume it features some media content. Yet the underlying media need not be uniform, it might be structured, a compound of several sources (e.g.  sound and image) -- it might even be //virtual,// part of another sequence, in which case we'll get a VirtualClip.

//For the user,// clips are the most relevant entities encountered when working with and building the edit. As such, a clip is represented in the UI as a [[clip widget|GuiClipWidget]], which can appear as arranged within the [[fork ("tracks")|Fork]] or as an item within a //media bin.// But for the internal processing, clips are not conceived as primary entities; rather, they are translated by the [[Builder]] into an arrangement of interconnected [[pipes|Pipe]], which in turn are wired into a data processing network. In the end, this translates into a [[stream of processing jobs|CalcStream]], which are [[scheduled|Scheduler]] for calculation "just in time".

&rarr; SessionInterface
&rarr; TrackHandling
&rarr; PipeHandling
//mediating entity used to guide and control the presentation of a clip in the UI.//

The clip representation in the UI links together two distinct realms and systems of concerns. For one, there are the properties and actions performed on the clip for the sake of editing a movie. Everything which has a tangible effect on the resulting render. This information and these operations are embodied in the HighLevelModel and can be manipulated script-driven, without relying on a UI. But beyond that, there is also the local mechanics of the interface, everything which makes working on the edit and interacting with the model into a smooth experience and workflow. With respect to these concerns, a clip is a self-contained entity which owns its own state and behaviour. The ClipPresenter is the pivotal element to link together those two realms.

[>img[Clip presentation control|uml/Timeline-clip-display.png]]
Regarding the global angle, the ClipPresenter is a UI-Element connected to the UI-Bus, and it is added as child element to some parent entity, a TrackPresenter, which likewise serves as UI representation of a [[fork ("track")|Fork]], and controls widgets to render a track-like working scope (in the header pane and in the timeline contents pane). Commands manipulating the clip can be sent via the embedded bus terminal, and status regarding clip properties and nested child elements (effects, transitions) is received as messages via the bus, which insofar plays the role of model and controller.

But regarding the local UI behaviour, the ClipPresenter acts autonomously. It controls an actual {{{ClipWidget}}} for presentation, negotiating the display strategy with some overarching presentation manager. Active parts of the widget are wired back to handling methods of the presenter.

Moreover, the ClipPresenter acts as ''Mediator''
* for commands and actions
* for connection to a »property Box« or [[Placement-UI|GuiPlacementDisplay]]
* to support content preview rendering for the Clip
Background: #fefefd
Foreground: #000
PrimaryPale: #8fb
PrimaryLight: #4dc9a7
PrimaryMid: #16877a
PrimaryDark: #0f3f56
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eef
TertiaryLight: #ccd
TertiaryMid: #99a
TertiaryDark: #667
Error: #f88
Within Steam-Layer, a Command is the abstract representation of a single operation or a compound of operations mutating the HighLevelModel.
Thus, each command is a ''Functor'' and a ''Closure'' ([[command pattern|http://en.wikipedia.org/wiki/Command_pattern]]), allowing commands to be treated uniformly, enqueued in a [[dispatcher|SteamDispatcher]], logged to the SessionStorage and registered with the UndoManager.

Commands are //defined// using a [[fluent API|http://en.wikipedia.org/wiki/Fluent_interface]], just by providing appropriate functions. Additionally, the Closure necessary for executing a command is built by binding to a set of concrete parameters. After reaching this point, the state of the internal representation could be serialised by plain-C function calls, which is important for integration with the SessionStorage.

&rarr; see CommandDefinition
&rarr; see CommandHandling
&rarr; see CommandLifecycle
&rarr; see CommandUsage
Commands can be identified and accessed //by name// &mdash; consequently there needs to be an internal command registry, including a link to the actual implementing function, thus allowing to re-establish the connection between command and implementing functions when de-serialising a persisted command. To create a command, we need to provide the following information
* operation function actually implementing the command
* function to [[undo|UndoManager]] the effect of the command
* function to capture state to be used by UNDO.
* a set of actual parameters to bind into these functions (closure).

!Command definition object
The process of creating a command by providing these building blocks is governed by a ~CommandDef helper object. According to the [[fluent definition style|http://en.wikipedia.org/wiki/Fluent_interface]], the user is expected to invoke a chain of definition functions, finally leading to the internal registration of the completed command object, which then might be dispatched or persisted. For example
{{{
CommandDefinition ("test.command1")
     .operation (command1::operate)          // provide the function to be executed as command
     .captureUndo (command1::capture)        // provide the function capturing Undo state
     .undoOperation (command1::undoIt)       // provide the function which might undo the command
     .bind (obj, val1,val2)                  // bind to the actual command parameters (stores command internally)
     .executeSync();                         // convenience call, forwarding the Command to dispatch.
}}}

!Operation parameters
While generally there is //no limitation// on the number and type of parameters, the set of implementing functions and the {{{bind(...)}}} call are required to match. Inconsistencies will be detected by the compiler. In addition to taking the //same parameters as the command operation,// the {{{captureUndo()}}} function is required to return (by value) a //memento//&nbsp; type, which, in case of invoking the {{{undo()}}}-function, will be provided as additional parameter. To summarise:
|!Function|>|!ret(params)|
| operation| void |(P1,..PN)|
| captureUndo| MEM |(P1,..PN)|
| undoOperation| void |(P1,..PN,MEM)|
| bind| void |(P1,..PN)|
Usually, parameters should be passed //by value// &mdash; with the exception of target object(s), which are typically bound as MObjectRef, causing them to be resolved at command execution time (late binding).
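As an illustration of this signature scheme, the functions for the {{{test.command1}}} example above might look as follows; the target type and the memento layout are made-up assumptions (real commands typically bind targets as MObjectRef, as noted above):
{{{
// Sketch only: implementation functions matching the signature scheme above,
// suitable for the "test.command1" definition shown earlier. Types and the
// memento layout are illustrative assumptions.
namespace command1 {

    struct Obj { int value; };                        // some target object (bound as first parameter)
    using  Memento = int;                             // MEM: the state captured for UNDO

    void operate (Obj& obj, int val1, int val2)       // void (P1,..PN)
    {
        obj.value += val1 + val2;                     // the actual mutation
    }

    Memento capture (Obj& obj, int, int)              // MEM  (P1,..PN)
    {
        return obj.value;                             // remember the state found before execution
    }

    void undoIt (Obj& obj, int, int, Memento mem)     // void (P1,..PN,MEM)
    {
        obj.value = mem;                              // play back the captured state
    }
}
}}}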

!Actual command definition scripts
The actual scripts bound as functors into the aforementioned command definitions are located in translation units in {{{steam/cmd}}}.
These definitions must be written in a way to ensure that just compiling those translation units causes registration of the corresponding command-~IDs.
This is achieved by placing a series of CommandSetup helper instances into those command-defining translation units.
Organising any ''mutating'' operation executable by the user (via GUI) by means of the [[command pattern|http://en.wikipedia.org/wiki/Command_pattern]] can be considered //state of the art//&nbsp; today. First of all, it allows us to distinguish the specific implementation operations to be called on one or several objects within the HighLevelModel from the operation requested by the user, the latter being rather a concept. A command can be labeled clearly, executed under controlled circumstances, allowing transactional behaviour.


!Defining commands
[>img[Structure of Commands|uml/fig134021.png]] Basically, a command could contain arbitrary operations, but we'll assume that it causes a well defined mutation within the HighLevelModel, which can be ''undone''. Thus, when defining (&rarr;[[syntax|CommandDefinition]]) a command, we need to specify not only a function to perform the mutation, but also another function which might be called later to reverse the effect of the action. Besides, the action operates on a number of ''target'' objects and additionally may require a set of ''parameter'' values. These are to be stored explicitly within the command object, thus creating a ''closure'' -- the operation //must not//&nbsp; rely on other hidden parameters (with the exception of generic singleton system services).

Theoretically, providing an "undo" functionality might draw on two different approaches:
* to specify an //inverse operation,// known to cancel out the effect of the command
* capturing of a //state memento,// which can later be played back in order to restore the state found prior to command execution.
While obviously the first solution looks elegant and is much simpler to implement on behalf of the command framework, the second solution has distinct advantages, especially in the context of an editing application: there might be rounding or calculation errors, the inverse might be difficult to define correctly, the effect of the operation might depend on circumstances, be random, or might even trigger a query resolution operation to yield the final result. Thus for Lumiera the decision is to //favour state capturing// -- but in a modified, semi-manual and not completely exclusive way.

!Undo state
While the usual »Memento« implementation might automatically capture the whole model (resulting in a lot of data to be stored and some uncertainty about the scope of the model to be captured), in Lumiera we rely instead on the client code to provide a ''capture function''&nbsp;and a ''playback function'' alongside with the actual operation. To help with this task, we provide a set of standard handlers for common situations. This way, operations might capture very specific information, might provide an "intelligent undo" to restore a given semantic instead of just a fixed value -- and moreover the client code is free actually to employ the "inverse operation" model in special cases where it just makes more sense than capturing state.

!Handling of commands
A command may be [[defined|CommandDefinition]] completely from scratch, or it might just serve as a CommandPrototype with specific targets and parameters. The command could then be serialised and later be recovered and re-bound with the parameters, but usually it will be handed over to the SteamDispatcher, pending execution. When ''invoking'', the handling sequence is to [[log the command|SessionStorage]], then call the ''undo capture function'', followed by calling the actual ''operation function''. After success, the logging and [[undo registration|UndoManager]] is completed. In any case, finally the ''result signal'' (a functor previously stored within the command) is emitted. {{red{10/09 WIP: not clear if we indeed implement this concept}}}

By design, commands are single-serving value objects; executing an operation repeatedly requires creating a collection of command objects, one for each invocation. While nothing prevents you from invoking the command operation functor several times, each invocation will overwrite the undo state captured by the previous invocation. Thus, each command instance should be seen as the promise (or later the trace) of a single operation execution. In a similar vein, the undo capturing should be defined so as to be self-sufficient, so that invoking just the undo functor of a single command performs any necessary steps to restore the situation found before invoking the corresponding mutation functor -- of course only //with respect to the topic covered by this command.// So, while commands provide a lot of flexibility and allow doing a multitude of things, certainly there is an intended CommandLifecycle.
&rarr; command [[definition|CommandDefinition]] and [[-lifecycle|CommandLifecycle]]
&rarr; more on possible [[command usage scenarios|CommandUsage]]
&rarr; more details regarding [[command implementation|CommandImpl]]
Commands are separated into a handle (the {{{control::Command}}}-object), to be used by the client code, and an implementation level, which is managed transparently behind the scenes. Client code is assumed to build a CommandDefinition at some point, and from then on to access the command ''by ID'', yielding the command handle.
Binding of arguments, invocation and UNDO all are accessible through this frontend.

!Infrastructure
To support this handling scheme, some infrastructure is in place:
* a command registry maintains the ID &harr; Command relation.
* indirectly, through a custom allocator, the registry is also involved in the allocation of the command implementation frame
* this implementation frame combines
** an operation mutation and an undo mutation
** a closure, implemented through an argument holder
** an undo state capturing mechanism, based on a capturing function provided on definition
* performing the actual execution is delegated to a handling pattern object, accessed by name.
;~Command-ID
:this ID is the primary access key for stored command definitions within the registry. When a command is //activated,// the command implementation record is also tagged with that ID; this is done for diagnostic purposes, e.g. to find out what commands in the command log of the session can be undone.
;prototypes
:while, technically, any command record in the registry can be outfitted with arguments and executed right away, the standard usage pattern is to treat the //globally known, named entries// in this registry as prototype objects, from which the actual //instances for execution// are created by cloning. This is done to circumvent concurrency problems with argument binding.
;named and anonymous instances
:any command entry in the registry can be clone-copied. There are two flavours of this functionality: either, the new entry can be stored under a different name in the global registry, or alternatively just an unnamed copy can be created and returned. Since such an anonymous copy is not tracked in the registry, its lifetime is controlled solely by the ref-count of the handle returned from this {{{Command::newInstance()}}} call. Please note that the {{{CommandImpl}}} record managed by this handle still bears a copy of the original ~Command-ID, which helps with diagnostics when invoking such an instance. But the presence of this ID in the implementation record does not mean the command is known to the registry; to find out about that, use the {{{Command::isAnonymous()}}} predicate (a usage sketch follows below).
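A rough usage sketch of this prototype scheme; the accessor {{{Command::get()}}} and the binding and invocation syntax are assumptions about the frontend's exact shape, while {{{newInstance()}}} and {{{isAnonymous()}}} are the calls described above:
{{{
// Sketch only: using a registered command definition as prototype and executing
// an anonymous clone. The stand-in Command struct below merely makes the usage
// self-contained; the real control::Command frontend differs in detail.
#include <cassert>
#include <string>

struct Command {
    bool anonymous = false;

    static Command get (std::string const&)  { return Command{}; }         // fetch named entry
    Command newInstance() const               { Command c{*this}; c.anonymous = true; return c; }
    bool    isAnonymous() const               { return anonymous; }

    template<typename... ARGS>
    Command& bind (ARGS&&...)                 { return *this; }             // close over arguments
    void operator()()                         { /* execute via a handling pattern */ }
    void undo()                               { /* play back the captured memento */ }
};

void addTrackExample (int trackOffset)
{
    Command proto = Command::get ("sequence.addTrack");   // hypothetical command-ID
    Command cmd   = proto.newInstance();                  // clone: avoids concurrent binding on the prototype
    assert (cmd.isAnonymous());

    cmd.bind (trackOffset);                               // arm the closure
    cmd();                                                // invoke the operation
    cmd.undo();                                           // ...and possibly revert it
}
}}}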

!Definition and usage
In addition to the technical specification regarding the command, memento and undo functors, some additional conventions are established
* Command scripts are defined in translation units in {{{steam/cmd}}}
* these reside in the corresponding namespace, which is typically aliased as {{{cmd}}}
* both command definition and usage include the common header {{{steam/cmd.hpp}}}
* the basic command-~IDs defined therein need to be known by the UI elements using them
//This page is a scrapbook to collect observations about command invocation in the UI//
{{red{2/2017}}} the goal is to shape some generic patterns of InteractionControl (&rarr; GuiCommandBinding, &rarr; GuiCommandCycle)

!Add Sequence
The intention is to add a new sequence //to the current session.//
As discussed in &rarr; ModelDependencies, there is quite some degree of magic involved in such a simple activity, depending on the circumstances
* a Session always has at least one Timeline; even a default-created Session has
* when the Session is just //closed,// the UI becomes mostly disabled, with the exception of {{{Help}}}, {{{Quit}}} and {{{New Session}}}...
* what this operation //actually means,// depends on the user's expectations. Here "the user" is an abstracted user we conceive, since there are no real-world users of Lumiera yet.
** when just invoking "add Sequence", the user wants a new playground
** when, on the other hand, the user right-clicks a scope-fork ("track" or "media bin"), she wants a new sequence made out of the contents of this fork
* in both cases, after creating the new Sequence, it should become visible in the UI -- <br/>which means we have to set up a new timeline too, taking a copy of the currently active timeline. (When the latter is just pristine, we want the new one to take its place)
At this point, it seems adequate to limit "add Sequence" to the first use case, and name the second one as "make Sequence".
The latter will probably be used within a context menu, but don't forget that actions can be bound to keys...

//add Sequence// thus means a global command, which can be issued against the current session with no further arguments...
* it will fabricate a new fork (appearing as track in this context)
* it will //root-attach// this fork, i.e. it will [[place|Placement]] it as child of the ModelRootMO scope, and this act automagically creates a new [[Sequence]] asset
* it will fabricate a new BindingMO as a prototype copy from the currently active one, and likewise root-attach it, which magically creates a new [[Timeline]] asset
* it will create a new ''focus goal'' ({{red{TODO new term coined 2/2017}}}), which should lead to activating the corresponding tab in the timeline pane, once this exists...

Now, while all of this means a lot of functionality and complexity down in Steam-Layer, regarding the UI this action is quite simple: it is offered as an operation on the InteractionDirector, which most conveniently also corresponds to "the session as such" and thus can fire off the corresponding command without much further ado. On a technical level we just somehow need to know the corresponding command ID in Steam, which is not subject to configuration, but rather requires some kind of well behaved hard-wiring. And, additionally, we need to build something like the aforementioned //focus goal...// {{red{TODO}}}

!Add Track
Here the intention is to add a new scope //close to where we "are" currently.//
If the currently active element is something within a scope, we want the new scope as a sibling, otherwise we want it as a child, but close at hand.
So, for the purpose of this analysis, the "add Track" action serves as an example where we need to pick up the subject of the change from context...
* the fact there is always a timeline and a sequence, also implies there is always a fork root (track)
* so this operation basically adds to a //"current scope"// -- or next to it, as sibling
* this means, the UI logic has to provide a //current model element,// while the details of actually selecting a parent are decided elsewhere (in Steam-Layer, in rules)
[<img[Structure of Commands|uml/fig135173.png]]
While generally the command framework was designed to be flexible and allow a lot of different use cases, employ different execution paths and to serve various goals, there is an ''intended lifecycle'' &mdash; commands are expected to go through several distinct states.

The handling of a command starts out with a ''command ID'' provided by the client code. Command ~IDs are unique (human readable) identifiers and should be organised in a hierarchical fashion. When provided with an ID, the CommandRegistry tries to fetch an existing command definition. In case this fails, we enter the [[command definition stage|CommandDefinition]], which includes specifying functions to implement the operation, state capturing and UNDO. When all of this information is available, the entity is called a ''command definition''. Conceptually, it is comparable to a //class// or //meta object.//

By ''binding'' to specific operation arguments, the definition is //armed up//&nbsp; and becomes a real ''command''. This is similar to creating an instance from a class. Behind the scenes, storage is allocated to hold the argument values and any state captured to create the ability to UNDO the command's effect later on.

A command is operated or executed by passing it to an ''execution pattern'' &mdash; there is a multitude of possible execution patterns to choose from, depending on the situation. 
{{red{WIP... details of ~SteamDispatcher not specified yet}}}

When a command has been executed (and maybe undone), it's best to leave it alone, because the UndoManager might hold a reference. Anyway, a ''clone of the command'' could be created, maybe bound with different arguments and treated separately from the original command.

!State predicates
* fetching a non-existent command raises ~LUMIERA_ERROR_INVALID_COMMAND
* a command definition becomes //valid// ({{{bool true}}}) when all necessary functions are specified. Technically this coincides with the creation of a CommandImpl frame behind the scenes, which also causes the Command (frontend/handle object) to evaluate to {{{true}}} in bool context from then on.
* when, in addition to the above, the command arguments are bound, it becomes //executable.//
* after the (first) execution, the command also becomes //undo-able.//
State predicates are accessible through the Command (frontend); additionally there are static query functions in class {{{Command}}}
//Helper facility to ease the creation of actual command definitions.//
A [[Steam-Layer command|CommandHandling]] is a functor, which can be parametrised with actual arguments. It needs to be [[defined|CommandDefinition]] beforehand, which means to establish an unique name and to supply three functions, one for the actual command operation, one to capture state and one to [[UNDO]] the effect of the command invocation.

The helper class {{{CommandSetup}}} allows creating series of such definitions with minimal effort. Since any access and mutation from the UI into the Session data must be performed by invoking such commands, a huge number of individual command definitions need to be written eventually. These are organised into a series of implementation translation units with location {{{steam/cmd/*-cmd.cpp}}}.

Each of these files is specialised to defining a set of thematically related commands, supplying the code for the actual command scripts. Each definition is introduced by a single line with the macro {{{COMMAND_DEFINITION(name)}}}, followed by a code block, which actually ends up as the body of a lambda function, and receives the bare [[CommandDef|CommandDefinition]] as single argument with name {{{cmd}}}. The {{{name}}} argument of the macro ends up both stringified as the value of the command-ID, and as an identifier holding a new {{{CommandSetup}}} instance. It is assumed that a header with //corresponding declarations// (the header {{{cmd.hpp}}}) is included by all UI elements actually to use, handle and invoke commands towards the SessionSubsystem
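Based on this description, such a command-defining translation unit might look roughly like the following sketch; the command-ID, the script functions and the namespace layout are made-up placeholders:
{{{
// Sketch only: the rough shape of one of those *-cmd.cpp translation units.
// The command-ID "test_scratchpad" and the script functions are placeholders;
// the exact macro expansion and namespace layout may differ.
#include "steam/cmd.hpp"                 // common header with the corresponding declarations

namespace steam {
namespace cmd {
  namespace {                            // the actual command script functions
      void doIt    (int val)             { /* mutate the session */ (void)val; }
      int  capture (int)                 { return 0; /* remember prior state */ }
      void undoIt  (int, int memento)    { /* restore prior state */ (void)memento; }
  }

  COMMAND_DEFINITION (test_scratchpad)
    {
      cmd.operation (doIt)               // `cmd` is the bare CommandDef passed into the block
         .captureUndo (capture)
         .undoOperation (undoIt);
    };

}} // namespace steam::cmd
}}}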
//for now (7/09) I'll use this page to collect ideas how commands might be used...//

* use a command for getting a log entry and an undo possibility automatically
* might define, bind and then execute a command at once
* might define it and bind it to a standard set of parameters, to be used as a prototype later.
* might just create the definition, leaving the argument binding to the actual call site
* execute it and check the success/failure result
* just enqueue it, without caring for successful execution
* place it into a command sequence bundle
* repeat the execution

!!!a command definition....
* can be created from scratch, by ID
* can be re-accessed, by ID
* can't be modified once defined (this is to prevent duplicate definitions with the same ID)
* but can be dropped, which doesn't influence already existing dependent command instances
* usually will be the starting point for creating an actual command by //binding//

!!!a command instance....
* normally emerges from a definition by binding arguments
* the first such binding will create a named command registration
* but subsequent accesses by ID or new bindings create an anonymous clone
* which in turn might then be registered explicitly with a new ID
* anonymous command instances are managed by referral and ref-counting

!!!an execution pattern....
* can only be defined in code by a class definition, not at runtime
* subclasses the ~HandlingPattern interface and uses a predefined ID (enum).
* a singleton instance is created on demand, triggered by referring the pattern's ID
* is conceptually //stateless// &mdash; of course there can be common configuration values
* is always invoked providing a concrete command instance to execute
* is configured into the command instance, to implement the command's invocation
* returns a duck-typed //result// object


!command &harr; interaction
The User Interface does not just trigger commands -- rather it performs //user interactions.//
An interaction is formed like a sentence of spoken language, which means, there is a process of forming such a sentence, giving rise to [[interaction state|InteractionState]]. In many cases, the actual arguments of a command invocation are to be drawn from the current context. Thus, GuiCommandBinding is a way more elaborate topic, while it builds upon the fundamentals defined here...
The Connection Manager is a service for wiring connections and for querying information and deriving decisions regarding various aspects of data streams and the possibility of connections. The purpose of the Connection Manager is to isolate the [[Builder]], which is the client of these information and decision services, from the often convoluted details of type information and organizing a connection.

!control connections
My intention was that it would be sufficient for the builder to pass a connection request, and the Connection Manager will handle the details of establishing a control/parameter link.
{{red{TODO: handling of parameter values, automation and control connections still need to be designed}}}

!data connections
Connecting data streams of differing type involves a StreamConversion. Mostly, this aspect is covered by the [[stream type system|StreamType]]. The intended implementation will rely partially on [[rules|ConfigRules]] to define automated conversions, while other parts need to be provided by hard wired logic. Thus, regarding data connections, the ConManager can be seen as a specialized Facade and will delegate to the &rarr; [[stream type manager|STypeManager]]
* retrieve information about capabilities of a stream type given by ID 
* decide if a connection is possible
* retrieve a //strategy// for implementing a connection
This index refers to the conceptual, more abstract and formally specified aspects of the Steam-Layer and Lumiera in general.
More often than not, these emerge from immediate solutions which are perceived as especially expressive and, when taken on, yield guidance by themselves. Some others, [[Placements|Placement]] and [[Advice]] to mention here, immediately substantiate the original vision.
Configuration Queries are requests to the system to "create or retrieve an object with //this and that // capabilities". They are resolved by a rule-based system ({{red{planned feature}}}) and the user can extend the used rules for each Session. Syntactically, they are stated in ''prolog'' syntax as a conjunction (=logical and) of ''predicates'', for example {{{stream(mpeg), pipe(myPipe)}}}. Queries are typed to the kind of expected result object: {{{Query<Pipe> ("stream(mpeg)")}}} requests a pipe accepting/delivering mpeg stream data &mdash; and it depends on the current configuration what "mpeg" means. If there is any stream data producing component in the system, which advertises to deliver {{{stream(mpeg)}}}, and a pipe can be configured or connected with this component, then the [[defaults manager|DefaultsManagement]] will create/deliver a [[Pipe|PipeHandling]] object configured accordingly.
&rarr; [[Configuration Rules system|ConfigRules]]
&rarr; [[accessing and integrating configuration queries|ConfigQueryIntegration]]
* planning to embed a YAP Prolog engine
* currently just integrated by a table driven mock
* the baseline is a bit more clear by now (4/08)

&rarr; see also ConfigRules
&rarr; see also DefaultsManagement

!Use cases
[<img[when to run config queries|uml/fig131717.png]]

The key idea is that there is a Rule Base &mdash; partly contained in the session (building on a stock of standard rules supplied with the application). Now, whenever there is the need to get a new object, for adding it to the session or for use associated with another object &mdash; then instead of creating it by a direct hard-wired ctor call, we issue a ConfigQuery requesting an object of the given type with some //capabilities// defined by predicates. The same holds true when loading an existing session: some objects won't be loaded back blindly, rather they will be re-created by issuing the config queries again. An especially important use case is (re)creating a [[processing pattern|ProcPatt]] to guide the wiring of a given media's processing pipeline.

At various places, instead of requiring a fixed set of capabilities, it is possible to request a "default configured" object instead, specifying just those capabilities we really need to be configured in a specific way. This is done by using the [[Defaults Manager|DefaultsManagement]] accessible on the [[Session]] interface. Such a default object query may either retrieve an already existing object instance, run further config queries, and finally result in the invocation of a factory for creating new objects &mdash; guarded by rules to suit current necessities.

@@clear(left):display(block):@@

!Components and Relations
[>img[participating classes|uml/fig131461.png]]

<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>

Access point is the interface {{{ConfigRules}}}, which allows resolving a ConfigQuery, resulting in an object with properties configured such as to fulfill the query. This whole subsystem employs quite some generic programming, because actually we don't deal with "objects", but rather with similar instantiations of the same functionality for a collection of different object types. For the purpose of resolving these queries, the actual kind of object is not so much of importance, but on the caller side, of course, we want to deal with the result of the queries in a typesafe manner.
Examples for //participating object kinds// are [[pipes|Pipe]], [[processing patterns|ProcPatt]], effect instances, [[tags|Tag]], [[labels|Label]], [[automation data sets|AutomationData]],...
@@clear(right):display(block):@@
For this to work, we need each of the //participating object types// to provide the implementation of a generic interface {{{TypeHandler}}}, which allows access to actual C/C++ implementations for the predicates usable on objects of this type within the Prolog rules. The implementation has to make sure that, alongside each config query, additional //type constraints// are regarded. For example, if the client code runs a {{{Query<Pipe>}}}, an additional //type guard// (implemented by a predicate {{{type(pipe)}}}) has to be inserted, so only rules and facts in accordance with this type will be used for resolution.
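
To illustrate the idea (just a sketch with made-up names, not the actual interface): each participating type contributes functors implementing its capability predicates, keyed by predicate name, plus the type guard mentioned above.
{{{
// Illustrative sketch only -- each participating object kind registers
// functors implementing the capability predicates usable on that kind,
// plus a type guard added implicitly to every query.
#include <functional>
#include <map>
#include <string>

template<class TY>
struct TypeHandlerSketch
  {
    using Predicate = std::function<bool(TY&, std::string const& arg)>;

    std::map<std::string, Predicate> predicates_;  // e.g. "stream" -> check the stream type
    std::string typeGuard_;                        // e.g. "type(pipe)"

    bool
    evaluate (std::string const& pred, TY& obj, std::string const& arg)  const
      {
        auto it = predicates_.find (pred);
        return it != predicates_.end()
           and it->second (obj, arg);
      }
  };
}}}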


!when querying for a [["default"|DefaultsManagement]] object
[<img[collaboration when issuing a defaults query|uml/fig131845.png]]

@@clear(left):display(block):@@
Many features can be implemented by specifically configuring and wiring some unspecific components. Rather than tie the client code in need of some given feature to these configuration internals, in Lumiera the client can //query // for some kind of object providing the //needed capabilities. // Right from the start (summer 2007), Ichthyo had the intention to implement such a feature using sort of a ''declarative database'', e.g. by embedding a Prolog system. By adding rules to the basic session configuration, users should be able to customize the semi-automatic part of Lumiera's behaviour to a great extent.

[[Configuration Queries|ConfigQuery]] are used at various places, when creating and adding new objects, as well as when building or optimizing the render engine node network.
* Creating a [[pipe|PipeHandling]] queries for a default pipe or a pipe with a certain stream type
* Adding a new [[fork ("track")|TrackHandling]] queries for some default placement configuration, e.g. the pipe it will be plugged to.
* when processing a [[wiring request|WiringRequest]], connection possibilities have to be evaluated.
* actually building such a connection may create additional degrees of freedom, like panning for sound or layering for video.

!anatomy of a Configuration Query
The query is given as a number of logic predicates, which are required to be true. Syntactically, it is a string in prolog syntax, e.g. {{{stream(mpeg)}}}, where "stream" is the //predicate, // meaning here "the stream type is...?" and "mpeg" is a //term // denoting an actual property, object, thing, number etc &mdash; the actual kind of stream in the given example. Multiple comma separated predicates are combined with logical "and". Terms may be //variable// at the start, which is denoted syntactically by starting them with an uppercase letter. But, through the computation of a solution, any variable term needs to be //bound//&nbsp;  to some known fixed value, otherwise the query counts as failed. A failed query is treated as a local failure, which may cause some operation to be aborted or just some other possibility to be chosen.
Queries are represented by instantiations of the {{{Query<TYPE>}}} template, because their actual meaning is "retrieve or create an object of TYPE, configured such that...!". At the C++ side, this ensures type safety and fosters programming against interfaces, while on the rules side it is implemented by silently prepending the query with the predicate {{{object(X, type)}}}, which reads as "X is an object with this type".
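
The following toy snippet just illustrates the effect of such an implicit type guard, using the {{{type(pipe)}}} flavour mentioned above (hypothetical helper, not actual Lumiera code):
{{{
// a Query<Pipe>("stream(mpeg)") would be resolved as if the client had
// written "type(pipe), stream(mpeg)" -- only rules and facts matching
// the pipe type participate in the resolution.
#include <string>
#include <iostream>

std::string
effectiveGoal (std::string const& typeName, std::string const& queryString)
{
  return "type(" + typeName + "), " + queryString;
}

int
main ()
{
  std::cout << effectiveGoal ("pipe", "stream(mpeg)") << '\n';
  // prints: type(pipe), stream(mpeg)
}
}}}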

!!!querying for default
A special kind of configuration query includes the predicate {{{default(X)}}}, which is a kind of existential quantification of the variable {{{X}}}. Using this predicate states that a suitable //default-configured// object exists somehow. This behaviour can be used as a fallback within a config query, allowing us always to return a solution. The latter can be crucial when it comes to integrating the query subsystem into an existing piece of implementation logic, which requires us to give some guarantees.
&rarr; see DefaultsManagement on details how to access these captured defaults

But note, the exact relation of configuration queries and query-for-default still needs to be worked out, especially when to use which flavour. 

! {{red{WIP 2012}}} front-end and integration
The overall architecture of this rule-based configuration system remains to be worked out:
* the generic front-end to unite all kinds of configuration queries
* how to configure the various [[query resolvers|QueryResolver]]
* how to pick and address a suitable resolver for a given query situation
* the taxonomy of queries:
** capability queries
** retrieval queries
** defaults queries
** discovery queries
** connection queries
** ...
{{red{WIP 10/2010 &rArr; see Ticket [[#705|http://issues.lumiera.org/ticket/705]]}}}

!executing a Configuration Query
Actually posing such a configuration query, for example to the [[Defaults Manager in the Session|DefaultsManagement]], may trigger several actions: First it is checked against internal object registries (depending on the target object type), which may cause the delivery of an already existing object (as reference, clone, or smart pointer). Otherwise, the system tries to figure out a viable configuration for a newly created object instance, possibly by issuing recursive queries. In the most general case this may silently impose additional decisions onto the //execution context // of the query &mdash; by default the session.

!Implementation
At start and for debugging/testing, there is a ''dummy'' implementation using a map with predefined queries and answers. But for the real system, the idea is to embed a ''YAP Prolog'' engine to run the queries. This includes the task of defining and loading a set of custom predicates, so the rule system can interact with the object oriented execution environment, for example by transforming some capability predicate into virtual calls to a corresponding object interface. We need a way for objects to declare some capability predicates, together with a functor that can be executed on an object instance (and further parameters) in the course of evaluating some configuration query. Type safety and diagnostics play an important role here, because effectively the rule base is a pool of code open for arbitrary additions from the user session.
&rarr; [[considerations for a Prolog based implementation|QueryImplProlog]]
&rarr; [[accessing and integrating configuration queries|ConfigQueryIntegration]]
&rarr; see {{{src/common/query/mockconfigrules.cpp}}} for the table with the hard wired (mock) answers
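
Roughly, the mock resolution boils down to a lookup in a table of canned answers -- something along these lines (illustrative sketch only, the real table lives in {{{mockconfigrules.cpp}}}):
{{{
#include <map>
#include <string>
#include <optional>

class MockConfigRulesSketch
  {
    std::map<std::string,std::string> answer_;   // query goal -> object ID
    
  public:
    MockConfigRulesSketch()
      {
        // preconfigured queries and answers (example entries)
        answer_["stream(mpeg), type(pipe)"] = "mpeg-master";
        answer_["default(X), type(pipe)"]   = "default-pipe";
      }
    
    std::optional<std::string>
    resolve (std::string const& goal)  const
      {
        auto it = answer_.find (goal);
        if (it == answer_.end()) return std::nullopt;
        return it->second;
      }
  };
}}}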

!!!fake implementation guidelines
The fake implementation should follow the general pattern planned for the Prolog implementation, that is: find, query, create, query. Thus, such a config query can be considered existentially quantified. The interplay with the factories is tricky, because factories might issue config queries themselves, while a factory invocation driven from within this rule pattern //must not// re-invoke the factory. Besides, a factory invocation should create, not fetch an existing solution (?)

{{red{WARN}}} there is an interference with the (planned) Undo function: a totally correct implementation of "Undo" would need to remember and restore the internal state of the query system (similar to backtracking). But, more generally, such a correct implementation is not achievable, because we are never able to capture and control the whole state of a real world system doing such advanced things as video and sound processing. Seemingly we have to accept that after undoing an action, there is no guarantee we can re-do it the same way as it was done the first time.

The Render Engine is the part of the application doing the actual video calculations. Built on top of system level services and retrieving raw audio and video data through [[Lumiera's Vault Layer|Vault-Layer]], its operations are guided by the objects and parameters edited by the user in [[the session|Session]]. The //middle layer// of the Lumiera architecture, known as the Steam-Layer, spans the area between these two extremes, providing the (abstract) edit operations available to the user, the representation of [["editable things"|MObjects]] and the translation of those into structures and facilities allowing to [[drive the rendering|Rendering]].

!About this wiki page
|background-color:#e3f3f1;width:96ex;padding:2ex; This TiddlyWiki is the central location for design, planning and documentation of the Core. Some parts are used as //extended brain// &mdash; collecting ideas, considerations and conclusions &mdash; while other tiddlers contain the decisions and document the planned or implemented facilities. The intention is to move over the more mature parts into the emerging technical documentation section on the [[Lumiera website|http://www.lumiera.org]] eventually. <br/><br/>Besides cross-references, content is largely organised through [[Tags|TabTags]], most notably <br/><<tag overview>> &middot; <<tag def>> &middot; <<tag decision>> &middot; <<tag spec>> &middot; <<tag Concepts>> &middot; <<tag Architecture>> &middot; <<tag GuiPattern>> <br/> <<tag Model>> &middot; <<tag SessionLogic>> &middot; <<tag GuiIntegration>> &middot; <<tag Builder>> &middot; <<tag Rendering>> &middot; <<tag Player>> &middot; <<tag Rules>> &middot; <<tag Types>> |

!~Steam-Layer Summary
When editing, the user operates several kinds of //things,// organized as [[assets|Asset]] in the AssetManager, like media, clips, effects, codecs, configuration templates. Within the context of the [[Project or Session|Session]], we can use these as &raquo;[[Media Objects|MObjects]]&laquo; &mdash; especially, we can [[place|Placement]] them in various kinds within the session and relative to one another.

Now, from any given configuration within the session, we create sort of a frozen and tied-down snapshot, here called &raquo;[[Fixture|Fixture]]&laquo;, containing all currently active ~MObjects, broken down to elementary parts and made explicit if necessary. This Fixture acts as an isolation layer towards the Render Engine. We will hand it over to the [[Builder]], which in turn will transform it into a network of connected [[render nodes|ProcNode]]. This network //implements//&nbsp; the [[Render Engine|OverviewRenderEngine]].

The system is ''open'' inasmuch as every part mirrors the structure of corresponding parts in adjacent subsystems, and the transformation of any given structure from one subsystem (e.g. Asset) to another (e.g. Render Engine) is done with minimal "magic". So the whole system should be able to handle completely new structures mostly by adding new configurations and components, without much need of rewriting basic workings.


!!see also
&rarr; [[Overview]] of Subsystems and Components, and DesignGoals
&rarr; [[An Introduction|WalkThrough]] discussing the central points of this design
&rarr; [[Overview Session (high level model)|SessionOverview]]
&rarr; [[Overview Render Engine (low level model)|OverviewRenderEngine]]
&rarr; BuildProcess and RenderProcess
&rarr; how [[Automation]] works
&rarr; [[Problems|ProblemsTodo]] to be solved and notable [[design decisions|DesignDecisions]]
&rarr; [[Concepts, Abstractions and Formalities|Concepts]]
&rarr; [[Implementation Details|ImplementationDetails]] {{red{WIP}}}

&rarr; ''Help''/Documentation of [[TiddlyWiki-Markup|https://classic.tiddlywiki.com/#HelloThere%20%5B%5BHeadings%20Formatting%5D%5D%20%5B%5BBasic%20Formatting%5D%5D%20%5B%5BCode%20Formatting%5D%5D%20%5B%5BCSS%20Formatting%5D%5D%20%5B%5BHorizontal%20Rule%20Formatting%5D%5D%20%5B%5BHTML%20Entities%20Formatting%5D%5D%20%5B%5BHTML%20Formatting%5D%5D%20HtmlEntities%20%5B%5BImage%20Formatting%5D%5D%20%5B%5BLine%20Break%20Formatting%5D%5D%20%5B%5BLink%20Formatting%5D%5D%20%5B%5BList%20Formatting%5D%5D%20PeriodicTable%20PlainText%20PluginFormatting%20%5B%5BQuotations%20Formatting%5D%5D%20%5B%5BSuppressing%20Formatting%5D%5D%20%5B%5BTables%20Formatting%5D%5D%20TiddlerComments]]

''special service connected to the UI-Bus to handle all messages touching core concerns''
In particular, this service handles the {{{act}}} messages to deal with commands [[operating on the Session|CommandHandling]]. And it deals with "uplink" {{{note}}} messages to record ongoing PresentationState changes. For all the other UI-Element nodes connected to the bus, CoreService is just assumed to be there, yet it is not known by ID, nor can it be addressed directly, like regular ~UI-Elements can.

On an operational level, CoreService additionally serves as lifecycle manager for the whole UI backbone, insofar it maintains the central bus hub, the Nexus. CoreService is instantiated (as a PImpl) when the UI subsystem starts up, and it is destroyed when leaving the UI main loop. At this point, the automatic registration and deregistration mechanism for bus terminals and ~UI-Elements must have cleared all connections in the central routing table (within Nexus).
The question is where to put all the state-like information [[associated with the current session|SessionOverview]], because this is certainly "global", but may depend on the session or need to be configured differently when loading another session. At the moment (9/07) Ichthyo considers the following solution:
* represent all configuration as [[Asset]]s
* find a way {{red{TODO}}} how to reload the contents of the [[AssetManager]].
* completely hide the Session object behind a ''~PImpl'' smart pointer, so the session object can be switched when reloading.
* the [[Fixture]] acts as isolation layer, and all objects referred to from the Fixture are held by refcounting smart pointers. So, even when the session gets switched, the old objects remain valid as long as needed.
A ''frame of data'' is the central low-level abstraction when dealing with media data and media processing.

Deliberately we avoid relying on any special knowledge regarding such data frames, beyond the fact
* that a frame resides within a ''memory buffer'' &rarr; [[buffer provider abstraction|BufferProvider]]
* that the frame has a ''frame number'', which can be related to a ''time span'' &rarr; [[time quantisation framework|TimeQuant]]
* that the frame belongs to an abstract media stream, which can be described through our &rarr; [[stream type system|StreamType]]
* and that some external ''media handling library'' knows to deal with the data in this frame
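
Taken together, a frame could be characterised by a descriptor like the following (purely hypothetical, just summarising the points above):
{{{
#include <cstdint>
#include <cstddef>

struct FrameDescriptorSketch
  {
    void*    buffer;        // provided through the BufferProvider abstraction
    size_t   bufferSize;
    int64_t  frameNr;       // can be related to a time span (time quantisation framework)
    uint32_t streamTypeID;  // key into the stream type system
  };
}}}
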
[[CoreDevelopment]]
[[GuiTopLevel]]
[[Session]]
As detailed in the [[definition|DefaultsManagement]], {{{default(Obj)}}} is sort of a Joker along the lines of "give me a suitable Object and I don't care for further details". Actually, default objects are implemented by the {{{mobject::session::DefsManager}}}, which remembers and keeps track of anything labeled as "default". This defaults manager is a singleton and can be accessed via the [[Session]] interface, meaning that the memory trail regarding defaults is part of the session state. Accessing an object via a query for a default actually //tags// this object (storing a weak ref in the ~DefsManager). Alongside each object successfully queried via "default", the degree of constriction is remembered, i.e. the number of additional conditions contained in the query. This enables us to search for default objects starting with the most unspecific.

!Skeleton
# ''search'': using the predicate {{{default(X)}}} enumerates existing objects of suitable type
#* candidates are delivered starting with the least constrained default
#* the argument is unified
#** if the rest of the query succeeds we've found our //default object// and are happy.
#** otherwise, if all enumerated solutions are exhausted without success, we enter
# ''default creation'': try to get an object fulfilling the conditions and remember this situation
#* we issue a ConfigQuery with the query terms //minus// the {{{default(X)}}} predicate
#* it depends on the circumstances how this query is handled. Typically the query resolution first searches existing objects and then creates a new instance to match the required capabilities. Usually, this process succeeds, but there can be configurations leading to failure.
#** failing the ~ConfigQuery is considered a (non-critical) exception (throws), as defaults queries are supposed to succeed
#** otherwise, the newly created object is remembered (tagged) as new default, together with the degree of constriction

!!!Implementation details
Taken precisely, the "degree of constriction" yields only a partial ordering &mdash; but as the "default"-predicate is sort of an existential quantification anyway, its sole purpose is to avoid polluting the session with unnecessary default objects, and we don't need to care for absolute precision. A suitable approximation is to count the number of predicate terms in the query and use a (sorted) set (separate for each Type) to store weak refs to the objects tagged as "default".
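A sketch of such a storage scheme (illustrative, hypothetical names):
{{{
#include <memory>
#include <set>
#include <utility>

template<class TY>
class DefaultsRegistrySketch
  {
    using Entry = std::pair<unsigned, std::weak_ptr<TY>>;   // (degree of constriction, object)
    
    struct LessDegree
      {
        bool operator() (Entry const& a, Entry const& b)  const
          { return a.first < b.first; }
      };
    
    std::multiset<Entry, LessDegree> defaults_;
    
  public:
    /** tag an object as "default", remembering the number of predicate terms */
    void
    tag (std::shared_ptr<TY> const& obj, unsigned degreeOfConstriction)
      {
        defaults_.insert (Entry{degreeOfConstriction, obj});
      }
    
    /** enumerate candidates, least constrained first, until one satisfies the check */
    template<class FUN>
    void
    forEachCandidate (FUN check)  const
      {
        for (auto const& entry : defaults_)
          if (auto obj = entry.second.lock())
            if (check (*obj))
              return;
      }
  };
}}}
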
{{red{WARN}}} there is an interference with the (planned) Undo function. This is a general problem of the config queries; just ignoring this issue seems reasonable.

!!!Problems with the (preliminary) mock implementation
As we don't have a Prolog interpreter on board yet, we utilize a mock store with preconfigured answers (see MockConfigQuery). As this preliminary solution lacks the ability to create new objects, we need to resort to some trickery here (please look away). The overall logic is quite broken, not to say outright idiotic, because the system isn't capable of doing any real resolution &mdash; if we ignore this fact, the rest of the algorithm can be implemented, tested and used right now.
For several components and properties there is an implicit default value or configuration; it is stored alongside the session. The intention is that defaults never create an error; instead, they are to be extended silently on demand. Objects configured according to these defaults can be retrieved at the [[Session]] interface by a set of overloaded functions {{{Session::current->default(Query<TYPE> ("query string"))}}}, where the //query string // defines a capability query similar to what is employed for pipes, stream types, codecs etc. This query mechanism is implemented by [[configuration rules|ConfigRules]].

!!!!what is denoted by {{{default}}}?
{{{default(Obj)}}} is a predicate expressing that the object {{{Obj}}} can be considered the default setup under the given conditions. Using the //default// can be considered a shortcut for actually finding an exact and unique solution. The latter would require specifying all sorts of detailed properties, up to the point where only one single object can satisfy all conditions. On the other hand, leaving some properties unspecified would yield a set of solutions (and the user code issuing the query would have to provide means for selecting one solution from this set). Just falling back on the //default// means that the user code actually doesn't care for any additional properties (as long as the properties it //does// care for are satisfied). Nothing is said specifically on //how//&nbsp; this default gets configured; actually there can be rules //somewhere,// and, additionally, anything encountered once while asking for a default can be re-used as default under similar circumstances.
&rarr; [[implementing defaults|DefaultsImplementation]]
//Access point to dependencies by-name.//
In the Lumiera code base, we refrain from building or using a full-blown Dependency Injection Container. Rather, we rely on a generic //Singleton Factory// -- which can be augmented into a //Dependency Factory// for those rare cases where we actually need more instance and lifecycle management beyond lazy initialisation. Client code indicates the dependence on some other service by planting an instance of that Dependency Factory (for Lumiera this is {{{lib::Depend<TY>}}}). The //essence of a "dependency"// of this kind is that we ''access a service //by name//''. And this service name or service ID is in our case a //type name.//

&rarr; see the Introductory Page about [[Dependencies|http://lumiera.org/documentation/technical/library/Dependencies.html]] in Lumiera online documentation.

!Requirements
Our DependencyFactory satisfies the following requirements
* client code is able to access some service //by-name// -- where the name is actually the //type name// of the service interface.
* client code remains agnostic with regard to the lifecycle or backing context of the service it relies on
* in the simplest (and most prominent) case, //nothing// has to be done at all by anyone to manage that lifecycle.<br/>By default, the Dependency Factory creates a singleton instance lazily (heap allocated) on demand and ensures thread-safe initialisation and access.
* we establish a policy to ''disallow any significant functionality during application shutdown''. After leaving {{{main()}}}, only trivial dtors are invoked and possibly a few resource handles are dropped. No filesystem writes, no clean-up and reorganisation, not even any logging is allowed. For this reason, we established a [[Subsystem]] concept with explicit shutdown hooks, which are invoked beforehand.
* the Dependency Factory can be re-configured for individual services (type names) to refer to an explicitly installed service instance. In those cases, access while the service is not available will raise an exception. There is a simple one-shot mechanism to reconfigure Dependency Factory and create a link to an actual service implementation, including automatic deregistration.

!!!Configuration
The DependencyFactory and thus the behaviour of dependency injection can be reconfigured, ad hoc, at runtime.
Deliberately, we do not enforce global consistency statically (since that would lead to one central static configuration). However, a runtime sanity check is performed to ensure configuration actually happens prior to any use, which means any invocation to retrieve (and thus lazily create) the service instance. The following flavours can be configured:
;default
:a singleton instance of the designated type is created lazily, on first access
:define an instance for access (preferably static) {{{Depend<Blah> theBla;}}}
:access the singleton instance as {{{theBla().doIt();}}}
;singleton subclass
:{{{DependInject<Blah>::useSingleton<SubBlah>() }}}
:causes the dependency factory {{{Depend<Blah>}}} to create a {{{SubBlah}}} singleton instance from now on
;attach to service
:{{{DependInject<Blah>::ServiceInstance<SubBlah> service{p1, p2, p3}}}}
:* build and manage an instance of {{{SubBlah}}} in heap memory immediately (not lazily)
:* configure the dependency factory to return a reference //to this instance//
:* the generated {{{ServiceInstance<SubBlah>}}} object itself acts as lifecycle handle (and managing smart-ptr)
:* when it is destroyed, the dependency factory is automatically cleared, and further access will trigger an error
;support for test mocking
:{{{DependInject<Blah>::Local<SubBlah> mock }}}
:temporarily shadows whatever configuration resides within the dependency factory
:the next access will create a (non singleton) {{{SubBlah}}} instance in heap memory and return a {{{Blah&}}}
:the generated object again acts as lifecycle handle and smart-ptr to access the {{{SubBlah}}} instance like {{{mock->doItSpecial()}}}
:when this handle goes out of scope, the original configuration of the dependency factory is restored
;custom constructors
:both the subclass singleton configuration and the test mock support optionally accept a functor or lambda argument with signature {{{SubBlah*()}}}.
:the contract is for this construction functor to return a heap allocated object, which will be owned and managed by the DependencyFactory.
:especially this enables use of subclasses with non default ctor and / or binding to some additional hidden context.
:please note //that this closure will be invoked later, on-demand.//
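
Putting these flavours together, typical usage might look like this (sketch based on the forms listed above; namespaces and headers omitted):
{{{
lib::Depend<Blah> theBla;                     // plant the factory (preferably static)

void
useIt()
  {
    theBla().doIt();                          // lazy singleton access
  }

void
testWithMock()
  {
    DependInject<Blah>::Local<SubBlah> mock;  // temporarily shadow the regular configuration
    mock->doItSpecial();                      // access the mock instance directly
    theBla().doIt();                          // ...while clients now get the SubBlah instance
  }                                           // leaving scope restores the original configuration
}}}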

We consider the usage pattern of dependencies rather a question of architecture -- it cannot be solved by any mechanism at the implementation level.
For this reason, Lumiera's Dependency Factory prevents reconfiguration after use, but does nothing exceeding such basic sanity checks.

!!!Performance considerations
To gain insight into the rough proportions of the performance impact, in 2018 we conducted some micro benchmarks (using an 8-core AMD ~FX-8350 64bit CPU running Debian/Jessie and the GCC 4.9 compiler).
The following table lists averaged results //in relative numbers,// in relation to a single threaded optimised direct non virtual member function invocation (≈ 0.3ns)
| !Access Technique |>| !development |>| !optimised |
|~| single threaded|multithreaded | single threaded|multithreaded |
|direct invoke on shared local object             |   15.13|   16.30|  ''1.00''|   1.59|
|invoke existing object through unique_ptr  |   60.76|   63.20|     1.20|   1.64|
|lazy init unprotected (not threadsafe)           |   27.29|   26.57|     2.37|   3.58|
|lazy init always mutex protected                    | 179.62| 10917.18| 86.40| 6661.23|
|Double Checked Locking with mutex          |   27.37|   26.27|    2.04|    3.26|
|DCL with std::atomic and mutex for init       |   44.06|   52.27|    2.79|    4.04|

Some observations:
* The numbers obtained pretty much confirm [[other people's measurements|http://www.modernescpp.com/index.php/thread-safe-initialization-of-a-singleton]]
* Synchronisation is indeed necessary; the unprotected lazy init crashed several times randomly during multithreaded tests.
* Contention on concurrent access is very tangible; even for unguarded access the cache and memory hardware has to perform additional work
* However, the concurrency situation in this example is rather extreme and deliberately provokes collisions; in practice we'd be closer to the single threaded case
* Double Checked Locking is a very effective implementation strategy and results in timings within the same order of magnitude as direct access
* Unprotected lazy initialisation performs spurious duplicate initialisations, which can be avoided by DCL
* Naïve Mutex locking is slow, even with a non-recursive Mutex and without contention
* Optimisation achieves access times around ≈ 1ns
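
For reference, the Double Checked Locking variant measured above, in a self-contained sketch (the actual {{{lib::Depend}}} implementation may differ in detail):
{{{
#include <atomic>
#include <mutex>

template<class TY>
class LazySingletonSketch
  {
    std::atomic<TY*> instance_{nullptr};
    std::mutex       initMutex_;
    
  public:
    TY&
    operator() ()
      {
        TY* obj = instance_.load (std::memory_order_acquire);
        if (!obj)
          {                                    // slow path: guard only the initialisation
            std::lock_guard<std::mutex> lock (initMutex_);
            obj = instance_.load (std::memory_order_relaxed);
            if (!obj)
              {
                obj = new TY{};                // deliberately never deleted
                instance_.store (obj, std::memory_order_release);
              }
          }
        return *obj;                           // fast path: one atomic load
      }
  };
}}}
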
Along the way of working out various [[implementation details|ImplementationDetails]], decisions need to be made on how to understand the different facilities and entities and how to tackle some of the problems. This page is mainly a collection of keywords, summaries and links to further the discussion. And the various decisions should always be read as proposals to solve some problem at hand...

''Everything is an object'' &mdash; yes of course, that's a //no-brainer,// these days. What is important, rather, is to note what is //not// "an object", meaning it can't be arranged arbitrarily:
* we have one and only one global [[Session]] which directly contains a collection of multiple [[Timelines|Timeline]] and is associated with a globally managed collection of [[assets|Asset]]. We don't utilise scoped variables here (no per-client "mandantisation"); if a medium has been //opened,// it is just plainly //globally known//&nbsp; as an asset.
* the [[knowledge base|ConfigRules]] is just available globally. Obviously, the session gets a chance to install rules into this knowledge base, but we don't stress ownership here.
* we create a unique [[Fixture]] which acts as isolation layer towards the render engine and is (re)built automatically.
The high-level view of the tangible entities within the session is unified into a ''single tree'' -- with the notable exception of [[external outputs|OutputManagement]] and the [[assets|Asset]], which are understood as representing a //bookkeeping view// and kept separate from the //things to be manipulated// (MObjects).

We ''separate'' processing (rendering) and configuration (building). The [[Builder]] creates a network of [[render nodes|ProcNode]], to be processed by //pulling data // from some [[Pipe]]

''Objects are [[placed|Placement]] rather'' than assembled, connected, wired, attached. This is more of a rule-based approach and gives us one central metaphor and abstraction, allowing us to treat everything in an uniform manner. You can place it as you like, and the builder tries to make sense out of it, silently disabling what doesn't make sense.
A [[Sequence]] is just a collection of configured and placed objects (and has no additional, fixed structure). [["Tracks" (forks)|Fork]] form a mere organisational grid; they are grouping devices, not first-class entities (a track doesn't "have" a pipe or "is" a video track and the like; it can be configured to behave in such a manner by using placements though). [[Pipes|Pipe]] are hooks for making connections and are the only facility to build processing chains. We have global pipes, and each clip is built around a local [[source port|ClipSourcePort]] &mdash; and that's all. No special "media viewer" and "arranger", no special role for media sources, no commitment to some fixed media stream types (video and audio). All of this is sort of pushed down to be configuration, represented as asset of some kind. For example, we have [[processing pattern|ProcPatt]] assets to represent the way of building the source network for reading from some media file (including codecs treated like effect plugin nodes).

The model in the Steam-Layer is rather an //internal model.// What is exposed globally is a structural understanding of this model. In this structural understanding, there are Assets and ~MObjects, which both represent the flip side of the same coin: Assets relate to bookkeeping, while ~MObjects relate to building and manipulation of the model. In the actual data representation within the HighLevelModel, we settled upon some internal reductions, preferring either the //Asset side// or the //~MObject side// to represent some relevant entities. See &rarr; AssetModelConnection.

Actual ''media data and handling'' is abstracted rigorously. Media is conceived as being stream-like data of distinct StreamType. When it comes to more low-level media handling, we build on the DataFrame abstraction. Media processing isn't the focus of Lumiera; we organise the processing but otherwise ''rely on media handling libraries.'' In a similar vein, multiplicity is understood as type variation. Consequently, we don't build an audio and video "section" and we don't even have audio tracks and video tracks. Lumiera uses tracks and clips, and clips build on media, but we're able to deal with [[multichannel|MultichannelMedia]] mixed-typed media natively.

Lumiera is not a connection manager, it is not an audio-visual real time performance instrument, and it doesn't aim at running presentations. It's an ''environment for assembling and building up'' something (an edit, a session, a piece of media work). This decision is visible at various levels and contexts, like a reserved attitude towards hardware acceleration (it //will// be supported, but reliable proxy editing has a higher priority), or the decision, not to incorporate system level ports and connections directly into the session model (they are mapped to [[output designations|OutputDesignation]] rather)

''State'' is rigorously ''externalized'' and operations are to be ''scheduled'', to simplify locking and error handling. State is either treated similar to media stream data (as addressable and cacheable data frame), or is represented as "parameter" to be served by some [[parameter provider|ParamProvider]]. Consequently, [[Automation]] is just another kind of parameter, i.e. a function &mdash; how this function is calculated is an encapsulated implementation detail (we don't have "bezier automation", and then maybe a "linear automation", a "mask automation" and yet another way to handle transitions)

Deliberately there is a limitation on the flexibility of what can be added to the system via Plugins. We allow configuration and parametrisation to be extended and we allow processing and data handling to be extended, but we disallow extensions to the fundamental structure of the system by plugins. They may provide new implementations for already known subsystems, but they can't introduce new subsystems not envisioned in the general design of the application.
* these fixed assortments include: the possible kinds of MObject and [[Asset]], the possible automation parameter data types, the supported kinds of plugin systems
* while plugins may extend: the supported kinds of media, the [[media handling libraries|MediaImplLib]] used to deal with those media, the session storage backends, the behaviour of placements

A strong emphasis is placed on ''Separation of Concerns'' and especially on ''Open Closed'' design. We resist the temptation to build a //universal model.// Rather, we understand our whole undertaking as a ''transformation of metadata'' -- which, as a whole, remains conceptual and is only ever implemented partially within context. A [[language of diff exchanges|TreeDiffFundamentals]] serves as integrating backbone. It works against a rather symbolic representation of structured matters, conceived as an ExternalTreeDescription. In fact, this description is only materialised in parts, if at all.
This ~Steam-Layer and ~Render-Engine implementation started out as a design-draft by [[Ichthyo|mailto:Ichthyostega@web.de]] in summer 2007. The key idea of this design-draft is to use the [[Builder Pattern|http://en.wikipedia.org/wiki/Builder_pattern]] for the Render Engine, thus separating completely the //building// of the Render Pipeline from //running,// i.e. doing the actual Render. The Nodes in this Pipeline should process Video/Audio and do nothing else. No more decisions, tests and conditional operations when running the Pipeline. Move all of this out into the configuration of the pipeline, which is done by the Builder.

!Why doesn't the current Cinelerra-2 Design succeed?
The design of Cinelerra-2 basically follows a similar design, but [[fails because of two reasons...|https://lumiera.org/project/background/history/CinelerraWoes.html]]
# too much differentiation is put into the class hierarchy instead of configuring Instances differently.<br/>This causes overly heavy use of virtual functions, intricate implementation dependencies, and -- in order to mitigate this -- falling back to hard wired branching in the end.
# far too much coupling and back-coupling to the internals of the »EDL«, forcing an overly rigid structure on the latter

!Try to learn from the Problems of the current Cinelerra-2 codebase
* build up a [[Node Abstraction|ProcNode]] powerful enough to express //all necessary Operations// without the need to resort to the actual object type
* need to redesign the internals of the Session in a far more open manner. Everything is an [[M-Object|MObjects]] which is [[placed|Placement]] in some manner
* strive at a StrongSeparation between Session and Render Engine, encapsulate and always implement against interfaces


!Design Goals
As always, the main goal is //to cut down complexity// by the usual approach to separate into small manageable chunks.

To achieve this, here we try to separate ''Configuration'' from ''Processing''. Further, in Configuration we try to separate the ''high level view'' (users view when editing) from the ''low level view'' (the actual configuration effective for the calculations). Finally, we try to factor out and encapsulate ''State'' in order to make State explicit.

The main tool used to implement this separation is the [[Builder Pattern|http://en.wikipedia.org/wiki/Builder_pattern]]. Here especially we move all decisions and parametrization into the BuildProcess. The Nodes in the render pipeline should process Video/Audio and do nothing else. All decisions, tests and conditional operations are factored out of the render process and handled as configuration of the pipeline, which is done by the Builder. The actual colour model and number of ports is configured in by a pre-built wiring descriptor. All Nodes are on an equal footing with each other, able to be connected freely within the limitations of the necessary input and output. OpenGL and renderfarm support can be configured in as an alternate implementation of some operations together with an alternate signal flow (usable only if the whole Pipeline can be built up to support this changed signal flow), thus factoring out all the complexities of managing the data flow between core and hardware accelerated rendering. We introduce separate control data connections for the [[automation data|Automation]], separating the case of true multi-channel-effects from the case where one node just gets remote controlled by another node (or the case of two nodes using the same automation data).

Another pertinent theme is to make the basic building blocks simpler, while on the other hand gaining much more flexibility for combining these building blocks. For example, we try to unfold any "internal-multi" effects into separate instances (e.g. the possibility of having an arbitrary number of single masks at any point of the pipeline instead of having one special masking facility encompassing multiple sub-masks). Similarly, we treat the Objects in the Session in a more uniform manner and gain the possibility to [[place|Placement]] them in various ways.
//A self referential intermediary, used to collect and consolidate structural information about some element.//
This structural information can then be emanated as ''population diff'' and is thus the foundation to build a further representation of the same structure //"elsewhere".// This //further representation// could be a model entity within the session's HighLevelModel, or it could be a widget tree in the UI. And, to close the circle, this other representation resulting from that diff could in turn again be a DiffConstituent.
The Dispatcher Tables are hosted within the [[Segmentation]] and serve as a ''strategy'' governing the play process.
For each [[calculation stream|CalcStream]] there is a concrete implementation of the [[dispatcher|FrameDispatcher]] interface, based on a specific configuration of dispatcher tables.
LayerSeparationInterface provided by the GUI.
Access point especially for playback. A render- or playback process uses the DisplayFacade to push media data up to the GUI for display within a viewer widget or full-screen display. This can be thought of as a callback mechanism. In order to use the DisplayFacade, client code needs a DisplayerSlot (handle), which needs to be set up by the UI first and will be provided when starting the render or playback process.

!Evolving the real implementation {{red{TODO 1/2012}}}
As it stands, the ~DisplayFacade is a placeholder for parts of the real &rarr; OutputManagement, to be implemented in conjunction with the [[player subsystem|Player]] and the render engine. As of 1/2012, the intention is to turn the DisplayService into an OutputSlot instance -- following this line of thought, the ~DisplayFacade might become some kind of OutputManager, possibly to be [[implemented within a generic Viewer element|GuiVideoDisplay]]
A service within the GUI to manage output of frames generated by the lower layers of the application.
*providing the actual implementation of the DisplayFacade
*creating and maintaining of [[displayer slots|DisplayerSlot]], connected to viewer widgets or similar

{{red{TODO 1/2012}}} to be generalised into a service exposing an OutputSlot for display.
An output port wired up to some display facility or viewer widget within the UI. For the client code, each slot is represented by a handle, which can be used to lock into this slot for a continuous output process. Managed by the DisplayService
''EDL'' is a short-hand for __E__dit __D__ecision __L__ist. The use of this term can be confusing; for the usual meaning see the definition in [[Wikipedia|http://en.wikipedia.org/wiki/Edit_decision_list]]

Cinelerra uses this term in a related manner but with a somewhat shifted focus: In Cinelerra the EDL is comprised of the whole set of clips and other media objects arranged onto the tracks by the user. It is the result of the user's //editing efforts.//

In this usage, the EDL in most cases will be almost synonymous with &raquo;the session&laquo;, just that the latter emphasizes the state aspect more. While the Lumiera project started out using the same terminology, later on, when support for multiple "containers" within the session and for [[meta-clips|VirtualClip]] was determined to be of much importance, the new term &raquo;[[Sequence]]&laquo; was preferred.
These are the tools provided to any client of the Steam-Layer for handling and manipulating the entities in the Session. When defining such operations, //the goal should be to arrive at some uniformity in the way things are done.// Ideally, when writing client code, one should be able to guess how to achieve some desired result.

!guiding principle
The approach taken to define any operation is based primarily on the ''~OO-way of doing things'': entities operate themselves. You don't advise some manager, session or other &raquo;god class&laquo; to manipulate them. And, secondly, the scope of each operation will be as large as possible, but not larger. This often means performing quite a few elementary operations &mdash; sometimes a convenience shortcut provided by the higher levels of the application may come in handy &mdash; and basically this gives rise to several different paths of doing the same thing, all of which need to be equivalent.

!main tasks
* you ''create a clip'' either from a source media or from another clip (copy, maybe clone?). The new clip always reflects the full size (and other properties) of the source used for creating.
* you can request a clip to ''resize'', which always relates to its current dimensions.
* you can ''place or attach'' the clip to some point or other entity by creating a [[Placement]] from the clip. (&rarr; [[handling of Placements|PlacementHandling]])
* you can ''adjust'' a placement relative to some other placement, meta object (i.e. selection), label or media, and this may cause the placement to resize or even effectively remove the clip.
* you can perform ''adjustments'' on a Sequence as a whole.
All these operations propagate to directly dependant objects and may schedule global reconfigurations.

!state, locking, policies
While performing any manipulative task, the state of the object is considered inconsistent, but it is guaranteed to be consistent after any such operation, irrespective of the result. There is no automatic atomicity and, consequently, each propagation is considered a separate operation.

!!parallelism
At the level of fine grained operations on individual entities, there is ''no support for parallelism''. Individual entities don't lock themselves. Tasks perform locking and are to be scheduled. Thus, we need an isolation layer towards all inherently multithreaded parts of the system, like the GUI or the render engine.

This has some obvious and some subtle consequences. Of course, manipulating //tasks// will be queued and scheduled somewhere. But as we rely on object identity and reference semantics for the entities in the session, some value changes are visible to operations going on in parallel threads (e.g. rendering) and each operation on the session entities //has to be aware of this.//
Consequently, sometimes a sort of ''read-copy-update'' needs to be performed, i.e. self-replacement by a copy followed by manipulation of the new copy, while ongoing processes use the unaltered original object until they receive some sort of reset.
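
A minimal sketch of such a self-replacement scheme (illustrative only, assuming shared_ptr based reference semantics as described above):
{{{
#include <memory>
#include <mutex>

template<class TY>
class PublishedEntitySketch
  {
    std::shared_ptr<const TY> current_;
    mutable std::mutex        mtx_;
    
  public:
    explicit PublishedEntitySketch (TY init)
      : current_{std::make_shared<const TY> (std::move (init))}
      { }
    
    std::shared_ptr<const TY>
    get ()  const                         // readers (e.g. render threads)
      {
        std::lock_guard<std::mutex> lock (mtx_);
        return current_;                  // ongoing users retain the old object
      }
    
    template<class MUTATOR>
    void
    update (MUTATOR mutate)               // writers (session manipulation tasks)
      {
        std::lock_guard<std::mutex> lock (mtx_);
        TY copy{*current_};               // copy...
        mutate (copy);                    // ...manipulate the copy...
        current_ = std::make_shared<const TY> (std::move (copy));   // ...publish the replacement
      }
  };
}}}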

!!undo
Basically, each elementary operation has to record the information necessary to be undone. It does so by registering a Memento with some central UndoManager facility. This Memento object contains a functor pre-bound with the needed parameter values. (Besides, the UndoManager is free to implement a second level of security by taking independent state snapshots). 
{{red{to be defined in more detail later...}}}
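
As a rough illustration of this Memento idea (hypothetical names, not the actual command framework):
{{{
#include <functional>
#include <vector>
#include <utility>

struct MementoSketch
  {
    std::function<void()> undo;       // pre-bound closure reverting one elementary operation
  };

class UndoManagerSketch
  {
    std::vector<MementoSketch> history_;
    
  public:
    void
    record (std::function<void()> undoOp)
      {
        history_.push_back (MementoSketch{std::move (undoOp)});
      }
    
    void
    undoLast ()
      {
        if (history_.empty()) return;
        history_.back().undo();
        history_.pop_back();
      }
  };

// example: an operation resizing a clip would register its inverse, e.g.
//     undoManager.record ([clip, oldLength]{ clip->resize (oldLength); });
}}}
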
We have to deal with effects on various different levels. One thing is to decide if an effect is applicable, another question is to find out about variable and automatable parameters, find a module which can be used as an effect GUI and finally to access a processing function which can be executed on a buffer filled with suitable data.

The effect asset (which is a [[processing asset|ProcAsset]]) provides a unified interface for loading external processing modules and querying their properties. As usual with assets, this is the "bookkeeping view" to start with, but we can create a //processor// from such an asset, i.e. an instance of the processing facility which can be used and wired into the model. Such a processor is an MObject and can be placed into the session (high level model); moreover it holds a concrete wiring for the various control parameters and it has an active link to the effect GUI module. As every ~MObject, it could be placed multiple times to affect several pipes, but all those different Placements would behave as being linked together (with respect to the control parameter sources and the GUI)

When building the low-level model, the actual processing code is resolved and a processing function is installed into the processing node representing the effect. This step includes checking of actual [[media type information|StreamType]], which may result in selecting a specifically tailored implementation function. The parameter wiring on the other hand is //shared// between the effect ~MObject, the corresponding GUI module and all low-level processing nodes. Actually, parameters are more of a reference than actual values: they provide a {{{getValue()}}} function, which also works with automation. This way, any parameter or automation changes are reflected immediately into the produced output.

Initially, only the parameter (descriptors) are present on the effect ~MObject, while the actual [[parameter providers|ParamProvider]] are created or wired (by the ConManager) on demand.
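
To illustrate the ''reference-like'' nature of parameters (hypothetical sketch; the real interface works with Lumiera's time types rather than a plain {{{double}}}):
{{{
#include <functional>
#include <utility>

template<typename VAL>
class ParamProviderSketch
  {
    std::function<VAL(double)> valueSource_;   // time -> value (constant or automation function)
    
  public:
    explicit ParamProviderSketch (std::function<VAL(double)> src)
      : valueSource_{std::move (src)}
      { }
    
    VAL
    getValue (double time)  const
      {
        return valueSource_ (time);            // evaluated anew on each access
      }
  };

// a constant parameter:   ParamProviderSketch<double> gain ([](double){ return 0.8; });
// an automated parameter: ParamProviderSketch<double> fade ([](double t){ return t < 1.0? t : 1.0; });
}}}
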
The primary interface used by the upper application layers to interact with the render engine, to create and manage ongoing [[calculation streams|CalcStream]].

Below this facade, there is a thin adaptation and forwarding layer, mainly talking to
* the individual [[Calculation Streams|CalcStream]] created for each PlayProcess.
* the FrameDispatcher, which translates such streams into a series of RenderJob entries
* the [[Scheduler]], which is responsible to perform these jobs in a timely fashion

!Quality of Service
Within the Facade, there is the definition of the {{{EngineService::Quality}}} tag, alongside several pre-defined quality settings.
Actually this interface is a strategy, allowing to define quite specific quality levels, in case we need that. Clients can usually just use these ~QoS-tags like enum values (they are copyable), without caring for the engine implementation details.
A general identification scheme, combining a human readable symbolic name, unique within a //specifically typed context,// and a machine readable hash ID (LUID). ~Entry-IDs allow for asset-like position accounting and for type safe binding between configuration rules and model objects. They allow for creating an entry with symbolic id and distinct type, combined with a derived hash value, without the overhead in storage and instance management imposed by using a full-fledged Asset.

Similar to an Asset, an identification tuple is available (generated on the fly), as is a unique LUID and total ordering. The type information is attached as template parameter, but included into the hash calculation. All instantiations of the EntryID template share a common baseclass, usable for type erased common registration.

~Entry-IDs as such do not provide any automatic registration or ensure uniqueness of any kind, but they can be used to that end. Especially, they're used within the TypedID registration framework for addressing individual entries {{red{planned as of 12/2010}}}
&rarr; TypedLookup
&rarr; TypeHandler
&rarr; MetaAsset
a special ProcNode which is used to pull the finished output of one Render Pipeline (Tree or Graph). This term is already used in the Cinelerra2 codebase. I am unsure at the moment if it is a distinct subclass or rather a specially configured ProcNode (a general design rule tells us to err in favour of the latter if in doubt).

The render nodes network is always built separate for each [[timeline segment|Segmentation]], which is //constant in wiring configuration.// Thus, while exit node(s) are per segment, the corresponding exit nodes of consecutive segments together belong to a ModelPort, which in turn corresponds to a global pipe (master bus not connected any further). These relations guide the possible configuration for an exit node: It may still provide multiple channels -- but all those channels are bound to belong to a single logical stream -- same StreamPrototype, always handled as bundle, connected and routed in one step. For example, when there is an 5.1 Audio master bus with a single fader, then "5.1 Audio" would be a prototype and these 6 channels will always be handled together; in such a case it makes perfectly sense to access these 6 audio channels through a single exit node, which is keyed (identified) by the same PipeID as used at the corresponding ModelPort and the corresponding [[global pipe|GlobalPipe]] ("5.1 Audio master bus")
A special kind (subclass) of [[Placement]]. As such it is always linked to a //Subject//, i.e. a MObject. But contrary to the (standard) placements, which may exhibit all kinds of fancy dynamic and scope dependent behaviour, within an explicit placement all properties are resolved and materialised. While the (standard) placement may contain an arbitrary list of LocatingPin objects, the resolution into an explicit placement performs a kind of »orthogonalisation«: each remaining LocatingPin defines exactly one degree of freedom, independent of all others. Most notably, the explicit placement always specifies an absolute time and an [[output designation|OutputDesignation]] for locating the Subject. Explicit placements are ''immutable''.

!!Implementation considerations
Explicit placements are just created and never mutated, but copying and storage might become a problem.
It would thus be desirable to have a fixed-sized allocation, able to hold the placement body as well as the (fixed) locating pins inline.

!!!Storage
Explicit placements are value objects and stored at the respective usage site, most notably the [[Segmentation]]. They are //not// attached to the placement index in the session, nor do they bear any referential or indexing semantics. The only dynamic side effect of an explicit placement is to keep the reference count of the corresponding MObject up and thus keep it alive and  accessible.
//to symbolically represent hierarchically structured elements, without actually implementing them.//
The purpose of this »external« description is to remove the need of a central data model to work against. We consider such a foundation data model as a good starting point, yet harmful for the evolution of any larger structure to be built. According to the subsidiarity principle, we prefer to turn the working data representation into a local concern. Which leaves us with the issue of collaboration. Any collaboration requires, as an underlying, some kind of common understanding. Yet any formalised, mechanical collaboration requires representing that common point of attachment -- at least as a symbolic representation. The ExternalTreeDescription is shaped to fulfil this need: //in theory,// the whole field could be represented, symbolically, as a network of hierarchically structured elements. Yet, //in practice,// all we need is to conceive the presence of such a representation, as a backdrop to work against. And we do so -- we work against that symbolic representation, by describing ''changes'' made to the structure and its elements. Thus, the description of changes, the ''diff language'', refers to and partially embodies such symbolically represented elements and relations.
&rarr; TreeDiffFundamentals
&rarr; TreeDiffModel and [[-Implementation|TreeDiffImplementation]]

!Elements, Nodes and Records
We have to deal with //entities and relationships.// Entities are considered the building blocks, the elements, which are related by directional links. Within the symbolic representation, elements are conceived as [[generic nodes|GenNode]], while the directed relations are impersonated as being attached or rooted at the originating side, so the target of a relation has no traces or knowledge of being part of that relation. Moreover, each of our nodes bears a //relatively clear-cut identity.// That is to say, within the relevant scope in question, this identity is unique. Together, these are the building blocks to represent any ''graph''.

For practical purposes, we have to introduce some distinctions and limitations.
* we have to differentiate the generic node to be either a mere data element, or an object-like ''Record''
* the former, a mere data element, is considered to be "just data", to be "right here" and without further meta information. You need to know what it is to deal with it.
* to the contrary, a Record has an associated, symbolic type-ID, plus it can potentially be associated with and thus relate to further elements, with the relation originating at the Record.
* and we distinguish //two different kinds of relations possibly originating from a Record://
** ''attributes'' are known by-name; they can be addressed through this name-ID as a key, while the value is again a generic node, possibly even another record.
** ''children'' to the contrary can only be enumerated; they are considered to be within (and form) the ''scope'' of the given Record (object).

And there is a further limitation: The domain of possible data is fixed, even hard wired. Implementation-wise, this turns the data within the generic node into a »Variant« (typesafe union). Basically, this opens two different ways to //access// the data within a given GenNode: either you know the type to expect ''beforehand'' (and the validity of this assumption is checked on each access -- please recall, all of this is meant for symbolic representation, not for implementation of high performance computing). Or we offer the ability for //generic access// through a ''variant visitor'' (double dispatch). The latter includes the option not to handle all possible content types (making the variant visitor a //partial function// -- as in any non exhaustive pattern match).

Basically, you can expect to encounter the following kinds of data
*{{{int}}}, {{{int64_t}}}, {{{short}}}, {{{char}}}
*{{{bool}}}
*{{{double}}}
*{{{std::string}}}
*{{{time::Time}}}, {{{time::Duration}}}, {{{time::TimeSpan}}}
*{{{hash::LuidH}}}
*{{{diff::Record<GenNode>}}}
The last option is what makes our representation recursive.
Regarding the implementation, all these data elements are embedded //inline,// as values.
With the exception of the record, which, like any {{{std::vector}}} implicitly uses heap allocations for the members of the collection.
We consider this memory layout an implementation detail, which could be changed without affecting the API.
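
As a purely illustrative pseudo-declaration (not the actual {{{diff::GenNode}}} API), the structure described above might be pictured like this:
{{{
#include <string>
#include <variant>
#include <vector>
#include <cstdint>

struct GenNodeSketch;                           // forward declaration

struct RecordSketch
  {
    std::string                typeID;          // symbolic type-ID (the "type" attribute)
    std::vector<GenNodeSketch> attributes;      // addressed by the node's ID, used as name-key
    std::vector<GenNodeSketch> children;        // the "scope" of this record, enumeration only
  };

struct GenNodeSketch
  {
    std::string id;                             // identity within the relevant scope
    std::variant< int, int64_t, short, char
                , bool, double
                , std::string
                , RecordSketch                  // the recursive case (time and LUID types omitted)
                > data;
  };
}}}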

!{{red{open questions 5/2015}}}
!!!the monadic nature of records
A //monad// is an entity which supports the following operations
*construction:{{{a:A -> M<A>(a)}}}
*flatMap:{{{(M<A>, f:A->M<B>) -> M<B>}}}
Operationally, a monad can be constructed to wrap up and embody a given element. And to flat-map a monad generating function means to apply it to all "content elements" within the given source monad, and then to //join// the produced monads "flat" into a new, uniform monad of the new type.
Now, the interesting question is: //what does "join" mean?// --
*will it drill down?
*will it lift the contents of generated monads into the parent level, or attach them as new subtrees?
*will the function get to see {{{Record}}} elements, or will it immediately see the "contents", the attributes or the children?
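
For orientation, flat-mapping over a plain sequence -- the textbook case -- looks like this; the open points above are about how to carry the "join" step over to Records:
{{{
#include <vector>
#include <string>
#include <iostream>

template<class A, class FUN>
auto
flatMap (std::vector<A> const& source, FUN f)
{
  using B = typename decltype(f (source.front()))::value_type;
  std::vector<B> result;
  for (auto const& elm : source)
    for (auto const& produced : f (elm))       // f yields a vector<B> per element...
      result.push_back (produced);             // ...which is joined "flat" into the result
  return result;
}

int
main ()
{
  std::vector<std::string> words{"tree", "diff"};
  auto chars = flatMap (words, [](std::string const& w)
                               { return std::vector<char> (w.begin(), w.end()); });
  for (char c : chars) std::cout << c;         // prints: treediff
}
}}}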

!!!names, identity and typing
It was a design decision that the GenNode shall not embody a readable type field, just a type selector within the variant to hold the actual data elements.
This decision more or less limits the usefulness of simple values as children to those cases, where all children are of uniform type, or where we agree to deal with all children through variant visitation solely. Of course, we can still use simple values as //attributes,// since those are known and addressed by name. And we can use filtering by type to limit access to some children of type {{{Record}}}, since every record does indeed embody a symbolic type name, an attribute named {{{"type"}}}. It must be that way, since otherwise, records would be pretty much useless as representation for any object like entity.

The discriminating ID of any GenNode can serve as a name, and indeed will be used as the name of an attribute within a record.
Now, the interesting question is: what constitutes the full identity? Is it the ~ID-string? Does it also include some kind of type information, so that two children with the same name, but different types, would be considered different? And, moreover, we could even go as far as to make the path part of the identity, so that two otherwise identical elements would be different when located at different positions within the graph. But since we did not rule out cyclic graphs, the latter idea would necessitate the notion of an //optimal path// --

A somewhat similar question is the ordering and uniqueness of children. While attributes -- due to the usage of the attribute node's ID as name-key -- are bound to be unique within a given Record, children within the scope of a record could be required to be unique too, making the scope a set. And, of course, children could be forcibly ordered, or just retain the initial ordering, or even be completely unordered. On a second thought, it seems wise not to impose any guarantees in that regard, beyond the simple notion of retaining an initial sequence order, the way a »stable« sorting algorithm does. All these more specific ordering properties can be considered the concern of some specific kinds of objects -- which then just happen to "supply" a list of children for symbolic representation as they see fit.
The use of factories separates object creation, configuration and lifecycle from the actual usage context. Hidden behind a factory function
* memory management can be delegated to a centralised facility
* complex internal details of service implementation can be turned into locally encapsulated problems
* the actual implementation can be changed, e.g. by substituting a test mock

!common usage pattern
Within Lumiera, factories are frequently placed as a static field right into the //service interface.// Common variable names for such an embedded factory are {{{instance}}} or {{{create}}}.
The actual fabrication function is defined as the function operator -- this way, the use of the factory reads like a simple function call. At the usage site, only the reference to the //interface// or //kind of service// is announced.
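A schematic sketch of this pattern (hypothetical names, not the actual Lumiera factory implementation): the factory is a static field within the service, and fabrication is the function operator, so a usage site reads like a plain function call.
{{{
#include <iostream>

/* hypothetical singleton factory: all factories for the same target type
 * share one lazily created target instance                               */
template<class SRV>
struct SingletonFactory
{
    SRV& operator() ()                  // fabrication function as function operator
    {
        static SRV theService;          // created on first access
        return theService;
    }
};

struct Logger                           // hypothetical service
{
    void note (char const* msg) { std::cout << msg << std::endl; }

    static SingletonFactory<Logger> instance;     // factory embedded right into the service
};

SingletonFactory<Logger> Logger::instance;

int main ()
{
    Logger::instance().note ("hello");  // usage site only names the kind of service
}
}}}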

!flavours
* A special kind of factory used at various places is the ''singleton'' factory for access to application wide services and facade interfaces. All singleton factories for the same target type share a single instance of the target, which is created lazily on first usage. Actually, this is our very special version of [[dependency injection|DependencyFactory]]
* whenever a set of objects of some kind requires registration and connection to a central entity, client code gets (smart) handles, which are emitted by a factory that automatically connects to the respective manager for registration.
* to bridge the complexities of using an external (plug-in based) interface, proxy objects are created by a factory, wired to forward any client-invoked operation through the external calls.
* the ''configurable factory'' is used to define a family of production lines, which are addressed by the client by virtue of some type-ID. Runtime data and criteria are used to form this type-ID and thus pick the suitable factory function
A grouping device within an ongoing [[playback or render process|PlayProcess]].
Any feed corresponds to a specific ModelPort, which in turn typically corresponds to a given GlobalPipe.
When starting playback or render, a play process (with a PlayController front-end for client code) is established to coordinate the processing. This ongoing data production might encompass multiple media streams, i.e. multiple feeds pulled from several model ports and delivered into several [[output slots|OutputSlot]]. Each feed in turn might carry structured MultichannelMedia, and is thus further structured into individual [[streams of calculation|CalcStream]]. Since the latter are //stateless descriptors,// while the player and play process obviously are stateful, it is the feed's role to mediate between a state-based (procedural) and a stateless (functional and parallelised) organisation model -- ensuring a seamless data feed even during modification of the playback parameters.
a specially configured view -- joining together high-level and low-level model.
The Fixture acts as //isolation layer// between the two models, and as //backbone to attach the render nodes.//
* all MObjects have their position, length and configuration set up ready for rendering.
* any nested sequences (or other kinds of indirections) have been resolved.
* every MObject is attached by an ExplicitPlacement, which declares a fixed position (Time, [[Pipe|OutputDesignation]])
* these ~ExplicitPlacements are contained immediately within the Fixture, ordered by time
* besides, there is a collection of all effective, possibly externally visible [[model ports|ModelPortRegistry]]

As the builder and thus the render engine //only consult the fixture,// while all editing operations finally propagate to the fixture as well, we get an isolation layer between the high-level part of the Steam-Layer (editing, object manipulation) and the render engine. [[Creating the Fixture|BuildFixture]] is an important first step and side effect of running the [[Builder]] when creating the [[render engine network|LowLevelModel]].
''Note'': all of the especially managed storage of the LowLevelModel is hooked up behind the Fixture
&rarr; FixtureStorage
&rarr; FixtureDatastructure

!{{red{WIP}}} Structure of the fixture
[<img[Structure of the Fixture|draw/Fixture1.png]]

The fixture is like a grid, where one dimension is given by the [[model ports|ModelPortRegistry]], and the other dimension extends in time. Within the time dimension there is a grouping into [[segments|Segmentation]] of constant structure.

;Model Ports
:The model ports share a single uniform and global name space: actually they're keyed by ~Pipe-ID
:Model ports are derived as a result of the build process, as the //residuum// of all nodes not connected any further
:Each port belongs to a specific Timeline and is associated with the [[Segmentation]] of that timeline, yet this partitioning of the time axis is relevant for all the model ports //of this timeline// -- while a //segment// is defined as a time interval with a uniform (non-changing) calculation scheme.

;Segmentation
:The segmentation partitions the time axis of a single timeline into segments of constant (wiring) configuration. Together, the segments form a seamless sequence of time intervals. They expose an API (»JobTicket«) for generating render jobs for each port -- and they contain a copy of each (explicit) placement of a visible object touching that time interval. Besides that, segments are the top-level grouping device of the render engine node graph; individual segments are immutable, they are built and discarded as a whole chunk.
:Segments (and even a different Segmentation) may be //hot swapped// into an ongoing render.

;Exit Nodes
:Each segment holds an ExitNode for each relevant ModelPort of the corresponding timeline.
:Thus the exit nodes are keyed by ~Pipe-ID as well (and consequently have a distinct [[stream type|StreamType]]) -- each model port corresponds to {{{<number_of_segments>}}} separate exit nodes, but of course an exit node may be //mute//&nbsp; in some segments.

!!!Dependencies
At architecture level, the Fixture is seen as //interface// between Steam-Layer and Vault-Layer.
This raises the question: {{red{(WIP 4/23) where is the fixture data structure defined, in terms of code dependencies?}}}
Generally speaking, the datastructure to implement the ''Fixture'' (&rarr; see a more general description [[here|Fixture]]) is comprised of a ModelPortRegistry and a set of [[segmentations|Segmentation]] per Timeline.
This page focusses on the actual data structure and usage details on that level. See also &rarr; [[storage|FixtureStorage]] considerations.

!transactional switch
A key point to note is the fact that the fixture is frequently [[re-built|BuildFixture]] by the [[Builder]], while render processes may be going on in parallel. Thus, when a build process is finished, a transactional ''commit'' happens to ''hot swap'' the new parts of the model. This is complemented by a clean-up of tainted render processes; finally, storage can be reclaimed.

To support this usage pattern, the Fixture implementation makes use of the [[PImpl pattern|http://c2.com/cgi/wiki?PimplIdiom]].
Ongoing [[Builder]] activity especially can remould the Segmentation on a copy of the implementation structure, which is then swapped as a whole.
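A minimal sketch of such a transactional swap, assuming {{{shared_ptr}}}-based lifetime management (hypothetical names -- the actual implementation may differ):
{{{
#include <memory>

/* sketch: the Fixture implementation as PImpl behind a single atomic pointer */
class Fixture
{
    struct Impl { /* ModelPortRegistry, Segmentation per timeline ... */ };

    std::shared_ptr<Impl> pImpl_;

public:
    std::shared_ptr<Impl>
    current()  const                            // render processes grab a consistent snapshot
    {
        return std::atomic_load (&pImpl_);
    }

    void
    commit (std::shared_ptr<Impl> newImpl)      // "hot swap" after a finished build
    {
        auto superseded = std::atomic_exchange (&pImpl_, std::move (newImpl));
        if (superseded)
          { /* schedule clean-up: superseded parts stay alive as long as tainted
               render processes still hold a reference; reclaim storage afterwards */ }
    }
};
}}}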

!Collecting usage scenarios {{red{WIP 12/10}}}
* ModelPort access
** get the model port for a given ~Pipe-ID
** enumerate the model ports for a Timeline
* rendering frame dispatch
** get or create the frame dispatcher table
** dispatch a single frame to yield the corresponding ExitNode
* (re)building
** create a new implementation transaction
** create a new segmentation
** establish what segments actually need to be rebuilt
** dispatch a newly built segment into the transaction
** schedule superseded segments and tainted processes for clean-up
** commit a transaction
 

!Conclusions about the structure {{red{WIP 12/10 … update 4/23}}}
* the ~PImpl needs to be a single ''atomic pointer''. This necessitates having a monolithic Fixture implementation holder
* consequently we need a tailored memory management -- requiring some (limited) knowledge about the usage pattern
* this kind of information is available within the scheduling process ⟹ the [[Scheduler]] must support triggering on dependency events
The Fixture &rarr; [[data structure|FixtureDatastructure]] acts as umbrella to hook up the elements of the render engine's processing nodes network (LowLevelModel).
Each segment within the [[Segmentation]] of any timeline serves as ''extent'' or unit of memory management: it is built up completely during the corresponding build process and becomes immutable thereafter, finally to be discarded as a whole when superseded by a modified version of that segment (new build process) -- but only after all related render processes (&rarr; CalcStream) are known to be terminated.

Each segment owns an AllocationCluster, which in turn manages all the numerous small-sized objects comprising the render network implementing this segment -- thus the central question is when to //release the segment.//
* for one, we easily detect the point when a segment is swapped out of the segmentation; at this point we also have to detect the //tainted calculation streams.//
* but those render processes (calc streams) terminate asynchronously, and that forces us to do some kind of registration and deregistration.

!!question: is ref-counting acceptable here?
//Not sure yet.// Of course it would be the simplest approach. KISS.
Basically the concern is that each new CalcStream would have to access the shared counts of all segments it touches.

''Note'': the reference counting of {{{shared_ptr}}} is known to be implemented with lock-free atomic operations (at least since Boost 1.33 -- don't believe the FUD spread to the contrary). Thus lock contention isn't a problem, but at least a memory barrier is involved (and if I judge GCC's internal documentation right, currently their barriers extend to //all// globally visible variables).

!!question: alternatives?
There are. As the builder is known to be run again and again, no one forces us to deallocate as soon as we could -- that's the classical argument exploited by any garbage collector too. Thus we could just note the fact that a calculation stream is done and re-evaluate all those noted results on a later occasion. Obviously, the [[Scheduler]] is in the best position to notify the rest of the system when a given [[job|RenderJob]] has terminated, because the Scheduler is the only facility required to touch each job reliably. Thus it seems favourable to add basic support for either termination callbacks or for guaranteed execution of some notification jobs to the [[Scheduler's requirements|SchedulerRequirements]].

!!exploiting the frame-dispatch step
Irrespective of the decision in favour of or against ref-counting, it seems reasonable to make use of the //frame dispatch step,// which is necessary anyway. The idea is to give each render process (maybe even each CalcStream) a //copy//&nbsp; of a dispatcher table object -- basically just a list of breaking points in time and a pointer to the corresponding relevant exit node. If we keep track of those dispatcher tables, add some kind of back-link to identify the process and require the process in turn to deregister, we might get tracking of tainted processes for free.

!!assessment {{red{WIP 12/10}}}
But the primary question here is to judge the impact of such an implementation. What would be the costs?
# creating individual dispatcher tables trades memory for simplified parallelism
# the per-frame lookup is efficient and negligible compared with just building the render context (StateProxy) for that frame
# when a process terminates, we need to take out that dispatcher and do deregistration stuff for each touched segment (?)

!!!Estimations
* the number of actually concurrent render processes is at or below 30
* depending on the degree of cleverness on behalf of the scheduler, the throughput of processes might be multiplied (dull means few processes)
* the total number of segments within the Fixture could range into several thousand
* but especially playback tends to be focussed on a limited region, which makes a figure of rather several hundred tainted segments more likely

&rArr; we should try quickly to dispose of the working storage after render process termination and just retain a small notification record
&rArr; so the frame dispatcher table should be allocated //within//&nbsp; the process' working storage; moreover it should be tiled
&rArr; we should spend a second thought about how actually to find and process the segments to be discarded

!!!identifying tainted and disposable segments
The above estimation hints at the necessity of frequently finding some 30 to 100 segments to be disposed of, out of thousands, assuming the original reason for triggering the build process was a typically local change in the high-level model. We can only discard when all related processes are finished, but there is a larger number of segments as candidates for eviction. These candidates are rather easy to pinpoint -- they will be uncovered during a linear comparison pass prior to committing the changed Fixture. Usually, the number of candidates will be low (localised changes), but global manipulations might invalidate thousands of segments.
* if we frequently pick the segments actually to be disposed of, there is the danger of performance degradation when the number of segments is high
* the other question is whether we can afford just to keep all of those candidates around, as all of them are bound to become discardable eventually
* and of course there is also the question of how to detect //when// they're due.

;Model A
:use a logarithmic datastructure, e.g. a priority queue. Possibly together with LRU ordering
:problem here is that the priorities change, which either means shared access or a lot of "obsoleted" entries in this queue
;Model B
:keep all superseded segments around and track the tainted processes instead
:problem here is how to get the tainted processes precisely and with low overhead
//as of 12/10, the decision was taken to prefer Model B...//
Simply because the problems caused by Model A seem to be fundamental, while the problems related to Model B could be overcome with some additional cleverness.

But actually, at that point I'm stuck here, because of the still limited knowledge about those render processes....
* how do we //join// an aborted/changed rendering process to its successor, without creating a jerk in the output?
* is it even possible to continue a process when parts of the covered time-range are affected by a build?
If the latter question is answered with "No!", then the solution becomes simple, but possibly memory-consuming: in that case, //all//&nbsp; the processes related to a timeline are affected and thus get tainted; we'd just dump them onto a pile and delay releasing all of the superseded segments until all of them are known to be terminated.

!!re-visited 1/13
The last conclusions drawn above were confirmed by the further development of the overall design. Yes, we do //supersede// frequently and liberally. This isn't much of a problem, since the preparation of new jobs, i.e. the [[frame dispatch step|FrameDispatcher]], is performed chunk-wise. A //continuation job// is added at the end of each chunk, and this continuation will pick up the task of job planning in time.

At the 1/2013 developer's meeting, Cehteh and myself had a longer conversation regarding the topic of notifications and superseding of jobs within the scheduler. The conclusion was to give ''each play process a separate LUID'' and treat this as ''job group''. The scheduler interface will offer a call to supersede all jobs within a given group.

Some questions remain though
* what are those nebulous dispatcher tables?
* do they exist per CalcStream ?
* is it possible to file this dedicated dispatch information gradually?
* how to store and pass on the control information for NonLinearPlayback?

Since the intention is to have dedicated dispatch tables, these would implement the {{{engine.Dispatcher}}} interface and incorporate some kind of strategy corresponding to the mode of playback. The chunk-wise continuation of the job planning process would rather have to be reformulated in terms of //real wall-clock time// -- since the relation of the playback process to nominal time can't be assumed to be a simple linear progression in all cases.

!liabilities of fixture storage {{red{WIP 1/14}}}
The net effect of all the facilities related to fixture storage is to keep ongoing memory allocations sane. The same effect could be achieved by using garbage collection -- but the latter solves a much wider class of problems and as such incurs quite some price in terms of lock contention or excessive memory usage. Since our problem here is confined to a very controlled setup, employing a specific hand-made solution will be more effective. In any case, the core challenge is not to hinder parallel execution of jobs, while avoiding excessive use of memory.

The management of fixture storage has to deal with some distinct situations
;superseding a CalcStream
:as long as the respective calculations are still going on, the commonly used data structures need to stay put
:* the {{{CalcPlanContinuation}}} closure used by the job planning jobs
:* the {{{RenderEnvironmentClosure}}} shared by all ~CalcStreams for the same output configuration
:* the -- likewise possibly shared -- specific strategy record to govern the playback mode
;superseding a [[Segment|Segmentation]]
:as long as any of the //tainted// ~CalcStreams is still alive, all of the data structures held by the AllocationCluster of that segment need to stay around
:* the DispatcherTables
:* the JobTicket structure
:* the [[processing nodes|ProcNode]] and accompanying WiringDescriptor records

!!!conclusions for the implementation
In the end, getting the memory management within Segmentation and Playback correct boils down to the following requirements
* the ability to identify ~CalcStreams touching a segment about to be obsoleted
* the ability to track such //tainted ~CalcStreams//
* the ability to react on reaching a pre-defined //control point,// after which releasing of resources is safe

The building blocks for such a chain of triggers and reactions are provided by a helper facility, the &rarr; SequencePointManager


__3/2014__: The crucial point seems to be the impedance mismatch between segments and calculation streams. We have a really high number of segments, which change only occasionally. But we have a rather small number of calculation streams, which mutate rapidly. And, over time, any calculation stream might -- occasionally -- touch a large number of segments. Thus, care should be taken not to implement the dependency structure naively. We only need to care about the tainted calculation streams when it comes to discarding a segment.
Within Lumiera, tracks are just a structure used to organise the Media Objects within the Sequence. Tracks are always associated with a specific Sequence, and the Tracks of a Sequence form a //tree of tracks.// They can be considered to be an organising grid, and besides that, they have no special meaning. They are grouping devices, not first-class entities. A track doesn't "have" a port or pipe or "is" a video track and the like; it can be configured to behave in such a manner by using placements.

To underpin this design decision, Lumiera introduces the more generic concept of a ''Fork'' -- to subsume the "tracks" within the timeline, as well as the "media bins" in the asset management section

The ~Fork-IDs are assets on their own, but they can be found within a given sequence. So, several sequences can share a single track or each sequence can hold tracks with their own, separate identity. (the latter is the default)
* Like most ~MObjects, tracks have an asset view: you can find a track asset (actually just a fork ID) in the asset manager.
* and they have an object view: there is a ''Fork'' MObject which can be [[placed|Placement]], thus defining properties of this track within one sequence, e.g. the starting point in time
Of course, we can place other ~MObjects relative to some fork (that's the main reason why we want to have tracks). In this sense, the [[handling of Tracks|TrackHandling]] is somewhat special: the placements forming the fork ("tree of tracks") can be accessed directly through the sequence, and a fork acts as container, forming a scope to encompass all the objects "on" this track. Thus, the placement of a fork defines properties of "the track", which will be inherited (if necessary) by all ~MObjects placed within the scope of this fork. For example, if placing (=plugging) a fork to some global [[Pipe]], and if placing a clip to this fork, without placing the clip directly to another pipe, the associated-to-pipe information of the fork will be fetched by the builder when needed to make the output connection of the clip.
&rarr; [[Handling of Tracks|TrackHandling]]
&rarr; [[Handling of Pipes|PipeHandling]]

&rarr; [[Anatomy of the high-level model|HighLevelModel]]
The situation addressed by this concept is when an API needs to expose a sequence of results, values or objects, instead of just yielding a single function result value. As the naive solution of passing a pointer or array creates coupling to internals, it was superseded by the ~GoF [[Iterator pattern|http://en.wikipedia.org/wiki/Iterator]]. Iteration can be implemented by convention, polymorphically or by generic programming; we use the latter approach.

!Lumiera Forward Iterator concept
''Definition'': An Iterator is a self-contained token value, representing the promise to pull a sequence of data
* rather than deriving from a specific interface, anything behaving appropriately //is a Lumiera Forward Iterator.//
* the client finds a typedef at a suitable, nearby location. Objects of this type can be created, copied and compared.
* any Lumiera forward iterator can be in //exhausted// &nbsp;(invalid) state, which can be checked by {{{bool}}} conversion.
* especially, default constructed iterators are fixed to that state. Non-exhausted iterators may only be obtained by API call.
* the exhausted state is final and can't be reset, meaning that any iterator is a disposable one-way-off object.
* when an iterator is //not//&nbsp; in the exhausted state, it may be //dereferenced// ({{{*i}}}), yielding the "current" value
* moreover, iterators may be incremented ({{{++i}}}) until exhaustion.

!!Discussion
The Lumiera Forward Iterator concept is a blend of the STL iterators and iterator concepts found in Java, C#, Python and Ruby. The chosen syntax should look familiar to C++ programmers and indeed is compatible with STL containers and ranges. By contrast, while an STL iterator can be thought of as being just a disguised pointer, the semantics of Lumiera Forward Iterators is deliberately reduced to a single, one-way-off forward iteration: they can't be reset or manipulated by any arithmetic, and the result of assigning to a dereferenced iterator is unspecified, as is the meaning of post-increment and stored copies in general. You //should not think of an iterator as denoting a position// &mdash; just a one-way off promise to yield data.

Another notable difference to the STL iterators is the default ctor and the {{{bool}}} conversion. The latter allows using iterators painlessly within {{{for}}} and {{{while}}} loops; a default constructed iterator is equivalent to the STL container's {{{end()}}} value &mdash; indeed any //container-like// object exposing Lumiera Forward Iteration is encouraged to provide such an {{{end()}}}-function, additionally enabling iteration by {{{std::for_each}}} (or Lumiera's even more convenient {{{util::for_each()}}}).
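A minimal, self-contained illustration of this protocol (not using the actual helpers from {{{iter-adapter.hpp}}}): an iterator yielding a descending count, which reaches the exhausted state at zero.
{{{
#include <iostream>

class CountDown
{
    unsigned n_;
public:
    explicit CountDown (unsigned start =0) : n_{start} { }       // default ctor ⟹ exhausted

    explicit operator bool()  const { return n_ > 0; }           // false ⟹ exhausted (final state)
    unsigned operator*()      const { return n_; }               // yield the "current" value
    CountDown& operator++()         { --n_; return *this; }      // advance towards exhaustion

    friend bool operator== (CountDown const& l, CountDown const& r) { return l.n_ == r.n_; }
    friend bool operator!= (CountDown const& l, CountDown const& r) { return not (l == r); }
};

int main ()
{
    for (CountDown i{3}; i; ++i)        // bool conversion terminates the loop
        std::cout << *i << " ";         // prints: 3 2 1
}
}}}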

!!!interoperation with the C++11 range-for construct
Lumiera Forward Iterators can be made to work together with the 'range for loop', as introduced with C++11. The preferred solution is to define the necessary free functions {{{begin}}} and {{{end}}} for ADL. This is best done on a per-implementation basis. We consider a generic solution not worth the effort. See {{{71167be9c9aaa}}}.

This is to say, these two concepts can be made to work together well, even though, at a conceptual level, they are not really compatible and build on a different understanding: the standard for-loop assumes //a container,// while our Forward Iterator concept deals with //abstract data sources.//
The user needs to understand the fine points of that conceptual difference:
* if you apply the range-for construct on a container, you can iterate as often as you like -- even if the iterators of that container are implemented in compliance with the Lumiera Forward Iterator concept.
* but if you apply the range-for construct on //a given iterator,// you can do so only once. There is no way to reset that iterator, other than obtaining a new one.
 
The bridge methods are usually defined so that the {{{end}}}-function just returns a default constructed iterator, which -- by concept -- is the marker for iteration end.
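Building on the {{{CountDown}}} sketch above, the ADL bridge for the range-for construct might look like this (again just an illustration, with a hypothetical »container-like« source type):
{{{
struct CountDownSource                  // hypothetical data source exposing Lumiera Forward Iteration
{
    unsigned start;
};

inline CountDown begin (CountDownSource const& src) { return CountDown{src.start}; }
inline CountDown end   (CountDownSource const&)     { return CountDown{}; }   // default ctor marks iteration end

// usage -- note: valid once per iterator sequence obtained
//     for (unsigned val : CountDownSource{3})
//         std::cout << val << " ";            // prints: 3 2 1
}}}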


!!Implementation notes
''iter-adapter.hpp'' provides some helper templates for building Lumiera Forward Iterators.
* __~IterAdapter__ is the most flexible variant, intended for use by custom facilities. An ~IterAdapter maintains an internal back-link to a facility exposing an iteration control API, which is accessed through free functions as extension point. This iteration control API is similar to the one found in C#, allowing to advance to the next result and to check the current iteration state.
* __~RangeIter__ wraps two existing iterators &mdash; usually obtained from {{{begin()}}} and {{{end()}}} of an STL container embedded within the implementation. This allows for iterator chaining.
* __~PtrDerefIter__ works similar, but can be used on an STL container holding //pointers,// to be dereferenced automatically on access

Similar to the STL habits, Lumiera Forward Iterators should expose typedefs for {{{pointer}}}, {{{reference}}} and {{{value_type}}}.
Additionally, they may be used for resource management purposes by embedding a ref-counting facility, e.g. allowing to keep a snapshot or result set around until it can no longer be accessed.
This term has //two meanings, //so care has to be taken for not confusing them.
# in general use, a Frame means one full image of a video clip, i.e. an array of rows of pixels. For interlaced footage, one Frame contains two half-images, commonly called Fields. (Cinelerra2 confuses these terms)
# here in this design, we use Frame as an abstraction for a buffer of raw media data to be processed. If in doubt, we should label this "DataFrame".
#* one video Dataframe contains a single video frame
#* one audio Dataframe contains a block of raw audio samples
#* one OpenGL Dataframe could contain raw texture data (but I am lacking expertise for this topic)
An entity within the RenderEngine, responsible for translating a logical [[calculation stream|CalcStream]] (corresponding to a PlayProcess) into a sequence of individual RenderJob entries, which can then be handed over to the [[Scheduler]]. Performing this operation involves a special application of [[time quantisation|TimeQuant]]: after establishing a suitable starting point, a typically contiguous series of frame numbers need to be generated, together with the time coordinates for each of those frames. As a //service// the Dispatcher acts as //bridge// between [[»playback«|Player]] and the [[render nodes network|Rendering]].

The Dispatcher works together with the [[job ticket(s)|JobTicket]] and the [[Scheduler]]; actually these are the //core abstractions//&nbsp; the process of ''job planning'' relies on. While the actual scheduler implementation lives within the Vault, the job tickets and the dispatcher are located within the [[Segmentation]], which is the backbone of the [[low-level model|LowLevelModel]]. More specifically, the dispatcher interface is //implemented//&nbsp; by a set of &rarr; [[dispatcher tables|DispatcherTables]] within the segmentation.

!Collaborations
The purpose of this interface is to support the planning of new jobs, for a given [[stream-of-calculations|CalcStream]]. From time to time, a chunk of new [[frame rendering jobs|RenderJob]] entries will be prepared for the [[Scheduler]]. Each job knows his frame number and the actual [[render node|ProcNode]] to operate. So, to start planning jobs, we need to translate...
⧐ time &rarr; frame number &rarr; segment &rarr; real exit node.
!!!Invocation situation
* our //current starting point//&nbsp; is given as ''anchor frame'', related to the [[Timings]] of a [[render/playback process|PlayProcess]]
* we want to address a specific frame, designated by an //absolute frame number// -- interpreted relative to the //frame grid// of these //playback {{{Timings}}}//
* ⟹ the dispatcher establishes the //fundamental characteristics// of that frame, comprised of
** the absolute nominal time
** the absolute frame number
** the ModelPort to draw data from
** additional focussed context information:
*** the relevant [[segment|Segmentation]] responsible for producing this frame
*** the corresponding JobTicket to use at this point
*** a concrete ExitNode to pull for this frame
*** possibly a real-time related [[deadline|JobDeadline]]

!!!Planning Chunks
As such, the whole process of playback or rendering is a continued series of exploration and evaluation. The outline of what needs to be calculated is determined continuously, proceeding in chunks of evaluation. The evaluation structure of the render engine is quite similar to the //fork-join//-pattern, just with the addition of timing constraints. This leads to an invocation pattern, where a partial evaluation happens from time to time. Each of those evaluations establishes a breaking point in time: everything //before// this point is settled and planned thus far. So, this point is an ''anchor'' and starting point for the next partial evaluation. On the implementation level, each CalcStream maintains a {{{RenderDrive}}} (see below), which acts both as //planning pipeline// and JobFunctor for the next planning chunk to come. Moreover, the associated [[Timings]] record establishes a definitive binding between the abstract logical time of the session timeline, and the real wall-clock time forming the deadline for render evaluation.

!!!related timelines and the frame grid
The frame dispatch step joins and combines multiple time axes. Through the process of //scheduling,// the output generation is linked to real ''wall clock time'' and the dispatch step establishes the [[deadlines|JobDeadline]], taking the ''engine latency'' into account. As such, any render or playback process establishes an ''output time grid'', linking frame numbers to nominal output time or timecode, based on the ''output frame rate'' -- where both the framerate and the actual progression (speed) might be changed dynamically. But beyond all of this there is a third, basically independent temporal structure involved: the actual content to render, the ''effective session timeline''. While encoded in nominal, absolute, internal time values not necessarily grid aligned, in practice at least the //breaking points,// the temporal location where the content or structure of the pipeline changes, are aligned //to an implicit grid used while creating the edit.// Yet this ''edit timing'' structure need not be related in any way to the ''playback grid'', nor is it necessarily the same as the ''source grid'' defined by the media data used to feed the pipeline.

These complex relationships are reflected in the invocation structure leading to an individual frame job. The [[calculation stream|CalcStream]] provides the [[render/playback timings|Timings]], while the actual implementation of the dispatcher, backed by the [[Fixture]] and thus linked to the session models, gets to relate the effective nominal time, the frame number, the exit node and the //processing function.//

!!!controlling the planning process
[>img[Structure of the Fixture|draw/Play.Dispatch.png]]New render jobs are planned as an ongoing process, proceeding in chunks of evaluation. Typically, to calculate a single frame, several jobs are necessary -- to find out which and how, we'll have to investigate the model structures corresponding to this frame, resulting in a tree of prerequisites. Basically, the planning for each frame is seeded by establishing the nominal time position, in accordance to the current [[mode of playback|NonLinearPlayback]]. Conducted by the [[play controller|PlayController]], there is a strategy to define the precise way of spacing and sequence of frames to be calculated -- yet for the actual process of evaluating the prerequisites and planning the jobs, those details are irrelevant and hidden behind the dispatcher interface, as is most of the model and context information. The planning operation just produces a sequence of job definitions, which can then be associated with real time (wall clock) deadlines for delivery. The relation between the spacing and progression of the nominal frame time (as controlled by the playback mode) and the actual sequence of deadlines (which is more or less dictated by the output device) is rather loose and established anew for each planning chunk, coordinated by the RenderDrive. The latter in turn uses the [[Timings]] of the [[calculation stream|CalcStream]] currently being planned, and these timings act as a strategy to represent the underlying timing grid and playback modalities.

And while the sequence of frame jobs to be planned is possibly infinite, the actual evaluation is confined to the current planning chunk. When done with planning such a chunk of jobs, the RenderDrive adds itself as an additional ''continuation job'' to prepare a re-invocation of the planning operation for the next chunk at some point in the future. Terminating playback is equivalent to not including or not invoking this continuation job. Please note that planning proceeds independently for each [[Feed]] -- in Lumiera the //current playback position//&nbsp; is just a conceptual projection of wall clock time to nominal time; there is no such thing as a synchronously proceeding »playhead«.
&rarr; [[relation of models|BuildFixture]]

!!!producing actual jobs
The JobTicket is created on demand, specialised for a single [[segment|Segmentation]] of the timeline and a single [[feed|Feed]] of data frames to be pulled from a ModelPort. Consequently this means that all frames and all channels within that realm will rely on the same job ticket -- which is a //higher order function,// a function producing another function: when provided with the specific frame coordinates (frame number and time point), the job ticket produces a [[concrete job definition|RenderJob]], which itself is a function to be invoked by the [[scheduler|Scheduler]] to carry out the necessary calculations just in time.
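A schematic sketch of this »higher order function« character (stand-in types, not the actual Lumiera interfaces): given concrete frame coordinates, the job ticket closes over the exit node and yields a job, which is itself a function invoked later by the scheduler.
{{{
#include <cstdint>
#include <functional>
#include <iostream>

using FrameNr   = int64_t;
using RenderJob = std::function<void()>;              // stand-in for the real job descriptor

struct ExitNode
{
    void pull (FrameNr nr) { std::cout << "render frame " << nr << "\n"; }
};

struct JobTicket                                      // one ticket per segment and feed
{
    ExitNode* exitNode;

    RenderJob createJobFor (FrameNr frameNr)          // higher-order: produces another function
    {
        ExitNode* node = exitNode;
        return [node,frameNr]{ node->pull (frameNr); };   // deferred: the scheduler invokes this just in time
    }
};

int main ()
{
    ExitNode  node;
    JobTicket ticket{&node};
    RenderJob job = ticket.createJobFor (42);
    job();                                            // what the scheduler would do at the deadline
}
}}}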
&rarr; JobPlanningPipeline
The ''Gmerlin Audio Video Library''. &rarr; see [[homepage|http://gmerlin.sourceforge.net/gavl.html]]
Used within Lumiera as a foundation for working with raw video and audio media data
__G__roup __of__ __P__ictures: several compressed video formats don't encode single frames. Normally, such formats are considered mere //delivery formats,// but it was one of the key strengths of Cinelerra from the start to be able to do real non-linear editing on such formats (like the ~MPEG2-TS used in HDV video). The problem of course is that the data Vault needs to decode the whole GOP to serve single raw video frames.

For this Lumiera design, we could consider making GOP just another raw media data frame type and integrate this decoding into the render pipeline, similar to an effect based on several source frames for every calculated output frame.

&rarr;see in [[Wikipedia|http://en.wikipedia.org/wiki/Group_of_pictures]]
//Abstract generic node element to build a ~DOM-like rendering of Lumiera's [[session model|HighLevelModel]].//
GenNode elements are values, yet behave polymorphically. They are rather light-weight, have a well-established identity and can be compared. They are //generic// insofar as they encompass several heterogeneous ordering systems, which in themselves can not be subsumed into a single ordering hierarchy. The //purpose// of these generic nodes is to build a symbolic representation, known as [[external tree description|ExternalTreeDescription]], existing somewhere "outside", at a level where the fine points of ordering system relations do not really matter. Largely, this external description is not represented or laid down as a whole. Rather, it is used as a conceptual reference frame to describe //differences and changes.// Obviously, this means that parts of the altered structures have to appear in the description of the modifications. So, practically speaking, the prime purpose of GenNode elements is to appear as bits of information within a ''diff language'' to exchange such information about changes.

To be more specific, within the actual model there are [[Placements|Placement]]. These refer to [[M-Objects|MObject]]. Which in turn rely on [[Assets|Asset]]. Moreover, we have some processing rules, and -- last but not least -- the "objects" encountered in the model have state, visible as attributes of atomic value type (integral, floating point, string, boolean, time, time ranges and [[quantised time entities|TimeQuant]]).
A generic node may //represent any of these kinds// -- and it may have ~GenNode children, forming a tree. Effectively, all of this together makes ~GenNode a ''Monad''.

GenNode elements are conceived as values, and can thus be treated as mutable or immutable; it is up to the user to express this intent through const correctness.
Especially the nested structures, i.e. a GenNode holding an embedded {{{diff::Record}}}, are by default immutable, but expose an object builder API for remoulding. This again places the actual decision about mutations into the usage context, since the remoulded Record has to be assigned explicitly. In fact, this is an underlying theme of the whole design laid out here: data represented as a GenNode graph is //structured, yet opaque.// It is always tied into a usage context that "happens to know" what to expect. Moreover, the values to be integrated into such a structure are of limited type -- covering just the basic kinds of data plus a recursive structuring device (a minimal structural sketch follows the list below):
* integral values: {{{int}}}, {{{int64_t}}}, {{{short}}}, {{{char}}}
* logic values: {{{bool}}}
* measurement data: {{{double}}}
* textual data: {{{std::string}}}
* time representation: {{{time::Time}}}, {{{time::Duration}}}, {{{time::TimeSpan}}}
* identity and reference: {{{hash::LuidH}}}
* object-like structure:  {{{diff::Record<GenNode>}}}
* for cross-links and performance considerations, we also provide a {{{Record}}}-//reference element.//
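The following stand-in types (purely illustrative, not the actual {{{diff::Record}}}) show the resulting object-like structure: a symbolic type-ID, named attributes, and enumerable children forming the scope.
{{{
#include <string>
#include <vector>

struct NodeStandIn                         // stand-in for GenNode: an ID plus (here flattened) payload
{
    std::string idSym;
    std::string value;
};

struct RecordStandIn                       // stand-in for diff::Record<GenNode>
{
    std::string              typeSym;      // symbolic type-ID, e.g. "clip"
    std::vector<NodeStandIn> attributes;   // addressed by name (the node's ID serves as key)
    std::vector<NodeStandIn> children;     // only enumerable; they form the scope of this record
};

// conceptually: a clip with a start attribute and two nested effects in its scope
RecordStandIn clip{"clip"
                  ,{{"start", "0:10.000"}}
                  ,{{"effect-1", "..."}, {"effect-2", "..."}}
                  };
}}}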



!to reflect or not reflect
When dealing with this external model representation, indeed there are some rather global concerns which lend themselves to a generic programming style -- simply because, otherwise, we'd end up explicating and thereby duplicating the structure of the model all over the code. Frequently, such a situation is quoted as the reason to demand introspection facilities on any data structure. We doubt this is a valid conclusion. Since introspection allows one to accept just //any element// -- followed by an open-ended //reaction on the received type// -- we might arrive at the impression that our code reaches a maximum of flexibility and "openness". Unfortunately, this turns out to be a self-deception, since code doing any meaningful operation needs pre-established knowledge about the meaning of the data to be processed. More so when, as in any hierarchical data organisation, the relevant meaning is attached to the structure itself, so consequently this pre-established knowledge tends to be scattered over several, superficially "open" handler functions. What looks open and flexible at first sight is in fact littered with obscure, scattered and non-obvious additional presumptions.
This observation from coding practice leads us to the conclusion that we do not really want to support the full notion of data and type introspection. We //do want// some kind of passive matching on structure, where the receiver explicitly has to supply structural presuppositions. In a fully functional language with a correspondingly rich type system, a partial function (pattern match) would be the solution of choice. Under the given circumstances, we're able to emulate this pattern based on our variant visitor -- which basically calls a different virtual function for each of the types possibly to be encountered "within" a ~GenNode.

This is a rather challenging attitude, and in practice we're bound to allow for a bit of leeway, leave some loopholes to at least "peek" into data about to be handled:
* based on the form of a given node's ID element, we allow to distinguish //attributes and children.//
* the {{{Record<GenNode>}}}, which is used to represent object-like entities, exposes a //semantic type field// to be used at the local scope's discretion.
* we explicitly enable receiving or handling code to "peek" into that type field, thereby silently absorbing the case when the GenNode in question in fact does not hold a Record.
Each [[Timeline]] has an associated set of global [[pipes|Pipe]] (global busses), similar to the subgroups of a sound mixing desk.
In the typical standard configuration, there is (at least) a video master and a sound master pipe. Like any pipe, ingoing connections attach to the input side, attached effects form a chain, where the last node acts as exit node. The PipeID of such a global bus can be used to route media streams, allowing the global pipe to act as a summation bus bar.
&rarr; discussion and design rationale of [[global pipes|GlobalPipeDesign]]

!Properties and decisions
* each timeline carries its own set of global pipes, as each timeline is a top-level element in its own right
* like all [[pipes|Pipe]] the global ones are kept separated per stream (proto)type
* any global pipe //not// connected to another OutputDesignation automatically creates a ModelPort
* global pipes //do not appear automagically just by sending output to them// -- they need to be set up explicitly
* the top-level (BusMO) of the global pipes isn't itself a pipe. Thus the top-level of the pipes forms a list (typically a video and sound master)
* below, a tree-like structure //may// be created, building upon the same scope based routing technique as used for the tracks / forks
//This page serves to shape and document the design of the global pipes//
Many aspects regarding the global pipes turned out while clarifying other parts of the ~Steam-Layer's design. For some time it wasn't even clear if we'd need global pipes -- common video editing applications get along without them. Mostly it was the usefulness of the layout found on sound mixing desks, and a vague notion of separating time-dependent from global parts, which finally led me to favour such a global facility. This decision then helped in separating the concerns of timeline and sequence, making the //former// a collection of non-temporal entities, while the latter concentrates on time-varying aspects.

!Design problem with global Pipes
Actually building up the implementation of global pipes seems to pose a rather subtle design problem: it is difficult to determine how to do it //right.//
To start with, we need the ability to attach effects to global pipes. There is already an obvious way to attach effects to clips (= local pipes), and thus it's desirable to handle it the same way for global pipes. At least there should be a really good reason //not//&nbsp; to do it the same way. Thus, we're going to attach these effects by placement into the scope of another placed MObject. And, moreover, this other object should be part of the HighLevelModel's tree, to allow using the PlacementIndex as implementation. So this reasoning brings us to re-using or postulating some kind of object, while lacking a point of reference //outside these design considerations//&nbsp; to justify the existence of the corresponding class or to shape its properties. Which means &mdash; from a design view angle &mdash; we're entering slippery ground.
!!!~Model-A: dedicated object per pipe
Just for the sake of symmetry, for each global pipe we'd attach some suitable ~MObject as child of the BindingMO representing the timeline. If no further specific properties or functionality is required, we could use Track objects, which are generally used as containers within the model. Individual effects would then be attached as children, while output routing could be specified within the attaching placement, the same way as it's done with clips or tracks in general. As the pipe asset itself already stores a StreamType reference, all we'd need is some kind of link to the pipe asset or pipe-ID, and maybe façade functions within the binding to support the handling.
!!!~Model-B: attaching to the container
Acknowledging the missing justification, we could instead use //just something to attach// &mdash; and actually handle the real association elsewhere. The obvious "something" in this case would be the BindingMO, which already acts as implementation of the timeline (which is a façade asset). Thus, for this approach, the bus-level effects would be attached as direct children of the {{{Placement<BindingMO>}}}, just for the sake of being attached and stored within the session, with an additional convention for the actual ordering and the association to a specific pipe. The Builder would then rather query the ~BindingMO to discover and build up the implementation of the global pipes in terms of the render nodes.

!!!Comparison
While there might still be some compromises or combined solutions &mdash; to support the decision, the following table details the handling in each case
|>|   !~Model-B|!~Model-A |
|Association: | by plug in placement|by scope |
|Output:      | entry in routing table|by plug in placement |
|Ordering:    |>| stored in placement |
|Building:    | sort children by plug+order<br/>query output from routing|build like a clip |
|Extras:      | ? |tree-like bus arrangement,<br/> multi-stream bus |

So through this detailed comparison ''~Model-A looks favourable'': while the other model requires us to invent a good deal of the handling specifically for the global pipes, the former can be combined from patterns and solutions already used in other parts of the model, plus it allows some interesting extensions. 
On second thought, the fact that the [[Bus-MObject|BusMO]] is rather void of any specific meaning doesn't weigh so much: as the Builder is based on the visitor pattern, the individual objects can be seen as //algebraic data types.// Besides, there is at least one little bit of specific functionality: a Bus object actually needs to //claim//&nbsp; to be the OutputDesignation, by referring to the same ~Pipe-ID used in other parts of the model to request output routing to this Bus. Without this match on both ends, an ~OutputDesignation may be mentioned at will, but no connection whatsoever will happen.


!{{red{WIP}}} Structure of the global pipes
;creating global pipes automatically?
:defining the global bus configuration is considered a crucial part of each project setup. Lumiera isn't meant to support fiddling around thoughtlessly. The user should be able to rely on crucial aspects of the global setup never being changed without notice.
;isn't wiring and routing going to be painful then?
:routing is scope based and we employ a hierarchical structure, so subgroups are routed automatically. Moreover, wiring is done based on best match regarding the stream type. We might consider feeding all non-connected output designations to the GUI after the build process, to allow short-cuts for creating further buses.
;why not making buses just part of the fork ("track tree")?
:anything within the scope of some fork ("track") has a temporal extension and may vary -- while it's the very nature of the global pipes to be static anchor points.
;why not having one grand unified root, including the outputs?
:you might consider that a matter of taste (or better common-sense). Things different in nature should not be forced into uniformity
;should global pipes be arranged as list or tree?
:sound mixing desks use a list-style arrangement, and this has proven to be quite viable when combined with the ability to //send over// output from one mixer stripe to the input of another, allowing to build arbitrarily complex filter matrices. On the other hand, organising a mix in //subgroups// can be considered best practice. This leads to arranging the pipes //as a forest:// by default and on top level as a list, optionally expanding into a subtree with automatic rooting, augmented by the ability to route any output to any input (cycles being detected and flagged as errors).
//some information how to achieve custom drawing with ~GTKmm...//
valuable reference documentation comes bundled with lib ~GTKmm, in the guide [["Programming with GTKmm"|https://developer.gnome.org/gtkmm-tutorial/stable/index.html.en]]
* the chapter detailing [[use of the Gtk::DrawingArea|https://developer.gnome.org/gtkmm-tutorial/stable/chapter-drawingarea.html.en]], including an introduction to [[Cairomm|https://www.cairographics.org/documentation/cairomm/reference/]]
* the chapter about [[constructing a custom widget|https://developer.gnome.org/gtkmm-tutorial/stable/sec-custom-widgets.html.en]]

Basically we have to handle the {{{signal_draw}}} events. Since we need to control the event processing, it is recommended to do this event handling by //overriding the {{{on_draw()}}} function,// not by connecting a slot directly to the signal. Two details are to be considered here: the //return value// controls whether the event is considered fully handled. If we return {{{false}}}, enclosing (parent) widgets also get to handle this event; this is typically what we want in case of custom drawing. And, secondly, if we derive from any widget, it is a good idea to invoke the //parent implementation of {{{on_draw()}}} at the appropriate point.// This is especially relevant when our custom drawing involves the ''canvas widget'' [[Gtk::Layout|GtkLayoutWidget]], which has the ability to place several further widgets embedded onto the canvas area. Without invoking this parent event handler, those embedded widgets won't be shown.

Typically, when starting the draw operation, we retrieve our //allocation.// This is precisely the rectangle reserved for the current widget, //insofar it is visible.// Especially this means, when a larger canvas is partially shown with the help of scrollbars, the allocation is the actually visible rectangle, not the virtual extension of the canvas. Each scrollbar is associated with a {{{Gtk::Adjustment}}}, which is basically a bracketed value with preconfigured step increments. The //value// of the adjustment corresponds to the //coordinates of the viewport// within the larger overall area of the canvas. Since coordinates in Gtk and Cairo are oriented towards the right and downwards, the value properties of both adjustments (horizontal and vertical) together give us the coordinates of the upper left corner of the viewport. The maximum value possible within such an adjustment is such as to fulfil {{{max(value) + viewport-size == canvas-size}}}. By printing values from the {{{on_draw()}}} callback, it can be verified that Gtk indeed handles it precisely that way.

Thus, if we want to use absolute canvas coordinates for our drawing, we need to adjust the cairo context prior to any drawing operations: we translate it in the opposite direction of the values retrieved from the scrollbar {{{Gtk::Adjustment}}}s. This causes the //user coordinates// to coincide with the absolute canvas coordinates.
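A sketch of such a draw handler (assuming a hypothetical {{{TimelineCanvas}}} derived from {{{Gtk::Layout}}}; the details of the actual Lumiera widget differ):
{{{
#include <gtkmm.h>

class TimelineCanvas : public Gtk::Layout
{
    bool
    on_draw (Cairo::RefPtr<Cairo::Context> const& cr)  override
    {
        cr->save();
        // translate user coordinates to coincide with absolute canvas coordinates
        cr->translate (-get_hadjustment()->get_value()
                      ,-get_vadjustment()->get_value());

        /* ...custom drawing in absolute canvas coordinates... */

        cr->restore();
        return Gtk::Layout::on_draw (cr);   // let the parent draw the embedded child widgets
    }
};
}}}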
//This is the canvas widget of ~GTK-3//
It allows not only custom drawing, but also to embed other widgets at defined coordinates.

!to be investigated
In order to build a sensible plan for our timeline structure, we need to investigate and clarify some fundamental properties of the GtkLayoutWidget
* how are overlapping widgets handled?
* how is resizing of widgets handled, esp. when they need to grow due to content changes?
* how does the event dispatching deal with partially covered widgets?
* how can embedded widgets be integrated into a tabbing / focus order?
* how is custom drawing and widget drawing interwoven?

!test setup
we need a test setup for this investigation.
* easy to launch
* don't waste much time, it is disposable
* realistic: shall reflect the situation in our actual UI
As a starting point, in winter 2016/17 the old (broken) timeline panel was moved aside and a new panel was attached for GTK experiments. These basically confirmed the envisioned approach: it is possible to place widgets freely onto the canvas; they are drawn in insert order, which allows for overlapping widgets (and mouse events are dispatched properly, as you'd expect). Moreover, it is also possible to draw directly onto the canvas, by overriding the {{{on_draw()}}}-function. However, some (rather trivial) adjustments need to be made to get a virtual canvas which moves along with the placed widgets. That is, GtkLayoutWidget handles scrolling of embedded widgets automatically, but you need to adjust the Cairo drawing context manually to move along. The aforementioned experimental code shows how.

After these initial experiments, the focus of development shifted to the top-level UI structure, still unsatisfactory at that time; integration with the message-based communication via the UI-Bus and the propagation of changes in the form of [[Diff messages|MutationMessage]] was achieved. Following this ground work, the [[timeline UI|GuiTimelineView]] has been successfully implemented, using [[custom drawing|GuiTimelineDraw]] on a {{{Gtk::Layout}}} based »timeline body canvas« — thereby fully validating the feasibility of this approach.
//Interface to handle widgets on a timeline canvas without tight coupling to a central [[Layout Manager|TimelineDisplayManager]].//
Instead of actively installing and handling widgets from a central controlling entity, rather we allow the widgets to //attach// to an existing UI context, which beyond that remains unspecified. Widgets attached this way are required to implement a handling interface. Based on this general scheme, two kinds of attachments are modelled, each through a distinct set of interfaces
;~ViewHook
:Widgets of type {{{ViewHooked}}} are placed into an existing layout framework, like e.g. a grid control;
:those widgets maintain an //order of attachment// (which also translates into an order of appearance in the GUI).
;~CanvasHook
:Widgets of type {{{CanvasHooked}}} are attached with relative coordinates onto a [[Canvas widget|GtkLayoutWidget]] combined with custom drawing
In both cases, the interface on the widget side is modelled as //smart-handle// -- the widget detaches automatically at end-of-life.
Similarly, widgets can be re-attached, retaining the order -- or, in case of ~CanvasHook, maintaining or possibly adjusting the relative coordinates of attachment. And, also in both cases, the attachment can be repeated recursively, allowing to build nested UI structures.

This interconnected structure allows //another entity// (not the hook or canvas) to control and rearrange the widgets, while to some degree also allowing the widgets to adjust themselves in response to local interactions like dragging.
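The following interface outline conveys the general idea; it is a sketch under assumed names, not the actual Lumiera code:
{{{
#include <gtkmm.h>

/* attachment point for widgets placed onto a canvas;
 * the corresponding smart-handle on the widget side
 * invokes remove() automatically at end-of-life.    */
class CanvasHook
  {
  public:
    virtual ~CanvasHook() { }
    
    virtual void hook   (Gtk::Widget&, int x, int y)  =0;  // attach at canvas coordinates
    virtual void move   (Gtk::Widget&, int x, int y)  =0;  // re-attach, adjusting coordinates
    virtual void remove (Gtk::Widget&)                =0;  // detach (invoked from the smart-handle)
  };

/* attachment point for widgets placed into an ordered layout container */
class ViewHook
  {
  public:
    virtual ~ViewHook() { }
    
    virtual void hook   (Gtk::Widget&)  =0;                // attach, appending to the order
    virtual void remove (Gtk::Widget&)  =0;
  };
}}}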

!Time calibrated display
The arrangement of media elements onto a timeline display imposes another constraint: the position of elements directly relates to their properties, especially a start and end time. Such a requirement is in conflict with the usual arrangement of UI elements, which largely is governed by considerations of layout and style and UI space necessary for proper presentation. When attaching a widget to the Canvas, the //canvas coordinates// of the attachment point need to be specified -- yet the widget knows its own properties rather in terms of time and duration. A //calibration// thus needs to be maintained, in close relation to global layout properties like the zoom scale or the scrolling position of the data section currently visible.

To bridge this discrepancy, the ~CanvasHook interface exposes a ''~DisplayMetric'' component, with operations to convert time to pixel offsets and vice versa. Internally, this ~DisplayMetric is passed through and actually attached to the ZoomWindow component, which is maintained within the {{{TimelineLayout}}} to coordinate all scrolling and zooming activities. Moreover, an //overall canvas extension// is maintained, by anchoring the origin in pixel coordinates at the beginning of the {{{TimeSpan}}} defining this canvas range.
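As an illustration of the calibration involved -- a simplified sketch with assumed types, showing how such a metric could translate between time values and pixel offsets:
{{{
#include <cstdint>

using TimeValue = int64_t;          // simplification: time as integral micro-ticks

struct DisplayMetric
  {
    TimeValue canvasOrigin;         // begin of the TimeSpan covered by the canvas
    double    pixelsPerTick;        // zoom scale, as maintained by the ZoomWindow
    
    int
    translateTimeToPixels (TimeValue t)  const
      {
        return int ((t - canvasOrigin) * pixelsPerTick);
      }
    
    TimeValue
    translatePixelsToTime (int px)  const
      {
        return canvasOrigin + TimeValue (px / pixelsPerTick);
      }
  };
}}}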
//The representation of a [[media clip|Clip]] for manipulation by the user within the UI.//
Within Lumiera, a clip is conceived as a //chunk of media,// which can be handled as a compound. Clip as such is an abstract concept, which is treated with minimal assumptions...
* we know that a clip has //media content,// which need not be uniform and can be inherently structured (e.g. several media, several channels, a processing function)
* like anything within a timeline, a clip has a temporal extension (but not necessarily finite; it can //not be zero,// but it can be infinite)
* by virtue of [[Placement]], a clip acquires a time position. (&rarr; GuiPlacementDisplay)
Due to this unspecific nature, a clip might take on various //appearances// within the UI.

!clip appearances
To start with, a clip can be rendered in ''abridged form'', which means that the content is stylised and the temporal extension does not matter. In this form, the clip is reduced to an icon, an expand widget and an ID label. This is the standard representation encountered within the //media bins.// The intent of this representation is to save on screen area, especially to minimise vertical extension. As a derivative of this layout style, a clip may be shown in abridged form, but with proper representation of the temporal extension; to this end, the enclosing box is extended horizontally as needed, while the compound of icon, control and label is aligned such as to remain in sight.

The next step in a series of progressively more detailed clip representations is the ''compact form'', which still focuses on handling the clip as an unit, while at least indicating some of the inherent structuring. Essentially, the clip here is represented as a //strip of rendered preview content,// decorated with some overlays. One of these overlays is the //ID pane,// which resembles the arrangement known from the abridged form: The icon here is always the [[Placement]] icon, followed by the expand widget and the ID label. Again, this pane is aligned such as to remain in sight. Then, there is a pair of overlays, termed the //boundary panes,// which indicate the begin and the end of the clip respectively. Graphically, these overlays should be rendered in a more subtle way, just enough to be recognisable. The boundary panes are the attachment areas for //trimming gestures,// as opposed to moving and dragging the whole clip or shuffle editing of the content. Moreover, these boundary panes compensate for the alignment of the ID pane, which mostly keeps the latter in sight. As this might distort the visual perception of scrolling, the boundary panes serve to give a clear visual clue when reaching the boundary of an extended clip. Optionally, another overlay is rendered at the upper side of the clip's area, to indicate attached effect(s). It is quite possible for these effect decorations not to cover the whole temporal span of the clip.

A yet more detailed display of the clip's internals is exposed in the ''expanded form.'' Here, the clip is displayed as a window pane holding nested clip displays, which in turn might again be abridged, compact or ({{red{maybe 11/16}}}) even expanded. This enclosing clip window pane should be rendered semi-transparent, just to indicate the enclosing whole. The individual clip displays embedded therein serve to represent individual media parts or channels, or individual attached effects. Due to the recursive nature of Lumiera's HighLevelModel, each of these parts exposes essentially the same controls, allowing to control the respective aspects of the part in question. We may even consider ({{red{unclear as of 11/16}}}) to allow accessing the parts of a VirtualClip, this way turning the enclosing clip into a lightweight container ({{red{11/2016 give proper credit for this concept! Who proposed this initially in 2008? was this Thosten Wilms?}}})

Finally, there can be a situation where it is just not possible to render any of the aforementioned display styles properly, due to size constraints. Especially, this happens when [[zooming|ZoomWindow]] out such as to show a whole sequence or even timeline in overview. We need to come up with a scheme of ''graceful display degradation'' to deal with this situation -- just naively attempting to render any form might easily send our UI thread into a minute long blocking render state, for no good reason. Instead, in such cases display should fall back to a ''degraded form''
* showing just a placeholder rectangle, when the clip (or any other media element) will cover a temporal span relating to at least 1 pixel width (configurable trigger condition)
* even further, collapsing several entities into a single strip of elements, to indicate at least that some content is present in this part of the timeline.
Together this shows that we have to decide on a ''display strategy'' //before we even consider adding// a specific widget to the enclosing GTK container....

!!!strategy decision how to show a clip
Depending on specific circumstances within the presentation, we get a fundamentally different code path, well beyond just a variation of display parametrisation. Even more, this decision happens above the level of individual widgets, and this decision might be changed dynamically, independently of adding or removing individual widgets. We may
* need to build a custom drawing container for a single widget, to produce the compact and expanded display; theoretically this display can be //reduced// to the abridged form.
* alternatively we may omit the overhead of a custom drawing container, if we know that we'll most likely stick to the abridged form, or reduce display to a placeholder block
* but in the extreme case, we do not even display an individual widget for a given element -- rather we need to create a placeholder to stand in for //N clips.//
This is unfortunate, since it defeats a self-contained control structure, where each widget holds its own model and manages its own state. Rather, we have to distinguish between child elements belonging to the model and child widgets as far as the view is concerned. A given scope in the fork could have more clip child elements than child widgets for display. Which means we have to separate the view aspect from the modelling aspect. And this split already happens on a level global to the timeline.

This creates a tension related to the kind of architecture we want to build with the help of the UI-Bus. The intention is to split and separate between the application global concerns, like issuing an editing command or retrieving changed model contents, and the //local UI mechanics.// This separation can only work if -- with respect to the global concerns -- the UI-Bus embodies (actually mediates) the roles of model //and// controller, while the UI widgets take on the role of a "mere view". Whereas, for the local concerns, the same widgets are to act self-contained. But now, already the existence of the aforementioned widgets becomes dependent on another, obviously local concern, which unfortunately cross-cuts the 1:1 relation between model entities and view counterparts.

Starting from the architecture as predetermined by the UI-Bus, it is clear that we have to add a UI-Element for every tangible child entity represented in the (abstracted) model. And the parent entity receiving a diff and adding such a child element is also in charge of that element, controlling its lifecycle. Yet this child element is not necessarily bound to be a widget. This is our chance here: what "adding" means is completely within the scope of the parent element and basically an implementation detail. So the parent may create and own a tracker or controller object to stand for the clip, and the latter may create and adjust the actual widget in negotiation with a display strategy. So effectively we're introducing a //mediator,// which represents the clip (as far as the modelling is concerned) and which at the same time communicates with a global display manager. We call this mediating entity a ClipPresenter.
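A rough outline of this mediator, as a sketch with assumed names (not the actual implementation):
{{{
#include <memory>

class ClipWidget;                        // the actual visual representation (optional)

class ClipPresenter
  {
    std::unique_ptr<ClipWidget> widget_; // may be empty, e.g. in degraded overview display
    
  public:
   ~ClipPresenter();                     // defined where ClipWidget is complete
    
    // model side: the parent creates one presenter per clip child,
    // in response to a population diff received over the UI-Bus
    void updateFromDiff (/* diff message */);
    
    // view side: create, reshape or drop the widget,
    // in negotiation with the global display strategy
    void establishAppearance (/* display strategy, canvas hook, time span... */);
  };
}}}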

!!!how to carry out the clip appearances
Initially, one could think that we'd need to build several widgets to realise the wide variety of clip appearances. But in fact it turns out that we're able to reshape a single base widget to encompass all the presentation styles. Almost all of the necessary styles, to be more precise, omitting the mere overview drawing where several entities are combined into a single box. However, this basic [[element-box-widget|GuiElementBoxWidget]] can be generalised, and even used beyond the clip display itself. It is a simple, one-element container, with an Icon and a textual label. This gives us already a rectangular covered space, and the ability to include a menu and to control the alignment of the label. All the more elaborate presentation styles can be achieved by adding a canvas widget into this frame and then placing additional stuff on top of that. The only tricky part arises in overview display, when just some clip rectangle can stand-in for a whole series of clips, which themselves remain hidden as UI elements.

A prime consideration regarding this whole clip presentation strategy is the performance concern. It is quite common for movie edits to encompass several hundred individual clips. Combined with several tracks and an elaborate audio edit, it may well happen that we end up with thousands of individual UI objects. If treated naively, this load might seriously degrade the responsiveness of the interface. Thus we need to care for the relevant infrastructure to enable optimisation of the display. For that reason, the memory footprint of the ClipPresenter and the basic widget has to be kept as small as possible. And moreover, since we do our own layout management in the timeline display, in theory it is possible //only to add// those widgets to the enclosing GTK container, which are //actually about to become visible.// (if we follow this approach, a problem yet to be solved is how to remove widgets falling out of sight, since removing N widgets easily turns into a quadratic operation -- {{red{2022/8 however...}}} alternatively we might just throw away existing widgets and start from scratch with a new minimal set).

!clip content rendering
In a typical editing application, the user can expect to get some visual clue regarding the media content of a clip. For example, sound clips can be visualised as waveform, while movie clips might feature a sequence of images taken from the video. Our intention is to ''use our own rendering engine'' to produce these thumbnails. In fact, our engine is perfectly suited for this task: it has precisely the necessary media decoding and rendering abilities, plus it offers an elaborate system of priorities and deadlines, allowing to throttle the load produced by thumbnail generation. In addition to all those qualities, our engine is planned to be complemented by an "intelligent" frame cache, which, given proper parametrisation, ensures the frequently requested thumbnails will be available for quick display. For this approach to work, we need to provide some infrastructure
* we need to configure and maintain a //preview rendering strategy.//
* the request for rendering has to be cast "just somewhere" as message, possibly via the UI-Bus
* actually rendered content will likewise arrive asynchronously as message via UI-Bus.
* we still need to work out how buffer management for this task will be handled; it should be a derivative of typical buffer management for display rendering.
* the clip widget needs to provide a simple placeholder drawing to mark the respective space in the interface, until the actual preview arrives.

To start with, this mostly means avoiding a naive approach, like having code in the UI to pull in some graphics from media files. We certainly won't just render every media channel blindly. Rather, we acknowledge that we'll have a //strategy,// depending on the media content and some further parameters of the clip. This might well just be a single ''pivot image'' chosen explicitly by the editor to represent a given take. Seemingly, the proper place to house that display strategy is ''within the session model'', not within the UI. And the actual implementation of content preview rendering will largely be postponed until we get our rendering engine into a roughly working state.
//how to access ~Steam-Layer commands from the UI and to talk to the command framework//

!Usage patterns
Most commands are simple; they correspond to some action to be performed within the session, and they are issued in a ''fire and forget'' style from within the widgets of the UI. For this simple case to work, all we need to know is the ''Command ID''. We can then issue a command invocation message over the UI-Bus, possibly supplying further command arguments.
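To illustrate the »fire and forget« pattern -- a heavily simplified sketch; the message format, type names and command ID are assumptions, not the actual UI-Bus API:
{{{
#include <string>
#include <vector>

struct CommandMsg                     // stand-in for a command invocation message
  {
    std::string cmdID;                // ID of the command prototype in Steam-Layer
    std::vector<std::string> args;    // further command arguments, already known
  };

struct BusTerminal                    // stand-in for a widget's UI-Bus attachment
  {
    void act (CommandMsg const&) { /* push the message towards the SteamDispatcher */ }
  };

void
onAddTrackClicked (BusTerminal& bus)
  {
    bus.act (CommandMsg{"sequence_add_track", {"seq-42"}});   // fire and forget
  }
}}}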

However, some actions within the UI are more elaborate, since they might span several widgets and need to pick up //contextual state//.
Typically, those elaborate interactions can be modelled as [[generalised Gestures|GuiGesture]] -- in fact this means we model them like language sentences, with a ''Subject'', a ''Verb'' (action) and some additional qualifications. Since such a sentence requires some time to be spelled out, we have to collect InteractionState, up to the point where it is clear which command shall finally be issued, and in relation to what subject this command should act.
The topic of command binding addresses the way to access, parametrise and issue [[»Steam-Layer Commands«|CommandHandling]] from within the UI structures.
Basically, commands are addressed by-name -- yet the fact that there is a huge number of commands, which moreover need to be provided with actual arguments, to be picked up from some kind of //current context// -- all this together turns a seemingly simple function invocation into a challenging task.
The organisation of the Lumiera UI calls for a separation between immediate low-level UI element reactions, and anything related to the user's actions when working with the elements in the [[Session]] or project. The immediate low-level UI mechanics is implemented directly within the widget code, whereas to //"work on elements in the session",// we'd need a collaboration spanning UI-Layer and Steam-Layer. Reactions within the UI mechanics (like e.g. dragging a clip) need to be interconnected and translated into "sentences of operation", which can be sent in the form of a fully parametrised command instance towards the SteamDispatcher
* questions of architecture related to command binding &rarr; GuiCommandBindingConcept
* study of pivotal action invocation situations &rarr; CommandInvocationAnalysis
* actual design of command invocation in the UI &rarr; GuiCommandCycle
* the way to set up the actual command definitions &rarr; CommandSetup
* access and use commands from UI code &rarr; GuiCommandAccess
The question //how to connect the notion of an ''interface action'' to the notion of a ''command'' issued towards the [[session model|HighLevelModel]].//
* actual design of command invocation in the UI &rarr; GuiCommandCycle
* study of pivotal action invocation situations &rarr; CommandInvocationAnalysis

!prerequisites for issuing a command
Within the Lumiera architecture, with the very distinct separation between [[Session]] and interface view, several prerequisites have to be met before we're able to operate on the model.
* we need a pre-written script, which directly works on the entities reachable through the session interface &rarr; [[Command handling in Steam-Layer|CommandHandling]]
* we need to complement this script with a state capturing script and a script to undo the given action
* we need to combine these fixed snippets into a //command prototype.//
* we need to care for the supply of parameters
** indirectly this defines and limits how this command can be issued
** which in fact embeds the raw command into a context or a gesture of invocation
** and only through this explication the command-as-seen-from-session translates into something tangible within the UI
* next we have to consider conditions and circumstances. Not every command can be invoked at any given time
** the focus and current selection is relevant
** the user interaction might supply context by pointing at something
** the proximity of tangible interface elements might be sufficient to figure out missing parts
** at times it might also be necessary to intersperse the invocation of a detail parameter dialogue prior to command execution
* and finally, after considering all these concerns, it is possible to wire a connection into the actual invocation point in UI
This observation might be surprising; even while a given command is well defined, we can not just invoke it right away. The prevalence of all these intermediary complexities is what opens the necessity and the room for InteractionControl, which is a concern specific to human-computer interfaces. Faced with such a necessity, there are two fundamentally different approaches.
!!!Binding close to the widget
This approach concentrates knowledge about the operation at that location, where it is conceived "to happen" -- that is, close to the concrete UI widget.
So the actual widget type implies knowledge about the contents and the meaning of the command scripts. At the point when the widget is actually triggered, it starts to collect the necessary parameters and to this end needs to acquire connections to other facilities within the UI. In fact, even //before// anything can be triggered, the widget has to administer the activation state and show some controls as enabled or disabled, and it needs to observe ongoing state changes to be able to do so.

The consequence of this approach is that the relations and whereabouts of the entities involved in this decision tend to be spelled out right within the widget code. Any overarching concerns end up being scattered over various implementation sites, need to be dealt with by convention, or rely on all invocation sites to use some helper facilities voluntarily.

!!!Abstracted binding definitions
This contrasting approach attempts to keep knowledge and definitions clustered according to the commands and actions to be performed -- even at the price of some abstractions and indirections. There is no natural and obvious place to collect this information, and thus we need to create such a location deliberately. This new entity or location will serve as a link between user interface and session elements, and it tends to rely on definitions from both realms.
* in addition to the command script, here we build a parameter accessor, which is some kind of functor or closure.
* we need to introduce a new abstraction, termed InteractionState. This is deliberately not a single entity, rather some distinct facility in charge of one specific kind of interaction, like gestures being formed by mouse, touch or pen input.
* from the command definition site, we need to send a combination of //rules// and parameter accessors, which together define an invocation path for one specific flavour of a command
* the InteractionState, driven by the state changes it observes, will evaluate those rules and determine the feasibility of specific command invocation paths
* it sends the //enablement of a command invocation trail// as a preconfigured binding to the actual //trigger sites,// which in turn allows them to react to local user interactions properly
* if finally some button is hit, the local event binding can issue the command right away, by accessing just any UI-Bus terminal at reach within that context

''Lumiera decides to take the latter approach'' -- resulting in a separation between immediate low-level UI element reactions, and anything of relevance to the workings of the application as a whole. The widget code embodies the low-level UI element reactions and as such becomes more or less meaningless beyond local concerns of layout and presentation. If you want to find out about the //behaviour of the UI,// you need to know where to look, and you need to know how to read and understand those enablement rules. Another consequence is the build-up of dedicated yet rather abstract state tracking facilities, hooking like an octopus into various widgets and controllers, which might work counter to the intentions behind the design of common UI toolkit sets.
&rarr; GuiCommandCycle
&rarr; CommandSetup
//the process of issuing a session command from the UI//
Within the Lumiera UI, we distinguish between core concerns and the //local mechanics of the UI.// The latter is addressed in the usual way, based on a variation of the [[MVC-Pattern|http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller]]. The UI toolkit set, here GTK, affords ample ways to express actions and reactions within this framework, where widgets in the presentation view are wired with the corresponding controllers and vice versa (GTK terms these connections //"signals"//; we rely on {{{libSigC++}}} for the implementation).
A naive approach would extend these mature mechanisms to also cover the actual functionality of the application. This compelling solution makes it possible to get "something tangible" up and running quickly, yet -- in the long run -- inevitably leads to core concerns being tangled into the presentation layer, which in turn becomes hard to maintain and loaded with "code behind". Since we are here "for the long run", we immediately draw the distinction between UI mechanics and core concerns. The latter are, by decree and axiom, required to perform even without a UI layer running. This decision gives rise to the challenge of how to form and integrate the invocation of ''core commands'' into the presentation layer.

In a nutshell, we understand each such core command as a ''sentence'', with a //subject//, a //predication//, which is the command script in ~Steam-Layer and can be represented by an ID, and possibly additional arguments. And the key point to note is: //such an action sentence needs to be formed, before it can be issued.//
[>img[Command usage in UI|uml/Command-ui-usage.png]]

!use case analysis
To gain some understanding of the topic, we pose the question "who has to deal with core commands"?
* the developer of ~Steam-Layer, obviously. The result of that development process is a set of [[command definitions|CommandHandling]], which get installed during start-up of the SessionSubsystem
* global menu actions (and keybindings) want to issue a specific command, but possibly also need context information
* a widget, button or context-menu binding typically wants to trigger a command on some [[tangible element|UI-Element]] (widget or controller), but also needs to prepare the arguments prior to invocation
* some [[interaction state manager|InteractionState]] observes contextual change and needs to mark possible consequences for invoking a given command

from these use cases, we can derive the //crucial activities for command handling...//
;instance management
:our commands are prototypes; we need to manage instances for binding concrete arguments
:there is a delay between issuing a command in the UI and dispatching it into the session (and dispatch happens in another thread)
:moreover, there is an "air-gap" when passing a command invocation via ~UI-Bus and Interface System
;forming and enrichment of invocation state
:interactions might happen in the form of ''gestures''
:consequently, interaction state is picked up from context, during an extended time span prior to the invocation
;access to the right ~InteractionState
:a widget just wants to invoke a command, yet it needs the help of "some" InteractionState for
:* creating the command instance, so arguments can be bound
:* fill in missing values for the arguments, depending on context

!command invocation protocol
* at start-up, command definitions are created in Steam, hard wired
* ~UI-Elements know the basic ~Command-IDs relevant to their functionality. These are &rarr; [[defined in some central header|CommandSetup]]
* command usage may happen either as simple direct invocation, or as part of an elaborate argument forming process within a context.<br/>thus we have to distinguish two usage patterns
*;fire and forget
*:a known command is triggered with likewise known arguments
*:* just the global ~Command-ID (ID of the prototype) is sent over the UI-Bus, together with the arguments
*:* the {{{CmdInstanceManager}}} in Steam-Layer creates an anonymous clone copy instance from the prototype
*:* arguments are bound and the instance is handed over into the SteamDispatcher, without any further registration
*;context bound
*:invocation of a command is formed within a context, typically through a //interaction gesture.//
*:most, if not all arguments of the command are picked up from the context, based on the current [[Spot]]
*:* on setup of such an invocation context, the responsible part in the UI queries the {{{CmdContext}}} for an InteractionState
*:* the latter exposes a //builder interface// to define the specifics of the //Binding.//
*:* ...resulting in a concrete [[Gesture controller|GuiGesture]] to be configured and wired with the //anchor widget.//
*:* Gesture controllers are a specific kind of InteractionState (implementation), which means using some kind of ''state machine'' to observe ongoing UI events, detect trigger conditions to start the formation of a Gesture, and finally, when the Gesture is complete, invoke the configured Command and supply the arguments from the invocation state and context information gathered during formation of the gesture.
&rarr; GuiCommandAccess

[<img[Access to Session Commands from UI|uml/Command-ui-access.png]]








Command instances are like prototypes -- thus each additional level of differentiation will create a clone copy and decorate the basic command ID. This is necessary, since the same command may be used and parametrised differently at various places in the UI. If necessary, the {{{CmdInstanceManager}}} internally maintains and tracks a prepared anonymous command instance within a local registration table. The //smart-handle//-nature of a command instance is enough to keep concurrently existing instances apart; instances might be around for an extended period, because commands are enqueued with the SteamDispatcher.


!command definition
&rarr; Command scripts are defined in translation units in {{{steam/cmd}}}
&rarr; They reside in the corresponding namespace, which is typically aliased as {{{cmd}}}
&rarr; definitions and usage include the common header {{{steam/cmd.hpp}}}
see the description in &rarr; CommandSetup
//A view within the UI, featuring some component of relevance to »the model«.//
While any UI is composed of numerous widgets acting as //view of something,// only some of those views play the prominent role of acting as //building block components// of the user interface.
Such UI component views exhibit some substantial traits
* they conform to a built-in fixed list of view types, each of which is unique and dedicated to a very specific purpose: //''Timeline'', ''Viewer'', (Asset)''Bin'', ''Infobox'', ''Playcontrol'',...//
* each component view has a distinguishable identity and is connected to and addressable through the UI-Bus
* it can be hosted only at a dedicated location within one or several specific [[docking panels|GuiDockingPanel]].
* multiplicity (only one, one per window, many) depends on the type of view and needs to be managed.
* such a view is not just //created// -- rather it needs to be //allocated//

!Allocation of UI component views
Here, //Allocation// means
* to constitute the desired element's identity
* to consider multiplicity and possibly retrieve an existing instance
* to determine the hosting location
* possibly to instantiate and register a new instance
* and finally to configure that instance for the desired role
The classical example to verify this definition is the //allocation of viewers:// when starting playback of a new media item, we "need a viewer" to show it. But we can not just create yet another viewer window -- rather we're bound to allocate one of the possible "viewer slots". In fact this is a configurable property of the UI layout employed; some people need to be limited to one single viewer entity (which might even be external, routed to a beamer or monitor), others request the classical editor layout with two viewer windows side by side, while yet other working styles might exploit a limited set of viewers allocated in stack- or round-robin style.

!View access
The global access point to component views is the ViewLocator within InteractionDirector, which exposes a generic access- and management API to
* get (possibly create) some view of given type
* get (possibly create) a view with specific identity
* destroy a specific view
For all these direct access operations, elements are designated by a private name-ID, which is actually more like a type-~ID, and just serves to distinguish the element from its siblings. The same ~IDs are used for the components in [[UI coordinate specifications|UICoord]]; both usages are closely interconnected, because view access is accomplished by forming a UI coordinate path to the element, which is then in turn used to navigate the internal UI widget structure to reach the actual implementation element.

While these aforementioned access operations expose a strictly typed direct reference to the respective view component and thus allow to //manage them like child objects,// in many cases we are more interested in UI elements representing tangible elements from the session. In those cases,  it is sufficient to address the desired component view just via the UI-Bus. This is possible since component ~IDs of such globally relevant elements are formed systematically and thus always predictable: it is the same ID as used within Steam-Layer, which basically is an {{{EntryID<TYPE>}}}, where {{{TYPE}}} denotes the corresponding model type in the [[Session model|HighLevelModel]].

!!!Configuration of view allocation
Since view allocation offers a choice amongst several complex patterns of behaviour, it seems adequate to offer at least some central configuration site with a DSL for readability. That being said -- it is conceivable that we'll have to open this topic altogether for general configuration by the user. For this reason, the configuration site and DSL are designed in a way to foster further evolution of possibilities...
* at definition site {{{id-scheme.hpp}}}, explicit specialisations are given for the relevant types of component view
* each of those general //view configurations//
** defines the multiplicity allowed for this kind of view
** defines how to locate this view
* and that //location definition// is given as a list of //alternatives in order of precedence.// That is, the system tries each pattern of location and uses the first one applicable

!!Standard examples
;Timeline
:add to group of timelines within the timelinePanel
{{{
alloc = unlimited
locate = perspective(edit).panel(timeline)
          or panel(timeline)
          or currentWindow().panel(timeline).create()
}}}
;Viewer
:here multiple alternatives are conceivable
:* allow only a single view instance in the whole application
{{{
alloc = onlyOne
locate = external(beamer)
          or view(viewer)
          or perspective(mediaView).panel(viewer)
          or panel(viewer)
          or firstWindow().panel(viewer).view(viewer).create()
}}}
:* allow two viewer panels (the standard layout of editing applications)
{{{
alloc = limitPerWindow(2)
locate = perspective(edit).panel(viewer)
          or currentWindow().panel(viewer)
          or panel(viewer)
          or currentWindow().panel(viewer).create()
}}}
;(Asset)Bin
:within the dedicated asset panel, add to the appropriate group for the kind of asset
{{{
alloc = unlimited
locate = currentWindow().perspective(edit).tab(assetType())
          or perspective(asset).view(asset)
          or tab(assetType())
          or view(asset).tab(assetType()).create()
          or firstWindow().panel(asset).view(asset).create()
}}}
;~Error-Log
:use the current {{{InfoBoxPanel}}} if such exists, else fall back to using a single view on the primary window
{{{
alloc = limitPerWindow(1)
locate = currentWindow().panel(infobox)
          or view(error)
          or panel(infobox)
          or firstWindow().panel(infobox).view(error).create()
}}}
;Playcontrol
://not determined yet what we'll need here....//

!!DSL structure
* the DSL assigns tokens to some expected Specs
* the tokens themselves are provided as constants
* in fact those tokens are functors
* and are opaquely bound into {{{ViewLocator}}}'s implementation

!!!Signatures
;locate
:is evaluated first and determines the UICoord of the target view
:is a disjunction of alternative specs, which are resolved to use the first applicable one
;alloc
:the //Allocator// is run on those coordinates and actually //allocates// the view
:returns coordinates of an (now) existing view
;allocator specs
:these are //builder functions// to yield a specifically parametrised allocator
:typical example is an allocator to create only a limited number of view instances per window
:the actual builder tokens are synthesised by partial function application on the given lambda (see the sketch below)
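A small sketch of this token-building technique, with assumed types -- the point being that a DSL token like {{{limitPerWindow(2)}}} is just a functor with the count argument bound in:
{{{
#include <functional>
#include <string>

using UICoord   = std::string;                       // stand-in for the real coordinate type
using Allocator = std::function<UICoord(UICoord)>;   // maps desired coords onto an existing view

inline Allocator
limitPerWindow (unsigned maxCnt)
  {
    return [maxCnt](UICoord target) -> UICoord
              {
                // ...count existing views within the window designated by `target`;
                //    re-use one when maxCnt is reached, else allocate at `target`...
                return target;
              };
  }
}}}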
!!!Spec Tokens
;locate
: firstWindow
: currentWindow
: perspective(id)
: panel(id)
: assetType() {{red{WIP 2/18 not clear if necessary}}}
: create()
;alloc
: unlimited
: onlyOne
: limitPerWindow(cnt)

!!!Semantics of location
The given UICoord specs are matched one by one, using the first one applicable. The location indicated by this process describes the parent or scope where the desired view can be found or shall be created. The matching itself is based on the matching of UI coordinates against the existing UI topology, but enriched with some contextual information. By default, a given coordinate spec //is required to exist// -- which implies the semantics of //total coverage,// as defined by the coordinate resolver. When to the contrary the predicate {{{create()}}} is used, we only require that the given path //can be formed unambiguously// within the existing window topology -- the semantics of which is, again, defined by the coordinate resolver and basically implies that the path can be anchored and any missing parts can be interpolated without ambiguity, while possibly some extraneous suffix of components remains to be created by the instantiation process.

The coordinate specs as written within the DSL are slightly abridged, as the final element, the actual component to be created, can be omitted and will be supplied automatically. This is possible insofar as the ID of the queried element is actually more like a type specifier, and thus drawn from a finite and well known collection of possible elements (Timeline, Asset Bin, Error Log, Media Viewer, etc.). The elements to be created must invoke existing code and must be able to interface with their actual environment after all.

However, the semantics of UI coordinate resolution and matching are applied against the coordinate specs //as given in the DSL.// Since, by default, what is given is required to exist, it makes quite a difference as to what kind of element is used as terminal element of a given spec. Since our UI coordinate system establishes a distinct depth for each kind of element, we can and even must reject any spec which does not yield a definite location for at least the parent element; we do not interpolate or even invent arbitrary elements, we only ever match against existing ones. Thus any DSL definition must encompass a sane default, a final alternative clause which always succeeds -- and the DSL is considered broken if it doesn't.

!!!!The problem with {{{assetType()}}}
There is a very special use case, where we might want to express some fine points regarding the placement of a new sub-tab within the requested view. This need arises when the system needs to show some sub-element through a dedicated view. Such might be the case for a special kind of asset, or when a virtual clip or media is to be opened from a context //other than editing the timeline.// In such a situation, the preferable action would be to use or re-use an existing tab of a view of similar kind. Only if this fails, we'd want a new view to be created at a generic location (e.g. a window dedicated to asset management), and only as a fall-back we'd want a completely new asset section to show up within the current window.

Unfortunately, such a requirement surpasses the ability of the solution mechanism backing this location DSL, which basically relies on pattern matching without further expression evaluation or unification. To express the aforementioned special treatment, we'd need a placeholder within the rules, indicated above as {{{assetType()}}}, which has to be replaced by the actual typeID used in the concrete location query. Since our system does not support unification, either the matching mechanism has to be manipulated based on the context, or the specific rule must be preprocessed for token replacement.
It is not clear yet {{red{as of 2/2018}}}, if those additional complexities are justified...
&rarr; Ticket #1130


!!!Semantics of allocation
While the process of probing and matching the location specification finally yields an explicit UICoord path to the desired element, it is up to the allocation step actually to decide on the action to be taken. Some allocation operations impose some kind of limit, and are thus free to ignore the given spec and rather return an existing element in place. In the end, the purpose of this whole matching and allocation process is to get hold of a suitable UI component without knowing its precise coordinates in the UI topology. And this is the very property to enable flexible mapping of the strictly hierarchical session structures onto the UI in a fluid way.
&rarr; [[low level component access|UILowLevelAccess]]
All communication between Steam-Layer and GUI has to be routed through the respective LayerSeparationInterfaces. Following a fundamental design decision within Lumiera, these interfaces are //intended to be language agnostic// &mdash; forcing them to stick to the least common denominator. This creates the additional problem of how to achieve a smooth integration without forcing the architecture into a functional decomposition style. To solve this problem, we rely on ''messaging'' rather than on a //business facade// -- our facade interfaces are rather narrow and limited to lifecycle management. In addition, the UI exposes a [[notification facade|GuiNotificationFacade]] for pushing back status information created as result of the edit operations, the build process and the render tasks.

!anatomy of the Steam/GUI interface
* the GuiFacade is used as a general lifecycle facade to start up the GUI and to set up the LayerSeparationInterfaces.<br/>It is implemented by a class //in core// and loads the Lumiera ~GTK-UI as a plug-in.
* once the UI is running, it exposes the GuiNotificationFacade, to allow pushing state and structure updates up into the user interface.
* in the opposite direction, for initiating actions from the UI, the SessionSubsystem opens the SessionCommandFacade, which can be considered //"the" public session interface.//

!principles of UI / Steam interaction
By all means, we want to avoid a common shared data structure as foundation for any interaction. For a prominent example, have a look at [[Blender|https://developer.blender.org]] to see where this leads; //such is not bad,// but it limits to a very specific kind of evolution. //We are aiming for less and for more.// Fuelled by our command and UNDO system, and our rules based [[Builder]] with its asynchronous responses, we came to rely on a messaging system, known as the [[UI-Bus]].

The consequence is that both sides, "the core" and "the UI" remain autonomous within their realm. For some concerns, namely //the core concerns,// that is editing, arranging, processing, the core is in charge and has absolute authority. On the other hand, when it comes to user interaction, especially the //mechanics and materiality of interaction,// the UI is the authority; it is free to decide about what is exposed and in which way. The collaboration between both sides is based on a ''common structural understanding'', which is never fully, totally formed in concrete data structures.

Rather, the core sends ''diff messages'' up to the UI, indicating how it sees this virtual structure to be changing. The UI reflects these changes into //its own understanding and representation,// that is here a structure of display widgets. When the user interacts with these structures of the presentation layer, ''command messages'' are generated, using the element ~IDs to designate the arguments of the intended operation. This again causes reaction and change in the core, which is reflected back in the form of further diff messages. (&rarr; GuiCommandCycle)
//install a listener into the session to cause sending of population diff messages.//
The Lumiera application is not structured as a UI with internal functionality dangling below. Rather, the architecture calls for several self-contained subsystems, where all of the actual "application content" is modelled within the [[Session]] subsystem. The UI implements the //mechanics of user interaction// -- yet as far as content is concerned, it plays a passive role. Within the UI-Layer, there is a hierarchy of presentation entities, to mirror the structure of the session contents. These entities are built step by step, in reception of //population diff messages// sent upwards from the session.

To establish this interaction pattern, a listener gets installed into the session. In fact, the UI just issues a specific command to indicate it is ready for receiving content; this command, when executed down within the SteamDispatcher, causes the setup of aforementioned listener. This listener first has to //catch up// with all content already existing within the session -- it still needs to be defined {{red{as of 7/2018}}} in which form this existing content can be translated into a diff to reflect it within the UI. Since the session itself is {{red{planned (as of 2018)}}} to be persisted as a sequence of building instructions (CQRS, event sourcing), it might be possible just to replay the appropriate defining messages from the global log. After that point, the UI is attached and synchronised with the session contents and receives any altering messages in the order they emerge.

!Trigger
It is clear that content population can commence only when the GTK event loop is already running and the application frame is visible and active. For starters, this sequence avoids all kinds of nasty race conditions. And, in addition, it ensures a reactive UI; if populating content takes some time, the user may watch this process through the visible clues given just by changing the window contents and layout in live state.

And since we are talking about a generic facility, the framework of content population has to be established in the GuiTopLevel. Now, the top-level in turn //starts the event loop// -- thus we need to //schedule// the trigger for content population. The existing mechanisms are not of much help here, since in our case we //really need a fully operative application// once the results start bubbling up from Steam-Layer. The {{{Gio::Application}}} offers an "activation signal" -- yet in fact this is only necessary due to the internals of {{{Gio::Application}}}, with all this ~D-Bus registration stuff. Just showing a GTK window widget in itself does not require a running event loop (although one does not make much sense without the other). The mentioned {{{signal_activation()}}} is emitted from {{{g_application_run()}}} (actually the invocation of {{{g_application_activate()}}} is buried within {{{g_application_real_local_command_line()}}}), which means the activation happens //immediately before// entering the event loop. This pretty much rules out that approach in our case, since Lumiera doesn't use a {{{Gtk::Application}}}, and moreover the signal would still leave a (small) possibility of a race between the actual opening of the GuiNotificationFacade and the start of content population from the [[Steam-Layer thread|SessionSubsystem]].

The general plan to trigger content population thus boils down to
* have the InteractionDirector inject the population trigger with the help of {{{Glib::signal_timeout()}}} (see the sketch after this list)
* which will activate with a slight delay (100ms) after UI start, and after primary drawing activities cease
* the trigger itself issues a command towards the session
* execution of aforementioned command activates sending of population / diff messages
* on shutdown of the top-level, send the corresponding deactivation command 
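A sketch of this timeout-based trigger, assuming gtkmm/glibmm; the command name and bus access shown in the comment are illustrative only:
{{{
#include <glibmm.h>

void
schedulePopulationTrigger()
  {
    Glib::signal_timeout().connect_once (
        []{
            // issue the (hypothetical) command to attach the session listener,
            // which then causes population diff messages to be sent up:
            // uiBus().act (CommandMsg{"session_attach_ui", {}});
          },
        100);   // delay in ms, after primary drawing activities have ceased
  }
}}}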
//Generate a visual representation of content manipulated through the Lumiera UI.//
The topic of content rendering is closely related to the generic UI pattern of the [[»Element Box«|GuiElementBoxWidget]], which was introduced to establish some degree of uniformity throughout the GUI. Other than that, content is obviously also displayed in the ''Video viewers'' -- and special content like automation curves or [[Placements|Placement]] are handled in dedicated UI components.

Yet in the generic case, content is shown within a Box with well defined extension; moreover, a //content type// has been established. A specialised ''Content Renderer'' is then installed, with the help of a configuration strategy. For this generic case, the interior of the Element Box can be assumed to be a ''Canvas'' (&rarr; see also GtkCustomDrawing) -- which is used as "stage" by the content renderer...
* to provide a generic //indication of content type// 🠲 e.g. a film-strip or audio wave symbol
* to fill in a concrete //rendering// based on actual content 🠲 e.g. movie frames or wave form display
* alternatively to present a //pivotal content// indicator 🠲 e.g. a key image frame characterising the clip as a whole
* or to present an extended UI to interact with the content 🠲 e.g. expanded Clip display with individual media tracks

!Architecture
These disparate usage patterns impose the challenge to avoid architectural tangling: The [[Element Box Widget|GuiElementBoxWidget]] must be confined to be //merely a container,// and remain agnostic with respect to the inner structure of the content; it is instructed only insofar as a specific //content type// is indicated, as foundation for picking a suitable //rendering strategy.//
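One conceivable way to cut this seam -- a sketch with assumed names, not a fixed design decision:
{{{
#include <gtkmm.h>
#include <memory>

enum class ContentType { VIDEO, SOUND, LABEL, EFFECT };

/* the Element Box stays a mere container and delegates all content
 * specific drawing onto its canvas to a pluggable renderer strategy */
class ContentRenderer
  {
  public:
    virtual ~ContentRenderer() { }
    virtual void render (Cairo::RefPtr<Cairo::Context> const&, Gdk::Rectangle const& area)  =0;
  };

// hypothetical configuration hook: pick a renderer strategy by content type
std::unique_ptr<ContentRenderer> selectRenderer (ContentType);
}}}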
Inevitably, the UI of an advanced application like Lumiera needs some parts beyond the scope of what can be achieved by combining standard widgets. Typically, a UI toolkit (GTK is no exception here) offers some extension mechanism to integrate such more elaborate, application-specific behaviour. Within Lumiera, the Timeline is probably the most prominent place where we need to come up with our own handling solution -- which also means to rely on such extension mechanisms and to integrate well with the more conventional parts of the UI. Since the core concept of typical UI toolkit sets is that of a //widget,// we end up writing some kind of customised or completely custom-defined widget.

!two fundamental models
So it is clear that we need to write a customised widget to embody the specific behaviour we need. Two distinct approaches are conceivable
;custom arrangement
:define or derive our own widget class to arrange existing widgets in a customised way
;custom drawing
:get the allocated screen area for our widget and perform all drawing and event updating explicitly in our own code

!!!perils of custom drawing
While the second approach intuitively seems to be adequate (and in fact, Cinelerra, Ardour and our own GUI choose this approach), we should note several problematic aspects
* once we "take over", we are entirely responsible for "our area" and GTK steps out of our way
* we have to replicate and code up any behaviour we want and know from the standard GUI
* chances are that we are handling some aspects differently than the default, without even noticing there is a default
* chances are that we are lacking the manpower to cope with all interdependencies of concrete presentation situation, custom styling and event state
Our custom-made UI elements threaten to turn into a tremendous time sink (for reference, Paul Davis reported for Ardour 2.x that he spent 80% of the developer time not on audio processing, but rather on bashing the UI into shape), while nonetheless delivering just a crappy, home-brew and misaligned user experience, which stands out as an alien element not playing well with the rest of the desktop.
{{red{Well}}} //at least we are aware of the danger.//

!!!is custom drawing necessary?
There are some issues though, which more or less force us into custom drawing
* for our task, we're always suffering from lack of screen real estate. This forces us into conceiving elaborate controls way beyond the capabilities of existing standard widgets
* and for the feedback, we also need to tap into very precise event handling, which is hard to achieve with the huge amount of variability found with standard widgets
* our timeline display has to comply with a temporal metric grid, which is incompatible with the leeway present in standard widgets for the purpose of styling and skinning the UI
* the sheer amount of individual elements we need to get to screen is way beyond anything in a conventional GUI -- the UI toolkit set can not be expected to handle such a load smoothly

!Remedy: A Canvas Control
All of the above is a serious concern. There is no easy way out, since, for the beginning, we need to get hands-on with the display -- to get any tangible elements to work against. Yet there exists a solution to combine the strengths of both approaches: a ''Canvas Widget'' is, for one, a regular widget and can thus be integrated into the UI, while it also allows //placing child widgets freely onto a managed area,// termed "the canvas". These child widgets are wired and managed pretty much like any other widget; they participate in theming, styling and accessibility technologies. Moreover, the canvas area can be clipped and scrolled, so as to allow for arrangements way beyond the limits of the actual screen.
* in the past, this functionality was pioneered by several extension libraries, for example by the [[GooCanvas|https://developer.gnome.org/goocanvas/stable/GooCanvas.html]] library for ~GTK-2
* meanwhile, ~GTK-3 features several special layout managers, one of which is the [[GtkLayout|https://developer.gnome.org/gtk3/stable/GtkLayout.html]] widget, which incorporates this concept of //widgets placed on a canvas.//
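Basic usage of this canvas widget with gtkmm is straightforward (a sketch; the widget names are arbitrary):
{{{
#include <gtkmm.h>

void
populateCanvas (Gtk::Layout& canvas, Gtk::Button& clip1, Gtk::Button& clip2)
  {
    canvas.set_size (5000, 400);     // extension of the virtual canvas
    canvas.put  (clip1, 100, 20);    // place child widgets freely at (x,y) canvas coordinates
    canvas.put  (clip2, 800, 60);
    canvas.move (clip2, 820, 60);    // children can be re-positioned later
    clip1.show();                    // children must be made visible explicitly
    clip2.show();
  }
}}}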

!Investigation: ~GtkLayout
In order to build a sensible plan for our timeline structure, an investigation has been carried out in 2016 to clarify some fundamental properties of the GtkLayoutWidget
* how are overlapping widgets handled?
* how is resizing of widgets handled, esp. when they need to grow due to content changes?
* how does the event dispatching deal with partially covered widgets?
* how can embedded widgets be integrated into a tabbing / focus order?
* how is custom drawing and widget drawing interwoven?

!!!Topics for investigation
# place some simple widgets (Buttons)
# learn how to draw
# place a huge number of widgets, to scrutinise scrolling and performance
# place widgets overlapping
# bind signals to those widgets, to verify event dispatching
# bind some further signal(s) to the ~GtkLayout container
# hide and re-show a partially and a totally overlapped widget
# find a way to move a widget
# expand an existing widget (text change)
# build a custom "''clip''" widget
# retrofit all preceding tests to use this "''clip''" widget

!!!Observations
* children need to be made visible, otherwise they are added, but remain hidden
* when in sight, children receive events and are fully functional, even when placed out of the scrollable area.
* the coordinate of children is their upper left corner, but they may extend beyond that and even beyond the scrollable area
;layering
:children added later are always on top.
;scrolling and size
:the {{{Gtk::Layout}}} widget has a canvas size, which is defined implicitly, by placing child elements
:* the size of the scrollable area is independent of the actual size extension of the canvas
:* the scrollable area determines when scrollbars are visible and what area can be reached through them
:* but children can well be placed beyond; they are fully functional then, just covered and out of sight
:* enlarging the enclosing window and thus enlarging the layout canvas will uncover such stray children.
:* //problematic//
:** children may be so close to the boundary, that it is not possible to click on them
:** when children close to the boundary receive an onClick event, the scrollable area might jump back slightly..
:** the management of visible viewport within {{{Gtk::ScrolledWindow}}} is not correct (see [[ticket #1037|http://issues.lumiera.org/ticket/1037]])<br/>it does not account properly for the area covered by the scrollbars themselves
;moving child widgets
:works as expected
;expanding widgets
:works as expected
;delete child widgets
:is possible by the {{{Container::remove(Widget*)}}} function
:removed child widgets will also be removed from display (hidden)
:but the widget object needs to be deleted manually, because detached widgets are no longer managed by GTK
;iteration over child widgets
://problematic//
:* the signal based {{{foreach}}} does not work -- there seems to be a problem with the slot's signature causing a wrong address to be passed in
:* the interface {{{Gtk::Container}}} exposes a {{{get_children}}} function, but this returns a //copy// of the vector with pointers to all child widgets
;about GtkCustomDrawing
:need to derive from {{{Gtk::Layout}}} and override the {{{on_draw(cairocontext)}}} function
:* layering is controlled by the order of the cairo calls, plus the point when the inherited {{{Gtk::Layout::on_draw()}}} is invoked
:** when invoked //before// our custom drawing, we draw on top of the embedded widgets
:** when invoked //after// our custom drawing, the embedded widgets stay on top
:* the {{{Gtk::Allocation}}} is precisely the visible screen area reserved for the widget.<br/>It is //not// the extension of the virtual canvas.
:* ...consequently, if our drawing shall be stitched to the canvas, we need to care for translation and for clipping ourselves. &rarr; see [[here|GtkCustomDrawing]]
;determine canvas extension
:when the extension of the (virtual) canvas area depends on position and size of child widgets, //we need to calculate this extension manually.//
:* beware: the allocation for child widgets is not setup immediately, when adding or moving children
:* rather, we need to wait until in the {{{on_draw()}}} callback for the {{{Gtk::Layout}}}
:* at this point, we may use the //foreach// mechanism of the container
{{{
      // within the on_draw() callback of a class derived from Gtk::Layout:
      // find the bounding box of all child allocations to derive the canvas extension
      uint extH=20, extV=20;
      Gtk::Container::ForeachSlot callback
        = [&](Gtk::Widget& chld)
                {
                  auto alloc = chld.get_allocation();
                  uint x = alloc.get_x() + alloc.get_width();
                  uint y = alloc.get_y() + alloc.get_height();
                  extH = std::max (extH, x);
                  extV = std::max (extV, y);
                };
      foreach(callback);          // Gtk::Container::foreach
      set_size (extH, extV);      // adjust the virtual canvas (Gtk::Layout::set_size)
}}}
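To tie these observations together, the following is a minimal sketch (class and helper names are made up for illustration) of how a custom {{{Gtk::Layout}}} subclass might arrange its {{{on_draw()}}} override -- recalculating the canvas extension first, and choosing the point where the inherited drawing of the children is invoked to control layering:
{{{
#include <gtkmm.h>

class CanvasArea
  : public Gtk::Layout
  {
    bool
    on_draw (Cairo::RefPtr<Cairo::Context> const& cr)  override
      {
        determineExtension();                  // child allocations are valid at this point
        
        // custom drawing done *before* the inherited call appears below the children
        cr->set_source_rgb (0.2, 0.2, 0.2);
        cr->paint();
        
        bool handled = Gtk::Layout::on_draw (cr);   // standard drawing of embedded child widgets
        
        // custom drawing done *after* the inherited call appears as overlay on top
        return handled;
      }
    
    void
    determineExtension()
      {
        // run the foreach computation shown in the code excerpt above,
        // then publish the result through Gtk::Layout::set_size(extH, extV)
      }
  };
}}}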
//Management of dockable content panes//
Within each top level application window, the usable screen real estate can be split and arranged into a number of standard view building blocks. Each of these //panels// follows a specific preconfigured basic layout -- these are hard coded, yet customisable in detail. There is a finite list of such panel types available:
* the [[timeline display|GuiTimelineView]]
* the media viewer
* asset management
* information box
Within the preconfigured layout it is a panel's role to host and incorporate GuiComponentView elements.
Please note the distinction between UI component view and panel; in many cases there is one of each for a given kind, and these need to be distinguished: e.g. there is the [[timeline view|GuiTimelineView]] and there is the timeline panel, which houses several timelines (as tabs). Or there is the viewer component, each instance of which is located within a dedicated viewer panel.

!Instantiation
A panel as such is //an identifiable, named, delineated space within the UI.//
* since it is just a space, instantiation of any panel //implies the creation of its content// -- at least in minimal or default form.
* the type of the panel determines what the possible content of such a panel might be, and what needs to be created to allow the panel to exist.
Consequently, there are two distinct ways to instantiate a panel
;by view allocation
:some process or action requires a specific GuiComponentView, and requests access or allocation through the ViewLocator.
:in this case, the panel holding this view is created on demand as a by-product
;by explicit setup
:initiated through user interaction, or caused by default wiring, a certain panel is made to exist somewhere in the UI
:in this case, the type of the panel determines what content view(s) need to be present to allow the panel to exist

!!!requirements and consequences for panel instantiation
Based on these considerations, and based on the fact that any GuiComponentView needs an ID and a UI-Bus connection, we can conclude that the actual panel creation needs to be carried out by some core component directly related to the GuiTopLevel (to get UI-Bus access). And we need to ensure that the ID of any created or allocated view is //predictable,// since other parts need to be able to talk to this component via the bus, without any direct way to discover and retrieve the ID beforehand.

!Placing and addressing of embedded contents
A specific problem arises insofar as other parts of the UI need to create, address and control some UI entities, which at the same time exist as part of a docking panel. This is a problem of crosscutting concerns: UI control and interaction structure should not be mingled with the generic concern to maintain a component as part of a screen layout. Unfortunately, the design of standard UI toolkit sets is incredibly naive when it comes to interaction design, and the only available alternative is to rely on frameworks, which come with a hard-wired philosophy.
As a result, we're forced to build our UI from components which fall short on the distinction between //ownership and control.//

!Hierarchy and organisation
Each top-level window //holds a single dock,// which in turn might hold several docking panels, in a layout arranged by the user. However, the library ''GDL'', which Lumiera uses to implement docking functionality, introduces the notion of a //dock master.// This is an invisible control component, which maintains a common layout of panels, possibly located within several separated docks. The role of the master can be re-assigned; typically it is automatically attained by the first dock created, and several docks will share the same master, if and only if they are created as dependent, which means to create the second dock by referring to the first one (there is a dedicated function in GDL to achieve that). Only docks managed by the same master may participate in drag-n-drop actions beyond the scope of a single dock. Moreover, all iconified panels of a given master are represented as icons within a single //dock bar.//

!!!Panel Locator
In accordance with this structure, we introduce a central component, the {{{PanelLocator}}} -- to keep track of all {{{DockArea}}} elements within the UI. The latter are responsible for managing the docking panels //within a specific// top-level {{{WorkspaceWindow}}}.

//A building block used pervasively throughout the Lumiera UI  to represent a named entity.//
This widget presents a horizontally extended body, which holds a characteristic ''Head-Triplet'' of visual Elements:
* an //Icon// to create the visual anchor point. In many cases, this will be ''the Placement Icon'' (a hallmark of Lumiera's UI)
* a //Menu Button// -- in fact a small downward arrow, which can be flipped upwards (expand/collapse)
* a //Text Label// designating the element

This characteristic cadence of elements gives us a horizontal box with distinct extension, and an arrangement to attach menu actions and a possibly extended display style. This arrangement can be used for...
* Markers
* Elements in the Bins (Clips, Media, Effects, Assets....)
* abridged display of Clips (and Effects)
* the Effect-bar to mark the presence of an Effect
* for the Track heads in the Timeline Header pane
* for ~Clip-Placement and name

&rarr; see also [[Ticket #1185|https://issues.lumiera.org/ticket/1185]].

!Properties
* the Icon shall be easily configurable (from the coding viewpoint, of course this is all hard wired in the code or the theme)
** the default is to configure the ''Placement Icon''
** alternatively the icon shall indicate the type of the object (video, audio, events, meta,...)
* the horizontal extension is easy to control, through several options
** natural extension; don't give any ''size request'', let GTK figure out the extension rather
** set a fixed length-limit, causing the text label to be abridged eventually
** proportional head-placement
* the body may optionally hold another GTK widget, in which case the head-triplet aligns to the upper side
** this child widget is what //represents the content// of the element box
** typically this //content area// is a canvas with an associated &rarr; [[content rendering mechanism|GuiContentRender]]
** often the content area is empty though, and serves only to indicate the content type through appropriate styling

We offer pre-arranged options for standard wiring of interaction response
* when the Icon is a Placement, it will launch the [[Placement-Pop-up|GuiPlacementDisplay]] -- which in turn can be de-modalised, thereby allocating a place in the [[Property grid pane|GuiPropertyPane]]. When such a placement control already is open and allocated, the click to the placement icon will lead there.
* in the most basic case, there is only one common binding, which is a signal, emitted when clicking on the icon or the menu button
* yet there can be two distinct bindings (signals) for the icon and the button; when present, these //push aside (or shadow)// the generic signal
* the ''Expander'' (which is part of the UI-Element protocol) is pre-wired with the arrow on the menu button, to yield a visible clue for the expand/collapse state; in this case, the menu changes to an up/down arrow and shadows the generic signal (which is typically a context menu)
* each of the bindings can be easily replaced by a [[pop-up menu|GuiPopupMenu]]
* the context menu opens on ''left click'' (selection will be ''right click'' in Lumiera!)

!!!usage and setup
The class {{{widget::ElementBoxWidget}}} uses a flexible declarative constructor style, defining the //intended use// while delegating the details of styling to CSS classes. On construction, it is mandatory to define
;presentation intent
:what we want to achieve with this widget instance
:*{{{MARK}}} : Widget is a pin or marks a position 
:*{{{SPAN}}} : Widget spans a time range
:*{{{ITEM}}} : Widget represents an entity within a collection (Bin)
:*{{{CONTENT}}} : Widget serves to represent a piece of content (Clip)
;object type
:the type of data to be represented by this widget
:*{{{VIDEO}}} : represents moving (or still) image data
:*{{{AUDIO}}} : represents sound data
:*{{{TEXT}}} : represents text content, e.g. Subtitles, Credits
:*{{{AUTO}}} : represents automation
:*{{{EVENT}}} : represents event streams or live connections, like e.g. MIDI data
:*{{{EFFECT}}} : represents a processor or transformer
:*{{{LABEL}}} : represents a label or descriptor
:*{{{RULER}}} : represents an overview ruler or TOC
:*{{{GROUP}}} : represents a [[Fork]], Bin or similar container to group other entities
:*{{{META}}} : represents some meta entity
In addition to these mandatory arguments, a sequence of //qualifiers// can be supplied to tweak the details of the presentation (see the construction sketch below)
*{{{name(string)}}} : define the name or ID to be shown in the label field
*{{{expander(ref)}}} : use an expand/collapse button, wired with the given »expander« functor (part of the standard UI-Element protocol)
*{{{constrained(λ, [λₙ])}}} : constrain widget extension, providing getter functors to retrieve desired size in canvas pixels
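For illustration, a construction call might look like the following sketch; the namespace, enum scoping and variable names are assumptions -- only the intent/type constants and the qualifier functions listed above are taken from the description:
{{{
// hypothetical usage sketch -- namespaces, enum scoping and variable names are assumptions
using namespace widget;

int clipWidthPx = 210;   // calibrated width on the timeline canvas (example value)

ElementBoxWidget clipDisplay{CONTENT, VIDEO                          // presentation intent + object type
                            ,name("Clip-1")                          // label text shown in the head-triplet
                            ,constrained([&]{ return clipWidthPx; }) // desired horizontal size in canvas pixels
                            };
}}}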


!!!presentation intent and layout
The indicated //intent of presentation// controls various aspects of the basic arrangement and wiring -- this custom widget has to cover two quite different usage situations, and aims at establishing a uniform handling scheme throughout the UI. For one, it will represent an //item,// either within a selection or within a control pane (e.g. track head); the other usage scenario is to present //content.//
* presentation of items should be concise, and the visual clues serve to indicate the type of data or object
* while the presentation of content aims at manipulation and arrangement; in many cases there is also an interaction with the content itself
The common denominator however is that ''each element is an object'' and thus can be ''placed'' (by a [[Placement]]), and it offers ''methods'' through a ''context menu''. Moreover, //mutations and manipulations// shall only be possible on ''selected elements'' (never accidentally by just clicking on something). Selection has to be //kept distinct from manipulation.// Just clicking on the placement icon (or the type icon, for that matter) will activate a //property box,// enabling the user to tweak the //way of placement// together with some further generic marks and tags. Settings and tweaks on the content are rather accessed through the content display -- while there should be also a shortcut to move from the property box into tweaking of parameters (e.g. of effects)

!!!!To summarise...
* first icon &rarr; placement property box
* second icon &rarr;
** either expand / collapse
** or the regular object context menu
* click on the object also yields that context menu
* mutation methods are only accessible when the entity is selected
* the same holds true for manipulation through gestures (dragging, trim edits)

!!!proportional Head placement
This behaviour pattern (see [[#1186|https://issues.lumiera.org/ticket/1186]]) is a distinguishing trait of the Lumiera timeline display. It indicates elements with a temporal extension surpassing the current display window. Such elements are typically represented by an {{{ElementBoxWidget}}} with a large horizontal extension. Yet when scrolling, the head-triplet shall always remain within the visible area, but it will slowly glide from one side to the other, thereby indicating our relative position. This pattern of behaviour matters, since, together with the scrolled content display, it creates the dominant movement when scrolling an extended timeline.

''Technically'', this has to be built as a self-contained component, which can be unit-tested. The input and reference is a "Window" (in fact an interval) together with its start position //relative// to the element's own start position. The component has to derive from this input the actual relational configuration of the window's extension with respect to its own extension. For context, the //window// corresponds to the visible area, which can be larger or smaller, disjoint, partially overlapping or fully contained. Please note that ''handling the relations of intervals is a notoriously challenging problem.''

Especially when the component determines a placement of the //window// within its own extension, it has to derive a proportional placement position, which can be used to arrange the head-triplet of the {{{ElementBoxWidget}}}. This proportional placement needs to reflect the location of the window within the overall extension, similar to the hand of a scrollbar. However, when the extension of the head-triplet, together with its placement, does not allow showing the triplet unclipped, the triplet shall just be clipped naturally. Ideally, there would be a soft graceful transition between those two representation modes.
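A minimal sketch of such a proportional placement calculation (all names hypothetical, pixel units assumed): the window position relative to the element start drives a scrollbar-like ratio, and the head is clipped naturally when there is no room.
{{{
#include <algorithm>   // std::max, std::min, std::clamp (requires C++17)

/** @param windowStart start of the visible window, relative to element start (pixels, may be negative)
 *  @param windowLen   extension of the visible window (pixels)
 *  @param elementLen  total extension of the element (pixels)
 *  @param headLen     extension of the head triplet (pixels)
 *  @return offset of the head triplet from the element's left edge
 */
int
headOffset (int windowStart, int windowLen, int elementLen, int headLen)
{
  // part of the element actually covered by the window
  int visibleStart = std::max (windowStart, 0);
  int visibleEnd   = std::min (windowStart + windowLen, elementLen);
  if (visibleEnd <= visibleStart) return 0;           // disjoint: nothing to place
  
  int room = (visibleEnd - visibleStart) - headLen;
  if (room <= 0) return visibleStart;                 // window too small: clip naturally
  
  // proportional position of the window within the element's overall extension,
  // similar to the handle of a scrollbar
  double travel = double(elementLen - windowLen);
  double ratio  = travel > 0 ? std::clamp (windowStart / travel, 0.0, 1.0) : 0.0;
  return visibleStart + int(ratio * room);
}
}}}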

!!!constrained Widget size
When used as Clip display on a timeline canvas with calibrated time axis, the requirement arises to confine a GTK widget to a pre established size constraint. This requirement is problematic, as it stands in ''direct contradiction'' to ''GTK's fundamental design'': Elements are never to be positioned explicitly, but rather the layout will flow into balance, factoring in the specific font, language support and interface styling and theming. GTK does not provide any API to set layout explicitly — not even for special corner cases.
Thus, if we intend to bring a custom widget into compliance with our contextual size constraints, the only solution is to //make the GTK layout engine turn out the desired extension allocation in accordance to our constraints// for this custom widget. A detailed survey of the GTK implementation reveals the possible extension points where such a layout manipulation could be injected.
* some amount of screen extension is allocated by the framework or by a container widget for its children
* this happens on first "realisation" of the Widget, or later, when a resize request is processed and the layout cache invalidated
* the entry point is {{{Gtk::Widget::size_allocate()}}}  (possibly with an additional "baseline" value)
* the implementation first queries the widget for its preferences
*# first, the layout trend is determined: either width-for-given-height, or height-for-given-width
*# then, the widget is queried for its preference in the leading dimension
*# followed by a call to the widget's width-for-given-height, or height-for-given-width implementation
* the resulting desired ("natural") size allocation is then adjusted by widget decoration and finally passed to the widget for storage and use
It turns out that the GTK layout management implementation always observes the widget's preferred extension, but possibly expands the allocation to fill additional space. And the standard implementation of {{{Gtk::Widget}}} in turn delegates those queries to the ''GTK CSS Gadget'' — which is a mapping of the hierarchical widget structure into CSS layout nodes.

So the seemingly ''optimal leverage point'' is to ''return our pre established size constraint as result'' from these query functions — which can be overridden in the Gtkmm C++ implementation through the {{{Gtk::Widget::get_preferred_width_vfunc()}}} and {{{Gtk::Widget::get_preferred_height_for_width_vfunc()}}}. However, since GTK assumes these values to be sane and sufficient for a proper realisation of any embedded content, at this point it becomes our responsibility to control and reduce the embedded child widgets' extensions to bring them into compliance. Failing to do so will lead to garbled drawing on screen.

Our solution approach is to watch the results returned by the default implementation and possibly to hide content until the remaining content conforms to the size constraint; after that, we can return //exactly our calibrated size// and expect GTK to observe this request, passing it down to embedded widgets reduced by style decorations.
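A sketch of this override approach in gtkmm-3 terms (class name and members are assumptions; the actual {{{ElementBoxWidget}}} implementation may differ):
{{{
#include <gtkmm.h>

class ConstrainedBox
  : public Gtk::Frame                // stand-in base; the real widget derives differently
  {
    int calibratedWidth_  = 120;     // pre-established size in pixels (hypothetical members)
    int calibratedHeight_ =  40;
    
    Gtk::SizeRequestMode
    get_request_mode_vfunc()  const override
      {
        return Gtk::SIZE_REQUEST_HEIGHT_FOR_WIDTH;    // lead with the horizontal dimension
      }
    
    void
    get_preferred_width_vfunc (int& minimum, int& natural)  const override
      {
        // consult the default implementation to learn what the content would need;
        // here we would hide or reduce content until it fits (omitted in this sketch)...
        Gtk::Frame::get_preferred_width_vfunc (minimum, natural);
        // ...and then report exactly the calibrated extension
        minimum = natural = calibratedWidth_;
      }
    
    void
    get_preferred_height_for_width_vfunc (int, int& minimum, int& natural)  const override
      {
        minimum = natural = calibratedHeight_;
      }
  };
}}}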

{{red{🛆 ''Warning'':}}} the code to perform these successive layout adjustments is //potentially performance critical.//
This code is called //for each focus change// and might have to treat //hundreds of widgets// in a typical film edit timeline.
Further empiric survey of memory footprint and invocation times seems indicated &rArr; [[Ticket #1240|https://issues.lumiera.org/ticket/1240]]

!!!Content display
The Element Box is a container, enclosing a //content area// -- content is either represented by the extension of the box, or it is actively rendered within the Element Box's perimeter. This raises several structural questions, which are addressed using a //Strategy// -- notably the cases need to be distinguished, where either no content is actively presented (only CSS is applied), or otherwise where there is a canvas, and a dedicated [[content renderer|GuiContentRender]] is employed
A special LayerSeparationInterface, which serves the main purpose of loading the GuiStarterPlugin, thus bringing up the Lumiera GTK UI at application start.

It is of no further relevance beyond management of subsystem lifecycle -- which in itself is treated in Lumiera as a mere implementation concern and is not accessible by general application logic. Thus, the UI is largely independent and will be actively accessing the other parts of the application, while these in turn need to go through the public UI façades, esp. the GuiNotificationFacade for any active access to the UI and presentation layer.
//A complex interaction within the UI, akin to a language sentence, to spell out some action to be performed within the context at hand//
Contrary to a simple command, a gesture is not just triggered -- rather, it is formed, involving a coordinated usage of the ''input system'' (keyboard, mouse, pen, hardware controller), possibly even spanning the usage of several input systems. A typical example would be the trimming or rolling of a clip within the timeline; such an adjustment could be achieved by various means, like e.g. dragging the mouse while pressing some modifier keys, or by a specific keyboard command, followed by usage of the cursor keys, or otherwise followed by usage of a shuttle wheel on a hardware controller.

Within the Lumiera UI, conceptually we introduce an intermediary, cross-cutting level, the InteractionControl, to mediate between the actual widgets receiving UI events, and the commands to be issued via the UI-Bus to translate the user's interaction into tangible changes within the session. This seemingly complex approach allows us to abstract from the concrete input system, and to allow for several gestures to achieve the same effect.
&rarr; InteractionState
Considering how to interface to and integrate with the GUI Layer. Running the GUI is //optional,// but it needs to be [[started up|GuiStart]], installing the necessary LayerSeparationInterfaces. Probably the most important aspect regarding the GUI integration is how to get [[access to and operate|GuiConnection]] on the [[Session|SessionInterface]].

More specifically, the integration is based on ''messaging''. To start with, the UI needs to be [[populated with updates|GuiModelUpdate]], and while in operation, it will send command messages over the [[UI-Bus]]. Effectively, the UI maintains its own [[model|GuiModel]], specifically tailored for display and translation into tangible UI entities.

----
In a preliminary attempt to establish an integration between the GUI and the lower layers, in 1/2009 we created a PlayerDummy, which "pulls" dummy frames from the (not yet existing) engine and displays them within an XV viewer widget. This highlighted the problems we're about to encounter and made us think about the more radically decoupled approach we followed thereafter...
Building a layered architecture is a challenge, since the lower layer //really// needs to be self-contained, while prepared for usage by the higher layer.
A major fraction of all desktop applications is written in a way where operational logic is built around the invocation from UI events -- what should be a shell turns into a backbone. One possible way to escape from this common anti pattern is to introduce a mediating entity, to translate between two partially incompatible demands and concerns: Sure, the "tangible stuff" is what matters, but you can not build any significant piece of technology if all you want is to "serve" the user.

Within the Lumiera GTK UI, we use a proxying model as a mediating entity. It is based upon the ''generic aspect'' of the SessionInterface, but packaged and conditioned in a way to allow a direct mapping of GUI entities on top. The widgets in the UI can be conceived as decorating this model. Callbacks can be wired back, so as to transform user interface events into a stream of commands for the Steam-Layer sitting below.

The GUI model is largely comprised of immutable ID elements, which can be treated as values. A mutated model configuration in Steam-Layer is pushed upwards as a new structure and translated into a ''diff'' against the previous structure -- ready to be consumed by the GUI widgets; this diff can be broken down into parts and consumed recursively -- leaving it to the leaf widgets to adapt themselves to reflect the new situation.
&rarr; [[Building blocks of the GUI model|GuiModelElements]]
&rarr; [[GUI update mechanics|GuiModelUpdate]]

!{{red{WARNING 2/2017}}} more questionable than ever
The whole idea of having a "UI model" appears more questionable than ever. It leads to tight coupling with the session and a lot of thread synchronisation headaches, without any clear benefit -- beyond just being the obvious no-brainer solution. During the last months, step by step, several presentation related structures emerged, which //indeed are structured to parallel the outline of the session.// But those structures are widgets and controllers, and it might well be that we do not need a model, beyond the data already present within the widget implementation. Rather it seems we'll get a nested structure of //presenters,// which are linked to the session with the help of the UI-Bus and the [[diff framework|TreeDiffModel]].
* {{red{as of 8/2018}}}, matters on a large scale converge towards a structure //without a classical UI model.// Seemingly we can do without...
* there is a working demonstration in {{{BusTerm_test}}}, which pushes mutation diffs against a mock UI-Element. The immediate response to receiving such a diff notification via the UI-Bus has now been coded; it incurs invoking a passed callback (functor), which performs within the originating context, but within the ~UI-Thread, and produces the actual diff //on-demand.// ("pull principle")

!!!building the model structures
A fundamental decision within the Lumiera UI is to build every model-like structure as immediate response to receiving a diff message pushed up into the UI.
* either this happens when some change occurred, which is directly reflected into the UI by a local diff
* or a whole subtree of elements is built up step wise in response to a ''population diff''. This is a systematic description of a complete sub-structure in its current shape, produced as an emanation from a DiffConstituent.

!synchronisation guarantees
We acknowledge that the gui model is typically used from within the GUI event dispatch thread. This is //not// the thread where any session state is mutated. Thus it is the responsibility of this proxying model within the GUI to ensure that the retrieved structure is a coherent snapshot of the session state. Especially the {{{gui::model::SessionFacade}}} ensures that there was a read barrier between the state retrieval and any preceding mutation command. Actually, this is implemented down in Steam-Layer, with the help of the SteamDispatcher.

The forwarding of model changes to the GUI widgets is another concern, since notifications from session mutations arrive asynchronously after each [[Builder]] run. In this case, we send a notification to the widgets registered as listeners, but wait for //them// to call back and fetch the [[diffed state|TreeDiffModel]]. The notification will be dispatched into the GUI event thread (by the {{{GuiNotification}}} façade), which implies that the callback embedded within the notification will also be invoked by the widgets and thus performs within the GUI thread.

!generic model tree building blocks
According {{red{to the current plan (2018)}}}, the GuiModel will not be a dedicated and isolated data structure -- rather it will be interwoven with the actual structure of the widgets and controllers. Some controllers and also some widgets additionally implement the {{{gui::model::Tangible}}}-interface and thus act as "tangible" UI-Element, where each such "tangible element" directly corresponds to a component within the session model. The interface UI-Element requires such elements to provide an attachment point to receive mutations in the form of diff messages, while it remains a local implementation detail within each such element //how actually to respond and reflect// the changes indicated by the diff message. Typically the adding of child elements will result in creation of several GTK widgets, some of which are in turn also again UI-Element implementations, and only the latter count conceptually as "children".

While most actual details are abstracted away by this approach, it can be expected that the handling of typical diff changes is somewhat similar in most actual widgets. For this reason we provide a selection of generic adapters and building blocks to simplify the assembly of actual widget implementations.

!!!expanding and collapsing
Several UI elements offer the ability to be collapsed into a minimal representation to save screen real estate. The UI-Element protocol defines this to happen either by invoking a dedicated signal slot on the element, or by sending an appropriate message over the UI-Bus. The actual implementation is quite simple, but unfortunately requires knowledge regarding the specific widget configuration. A commonly used approach is to wrap the expandable/collapsible element into a {{{Gtk::Expander}}}, but there are notable exceptions, where the widget is bound to handle the expanding or reducing of information display all by itself. We bridge this discrepancy by introducing an {{{Expander}}} interface to act as adapter.
* the default implementation holds an {{{Expander}}} functor. In default state, this functor as well as expanding / collapsing functionality remains disabled
* to enable it, two lambdas need to be provided (as sketched below), to configure
** how to find out about the expansion state of the widget
** how to change this expansion state (i.e. how to expand or collapse the widget)
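A minimal sketch of such an {{{Expander}}} adapter (names and details are assumptions, not the actual Lumiera implementation) -- two lambdas probe and change the expansion state of whatever widget arrangement sits underneath:
{{{
#include <functional>
#include <utility>

class Expander
  {
    std::function<bool()>     probeState_;   // how to find out if the widget is expanded
    std::function<void(bool)> changeState_;  // how to expand / collapse the widget
    
  public:
    Expander() = default;                    // disabled by default
    
    void
    activate (std::function<bool()> probe, std::function<void(bool)> change)
      {
        probeState_  = std::move (probe);
        changeState_ = std::move (change);
      }
    
    bool canExpand()  const { return bool(probeState_); }
    bool isExpanded() const { return canExpand() and probeState_(); }
    
    void
    operator() (bool shallExpand)            // invoked from a signal slot or UI-Bus message
      {
        if (changeState_ and shallExpand != isExpanded())
          changeState_ (shallExpand);
      }
  };
}}}
For a widget wrapped into a {{{Gtk::Expander}}}, the two lambdas would simply delegate to {{{get_expanded()}}} and {{{set_expanded(bool)}}}.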

!!!revealing an element
The UI-Element protocol also includes the ability to //reveal an element// -- which means actively to bring this element into sight, in case it is hidden, collapsed or obscured by scrolling away.
{{red{As of 8/2018 this is just an idea}}}, and many details still need to be considered. Yet at least one point is clear: implementing such a feature requires the help of the container widget actually holding the element to be revealed. It might even happen that the collaboration of the container holding the aforementioned immediate parent container is also necessary -- indicating a recursive implementation scheme. The default implementation is based on a similar scheme as the expand/collapse functionality: here we embed a {{{Revealer}}} functor, which then needs to be outfitted with a lambda binding into the internals of the parent container to effect the necessary presentation changes.
''Building Blocks for the User Interface Model and Control structure''
The fundamental pattern for building graphical user interfaces is to segregate into the roles of __M__odel, __V__iew and __C__ontroller ([[MVC|http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller]]). This approach is so successful that it has turned into a de-facto standard in common UI toolkit sets. But larger, more elaborate and specialised applications introduce several cross cutting concerns, which create a tension within this MVC solution pattern.
[<img[UI-Bus and GUI model elements|uml/fig158213.png]]




The Lumiera GTK UI is built around a distinct backbone, separate from the structures required and provided by GTK.
While GTK -- especially in the object oriented incarnation given by Gtkmm -- hooks up a hierarchy of widgets into a UI workspace, each of these widgets can and should incorporate the necessary control and data elements. But actually, these elements are local access points to our backbone structure, which we define as the UI-Bus. So, in fact, the local widgets and controllers wired into the interface are turned into ''Decorators'' of a backbone structure. This backbone is a ''messaging system'' (hence the name "Bus"). The terminal points of this messaging system allow for direct wiring of GTK signals. Operations triggered by UI interactions are transformed into [[Command]] invocations into the Steam-Layer, while the model data elements remain abstract and generic. The entities in our UI model are not directly connected to the actual model, but they are in correspondence to such actual model elements within the [[Session]]. Moreover, there is a uniform [[identification scheme|GenNode]].

;connections
:all connections are defined to be strictly //optional.//
:inactive connections render each element passive
;attributes
:the GuiModel supports a notion of generic attributes, treated as unordered unique children and referred to by key name.
;local state
:recommendation is to have widget local state represented //within the UI toolkit (GTK).// On losing the bus connection, any element should disable itself or maybe even shut down.
;updates
:tangible UI elements are //passive.// User interaction just results in messages sent to the bus. Any update and mutation, on notification from the bus, is [[pulled from the model|GuiModelUpdate]]. The individual element is thus required to update itself (and its children recursively) into compliance with the provided state.



!building and updating the tree
The workspace starts out with a single element, corresponding to the »model root« in the ~Steam-Layer HighLevelModel. Initially, or on notification, an [[interface element|UI-Element]] requests a //status update// -- which conceptually implies there is some kind of conversation state. The backbone, as represented by the UI-Bus, might be aware of the knowledge state of its clients and just send an incremental update. Yet the authority of the backbone is absolute. It might, at its own discretion, send a full state update, to which the client elements are expected to comply. The status and update information is exposed in the form of a diff iterator. The client element, which can be a widget or a controller within the workspace, is expected to pull and extract this diff information, adjusting itself and destroying or creating children as applicable. This implies a recursive tree visitation, passing down the diff iterator alongside.

Speaking of implementation, this state and update mechanics relies on two crucial provisions: Lumiera's framework for [[tree diff representation|TreeDiffModel]] and the ExternalTreeDescription, which is an abstracted, ~DOM-like rendering of the relevant parts of the session; this model tree is comprised of [[generic node elements|GenNode]] acting as proxy for [[calls into|SessionInterface]] the [[session model|HighLevelModel]] proper.
Considerations regarding the [[structure of custom timeline widgets|GuiTimelineWidgetStructure]] highlight again the necessity of a clean separation of concerns and an "open closed design". For the purpose of updating the timeline(s) to reflect the HighLevelModel in ~Steam-Layer, several requirements can be identified
* we need incremental updates: we must not start redrawing each and everything on each tiny change
* we need recursive programming, since this is the only sane way to deal with tree like nested structures.
* we need specifically typed contexts, driven by the type demands on the consumer side. What doesn't make sense at a given scope needs to be silently ignored
* we need a separation of model-structure code and UI widgets. The GUI has to capture event and intent and trigger signals, nothing else.
* we need a naming and identification scheme. Steam-Layer must be able to "cast" callback state and information //somehow towards the GUI// -- without having to handle the specifics.

!the UI bus
Hereby we introduce a new in-layer abstraction: The UI-Bus.
* some events and wiring is strictly UI related. This can be handled the conventional way: Connect a ~SigC handler to the slot, in the ctor of your widget.
* but anything related to model interaction has to be targeted at the next applicable service point of the UI bus.
* the UI bus is implemented and covered by unit tests -- and //must not expose any GTK dependencies.// (maybe with the exception of {{{GString}}})

!!Decisions
* the UI bus is strictly single threaded.
* It performs in the GTK event thread.
* no synchronisation is necessary
* use constant values as far as possible
* the UI bus is offered by the GuiModel.
* it is //owned// by the GUI model.
* there is a global "kill switch". If toggled "off" any invocation is NOP.
* thus there is no need for any ownership or resource tracking
* we use simple language functors.


!initiating model updates
Model updates are always pushed up from ~Steam-Layer, coordinated by the SteamDispatcher. A model update can be requested by the GUI -- but the actual update will arrive asynchronously. The update information originates from within the [[build process|BuildFixture]]. {{red{TODO 10/2014 clarify the specifics}}}. When updates arrive, a ''diff is generated'' against the current GuiModel contents. The GuiModel is updated to reflect the differences and then notifications for any Receivers or Listeners are scheduled into the GUI event thread. On reception, it is their responsibility in turn to pull the targeted diff. When performing this update, the Listener thus actively retrieves and pulls the diffed information from within the GUI event thread. The GuiModel's object monitor is sufficient to coordinate this handover.
&rarr; representation of changes as a [[tree of diffs|TreeDiffModel]]
&rarr; properties and behaviour of [[generic interface elements|UI-Element]]

!!!timing and layering intricacies
A relevant question to be settled is where the core of each change is constituted. This is relevant due to the intricacies of multithreading: since the change originates in the build process, but the effect of the change is //pulled// later from within the GUI event thread, it might well happen that further changes have entered the model in the meantime. As such, this is not problematic, as long as taking the diff remains atomic. This leads to quite different solution approaches:
* we might, at the moment of performing the update, acquire a lock from the SteamDispatcher. The update process may then effectively query down into the session datastructure proper, even through the proxy of a diffing process. The obvious downside is that GUI response might block waiting on an extended operation in Steam-Layer, especially when a new build process was started meanwhile. A remedy might be to abort the update in such cases, since its effects will be obsoleted by the build process anyway.
* alternatively, we might incorporate a complete snapshot of all information relevant for the GUI into the GuiModel. Update messages from Steam-Layer must be complete and self contained in this case, since our goal is to avoid callbacks. Following this scheme, the first stage of any update would be a push from Steam to the GuiModel, followed by a callback pull from within the individual widgets receiving the notification later. This is the approach we choose for the Lumiera GUI.

!!!information to represent and to derive
The purpose of the GuiModel is to represent an anchor point for the structures //actually relevant for the UI.// To put that into context, the model in the session is not bound to represent matters exactly the way they are rendered within the GUI. All we can expect is for the //build process// -- upon completion -- to generate a view of the actually altered parts, detailing the information relevant for presentation. Thus we do retain an ExternalTreeDescription, holding all the information received this way within the GuiModel. Whenever a completed build process sends an updated state, we use the diff framework to determine the actually relevant differences -- both for triggering the corresponding UI widgets, and for forwarding focussed diff information to these widgets when they call back later from the UI event thread to pull actual changes.

!!!switch of typed sub-context
When dealing with structural (tree) diffing, there is a specific twist regarding child nodes of mixed type: In the general case, we can not assume that all children of a given node are of the same kind. The classical example is (X)HTML, where a node has //attributes,// various //nested tags// and //nested text content.// The //generic node// thus must be modelled as having several collections of children -- both ordered and un-ordered collections are possible -- and the content of each such sub-collection is itself polymorphic. This constitutes a challenge for the representation of data within the tree diff format. These difficulties can be overcome as follows
#anything, even nested "raw" content is represented //as node//
#all nodes can be addressed by a //generic identifier//
#the diff is in //prefix order,// i.e. it first only mentions the ordering, additions and deletions of nodes designated by these ~IDs
#we introduce a //bracketing construct// into the diff language, to enter a subnode within the diff representation
#the diff is produced and consumed //demand-driven (by pull)//
#whenever a node sees this bracketing construct, it invokes the respective child //recursively//
This treatment ensures each nested diff is consumed within a properly typed context, and without any need of switching types from the outside: the actual consumer of each part of the whole diff description just happens to know the actual meaning of those elements it processes itself, and passes control to others with adequate knowledge for the rest. Changes are broken down and transformed into //atomic changes.// For an internal data exchange, this is sufficient: in the end we're dealing with references to binary data anyway. But when it comes to an external, self-contained representation of diffs, we additionally need a way to attach raw chunks of data corresponding to the description of those atomic changes.
&rarr; this is the purpose of the {{{DataCap}}} within our [[generic node element|GenNode]]
A LayerSeparationInterface provided by the GUI.
Access point for the lower layers to push information and state changes (asynchronously) to the GUI. Most operations within Lumiera are in fact initiated by the user through the GUI. In the course of such actions, the GUI uses the services of the lower layer and typically receives an immediate synchronous response to indicate the change was noted. Yet often, these operations may cause additional changes to happen asynchronously from the GUI's perspective. For example, an edit operation might trigger a re-build of the low-level model, which then detects an error. Any such consequences and notifications can be "cast" up into the UI, using the {{{NotificationService}}} described here.

Beyond that, a call to trigger shutdown of the UI layer is also exposed at this façade -- which becomes relevant when other [[sub-systems|Subsystem]] initiate general shutdown.

!Lifecycle and Threading concerns
The GuiNotificationFacade is installed as part of and managed by the ''UI Manager'', and connected to the UI-Bus, which happens while establishing the GuiTopLevel. Yet there is a specific twist, insofar as GTK is ''not threadsafe by design''. Any event handling and presentation changes will happen from within a dedicated UI event loop, which needs to be started explicitly, after all of the primary windows and widgets are created and wired. Only after this point does the UI become //live.//

A dedicated ''activation state'' is necessary for this reason -- which within the implementation translates into a queuing and dispatching facility to reschedule any calls into the UI event thread ''asynchronously''.
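The underlying pattern could be sketched as follows (an illustration, not the actual Lumiera implementation): calls arriving from arbitrary threads are queued, and a {{{Glib::Dispatcher}}} wakes up the GTK main loop to drain the queue.
{{{
#include <glibmm.h>
#include <functional>
#include <mutex>
#include <deque>
#include <utility>

class UiDispatch
  {
    std::mutex                          mtx_;
    std::deque<std::function<void()>>   queue_;
    Glib::Dispatcher                    wakeup_;   // must be created in the UI thread
    
  public:
    UiDispatch()
      {
        wakeup_.connect (sigc::mem_fun (*this, &UiDispatch::drain));
      }
    
    /** may be called from any thread */
    void
    post (std::function<void()> call)
      {
        {
          std::lock_guard<std::mutex> lock{mtx_};
          queue_.emplace_back (std::move (call));
        }
        wakeup_.emit();                            // thread-safe; triggers drain() in the UI thread
      }
    
  private:
    void
    drain()                                        // runs within the GTK main loop
      {
        std::deque<std::function<void()>> work;
        {
          std::lock_guard<std::mutex> lock{mtx_};
          std::swap (work, queue_);
        }
        for (auto& call : work)
          call();
      }
  };
}}}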

!Addressing of UI elements
Most calls in this interface require to specify a receiver or target, in the form of an element ID. It is assumed the caller effectively just knows these ~IDs, typically because the same ~IDs are also used as element ~IDs for the corresponding session entities. Even more so, since the whole UI representation of the session was at some point generated by //population diff messages,// which used the same ~IDs to indicate the creation of the corresponding UI representation elements.
While the HighLevelModel is self-contained and forms an autonomous »Universe«, the Lumiera GUI uses a well defined set of Metaphors, structural patterns and conventions to represent the user's view upon the model within the session. 

The most fundamental principle is that of ''subsidiarity'': we understand both "the core" and "the UI" to be autonomous and responsible for their own concerns. The core has to deal with editing, arranging and processing, while the UI forms the materiality and mechanics of interaction. The [[link between both sides|GuiConnection]] is established through a communication system, the UI-Bus.
&rarr; the UI view [[is updated|GuiModelUpdate]] by ''diff messages''
&rarr; and in turn, commands are [[prepared and issued|GuiCommandBinding]] in the UI and sent as ''command messages''

Based on these foundations, we shape and form the core part of the interface, which is the [[timeline display|GuiTimelineView]]
A cross-cutting and somewhat tricky concern is how to represent and expose the [[MObject Placements|Placement]] within the UI.
For one, a Placement is a set of rules picked up from enclosing scopes -- and it is one of the most fundamental traits of Lumiera that the user is able to edit those placement rules. Yet the combination and final application of those rules also materialises itself into actual properties of the objects of the edit session -- most notably the time position of any element. Consequently, parts of the Placement are unfolded to appear as properties of the placed object, as far as the UI representation is concerned. However, Placement as a generic building block is prominently exposed, insofar as pretty much every entity you'll see in the UI has the ability to "edit its placement". This is indicated by a characteristic Icon decoration, leading to a common placement editor, where the user can
* see all the explicitly given locating pins
* see the effective, resulting ExplicitPlacement
* add new rules and manipulate existing ones
To support this generic setup, pretty much every UI element needs to be outfitted with a "placement" attribute, to reflect the distinct information to be exposed in the aforementioned placement edit UI. This can be achieved with the help of the GuiElementBoxWidget and mediated by the ClipPresenter, which acts as counterpart for the Clip and Placement in the HighLevelModel, and is connected to the UI-Bus.
&rarr; see also [[Ticket #1188|https://issues.lumiera.org/ticket/1188]].
//Organisation of Pop-up menus in the Lumiera UI.//
A pop-up is created on right click on the associated element, which thereby exposes its //methods for manipulation.// As such this arrangement incorporates the principle of ''object orientation'' into the interactive user interface.

The foundation of population and display of pop-up menus is provided by GTK -- however the //»object orientation«// is our actual design concern here.

{{red{OMG 11/2018 -- I have no idea where to start here...}}}
Starting up the GUI is optional and is considered part of the Application start/stop and lifecycle.
* main and {{{lumiera::AppState}}} activate the lifecycle methods on the ~GuiSubsysDescriptor, accessible via the GuiFacade
** invocation of {{{GuiFacade::start}}}
** creates {{{unique_ptr<GuiRunner>}}}
** the ~GuiRunner holds a ~GuiHandle as member, which causes loading and activation of the plug-in
* loading the {{{gtk_gui.lum}}} plug-in...
** loads the GUI (shared lib)
** after loading, the sole interface method {{{launchUI()}}} is triggered
* the GUI plug-in implementation in {{{gtk-lumiera.cpp}}}
** implements the GUI plug-in interface
** and the ~launchUI function ''spawns the GUI thread''.
* the ~GuiThread ({{{runGUI()}}})...
** creates the {{{GtkLumiera}}} object, which is the "GUI main"
** invokes the {{{GtkLumiera::run()}}} function (guarded with appropriate error handlers) &rarr; {{{sigTerm}}} at the end
*** this first creates the ''~GUI-Backbone''
**** {{red{TODO 2/2020: unexpected error in ctor might kill the GUI thread, possibly leading to deadlock #1192}}}
***;~UiBus
***:the UI-Bus is the communication backbone and hub
***;~UiManager
***:responsible for wiring a cohesive GuiTopLevel
***:*wired with the UI-Bus
***:*populates the Menu and binds the Actions
***:*populates the {{{GlobalCtx}}} with...
***:**{{{UiBus}}}
***:** {{{UiManager}}}
***:** {{{WindowLocator}}}
***:** {{{InteractionDirector}}}
***:** {{{interact::Wizard}}}
***;~InteractionDirector
***:the InteractionDirector acts as top-level controller within the UI -- corresponding to the root context in the session
***;Application Windows (GTK)
***:a hierarchy of GTK widgets to implement the actual user interface
***;Event Loop
***:this is what makes the UI "live" and responsive
** activating the Event Loop through {{{UiManager.performMainLoop()}}}
*** first installs and opens the primary LayerSeparationInterfaces
**** GuiNotificationFacade (for pushing messages via UI-Bus)
**** DisplayService (to display rendered content)
*** then activates the {{{Gio::Application}}} main loop (blocking event loop)
** establishing the UI-Bus
*** creates the ''Core Service'' as node attached to the bus<br/>this in turn holds...
***;the Nexus
***:this is the communication Hub, and actually every "uplink" into the Bus-term eventually ends here
***;State Manager
***:special service to handle "state mark" messages on the bus
***:this is what essentially creates persistent interface state
** the InteractionDirector immediately launches a ~Callback-Action into the ~Event-Loop
*** which, when performed, sends a "population request" command down to the session
*** the session will respond asynchronously by pushing a "population diff" up into the UI-Bus (via Notification façade)

!!!questions of sanity and locking
The initial start-function of the subsystem is protected by a locking guard. Thus, everything up to and including the launch of the GUI thread is "make it or break it". If the {{{facade}}} smart-ptr is not set, the GUI is considered as not started. Likewise, everything //within the GUI thread// is protected by a top-level error handler, which invokes the termination signal and thus causes application shutdown whenever the thread exits. There is only one tiny gap in this reasoning: when the overall application shuts down //after// the GUI has been launched, but //before// the GUI was able to open the Notification façade, the GUI will miss the shutdown notification and thus the application will hang with a defunct core, but with a running and responsive UI. However, the Lumiera thread handling framework implements a dedicated barrier to ensure the started thread has picked up its arguments and is about to enter the thread operation function. This means the thread launching the GUI thread will block for a short period, and when the thread launching function returns (successfully), we can be sure the spawned thread is running and at the beginning of its payload function. Now, since opening the facade interfaces happens early in this GUI function, and prior to building up all the GTK widgets, the chances for such a race actually happening are rather remote.

!public services
The GUI provides a small number of public services, callable through LayerSeparationInterfaces. Besides that, the main purpose of the GUI of course is user interaction. Effectively the behaviour of the whole system is driven by GUI events to a large extent. These events are executed within the event handling thread (~GTK-main-Thread) and may in turn invoke services of the lower layers, again through the respective LayerSeparationInterfaces.
But the question of special interest here is how the //public services// of the GUI are implemented and made accessible for the lower Layers. Layer isolation is an issue here. If implemented in a rigorous manner, no facility within one layer may invoke implementation code of another layer directly. In practice, this tends to put quite some additional burden on the implementer, without any obvious benefit. Thus we decided to lower the barrier somewhat: while we still require that all service invoking calls are written against a public LayerSeparationInterface, actually the GUI (shared lib) is //linked// against the respective shared libs of the lower layers, thus especially enabling the exchange of iterators, closures and functor objects.

!!!Layer separation
Note that we retain strict isolation in the other direction: no part of the lower layers is allowed to call directly into the GUI. Thus it's especially interesting how access to some GUI public service from the lower layers works in detail.
* when the GUI plugin starts, instances of the Services implementing those public service interfaces are created.
* these service objects in turn hold an ~InstanceHandle, which takes care of registering and opening the corresponding C Language Interface
* additionally this InstanceHandle is configured such as to create a "facade proxy" object, which is implemented within {{{liblumieracommon.so}}}
Now, when invoking an operation on some public interface, the code in the lower layers actually executes an implementation of this operation //on the facade proxy,// which in turn forwards the call through the CL interface into the GUI, where functionality is actually implemented by the corresponding service object instance.
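To illustrate the shape of this arrangement, here is a generic sketch of the proxy pattern (all names hypothetical and deliberately simplified -- the real code uses the ~InstanceHandle machinery mentioned above):
{{{
#include <string>

/* ----- public facade, visible to the lower layers ----- */
class Notification
  {
  public:
    virtual ~Notification() = default;
    virtual void displayInfo (std::string const& text)  =0;
  };

/* ----- plain C function table, forming the binary layer boundary ----- */
struct NotificationCL
  {
    void (*display_info) (const char* text);
  };

/* ----- facade proxy, implemented within the common library ----- */
class NotificationProxy
  : public Notification
  {
    NotificationCL& cif_;
    
  public:
    NotificationProxy (NotificationCL& cif) : cif_{cif} { }
    
    void
    displayInfo (std::string const& text)  override
      {
        cif_.display_info (text.c_str());   // forward through the C interface;
      }                                     // the actual work happens in the GUI service object
  };
}}}
The lower layers thus only ever see the abstract facade, while the concrete service object lives entirely inside the GUI shared library.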

!!!UI top level
Regarding the internal organisation of Lumiera's ~UI-Layer, there is a [[top level structure|GuiTopLevel]] to manage application lifecycle.
This top-level circle is established starting from the UI-Bus (''Nexus'') and the ''UI Manager'', which in turn creates the other dedicated control entities, especially the InteractionDirector. All these build-up steps are triggered from the UI main() function, right before starting the ''UI event loop''. The remainder of the start-up process is driven by //contextual state,// as discovered by the top-level entities, delegating to the controllers and widgets.

!!!reaching the operative state
The UI is basically in operative state when the GTK event loop is running. Before this happens, the initial //workspace window// is explicitly created and made visible -- showing an empty workspace frame without content and detail views. However, from that point on, any user interaction with any UI control currently available is guaranteed to yield the desired effect, which is typically to issue and enqueue a command into the SteamDispatcher, or to show/hide some other UI element. Which also means that all backbone components of the UI have to be created and wired prior to entering operative state. This is ensured through the construction of the {{{UIManager}}}, which holds all the relevant core components either as directly managed "~PImpl" members, or as references. The GTK UI event loop is activated through a blocking call of {{{UIManager::performMainLoop()}}}, which also happens to open all external façade interfaces of the UI-Layer. In a similar vein, the //shutdown of the UI// can be effected through the call {{{UIManager::terminateUI()}}}, causing the GTK loop to terminate, and so the UI thread will leave the aforementioned {{{performMainLoop()}}} and commence to destroy the {{{UIManager}}}, which causes disposal of all core UI components.

!content population
In accordance with the Lumiera application architecture in general, the UI is not allowed to open and build its visible parts on its own behalf. Content and structure is defined by the [[Session]] while the UI takes on a passive role to receive and reflect the session's content. This is accomplished by issuing a //content request,// which in turn installs a listener within the session. This listener in turn causes a //population diff// to be sent upwards into the UI-Layer. Only in response to these content messages the UI will build and activate the visible structures for user interaction.
&rarr; GuiContentPopulation

A specially configured LumieraPlugin, which actually contains or loads the complete code of the (GTK)GUI, and additionally is linked dynamically against the application core lib. During the [[UI startup process|GuiStart]], loading of this Plugin is triggered from {{{main()}}}. Actually this causes spawning of the GTK event thread and execution of the GTK main loop.
The presentation of the track body area relies on the [[Gtk::Layout "canvas widget"|GtkLayoutWidget]], thus allowing for a mixture of custom drawing with embedded custom Gtk widgets. The actual drawing routine is activated in response to the {{{on_draw}}} signal -- and invoking the inherited handler function will initiate the standard drawing for the embedded child widgets. This partitions the additional, specific drawing activities into a pre-widget drawing phase to prepare the background and framework structure of the track area, and a post-widget drawing phase to show all kinds of overlays, markers, cursors and similar UI indicators. A nested structure of {{{TrackBody}}} objects serves as organisational device to structure these custom drawing activities in accordance with the nested structure of the track fork.

!Building a nested 3D structure
[>img[3D structure of track drawing|draw/UI-TimelineTrackProfile-1.png]]A proficient UI design often relies on subtle cues to guide the user intuitively -- which includes shading of boundary areas to structure the interface space. Both the space and the means to give such unambiguous visual clues are limited, and it would be unwise to forgo such possibilities to follow some stylish fad. Rather, we strive at achieving some degree of internal coherency within the application of these stylistic means.

In Lumiera, the //tracks// represent an organisational device, a nested set of //scopes,//  which -- for the UI representation -- is paralleled by nested insets holding the media content. One or several //rulers// as guiding devices run alongside the top of each scope, either representing the scope as a whole, or introducing the working area of this scope similar to a side walk running alongside a channel. A system of increasingly deeper nested scopes thus becomes a cadence of insets in the way of a lateral staircase.

Each individual track contributes a similar sequence of structure elements to this overall ''track profile'':
* a set of rulers
** where each ruler may optionally inject a small additional //gap//
* a content area
* an inset
** holding the self similar recursive child track fork

!!!Assembling the track profile
The actual expression of these elements depends on the content, which is injected via diff messages pushed up from the Steam-Layer; additionally, some presentation state like collapsing of elements needs to be taken into account. Assembling the complete profile of the track structure thus incurs a tree walk over the nested track body objects, which in turn communicate with the track presenters for layout state. At this point, it is advisable to separate the actual graphics drawing code from the content and state discovery scattered over the nested structure. Thus we produce a ''verb sequence'' as result of the tree walk, which can be stored into a (heap allocated) list for repeated evaluation when handling the actual draw events (a possible data representation is sketched below the table). Whenever the overall layout has been invalidated, this structure description has to be rebuilt by yet another tree walk. To illustrate this concept, with {{{r(#)}}} ruler, {{{g}}} gap, {{{c(#)}}} content and ''⤵⤴'' direction verbs, the profile above might yield the sequence...
|>|>|>|>|>|>|>|>|!__Track-1__ |
| | | !__Track-11__ |>|>|>|>|!__Track-12__ | |
|~|~|>|  | |!__Track-121__ |!__Track-122__ | |~|
|r(1),r(1),g,''c''(2)|⤵|r(1),g,''c''(3)|r(1),g,r(1)|⤵|r(1),''c''(2)  |r(1),''c''(1)|⤴|⤴|
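
A minimal sketch of how such a verb sequence could be represented and assembled by the tree walk; the type and member names are made up for illustration and differ from the actual Lumiera code:
{{{
// hypothetical sketch: capture the track profile as a sequence of drawing verbs
#include <vector>

enum class ProfileVerb { RULER, GAP, CONTENT, OPEN, CLOSE };   // OPEN = ⤵ , CLOSE = ⤴

struct ProfileElement
{
    ProfileVerb verb;
    int         extent;    // vertical extension in pixels
};

using TrackProfile = std::vector<ProfileElement>;

struct Ruler { int height; bool gap; };   // a ruler may optionally inject a small gap below

/* Tree walk over the nested TrackBody structure: each track appends its rulers
 * and content area, and brackets the recursive child fork with OPEN/CLOSE verbs. */
struct TrackBody
{
    std::vector<Ruler>     rulers;
    int                    contentHeight{0};
    std::vector<TrackBody> subTracks;

    void
    appendProfile (TrackProfile& profile)  const
    {
        for (Ruler const& ruler : rulers)
        {
            profile.push_back ({ProfileVerb::RULER, ruler.height});
            if (ruler.gap)
                profile.push_back ({ProfileVerb::GAP, 2});
        }
        if (contentHeight > 0)
            profile.push_back ({ProfileVerb::CONTENT, contentHeight});
        if (not subTracks.empty())
        {
            profile.push_back ({ProfileVerb::OPEN, 0});
            for (TrackBody const& child : subTracks)
                child.appendProfile (profile);
            profile.push_back ({ProfileVerb::CLOSE, 0});
        }
    }
};
}}}
Rendering then simply iterates over the stored {{{TrackProfile}}} and interprets each verb into concrete Cairo drawing calls, without any further need to consult the track structure.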


!!!Drawing strategy
Once the fundamental decision towards a 3D structure is taken, there is still some room for choices regarding the actual strategy to perform the drawing primitives. However, there are also known limitations in what can be achieved by shading within an essentially flat drawing (unless we'll get some kind of holographic display technology at our disposal, somewhere in the future). The point to note is that we cannot convey a fully three-dimensional structure merely through shading -- rather, we can only imply //local depth relations.// And since the actual structure of our nested tracks can be quite elaborate and wide-spanning, we should focus on expressing such local relations, in order to help the user grasp the presented structure.
* shading can indicate a //barrier.//
* shading can indicate a //slope// up or down.
An immediate consequence to draw from this observation is that ~CSS3 effects are only of limited use for this purpose. They are more helpful to highlight compact and local structures, like a menu button, or some element or handle on the clips. We can (and likely will) use CSS shading for the [[track overview rulers|TrackRuler]] though. We could possibly use a slight inset box shadow within the actual track content area. However, it is rather pointless to use a box shadow to paint the nested track scope insets -- instead, we need to rely on gradients and direct colour shading for that purpose.
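
For instance, a slope edge can be shaded with a short vertical gradient directly in Cairo -- a minimal sketch, with placeholder colours (in the real code these values would be picked up from CSS):
{{{
// sketch: shade a slope edge with a vertical linear gradient (placeholder colours)
#include <cairomm/context.h>
#include <cairomm/pattern.h>

void
drawSlopeEdge (Cairo::RefPtr<Cairo::Context> const& cr,
               double xPos, double yPos, double width, double slopeHeight)
{
    auto gradient = Cairo::LinearGradient::create (xPos, yPos, xPos, yPos + slopeHeight);
    gradient->add_color_stop_rgba (0.0, 0,0,0, 0.4);   // darker at the upper edge...
    gradient->add_color_stop_rgba (1.0, 0,0,0, 0.0);   // ...fading out towards the content

    cr->save();
    cr->set_source (gradient);
    cr->rectangle (xPos, yPos, width, slopeHeight);
    cr->fill();
    cr->restore();
}
}}}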

Another consequence is that we do not need to apply an overall "stacked boxes" approach -- there is not much to gain from doing so (and we would do a lot of unnecessary filling operations, most of which would actually be clipped away). For that reason, we define our ''track profile'' to represent a vertical //top-down drawing sweep.// Nonetheless, the actual colours and shades can be defined completely through [[CSS rules|GuiTimelineStyle]]. To achieve this, we apply some tricks:
* we define //virtual widgets,// which are only used to match against selectors and pick up the attached CSS styles
** the virtual element {{{fork}}} represents "a timeline track"
** and a virtual {{{frame}}} element nested therein represents "a ruler on top of a timeline track"
* this leads to the following selectors to attach the &rarr; [[actual styling rules|GuiTimelineStyle]]
*;track fork
*:attach rules to the selector {{{.timeline__page > .timeline__body fork.timeline__fork}}}
*:* the {{{border-top}}} and {{{border-bottom}}} rules will be used to //draw the nested inset.//
*:* a {{{margin}}} at top will add additional space between consecutive tracks
*:* while the {{{padding}}} will be used within the track content area -- <br/> of course only in vertical direction, since the horizontal extension is ruled by the time position of elements within the track
*:* please note that {{{background}}} stylings apply to the //content area// and not the track space as a whole.
*:* especially a {{{box-shadow}}} will be drawn when filling the background of the content area -- however, only an {{{inset}}} makes sense here,<br/>since a regular (outer) box shadow will be covered by consecutive drawing actions...
*;track ruler
*:attach rules to the selector {{{fork.timeline__fork frame.timeline__ruler}}}
*:* again the {{{border}}} ({{{top}}} and {{{bottom}}}) settings will delimit the ruler from the other track content
*:* while {{{margin}}} and {{{padding}}} apply as usual outside and within the border
*:* here the {{{box-shadow}}} will be drawn with the background and aligned with the frame -- and again, only an {{{inset}}} really makes sense,<br/>while a regular (outer) box shadow must be small enough to stick within the limits of the {{{margin}}}
*;slopes
*:we use several classes to define the effect of consecutive nested frames forming a common slope; however,
*:we perform this combination effect only up to a depth of 5 steps. Only closing (upward) slopes can be combined.
*:* we use class {{{.track-slope--deep1}}} for a simple slope (inset by one nested level)
*:* class {{{.track-slope--deep2}}} for two consecutive nested steps combined.
*:* likewise class {{{.track-slope--deep3}}} for three and class {{{.track-slope--deep4}}} for four combined slopes
*:* finally, class {{{.track-slope--verydeep}}} is used for five and more combined upward (closing) slopes

Please note also that our drawing code operates in several ''passes''. First, a ''background'' pass is used to fill the area, then GTK is invoked recursively for any widgets on the drawing canvas, most notably the ''clips''. Then, a final ''overlay'' pass allows painting range markers, selections and indicators on top.
There are various reasons why we might want to offer multiple equivalent UI representations of the same Timeline...
* the user might want to see several remote parts of the same timeline simultaneously, in focussed display
* we allow several independent top-level windows (think several desktops), so it might just happen that the same timeline is selected in several windows
* we might want to introduce a focussed view on a nested sequence or virtual clip

Now, since we build our UI on the notion of mapping session contents via a messaging system onto suitable presenters in the UI, we get a conceptual mismatch. Basically we need to cut at some point and duplicate some connections. Either we need the ability within a timeline presentation entity to serve several sets of slave widgets, or we need the ability for those presentation entities to collaborate, where one of them becomes the leader and automatically forwards all notifications to the other members of the cluster. Or, alternatively, we could think of pushing that duplication down into the session, in which case we get a TimelineClone entity, and the Builder then needs to be aware of this situation and generate duplicated responses to be sent to the UI.

{{red{While reconsidering this topic in 10/2018}}}, it looks like I am leaning towards the most systematic option, which is to represent this duplication already within the session as TimelineClone. The rationale is
* splitting this way is likely to produce the least accidental complexity -- at that point we are forced to cross-cut only a small number of other concerns. Were we to cut and duplicate within the UI, we'd be forced to carry concern for slave entities into a huge number of entirely unrelated UI concerns, like layout management or media display feedback.
* in the case of a focussed view on a nested sequence we are even forced to go that route, since doing otherwise would carry over core session responsibilities into the presentation layer. Consequently, any other solution scheme causes duplication of functionality.
* however -- if we implement slave timelines already within the session, we still need to make the UI counterpart basically aware of the situation. Thus the {{{TimelineController}}} needs the ability to delegate some behaviour to another primary controller. That being said -- I still confirm the decision to postpone that topic altogether...

In any case, this is an advanced topic, and nowhere near trivial. It seems reasonable to reject opening duplicate timeline presentations as a first step, and then address this topic way later, when we've gained sufficient knowledge regarding all the subtleties of timeline presentation and editing.
In parts, the Lumiera Timeline UI will be implemented by custom drawing onto a //Canvas Control,// using libCairo for the actual drawing operations. Beyond the challenge of coordinating an elaborate and nested layout, a special twist arises from the ability for custom styling, present in most contemporary desktop environments -- without special precautions, our custom drawing runs the danger of creating a visual style separate from and in contradiction to the common style set by the chosen desktop theme. This challenge can be resolved, at least in part, by tapping into the existing CSS definitions, to retrieve the necessary settings and adapt them to our special needs.

To this end, a hierarchy of virtual placeholder widgets is used by the »~StyleManager« in {{{stage::workspace::UiStyle::prepareStyleContext()}}} -- these will represent the structures actually created by custom drawing, and allow retrieving any CSS definitions applicable at that point in the style class hierarchy (see the code sketch after the selector list below).
As outlined in the [[explanation of the actual drawing code|GuiTimelineDraw]], this virtual style hierarchy is comprised of several selectors...
* the virtual element {{{fork}}} represents "a timeline track"
** the selector {{{.timeline__page > .timeline__body fork.timeline__fork}}} will pick up definitions
*** to define the //appearance of the nested inset// based on {{{border}}} settings ({{{top}}} and {{{bottom}}})
*** a {{{margin}}} at top will add additional space between consecutive tracks
*** while the {{{padding}}} in vertical direction will be used within the track content area
*** please note that {{{background}}} stylings apply to the //content area// and not the track space as a whole.
*** especially a {{{box-shadow}}} will be drawn when filling the background of the content area -- however, only an {{{inset}}} makes sense here,<br/>since a regular (outer) box shadow will be covered by consecutive drawing actions...
* a virtual {{{frame}}} element nested within the {{{fork}}} represents "a ruler on top of a timeline track"
** here the selector {{{fork.timeline__fork frame.timeline__ruler}}} will pick up appropriate definitions
*** again the {{{border-top}}} and {{{border-bottom}}} settings will delimit the ruler from the other track content
*** while {{{margin}}} and {{{padding}}} apply as usual outside and within the border
*** here the {{{box-shadow}}} will be drawn with the background and aligned with the frame -- and again, only an {{{inset}}} really makes sense,<br/>while a regular (outer) {{{box-shadow}}} must be small enough to stick within the limits of the {{{margin}}}
* Slopes connecting nested sub-Tracks will be governed by styles...
** we use class {{{.track-slope--deep1}}} for a simple slope (inset by one nested level)
** class {{{.track-slope--deep2}}} for two consecutive nested steps combined.
** likewise class {{{.track-slope--deep3}}} for three and class {{{.track-slope--deep4}}} for four combined slopes
** finally, class {{{.track-slope--verydeep}}} is used for five and more combined upward (closing) slopes
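
To sketch how such a virtual widget path can pick up the definitions listed above, the following uses the plain GTK+ 3 C API for brevity; the function calls are real GTK calls, but the helper itself is hypothetical and simpler than the actual code in {{{UiStyle}}}:
{{{
// sketch: build a virtual widget node »fork.timeline__fork« and query its CSS border
#include <gtk/gtk.h>

GtkBorder
queryForkBorder ()
{
    GtkWidgetPath* path = gtk_widget_path_new ();
    gint pos = gtk_widget_path_append_type (path, G_TYPE_NONE);
    gtk_widget_path_iter_set_object_name (path, pos, "fork");      // virtual element <fork>
    gtk_widget_path_iter_add_class (path, pos, "timeline__fork");  // ...carrying the CSS class

    GtkStyleContext* style = gtk_style_context_new ();
    gtk_style_context_set_screen (style, gdk_screen_get_default ()); // use the theme of the default screen
    gtk_style_context_set_path (style, path);
    gtk_widget_path_unref (path);

    GtkBorder border;
    gtk_style_context_get_border (style, GTK_STATE_FLAG_NORMAL, &border);
    g_object_unref (style);
    return border;    // border-top / border-bottom are then used to draw the nested inset
}
}}}
The real code would also append the enclosing {{{.timeline__page}}} / {{{.timeline__body}}} nodes to the path, so that the full selector matches.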

!Demonstration of CSS for custom drawing
The example below demonstrates how CSS rules are picked up and used for custom drawing of the fork structure on the Timeline body canvas.
For the sake of demonstration, the drawing code was slightly manipulated so as to shift borders and show canvas content beyond the timeline end.

Lumiera used the »light theme complement« stylesheet -- thus most of the styles are drawn from the desktop theme.
A few additional class definitions were added for styling of the timeline body display:
{{{
  .timeline__page > .timeline__body fork.timeline__fork {
    margin: 2ex;
    padding: 10px;
    border-style: inset;
    border-color: IndianRed;
    background-color: Lime;
    box-shadow: inset 2px 2px 5px 1px DeepSkyBlue,
                      5px 5px 2px 1px ForestGreen;
  }
  fork.timeline__fork frame.timeline__ruler {
    margin: 1ex;
    padding: 5px;
    border: 3px outset DarkGoldenRod;
    background-color: SandyBrown;
    box-shadow: inset 2px 2px 5px 1px RosyBrown,
                      5px 3px 6px 4px Sienna;
  }
}}}
[>img[Demonstration of CSS for custom drawing|draw/TimelineCSS.png]]
Within the Lumiera GUI, the [[Timeline]] structure(s) from the HighLevelModel are arranged and presented according to the following principles and conventions.
Several timeline views may be present at the same time -- and there is not necessarily a relation between them, since »a Timeline« is the top-level concept within the [[Session]]. Obviously, there can also be several //views// based on the same »Timeline« model element, and in this latter case, these //coupled views// behave according to a linked common state. An entity »Timeline«, as represented through the GUI, emerges from the combination of several model elements:
* a root level [[Binding|BindingMO]] acts as framework
* this binding in turn ties a [[Sequence]]
* and the sequence provides a [[Fork ("tree of tracks")|Fork]]
* within the scope of these tracks, there is content  ([[clips|Clip]])
* and this content implies [[output designations|OutputDesignation]]
* which are resolved to the [[global Pipes|GlobalPipe]] belonging to //this specific Timeline.//
* after establishing a ViewerPlayActivation, a running PlayProcess exposes a PlayController
Session, Binding and Sequence are the mandatory ingredients.

!Basic layout
[>img[Clip presentation control|draw/UI-TimelineLayout-1.png]]The representation is split into a ''Header pane'' exposing structure and configuration ( &rarr; [[Patchbay|TimelinePatchbay]]), and a ''Content pane'' extending in time. The ''Time ruler'' ( &rarr; [[Rulers|TrackRuler]]) running alongside the top of the content pane represents the //position in time.// Beyond this temporal dimension, the content area is conceived as a flexible working space. This working space //can// be structured hierarchically -- when interacting with the GUI, hierarchical nesting will be created and collapsed on demand. Contrast this with conventional editing applications which are built upon the rigid notion of "Tracks": Lumiera is based on //Pipes// and //Scopes// rather than Tracks.

In the temporal dimension, there is the usual [[scrolling and zooming|ZoomWindow]] of content, and possibly a selected time range; after establishing a ViewerPlayActivation, there is an effective playback location featured as a "Playhead".
The workspace dimension (vertical layout) is more like a ''Fork'', which can be expanded recursively. More specifically, each strip or layer or "track" can be featured in //collapsed// or //expanded state.//
* the collapsed state features a condensed representation ("the tip of the iceberg"). It exposes just the topmost entity, and might show a rendered (pre)view. Elements might be stacked on top, but any element visible here //is still accessible.//
* when expanding, the content unfolds into...
** a ''scope ruler'' to represent the whole sub-scope.<br/>A [[Ruler|TrackRuler]] is rendered as a small pane, extending horizontally, to hold any locally attached labels and track-wide or temporally scoped effects
** the content stack, comprised of [[clip widgets|GuiClipWidget]], attached effects and transitions
** a stack of nested sub-scopes (recursive).

@@float: right;background-color: #e9edf8;width: 82ex;padding: 2ex;margin: 0px 4em 1em 2em;__Note in this example__
* on top level, there are two tracks, the second track has nested sub tracks
* Clip-2 has an effect attached, Clip-3 is expanded and also has an effect attached
* the second track has a global effect attached; it shows up in the scope ruler
@@
This collapsed, expanded and possibly nested workspace structure is always exactly paralleled in the header pane. In addition, the header pane allows configuring specific placement properties for each nested scope, which especially means displaying faders and some toggles, depending on what kind of placement was added. Of course, this placement configuration needs to be collapsible too. Effects and markers can appear at various different scopes, sometimes requiring an abridged display.
&rarr; more about [[the actual drawing code|GuiTimelineDraw]]
&rarr; the [[Track Head display|TrackHead]]


!!!lifecycle and instances
A given instance of the {{{TimelineWidget}}} is always dedicated to render the contents of //one specific timeline.// We never switch the data model while retaining the UI entities. This also means, a given instance is tied to one conversation with the core; it is created when the core tells us about this timeline with an initial population diff, and it lives until either this timeline is discarded in the core model, or the whole session is shut down.

The dockable ''timeline pannel'' holds onto the existing {{{TimelineWidget}}} instances, allowing to make one of them visible for interaction. Yet the timeline itself is represented by the {{{TimelineControler}}}, which lives //within this widget,// and is attached to and managed by the InteractionDirector, who incorporates the role of representing the [[model root|ModelRootMO]]. To bridge this conflict of control, we introduce a {{{TimelineGui}}} proxy for each timeline. Which allows to close a timeline on widget level, thereby marking it as detached from model updates; and it enables child manipulation by the {{{InteractionDirector}}}, automatically forwarding to the respective {{{TimelineController}}} if applicable. Aside of the timeline display, there is also an ''asset pannel''; display of and interaction with those asset views is handled by dedicated widgets, backed by the {{{AssetControler}}} -- which is also a child of and managed by the InteractionDirector, since assets are modelled as global part of the [[Session]].

In case the UI starts with no session present in the core, an //empty timeline placeholder// will be displayed, which provides UI for creating a new session...

!!!The Canvas
At least the track body area of the timeline display needs to be implemented in parts by //custom drawing// onto a [[»Canvas widget«|GtkLayoutWidget]] -- meaning that we have to coordinate both explicit graphic operations using the »Cairo« library and child widgets attached at explicit positions on top of this canvas. These nested widgets in turn need to know about the calibration of the canvas in terms of time and pixels, to be able to adjust their display to fit into the established metric. As the first implementation attempt for the timeline highlighted, at this point there is a lurking danger of sliding into the use of one central „God class“ -- in this case a //Layout Manager// -- which has to actuate and coordinate any related interplay by „pulling the strings“ like a puppeteer. To avoid that contraption, we introduce yet another abstraction: a [[»Canvas Interface«|GuiCanvasInterface]] to support attachment of widgets and handle the translation of logical (time based) coordinates into pixel values.
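
A minimal sketch of what such a »Canvas Interface« abstraction might look like; the names are hypothetical and the actual interface is more elaborate:
{{{
// hypothetical sketch of a canvas abstraction translating time coordinates into pixels
#include <gtkmm/widget.h>
#include <cstdint>

using TimeValue = std::int64_t;      // stand-in for Lumiera's time types

class CanvasInterfaceSketch
{
public:
    virtual ~CanvasInterfaceSketch() = default;

    /** attach a child widget at a logical (time, line) position on the canvas */
    virtual void attachWidget (Gtk::Widget& widget, TimeValue start, int verticalOffset) = 0;
    virtual void removeWidget (Gtk::Widget& widget)                                      = 0;

    /** current calibration: translate a time span into a horizontal pixel count */
    virtual int  translateTimeToPixels (TimeValue duration)  const                       = 0;
};

/* a clip widget can thus size itself without knowing the canvas implementation */
inline int
clipWidthPx (CanvasInterfaceSketch const& canvas, TimeValue clipDuration)
{
    return canvas.translateTimeToPixels (clipDuration);
}
}}}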

Another difficulty arises from the ability for custom styling, present in most contemporary desktop environments -- which is typically implemented within the UI toolkit, thereby more or less relying on a fixed set of standard widgets known in advance to the designer. Once we start implementing the visual representation based on custom drawing code, we are bound to define colours and other drawing properties, and while doing so, we are in danger of creating our own visual style, separate from and in contradiction to the common style set by the chosen desktop theme. This challenge can be resolved, at least in part, by tapping into the existing CSS definitions, to retrieve the necessary settings and adapt them to our special needs. A way to accomplish this is to build a virtual hierarchy of placeholder widgets -- these will represent the structures actually created by custom drawing, while retrieving any CSS definitions applicable at that point in the style class hierarchy.
&rarr; see details of [[styling the Timeline|GuiTimelineStyle]]


!!!Placements
As indicated in the drawing above, pretty much every UI element exposes a button to //edit its placement.// &rarr; GuiPlacementDisplay

!!!slave Timelines
It is reasonable to expect the ability to have multiple [[slave timeline presentations|GuiTimelineSlave]] to access the same underlying timeline structure.
Currently {{red{as of 10/2018}}} there is a preference to deal with that problem on session level -- but for now we decide to postpone this topic &rarr; [[#1083|https://issues.lumiera.org/ticket/1083]]
{{red{should also reconsider the term »slave«}}}

!!!nesting
By principle, this workspace structure is //not a list of "Tracks"// -- it is a system of ''nested scopes''. The nesting emerges on demand.
In the most general case, there can be per-track content and nested content at the same point in time. The GUI is able to represent this state. But, due to the semantics of Lumiera's HighLevelModel,  top-level content and nested content are siblings //within the same scope.// Thus, at a suitable point {{red{to be defined}}}, an equivalence transformation is applied to the GUI model, by prepending a new sibling track and moving top-level content there.

&rarr; important question: how to [[organise the widgets|GuiTimelineWidgetStructure]]
&rarr; details of the actual [[timeline drawing code|GuiTimelineDraw]]

The Timeline is probably the most prominent place in the GUI where we need to come up with a custom UI design.
Instead of combining standard components in one of the well-known ways, here we need to come up with our own handling solution -- which also involves building several custom GTK widgets. Thus the question of layout and screen space division and organisation becomes a crucial design decision. The ~GTK-2 UI, as implemented during the initial years of the Lumiera project, did already take some steps along this route, which was valuable as foundation for assessment and further planning.

As it stands, this topic touches a tricky design and architectural challenge: the → question [[how to organise custom widgets|GuiCustomWidget]].

In a nutshell, ~GTKmm offers several degrees of customisation, namely to build a custom widget class, to build a custom container widget, and to use the [[Gtk::Layout "canvas widget"|GtkLayoutWidget]], possibly combined with //custom drawing.// In addition to assembling a timeline widget class by combining several nested panes, the timeline display needs to rely on the latter approach to allow for the necessary flexible arrangement of [[clip widgets|GuiClipWidget]] within the [[track fork|Fork]].

!the header pane problem
Based on principles of //conventional UI design,// we derive the necessity to have a [[track header pane area|TimelinePatchbay]], always visible to the left, and scrolling vertically in sync with the actual track display to the right of the timeline area. This insight brings about several consequences. For one, it means that our top-level widget organisation in the timeline will be a horizontal split. Furthermore, it means that we get two distinct sub-widgets, whose vertical layout needs to be kept in sync. And even more so, the most adequate implementation technique is presumably different in both parts: the header pane looks like a classical fit for the paradigm of nested boxes and grid layout within those boxes, while the right part -- the actual track contents -- imposes very specific layout constraints, not served by any of the pre-existing layout containers -- which means we have to resort to custom drawing on a canvas widget. Following this line of thought, we need an overarching layout manager to coordinate these two disjoint technologies. Any viable alternatives?

!!!considering a table grid layout
The layout mechanics we try to establish here by explicit implementation would be more or less a given, if instead we'd build the whole timeline display from one base widget, which needs to be a table layout, i.e. {{{Gtk::Grid}}}. We'd use two columns, one for the header pane area, one for the timeline display, and we'd use N+1 rows, with the head row holding the time ruler and the additional rows holding individual tracks. But to get the specific UI mechanics desirable for a timeline display, we'd have to introduce some twists:
* //Scrolling is rather special.// We could use the default scrolling mechanisms in vertical direction only, while, as far as the Gtk layout management is involved, we would have to dress things up so as to make the grid appear limited to the available horizontal space, so GTK never attempts to scroll the grid as a whole horizontally.
* rather, we would have to integrate a free-standing horizontal scrollbar somewhere //in the second column.// This scrollbar has to be tied to a scrolling/zooming function, which we apply by explicit coding synchronously to all cell content in all rows of the second column.
* and then obviously we'd have to add a canvas widget with custom drawing to each of those cells in the second column, each featuring the contents of a single track.
* the //nested scopes are difficult to represent in this layout.// This is because we use a row for each track-like structure at the most fine-grained level. If we were to create additional visual cues to indicate nested structures, like indentation or a bracketing structure, we'd have to implement those by repetitively adding appropriate graphical structures to each row in the first column.
* it would be non-trivial to place controls (e.g. a volume or fade control) acting on all sub-tracks within a group. We'd have to add those controls in a top row representing the whole group, and then we'd have to indicate somehow graphically that those pertain to all nested tracks. In fact, this problem is not limited to a table grid implementation approach, since it is related to the usage of screen real estate. But a table grid implementation takes away almost all remaining flexibility and thus makes a workable solution much harder: we'd have to allocate a lot of vertical space for those controls; space which is wasted on the right side, within the track content display, since there all we need is a small overview ruler without much content demands in vertical direction.
* this problem is mitigated once we add a track with //automation data// linked to the mentioned controls. Yet still, this is not the default situation...
On the other hand, what would be the //obvious benefits...?//
* we just have to add stuff in both the left / right part of the display and get the vertical space management sorted out by framework code
While the special setup for scrolling doesn't really count (since it is necessary anyway), after this initial investigation it seems clear that a global grid layout doesn't yield enough benefit to justify all the quirks and limitations its use would impose -- however, //we can indeed benefit// from GTK's automatic layout management when it comes to building the [[nested Track Head controls|TrackHead]], which can be implemented as nested {{{Gtk::Grid}}}.
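
As an illustration, a track head cell built from a nested {{{Gtk::Grid}}} might look roughly as follows; the widget choice and layout are merely an assumption for the sketch:
{{{
// illustrative sketch: a track head cell built from a nested Gtk::Grid
#include <gtkmm/grid.h>
#include <gtkmm/label.h>
#include <gtkmm/scale.h>

class TrackHeadSketch : public Gtk::Grid
{
    Gtk::Label trackName_{"Track-1"};
    Gtk::Scale fader_;                   // placeholder for a volume fader (horizontal by default)
    Gtk::Grid  childArea_;               // nested grid holding the heads of sub-tracks

public:
    TrackHeadSketch()
    {
        attach (trackName_, 0, 0, 2, 1); // column, row, width, height
        attach (fader_,     0, 1, 2, 1);
        attach (childArea_, 1, 2, 1, 1); // child heads start one column further right
    }

    void
    addChildHead (Gtk::Widget& childHead)
    {
        int row = int(childArea_.get_children().size());
        childArea_.attach (childHead, 0, row, 1, 1);
    }
};
}}}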

!!!follow-up to the obvious choices
We came to this point of re-considering the overall organisation of widgets after having to re-write the initial version of our timeline widget. This initial version was developed until 2011 by Joel Holdsworth, and it followed a similar reasoning, involving a global timeline layout manager. The distinction between the two panes was not so clear though, and the access to the model code was awkward in places, so the necessity to re-write the timeline widget due to the transition to ~GTK-3 looks like a good opportunity to re-do the same basic reasoning a second time, verify the decisions taken and improve matters that turned out to be difficult on the first attempt.

So we get a timeline custom widget, which at top level establishes this two-part layout, provides the global scrollbars and integrates custom widget components for both parts. And this top-level timeline widget owns a layout manager, plus it exposes a common view management interface, to be used both from internal components (e.g. zoom widgets within the UI) and from external actors controlling the timeline display from a global level. Also at this global level, we get to define a layout control interface, the [[»Timeline Display Manager«|TimelineDisplayManager]] -- which is conceived as an interface and has to be implemented within the recursively structured parts of the timeline display. This layout control interface will be used by the top-level structure to exert control over the layout as a whole. Any change will work by...
# setting global parameters, like e.g. the ZoomWindow
# triggering a »''display evaluation pass''« 
# have the individual widgets in turn //pull// the local relevant metric parameters from some abstracted [[»Canvas«|GuiCanvasInterface]] interface.
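
To make this three-step protocol more tangible, a schematic sketch with entirely hypothetical names:
{{{
// schematic sketch of the »display evaluation pass« (all names hypothetical)
#include <vector>

struct ZoomMetric { long visibleDuration; int visiblePixels; };   // simplistic stand-in for the ZoomWindow

/* each recursively structured part of the timeline display implements this interface */
class DisplayManagerSketch
{
public:
    virtual ~DisplayManagerSketch() = default;
    virtual void establishLayout (ZoomMetric const& metric) = 0;   // pull the locally relevant parameters
};

class TimelineLayoutSketch
{
    ZoomMetric                         zoomWindow_;
    std::vector<DisplayManagerSketch*> parts_;    // header pane, body canvas, rulers ...

public:
    void
    changeZoom (ZoomMetric newMetric)
    {
        zoomWindow_ = newMetric;                    // (1) set the global parameters
        for (DisplayManagerSketch* part : parts_)   // (2) trigger the display evaluation pass
            part->establishLayout (zoomWindow_);    // (3) each part pulls what it needs locally
    }
};
}}}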


!dealing with nested structures
The handling of objects structured into nested scopes is a hallmark of the very specific approach taken by Lumiera when it comes to attaching, arranging and relating media objects. But here in the UI display of the timeline, this approach creates a special architectural challenge: the only sane way to deal with nested structures without exploding complexity is to find some way to exploit the ''recursive self similarity'' inherent in any tree structure. The problematic consequence of this assessment is the tension, even contradiction, it creates with the necessities of GUI programming, which eventually force us to come up with one single, definitive widget representation of what is going on. The conclusion is that we need to come up with an interface that allows building and remoulding the UI display through incremental steps -- where each of these incremental steps relies solely on relative, context-based information. Because this is the only way we can deal with building a tree structure by recursive programming. We must not allow the individual step to know its arrangement within the tree, other than indicating a "current" or a "parent" reference point.

The structure of the display is extended or altered under two circumstances:
# some component receives a [[diff mutation message|MutationMessage]], prompting to add or remove a //child component.//
# the display (style) of some component is expanded or collapsed.
Here, the "component" relevant for such structural changes is always the UI representation of a track. Beyond that, the layout can also be changed //without changing the display structure,// when some embedded component, be it placement (in the [[track heads|TrackHead]] / the [[patchbay|TimelinePatchbay]]) or a clip, effect or transition, is expanded or collapsed. In such a case, a resizing challenge needs to be directed towards the next enclosing track container.

From these observations we can draw the conclusion that we'll build a ''local structural model'' to reflect the logical relations between the parts comprising the timeline display. More precisely, these structuring components are not mere model objects; rather, they are mediating entities used to guide and organise the actual view entities, which in turn are passive. They are more like a view model, while also bearing some local controller responsibilities. For this reason, we prefer to term these ''presenters'' -- i.e. TrackPresenter and ClipPresenter. And each of these local representation components holds onto a ''display context'', which generally links it //into two different display widget stacks// within the two parts of the actual timeline display. Adding a child component thus becomes a rather tricky operation, possibly involving linking two child widgets into two disjoint parent widgets, thereby forming a similar display context for the child presenter. Overall, the guiding idea is that of self similarity: on each level, we have to reproduce the same relations and collaborations as present in the parent level.
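
A condensed sketch of this dual attachment; all types are hypothetical, and memory management as well as diff binding are ignored:
{{{
// hypothetical sketch: a presenter links its display into two widget stacks at once
#include <gtkmm/grid.h>
#include <gtkmm/layout.h>
#include <memory>
#include <vector>

struct DisplayContextSketch           // the two display "worlds" a track participates in
{
    Gtk::Grid&   headerPane;          // patchbay side: nested grids
    Gtk::Layout& bodyCanvas;          // content side: the shared custom drawing canvas
};

class TrackPresenterSketch
{
    DisplayContextSketch ctx_;
    Gtk::Grid            headWidget_;   // this track's cell within the header pane
    std::vector<std::unique_ptr<TrackPresenterSketch>> children_;

public:
    TrackPresenterSketch (DisplayContextSketch parentCtx, int rowInParent)
      : ctx_{parentCtx}
    {
        // link this track's head into the enclosing header grid;
        // the content side is rendered onto the one shared body canvas
        ctx_.headerPane.attach (headWidget_, 0, rowInParent, 1, 1);
    }

    /** adding a child reproduces the same relations one level down (self-similarity) */
    TrackPresenterSketch&
    addChildTrack()
    {
        DisplayContextSketch childCtx{headWidget_, ctx_.bodyCanvas};
        int row = 1 + int(children_.size());    // row 0 is reserved for this track's own controls
        children_.push_back (std::make_unique<TrackPresenterSketch> (childCtx, row));
        return *children_.back();
    }
};
}}}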

Another aspect related to the nesting of scopes is the question how to organise the &rarr; [[actual timeline drawing code|GuiTimelineDraw]].
We may exploit the nested structure of the UI representation objects by letting them emit intermediary instructions towards the actual Cairo drawing code.

!!!building the timeline representation structure
It is a fundamental decision within the Lumiera UI that any structure has to be injected as diff by the core. We do not hold a structure or data model within some UI entity and then "interpret" that model into a widget structure -- rather we build the widget structure itself in response to a diff message describing the structure. Especially in the case of timelines, the receiver of those messages is the InteractionDirector, which corresponds to the model root. On being prompted to set up a timeline, it will allocate a {{{TimelineWidget}}} within a suitable timeline docking panel and then attach to the {{{TimelineController}}} embedded within the widget.

{{red{Problem 10/2018}}} how can the InteractionDirector //manage// a timeline while the timeline widget physically resides within the panel? Can we exploit a similar structure as we did for the error log?
* in part yes -- however we introduce a mediating entity, the {{{TimelineGui}}} proxy
* it uses a {{{WLink}}} to refer to the actual {{{TimelineWidget}}} -- meaning the widget need not exist, and the link will detach automatically when the widget is destroyed by its holder, which is the {{{TimelinePanel}}} (see the sketch below this list)
* {{red{TODO 11/2018}}} still need to care for removing a {{{TimelineWidget}}} when the logically managing parent, i.e. the InteractionDirector deletes the {{{TimelineGui}}} proxy
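
A highly simplified sketch of such a weak, self-detaching widget link -- this is //not// the actual {{{WLink}}} implementation, just an illustration of the idea:
{{{
// highly simplified sketch: a weak, self-detaching link to a widget owned elsewhere
#include <functional>
#include <utility>

template<class WIDGET>
class WeakWidgetLink
{
    WIDGET* widget_{nullptr};

public:
    /** connect to a widget; the caller supplies a way to register a "destroyed" callback */
    void
    connect (WIDGET& widget, std::function<void(std::function<void()>)> registerDestroyHook)
    {
        widget_ = &widget;
        registerDestroyHook ([this]{ widget_ = nullptr; });   // self-detach on destruction
    }

    /** invoke an operation on the widget, but only if it (still) exists */
    template<class FUN>
    void
    ifLive (FUN&& operation)
    {
        if (widget_)
            std::forward<FUN> (operation) (*widget_);
    }
};
}}}
The {{{TimelineGui}}} proxy would hold such a link to its {{{TimelineWidget}}} and forward operations only while the widget is alive.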

The diff describing and thus assembling the UI representation of a timeline is typically a ''population diff'' -- which means, it is a consolidated complete description of the whole sub-structure rooted below that timeline. Such a population diff is generated as emanation from the respective DiffConstituent.

!!!interplay with diff mutation
Applying a diff changes the structure; that is, the structure of the local model, not the structure of the display widgets, because the latter are an entirely private concern of the UI and their structure is controlled by the model components in conjunction with the display manager. And since diff application affects the contents of the model such as to make the intended structural changes happen (indirectly), we are well advised to tie the display control and the widgets very closely to those local model elements, such as to //adjust the display automatically.//
* when a new model element is added, it has to inject something automatically into the display
* when a model element happens to be destructed, the corresponding display element has to be removed.
* such might be triggered indirectly, by clean-up of leftovers, since the {{{DiffApplicator}}} re-orders and deletes by leaving some data behind
* the diff also re-orders model elements, which does not have an immediate effect on the display, but needs to be interpreted separately.
Wrapping this up, we get a //fix-up stage// after any model changes, where the display is re-adjusted to fit the new situation. This works in concert with the [[display manager|TimelineDisplayManager]], representing only those elements as actual widgets which get a real chance to become visible. This way we can build on the assumption that the actual number of widgets to be managed at any time remains small enough to get away with simple linear list processing. It remains to be seen how far this assumption can be pushed -- the problem is that the GTK container components don't support anything beyond such simple linear list processing; there isn't even a call to remove all child widgets of a container in a single pass.
To a large extent, the Lumiera user interface is built around a //backbone structure,// known as the UI-Bus.
But there are some dedicated top-level entities, collaborating to maintain a consistent application lifecycle
;Application
:the application object, {{{GtkLumiera}}}, is what executes within the GuiStarterPlugin and thus within the Gtk event thread
:it is of no further relevance for any of the other UI entities, insofar as it just creates and wires the top-level constituents and encompasses their lifetime
;~UI-Bus
:the backbone of the user interface
:as central communication system, the UI-Bus has a star shaped topology with a central router and attached CoreService
;UI Manager
:maintain a coherent global interface and run the GTK event loop
:responsible for all global framework concerns, resources and global application state
;Interaction Director
:establish the connection between global interaction state and global session state
:the InteractionDirector is the root controller and corresponds to the [[root object in session|ModelRootMO]].
;Window List
:organise and maintain the top level workspace windows
:which involves the interplay with [[docking panels|GuiDockingPanel]]
;Notification Façade
:attachment point for lower layers and anyone in need to "talk to the UI"
:the GuiNotificationFacade is a LayerSeparationInterface and integrated with Lumiera's interface system

Together, these entities form a cohesive circle of collaborating global managers, known as ''global UI context''; the interplay of these facilities is essentially an implementation detail, insofar as there is not much necessity (and only a narrow API) for the rest of the UI to address those directly. Rather, each member of this circle serves a dedicated purpose and is visible to the rest of the application through some kind of service abstraction. For example, the InteractionDirector is mapped as a top-level model element into the logical model of the UI; typically, other parts of the application address this service through messages via the UI-Bus, while the Interaction Director itself is responsible for creating a link between model and interaction state -- a service which is fulfilled in a transparent way.

!Control structure
Within the UI-Layer, we distinguish between //core concerns and UI concerns,// the latter encompassing anything related to UI mechanics, presentation state, interaction state and InteractionControl. Core concerns are delegated and handled by the lower layers of the architecture, while the UI plays a passive role. This is a fundamental decision and leads to a dichotomy, where -- depending on the context -- a given part might operate as a slave related to core concerns, while taking a leading or controlling position when it comes to UI concerns. Deliberately, both sides are bridged by being interwoven into the same entities, and any entity of relevance to core concerns is also attached to the UI-Bus. Regarding the build-up of the UI, parts and elements are made visible and accessible through widgets, but widgets are created, installed and operated by controllers. The top-level circle defines global actions, which are passed through the relevant controller to come into effect.
While the low-level model holds the data used for carrying out the actual media data processing (=rendering), the high-level model is what the user works upon when performing edit operations through the GUI (or script driven in &raquo;headless mode&laquo;). Its building blocks and combination rules determine largely what structures can be created within the [[Session]].
On the whole, it is a collection of [[media objects|MObjects]] stuck together and arranged by [[placements|Placement]].

Basically, the structure of the high-level model is a very open and flexible one &mdash; every valid connection of the underlying object types is allowed &mdash; but the transformation into a low-level node network for rendering follows certain patterns and only takes into account objects reachable while processing the session data in accordance with these patterns. Taking into account the parameters and the structure of these objects visited when building, the low-level render node network is configured in detail. In a similar vein, the [[representation within the GUI|GuiPattern]] is based on distinct patterns and conventions -- any object not in line with these conventions remains //hidden// and is //silently ignored.//

The fundamental metaphor or structural pattern is to create processing ''pipes'', which are a linear chain of data processing modules, starting from a source port and providing an exit point. [[Pipes|Pipe]] are a //concept or pattern,// they don't exist as objects. Each pipe has an input side and an output side and is in itself something like a Bus treating a single [[media stream|StreamType]] (but this stream may still have an internal structure, e.g. several channels related to a spatial audio system). Other processing entities like effects and transitions can be placed (attached) at the pipe, causing them to be appended to form this chain. Optionally, there may be a ''wiring plug'', requesting the exit point to be connected to another pipe. When omitted, the wiring will be figured out automatically.
Thus, when making a connection //to// a pipe, output data will be sent to the //source port// (input side) of the pipe, whereas when making a connection //from// a pipe, data from its exit point will be routed to the destination. Incidentally, the low-level model and the render engine employ //pull-based processing,// but this is of no relevance for the high-level model.



[img[draw/high-level1.png]]
Normally, pipes are limited to a //strictly linear chain// of data processors ("''effects''") working on a single data stream type, and consequently there is a single ''exit point'' which may be wired to a destination. As an exception to this rule, you may insert wire tap nodes (probe points), which explicitly may send data to an arbitrary input port; they are never wired automatically. It is possible to create cyclic connections by such arbitrary wiring, which will be detected by the builder and flagged as an error.

While pipes have a rather rigid and limited structure, it is allowed to make several connections to and from any pipe &mdash; even connections requiring a stream type conversion. It is not even necessary to specify //any// output destination, because then the wiring will be figured out automatically by searching the context and finally using some general rule. Connecting multiple outputs to the input of another pipe automatically creates a ''mixing step'' (which optionally can be controlled by a fader). Several pipes may be joined together by a ''transition'', which in the general case simultaneously treats N media streams. Of course, the most common case is to combine two streams into one output, thereby also mixing them. Most available transition plugins belong to this category, but, as said, the model isn't limited to this simple case, and moreover it is possible to attach several overlapping transitions covering the same time interval.

Individual Media Objects are attached, located or joined together by ''Placements''. A [[Placement]] is a handle for a single MObject (implemented as a refcounting smart-ptr) and contains a list of placement specifications, called LocatingPin. Adding a placement to the session acts as if creating an //instance// (it behaves like a clone in case of multiple placements of the same object). Besides absolute and relative placement, there is also the possibility of a placement sticking directly to another MObject's placement, e.g. for attaching an effect to a clip or for connecting an automation data set to an effect. This //stick-to placement// creates sort of a loose clustering of objects: it will derive the position from the placement it is attached to. Note that while the length and the in/out points are a //property of the ~MObject,// its actual location depends on how it is //placed// and thus can be maintained quite dynamically. Note further that effects can have a length of their own; thus, by using these attachment mechanics, the wiring and configuration within the high-level model can be quite time-dependent.
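
To make the relation of Placement, LocatingPin and MObject more concrete, a structural sketch, heavily simplified compared to the real classes:
{{{
// heavily simplified structural sketch of Placement and LocatingPin
#include <memory>
#include <vector>
#include <cstdint>

using Time = std::int64_t;           // stand-in for Lumiera's time types

struct MObject { Time length; /* ... */ };

struct Placement;                    // forward declaration: pins may anchor at another placement

struct LocatingPin                   // one placement specification
{
    enum Kind { ABSOLUTE, RELATIVE, STICK_TO } kind;
    Time             offset;         // absolute time, or offset relative to the anchor
    Placement const* anchor;         // e.g. the placement to stick to (for STICK_TO)
};

struct Placement
{
    std::shared_ptr<MObject> subject;       // refcounting handle to the placed object
    std::vector<LocatingPin> locatingPins;  // chain of placement specifications

    /** resolve the effective position by consulting the pins in order,
     *  possibly deriving missing specifications from the enclosing scope */
    Time resolvePosition()  const;          // (left unimplemented in this sketch)
};
}}}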

[>img[draw/high-level2.png]]



Actually a ''clip'' is handled as if it were comprised of local pipe(s). In the example shown here, a two-channel clip has three effects attached, plus a wiring plug. Each of those attachments is used only if applicable to the media stream type the respective pipe will process. As the clip has two channels (e.g. video and audio), it will have two ''source ports'' pulling from the underlying media. Thus, as shown in the drawing to the right, by chaining up any attached effect applicable to the respective stream type defined by the source port, effectively each channel (sub)clip gets its own specifically adapted processing pipe.

@@clear(right):display(block):@@
!!Example of a complete Session
[img[draw/high-level3.png]]
The Session contains several independent [[sequences|Sequence]] plus an output bus section (''global Pipes'') attached to the [[Timeline]]. Each sequence holds a collection of ~MObjects placed within a ''tree of tracks''. 
Within Lumiera, "tracks" (actually implemented as [[forks|Fork]]) are a rather passive means for organizing media objects, but aren't involved into the data processing themselves. The possibility of nesting tracks allows for easy grouping. Like the other objects, tracks are connected together by placements: A track holds the list of placements of its child tracks. Each sequence holds a single placement pointing to the root track. 

As placements have the ability to cooperate and derive any missing placement specifications, this creates a hierarchical structure throughout the session, where parts on any level behave similarly, if applicable. For example, when a fork ("track") is anchored to some external entity (label, sync point in sound, etc), all objects placed relatively to this track will adjust and follow automatically. This relation between the track tree and the individual objects is especially important for the wiring, which, if not defined locally within an ~MObject's placement, is derived by searching up this track tree and utilizing the wiring plug locating pins found there, if applicable. In the default configuration, the placement of a sequence's root track contains a wiring plug for video and another wiring plug for audio. This setup is sufficient for getting every object within this sequence wired up automatically to the correct global output pipe. Moreover, when adding another wiring plug to some sub-track, we can intercept and reroute the connections of all objects creating output of this specific stream type within this track and on all child tracks.

Besides routing to a global pipe, wiring plugs can also connect to the source port of a ''meta-clip''. In this example session, the outputs of 'Seq-2', as defined by locating pins in its root track's placement, are directed to the source ports of a [[meta-clip|VirtualClip]] placed within 'Seq-1'. Thus, within 'Seq-1', the contents of 'Seq-2' appear like a pseudo-media, from which the (meta) clip has been taken. They can be adorned with effects and processed further, completely similar to a real clip.

Finally, this example shows an ''automation'' data set controlling some parameter of an effect contained in one of the global pipes. From the effect's POV, the automation is simply a ParamProvider, i.e. a function yielding a scalar value over time. The automation data set may be implemented as a bézier curve, or by a mathematical function (e.g. sine or fractal pseudo random), or by some captured and interpolated data values. Interestingly, in this example the automation data set has been placed relatively to the meta clip (albeit on another track), thus it will follow and adjust when the latter is moved.
This wiki page is the entry point to detail notes covering some technical decisions, details and problems encountered in the course of the years, while building the Lumiera application.

* [[Memory Management Issues|MemoryManagement]]
* [[Creating and registering Assets|AssetCreation]]
* [[Multichannel Media|MultichannelMedia]]
* [[Editing Operations|EditingOperations]]
* [[Handling of the current Session|CurrentSession]]
* [[using the Visitor pattern?|VisitorUse]] -- resulting in [[»Visiting-Tool« library implementation|VisitingToolImpl]]
* [[Handling of Tracks and render Pipes in the session|TrackPipeSequence]]. [[Handling of Tracks|TrackHandling]] and [[Pipes|PipeHandling]]
* [[getting default configured|DefaultsManagement]] Objects relying on [[rule-based Configuration Queries|ConfigRules]]
* [[integrating the Config Query system|ConfigQueryIntegration]]
* [[identifying the basic Builder operations|BasicBuildingOperations]] and [[planning the Implementation|PlanningNodeCreatorTool]]
* [[how to handle »attached placement«|AttachedPlacementProblem]]
* working out the [[basic building situations|BuilderPrimitives]] and [[mechanics of rendering|RenderMechanics]]
* how to classify and [[describe media stream types|StreamType]] and how to [[use them|StreamTypeUse]]
* considerations regarding [[identity and equality|ModelObjectIdentity]] of objects in the HighLevelModel
* the [[identification of frames and nodes|NodeFrameNumbering]]
* the relation of [[Project, Timelines and Sequences|TimelineSequences]]
* how to [[start the GUI|GuiStart]] and how to [[connect|GuiConnection]] to the running UI.
* build the first LayerSeparationInterfaces
* create a uniform pattern for [[passing and accessing object collections|ForwardIterator]]
* decide on SessionInterface and create [[Session datastructure layout|SessionDataMem]]
* shaping the GUI/~Steam-Interface, based on MObjectRef and the [[Command frontend|CommandHandling]]
* defining PlacementScope in order to allow for [[discovering session contents|Query]]
* working out a [[Wiring concept|Wiring]] and the foundations of OutputManagement
* shaping the foundations of the [[player subsystem|Player]]
* detail considerations regarding [[time and time quantisation|TimeQuant]]
* [[Timecode]] -- especially the link of [[TC formats and quantisation|TimecodeFormat]]
* designing how to [[build|BuildFixture]] the [[Fixture]] (...{{red{WIP}}}...)
* from [[play process|PlayProcess]] to [[frame dispatching|FrameDispatcher]] and [[node invocation|NodeInvocation]]
* how to shape the GuiConnection: with the help of a mediating GuiModel, which acts as UI-Bus, exchanging TreeDiffModel messages for GuiModelUpdate
* shape the Backbone of the UI, the UI-Bus and the GuiTopLevel
* build a framework for InteractionControl in the User Interface
* establish a flexible layout structure for the GuiTimelineView
* how to represent content in the UI: GuiContentRender

! Integration
To drive integration of the various parts and details created in the last years, we conduct [[»Vertical Slices« for Integration|IntegrationSlice]]
;populate timeline
:✔ send a description of the model structure as //population diff// through the UI-Bus to [[populate|GuiContentPopulation]] the GuiTimelineView
;play a clip
:🗘 the »PlaybackVerticalSlice« [[#1221|https://issues.lumiera.org/ticket/1221]] drives integration of [[Playback|PlayProcess]] and [[Rendering|RenderProcess]]
:* the actual content is mocked and hard wired
:* in the GUI, we get a simple [[playback control|GuiPlayControl]] and some [[display of video|GuiVideoDisplay]]
:* OutputManagement and an existing ViewConnection are used to initiate the PlayService
:* an existing [[Fixture]] is used to drive a FrameDispatcher to generate [[Render Jobs|RenderJob]]
:* the [[Scheduler]] is established to [[operate|NodeOperationProtocol]] the [[render nodes|ProcNode]] in the [[Low-level-Model|LowLevelModel]]
/***
''InlineJavascriptPlugin for ~TiddlyWiki version 1.2.x and 2.0''
^^author: Eric Shulman - ELS Design Studios
source: http://www.TiddlyTools.com/#InlineJavascriptPlugin
license: [[Creative Commons Attribution-ShareAlike 2.5 License|http://creativecommons.org/licenses/by-sa/2.5/]]^^

Insert Javascript executable code directly into your tiddler content. Lets you ''call directly into TW core utility routines, define new functions, calculate values, add dynamically-generated TiddlyWiki-formatted output'' into tiddler content, or perform any other programmatic actions each time the tiddler is rendered.
!!!!!Usage
<<<
When installed, this plugin adds new wiki syntax for surrounding tiddler content with {{{<script>}}} and {{{</script>}}} markers, so that it can be treated as embedded javascript and executed each time the tiddler is rendered.

''Deferred execution from an 'onClick' link''
By including a label="..." parameter in the initial {{{<script>}}} marker, the plugin will create a link to an 'onclick' script that will only be executed when that specific link is clicked, rather than running the script each time the tiddler is rendered.

''External script source files:''
You can also load javascript from an external source URL, by including a src="..." parameter in the initial {{{<script>}}} marker (e.g., {{{<script src="demo.js"></script>}}}). This is particularly useful when incorporating third-party javascript libraries for use in custom extensions and plugins. The 'foreign' javascript code remains isolated in a separate file that can be easily replaced whenever an updated library file becomes available.

''Defining javascript functions and libraries:''
Although the external javascript file is loaded while the tiddler content is being rendered, any functions it defines will not be available for use until //after// the rendering has been completed. Thus, you cannot load a library and //immediately// use its functions within the same tiddler. However, once that tiddler has been loaded, the library functions can be freely used in any tiddler (even the one in which it was initially loaded).

To ensure that your javascript functions are always available when needed, you should load the libraries from a tiddler that will be rendered as soon as your TiddlyWiki document is opened. For example, you could put your {{{<script src="..."></script>}}} syntax into a tiddler called LoadScripts, and then add {{{<<tiddler LoadScripts>>}}} in your MainMenu tiddler.

Since the MainMenu is always rendered immediately upon opening your document, the library will always be loaded before any other tiddlers that rely upon the functions it defines. Loading an external javascript library does not produce any direct output in the tiddler, so these definitions should have no impact on the appearance of your MainMenu.

''Creating dynamic tiddler content''
An important difference between this implementation of embedded scripting and conventional embedded javascript techniques for web pages is the method used to produce output that is dynamically inserted into the document:
* In a typical web document, you use the document.write() function to output text sequences (often containing HTML tags) that are then rendered when the entire document is first loaded into the browser window.
* However, in a ~TiddlyWiki document, tiddlers (and other DOM elements) are created, deleted, and rendered "on-the-fly", so writing directly to the global 'document' object does not produce the results you want (i.e., replacing the embedded script within the tiddler content), and completely replaces the entire ~TiddlyWiki document in your browser window.
* To allow these scripts to work unmodified, the plugin automatically converts all occurrences of document.write() so that the output is inserted into the tiddler content instead of replacing the entire ~TiddlyWiki document.

If your script does not use document.write() to create dynamically embedded content within a tiddler, your javascript can, as an alternative, explicitly return a text value that the plugin can then pass through the wikify() rendering engine to insert into the tiddler display. For example, using {{{return "thistext"}}} will produce the same output as {{{document.write("thistext")}}}.

//Note: your script code is automatically 'wrapped' inside a function, {{{_out()}}}, so that any return value you provide can be correctly handled by the plugin and inserted into the tiddler. To avoid unpredictable results (and possibly fatal execution errors), this function should never be redefined or called from ''within'' your script code.//

''Accessing the ~TiddlyWiki DOM''
The plugin provides one pre-defined variable, 'place', that is passed in to your javascript code so that it can have direct access to the containing DOM element into which the tiddler output is currently being rendered.

Access to this DOM element allows you to create scripts that can:
* vary their actions based upon the specific location in which they are embedded
* access 'tiddler-relative' information (use findContainingTiddler(place))
* perform direct DOM manipulations (when returning wikified text is not enough)
<<<
!!!!!Examples
<<<
an "alert" message box:
{{{
<script>alert('InlineJavascriptPlugin: this is a demonstration message');</script>
}}}
<script>alert('InlineJavascriptPlugin: this is a demonstration message');</script>

dynamic output:
{{{
<script>return (new Date()).toString();</script>
}}}
<script>return (new Date()).toString();</script>

wikified dynamic output:
{{{
<script>return "link to current user: [["+config.options.txtUserName+"]]";</script>
}}}
<script>return "link to current user: [["+config.options.txtUserName+"]]";</script>

dynamic output using 'place' to get size information for current tiddler
{{{
<script>
 if (!window.story) window.story=window;
 var title=story.findContainingTiddler(place).id.substr(7);
 return title+" is using "+store.getTiddlerText(title).length+" bytes";
</script>
}}}
<script>
 if (!window.story) window.story=window;
 var title=story.findContainingTiddler(place).id.substr(7);
 return title+" is using "+store.getTiddlerText(title).length+" bytes";
</script>

creating an 'onclick' button/link that runs a script
{{{
<script label="click here">
 if (!window.story) window.story=window;
 alert("Hello World!\nlinktext='"+place.firstChild.data+"'\ntiddler='"+story.findContainingTiddler(place).id.substr(7)+"'");
</script>
}}}
<script label="click here">
 if (!window.story) window.story=window;
 alert("Hello World!\nlinktext='"+place.firstChild.data+"'\ntiddler='"+story.findContainingTiddler(place).id.substr(7)+"'");
</script>

loading a script from a source url
{{{
<script src="demo.js">return "loading demo.js..."</script>
<script label="click to execute demo() function">demo()</script>
}}}
where http://www.TiddlyTools.com/demo.js contains:
>function demo() { alert('this output is from demo(), defined in demo.js') }
>alert('InlineJavascriptPlugin: demo.js has been loaded');
<script src="demo.js">return "loading demo.js..."</script>
<script label="click to execute demo() function">demo()</script>
<<<
!!!!!Installation
<<<
import (or copy/paste) the following tiddlers into your document:
''InlineJavascriptPlugin'' (tagged with <<tag systemConfig>>)
<<<
!!!!!Revision History
<<<
''2006.01.05 [1.4.0]''
added support for 'onclick' scripts. When the label="..." param is present, a button/link is created using the indicated label text, and the script is only executed when the button/link is clicked. The 'place' value is set to match the clicked button/link element.
''2005.12.13 [1.3.1]''
when catching an eval error in IE, e.description contains the error text instead of e.toString(). Fixed error reporting so IE shows the correct response text. Based on a suggestion by UdoBorkowski
''2005.11.09 [1.3.0]''
for 'inline' scripts (i.e., not scripts loaded with src="..."), automatically replace calls to 'document.write()' with 'place.innerHTML+=' so script output is directed into tiddler content
Based on a suggestion by BradleyMeck
''2005.11.08 [1.2.0]''
handle loading of javascript from an external URL via src="..." syntax
''2005.11.08 [1.1.0]''
pass 'place' param into scripts to provide direct DOM access 
''2005.11.08 [1.0.0]''
initial release
<<<
!!!!!Credits
<<<
This feature was developed by EricShulman from [[ELS Design Studios|http://www.elsdesign.com]]
<<<
!!!!!Code
***/
//{{{
version.extensions.inlineJavascript= {major: 1, minor: 4, revision: 0, date: new Date(2006,1,5)};

config.formatters.push( {
    name: "inlineJavascript",
    match: "\\<script",
    lookahead: "\\<script(?: src=\\\"((?:.|\\n)*?)\\\")?(?: label=\\\"((?:.|\\n)*?)\\\")?\\>((?:.|\\n)*?)\\</script\\>",

    handler: function(w) {
        var lookaheadRegExp = new RegExp(this.lookahead,"mg");
        lookaheadRegExp.lastIndex = w.matchStart;
        var lookaheadMatch = lookaheadRegExp.exec(w.source)
        if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
            if (lookaheadMatch[1]) { // load a script library
                // make script tag, set src, add to body to execute, then remove for cleanup
                var script = document.createElement("script"); script.src = lookaheadMatch[1];
                document.body.appendChild(script); document.body.removeChild(script);
            }
            if (lookaheadMatch[2] && lookaheadMatch[3]) { // create a link to an 'onclick' script
                // add a link, define click handler, save code in link (pass 'place'), set link attributes
                var link=createTiddlyElement(w.output,"a",null,"tiddlyLinkExisting",lookaheadMatch[2]);
                link.onclick=function(){try{return(eval(this.code))}catch(e){alert(e.description?e.description:e.toString())}}
                link.code="function _out(place){"+lookaheadMatch[3]+"};_out(this);"
                link.setAttribute("href","javascript:;"); link.setAttribute("title",""); link.style.cursor="pointer";
            }
            else if (lookaheadMatch[3]) { // run inline script code
                var code="function _out(place){"+lookaheadMatch[3]+"};_out(w.output);"
                code=code.replace(/document.write\(/gi,'place.innerHTML+=(');
                try { var out = eval(code); } catch(e) { out = e.description?e.description:e.toString(); }
                if (out && out.length) wikify(out,w.output);
            }
            w.nextMatch = lookaheadMatch.index + lookaheadMatch[0].length;
        }
    }
} )
//}}}
An RAII class, used to manage a [[facade interface between layers|LayerSeparationInterface]].
The InstanceHandle is created by the service implementation and will automatically
* either register and open a ''C Language Interface'', using Lumiera's InterfaceSystem
* or load and open a LumieraPlugin, also by using the InterfaceSystem
* and optionally also create and manage a facade proxy to allow transparent access for client code

&rarr; see [[detailed description here|LayerSeparationInterfaces]]
A »[[vertical slice|https://en.wikipedia.org/wiki/Vertical_slice]]« is an //integration effort that engages all major software components of a software system.//
It is defined and used as a tool to further and focus the development activity towards large scale integration goals.

!populate timeline
✔ [[#1014  »TimelinePopulation«|https://issues.lumiera.org/ticket/1014]]:
Send a description of the model structure in the form of a population diff from the Session in Steam-Layer up through the UI-Bus. When received in the UI-Layer, a new Timeline tab will be allocated and [[populated|GuiContentPopulation]] with appropriate widgets to create a GuiTimelineView. The generated UI structures will feature several nested tracks and some placeholder clips, which can be dragged with the mouse. Moreover, the nested track structure is visualised by [[custom drawing|GtkCustomDrawing]] onto a canvas widget, and the actual colours and shades for these drawing operations [[will be picked up|GuiTimelineStyle]] from the current desktop theme, in combination with a CSS application stylesheet.

!send messages via UI-Bus
✔ [[#1099 »Demo GUI Roundtrip«|https://issues.lumiera.org/ticket/1099]]:
Set up a test dialog in the UI, which issues test/dummy commands. These are [[propagated|GuiModelElements]] through the SteamDispatcher and by special rigging reflected back as //State Mark Messages// over the UI-Bus, causing a visible state change in the //Error Log View// in the UI.

!play a clip
🗘 [[#1221|https://issues.lumiera.org/ticket/1221]]: The »PlaybackVerticalSlice« drives integration of [[Playback|Player]] and [[Rendering|RenderEngine]]. While the actual media content is still mocked and hard wired, we get a simple [[playback control|GuiPlayControl]] in the GUI and some [[display of video|GuiVideoDisplay]]. When activated, an existing ViewConnection is used to initiate a PlayProcess; the [[Fixture]] between HighLevelModel and LowLevelModel will back a FrameDispatcher to generate [[Render Jobs|RenderJob]], which are then digested and activated by the [[Scheduler]] in the Vault-Layer, thereby [[operating|NodeOperationProtocol]] the [[render nodes|ProcNode]] to generate video data for display.
This overarching topic is where the arrangement of our interface components meets considerations about interaction design.
The interface programming allows us to react to events and trigger behaviour, and it allows us to arrange building blocks within a layout framework. Beyond that, there needs to be some kind of coherency in the way matters are arranged -- this is the realm of conventions and guidelines. Yet in any more than trivial UI application, there is an intermediate and implicit level of understanding, where things just happen, which cannot fully be derived from first principles. It is fine to have a convention to put the "OK" button on the right -- but how do we get at trimming a clip? if we work with the mouse? or the keyboard? or with a pen? or with a hardware controller we don't even know yet? We could deal with such questions on a case-by-case basis (as the so-called reasonable people do), or we could aim at an abstract intermediary space, with the ability to assimilate practical situations yet to come.

;interface has a spatial quality
:the elements within a user interface are arranged in a way that parallels our experience of working in real-world space -- with the addition of some minor dose of //"hyper",// allowing for cross connections and shortcuts beyond spatial logic
;locality of work spaces
:but the arrangement of the interface interactions is not amorphous; rather it is segregated into cohesive clusters of closely interrelated actions. We move between these clusters of activity the same way we move between several well-confined rooms within a building.
;context and focus of activity
:most of what we could do //in theory,// is not relevant most of the time. But when the inner logic of what we're about to do coincides with the things at hand, then we feel enabled.
;shift of perspective
:and while we work, the focus moves along. Some things are closer, other things are remote and require us to move and re-orient and reshape our perspective, should we choose to turn towards them.
;the ability to arrange what is relevant
:we do the same stuff again and again, and this makes us observe and gradually understand matters. As we reveal the inner nature of what we're doing, we desire to arrange close at hand what belongs together, and to expunge the superficial and distracting.

&rarr; detailed [[analysis how commands are to be invoked|CommandInvocationAnalysis]]

!Foundation Concepts
The primary insight is  //that we build upon a spatial metaphor// -- and thus we start out with defining various kinds of //locations.// We express interactions as //happening somewhere...//
;work site
:a distinct, coherent place where some ongoing work is done
:the WorkSite might move along with the work, but we also may leave it temporarily to visit some other work site
;the spot
:the [[Spot]] is //where we currently are// -- taken both in the sense of a location and a spotlight
:thus a spot is potentially at some work site, but it can be navigated to another one
;focus
:the concrete realisation of the spot within a given control system
;control system
:a practical technical realisation of an human-computer-interface, like keyboard input/navigation, mouse, pen, hardware controller, touch
;focus goal
:an order or instruction to bring something //into focus,// which also means to move the spot to the designated location.
;UI frame
:the overall interface is arranged into independent top-level segments of equal importance.
:practically speaking, we may have multiple top-level windows residing on multiple desktops...
;perspective
:a set of concrete configuration parameters defining the contents of one UI frame
:the perspective defines which views are opened and arranged at what position and within which docking panel
;focus path
:concrete coordinates to reach a specific work site
:the focus path specifies the UI frame (top-level window), the perspective, and then some canonical path to navigate down a hierarchy to reach the anchor point of the work site
;the spot locator
:navigating means to move the SpotLocator, in order to move the spot from work site to work site
:the spot locator is relocated by loading a new focus path leading to another [[work site|WorkSite]]

The concept of a //focus goal// has several ramifications: for one, it implies that there is something akin to the //"current control system",// which could also be the //currently active control system(s)// -- simply because focus, as the realisation of the abstract notion of the spot, is always tied to a control system able to implement it. And when we're able to define generic location coordinates and then //"move there",// with the help of the SpotLocator, we draw the conclusion that there must be a focus (implementation) somehow getting shifted towards that location -- like e.g. the desired entity gaining the keyboard focus. Beyond that, the second thing we may conclude is that there needs to be some degree of leeway in the way such a focus goal can be reached. Since the inner logic of control systems can differ quite drastically, we are well advised to leave it to the actual control system //how to actually fulfil the focus goal.// To point out an obvious example: it is not a good idea to move the mouse pointer forcibly onto a screen element. Rather, we must use the established mechanisms of switching, scrolling and unfolding to bring the desired target element into the visible area, leaving the last step to the user, who must actively move the mouse onto the target. And we must give good visual clues as to what happened and what we expect from the user (namely to direct her attention onto the element brought into focus).

!Building the framework
To create such a system is an ambitious goal for sure. We cannot reach it in a single step, since it entails the formation of a whole intermediary layer, on top of the //usual UI mechanics,// yet below the concrete UI interactions. Especially, we'd need to clarify the meaning of //perspective,// and we need to decide on the relation of top-level frame, individual view, layout, focus and //current location within the UI.// On second thought, building such a system implies we'll have to live with an intermediary state of evolution, where parts of the new framework are already in place without interfering with the common, conventional usage of the interface as-is.

!!!UI coordinates
Especially the focus navigation entails the use of some kind of ubiquitous [[coordinate system within the user interface|UICoord]]. In fact this is more of a topological navigation, since these coordinates describe the decisions and forks taken when navigating down the //focus path// -- its components are listed below, followed by a small sketch.
* [optional] top-level Window (UI frame)
* [optional] Perspective
* Panel
* local path
** [optional] Group
** ~View-ID
** component.component.component...
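To make this more tangible, here is a minimal, self-contained sketch of how such topological coordinates could be assembled and printed -- the builder method names ({{{window}}}, {{{persp}}}, {{{panel}}}, ...) mirror the component list above, but they are illustrative assumptions, not the actual {{{UICoord}}} API:
{{{
#include <iostream>
#include <string>
#include <vector>

// Illustrative stand-in for a "focus path" built from topological UI coordinates.
// Each builder call records one decision taken while navigating down the path.
struct UICoordSketch
{
    std::vector<std::string> path;

    UICoordSketch& window (std::string id) { path.push_back ("window:"+id); return *this; }
    UICoordSketch& persp  (std::string id) { path.push_back ("persp:"+id);  return *this; }
    UICoordSketch& panel  (std::string id) { path.push_back ("panel:"+id);  return *this; }
    UICoordSketch& view   (std::string id) { path.push_back ("view:"+id);   return *this; }
    UICoordSketch& comp   (std::string id) { path.push_back (id);           return *this; }

    std::string str()  const
    {
        std::string res;
        for (auto const& elm : path) res += "/" + elm;
        return res;
    }
};

int main()
{
    auto coord = UICoordSketch{}
                   .window("main").persp("edit")
                   .panel("timeline").view("timeline-1").comp("track-2.clip-5");
    std::cout << coord.str() << std::endl;
    // prints: /window:main/persp:edit/panel:timeline/view:timeline-1/track-2.clip-5
}
}}}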

//the top-level controller within the UI.//
In Lumiera, the structures of the model within the [[Session]] (the so called HighLevelModel) are mapped onto corresponding [[tangible UI entities|UI-Element]], which serve as a front-end to represent those entities towards the user. Within this model, there is a //conceptual root node// -- which logically corresponds to the session itself. This [[root element in model|ModelRootMO]] links together the actual top-level entities, which are the (multiple) timelines, together with the asset management and defaults and rules configuration within the session.

And the counterpart of this root element within the UI is the {{{InteractionDirector}}}, a top-level controller. As a controller, it responds to actions like opening a specific timeline, entering the asset management section, the request for help, and actions like saving, opening and closing of the session as a whole. Beyond that, the Interaction Director is the connection joint to that part of the UI which deals with global interaction state: this topic relates to questions about "the current element", "the focus", "where we are right now" (in what "location" or "room" within the UI) and also what tangible interface controller we're actually using (mouse, keyboard, graphics pen, hardware controller, touch screen).

Why do we need a connection joint between those parts?
Because issuing any action on the model within the session -- i.e. any editing operation -- is like forming a sentence: we need to spell out //what we want to do// and we need to spell out the subject and the object of our activity. And any one of these can, and in fact sometimes will, be derived //from the context of the interaction.// Because, given the right context, it is almost clear what you want to do -- you just need to fill in that tiny little bit of information to actually make it happen. In Lumiera we strive to build a good UI, which is a UI well suited to this very human way of interacting with one's environment within a given context.
> to understand this, it is best &rarr; to [[look at some examples|CommandInvocationAnalysis]]

!Children
The InteractionDirector is part of the model, and thus we have to distinguish between model children and other UI components held as children.
;model
:anything related to //core concerns// and session global structures falls into that category
:* some parts might be related to ''Asset management''
:* some concerns might touch questions of ''configuration''
:* and interaction state is closely related to ''persistent UI state''
;interaction control
:anything related to forming and tracking of user interactions &rarr; InteractionControl
:* the SpotLocator is what is "moved" when the [[Spot]] of current activity moves
:* the FocusTracker is responsible for //detecting changes in current selection and focus.//
:* the [[Navigator]] is a special controller to handle moving the SpotLocator within the UI tree topology
:* the ViewLocator serves for any kind of high-level, abstracted access to [[component views|GuiComponentView]] and location resolution.

!Collaborations
* several global actions are exposed here, like opening the session and creating a new timeline.<br/>Such operations are typically bound into menu and action buttons, and in turn invoke [[commands|CommandHandling]] into the Session
* some further actions involve [[management of docking panels|GuiDockingPanel]], like e.g. instantiating a new [[timeline display|GuiTimelineView]] into a timeline pane.
* in fact it is the InteractionDirector's job to keep track of [[component views|GuiComponentView]], some of which
** can exist only once
** or can exist only once per top level window
** some might even be required once per window
** while others may be duplicated locally or globally

!!!cyclic dependencies
The InteractionDirector interconnects various aspects of UI management and thus can be expected to exhibit cyclic dependencies on several levels. Bootstrapping the InteractionDirector is thus a tricky procedure and requires all participating services to operate demand-driven. In fact, these services represent aspects of the same whole -- the UI. They are delimited by thematic considerations, not so much by implementation concerns, and many of them rely on their siblings to provide some essential part of their service. For example, the Navigator exposes the UI as a topological tree structure, while the ViewLocator encapsulates the concrete access paths towards specific kinds of UI entities (views, panels). Obviously, the Navigator needs the ViewLocator to build its tree abstraction on top. But, thematically, //resolving a view// is part of accessing and/or creating some view, and thus the ViewLocator becomes indirectly dependent on the tree topology established by the Navigator.
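A minimal sketch of the cycle-breaking idea -- deliberately simplified, with plain pointers wired after construction instead of the real demand-driven service access; the class shapes are placeholders, not the actual Lumiera types:
{{{
#include <iostream>
#include <string>

// Placeholder sketch: two mutually dependent UI services, wired after construction
// and resolving each other only at the point of use.
struct ViewLocator;

struct Navigator
{
    ViewLocator* viewLocator = nullptr;        // wired later, used on demand
    std::string locate (std::string viewID);
};

struct ViewLocator
{
    Navigator* navigator = nullptr;            // wired later, used on demand
    std::string pathTo (std::string viewID)
    {
        // would consult the Navigator's tree topology here
        return "panel:timeline/" + viewID;
    }
};

std::string Navigator::locate (std::string viewID)
{
    return "/window:main/" + viewLocator->pathTo (viewID);
}

int main()
{
    Navigator   nav;
    ViewLocator vloc;
    nav.viewLocator = &vloc;                   // wiring happens only after both exist
    vloc.navigator  = &nav;

    std::cout << nav.locate ("timeline-1") << std::endl;
}
}}}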
A facility within the GUI to //track and manage one specific aspect of interaction state.//
In a more elaborate UI, as can be expected for such a task like editing, there are interactions beyond "point and shoot". For a fluid and natural interaction it is vital to build and exploit an operation context, so as to guide and specify the ongoing operations. Interaction events cannot be treated in isolation, but rather in spatial and temporal clusters known as ''gestures''. A good example is the intention to trim or roll an edit. Here the user has some clips in mind, which happen to be located in immediate succession, and the kind of adjustment has to be determined from the way the user approaches the junction point. To deal with such an interaction pattern, we need to track a possible future interpretation of the user's actions as a hypothesis, to be confirmed and executed when all pieces fall into place.

An InteractionState is a global component, but often not addressed directly. To deal with context-dependent activation, this tracking component attaches to and taps into various information sources to observe some aspects of global state. Moreover, it is outfitted with a set of rules leading to the enablement of some [[command invocation trail|InvocationTrail]]. These enablements or disablements are forwarded to the actual trigger points, which are those UI elements witnessing the completion of a gesture.

&rarr; CommandInvocationAnalysis
&rarr; InteractionControl
&rarr; GuiCommandBinding
&rarr; CommandUsage

! interaction state and the actual widgets
InteractionControl is conceived as an additional intermediary layer, distinct from the actual widgets. The whole idea is that we //do not want// intricate state-managing logic to be scattered all over the concrete UI widget code -- doing so would defeat any higher-level structuring and turn the UI code into highly tangled, very technical implementation logic; ideally, UI code should mostly be specification, setup and wiring, yet void of procedural logic.

The actual widgets rely on the {{{CmdContext}}} as access point to set up a binding with some elaborate interaction pattern or [[Gesture|GuiGesture]]; the implementation of such a Gesture typically acts like a ''state machine'' -- it observes UI events and eventually detects the formation of the specific gesture in question.
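A toy sketch of such a gesture state machine (event names, states and the jitter threshold are made up for illustration; the actual {{{CmdContext}}}/Gesture binding works differently in detail):
{{{
#include <initializer_list>
#include <iostream>

// Toy state machine observing UI events to detect a "drag" gesture.
// A real gesture implementation would bind into the command context and issue
// a command once the gesture completes; here we just print the outcome.
enum class Ev { press, move, release };
enum class St { idle, maybeDrag, dragging };

struct DragGestureSketch
{
    St state = St::idle;
    int moves = 0;

    void feed (Ev ev)
    {
        switch (state)
        {
            case St::idle:
                if (ev == Ev::press) state = St::maybeDrag;
                break;
            case St::maybeDrag:
                if (ev == Ev::move and ++moves > 2)          // threshold: ignore jitter
                    state = St::dragging;
                else if (ev == Ev::release)
                {   state = St::idle; moves = 0; }           // it was just a click
                break;
            case St::dragging:
                if (ev == Ev::release)
                {   std::cout << "gesture complete -> would issue command now\n";
                    state = St::idle; moves = 0;
                }
                break;
        }
    }
};

int main()
{
    DragGestureSketch gesture;
    for (Ev ev : {Ev::press, Ev::move, Ev::move, Ev::move, Ev::release})
        gesture.feed (ev);
}
}}}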
[img[Access to Session Commands from UI|uml/Command-ui-access.png]]
//one specific way to prepare and issue a ~Steam-Layer-Command from the UI.//
The actual persistent operations on the session model are defined through DSL scripts acting on the session interface, and configured as a //command prototype.// Typically these need to be enriched with at least the actual subject to invoke this command on; many commands require additional parameters, e.g. some time or colour value. These actual invocation parameters need to be picked up from UI elements, sometimes even from the context of the triggering event. When all arguments are known, finally the command -- as identified by a command-ID -- can be issued on any bus terminal, i.e. on any [[tangible interface element|UI-Element]].
&rarr; CommandInvocationAnalysis

Thus an invocation trail represents one specific path leading to the invocation of a command. In the current state of the design ({{red{in late 2017}}}), this is a concept; initially it was meant to exist as object, but this approach turned out to be unnecessarily complex. We can foresee that there will be the somewhat tricky situation, where a command is ''context-bound''. In those cases, we rely on the InteractionState helper, which is to track {{red{planned 4/2017}}} an enablement entry for each possible invocation trail. Basically this means that some commands need to be prepared and bound explicitly into some context (e.g. the tracks within a sequence), while enabling and parameter binding happens automatically, driven by interaction events.
&rarr; InteractionControl

!further evolution of this concept {{red{WIP 2021}}}
* it was coined 2015-2017, with the intention to represent it as actual stateful object
* late in 2017, this design was ''postponed'' -- more or less abandoned -- since it is unable to represent the simple case in a simple way
* in spring 2021, after successfully building the backbone for [[Timeline display|GuiTimelineWidgetStructure]], an initial draft for [[dragging a clip|ClipRelocateDrag]] is on the agenda {{red{WIP 4/21}}}
* at that point {{red{in 4/21}}}, handling of [[Gestures within the UI|GuiGesture]] is reconsidered, leaning towards a system of hierarchical controllers
* //it is conceivable// that the idea of an InvocationTrail might be reinstated as a //generalised hierarchical gesture controller.//
''Note'': {{red{future plans and visions -- no clear and distinct meaning -- as of 4/21}}}
//Point on the real wall-clock time axis at which a [[render job|RenderJob]] must have delivered its action.//
After that point, further pursuing the evaluation of such a job, or any of its dependencies, is futile and can be abandoned.

!External constraints and Latency
For the typical calculation of media data frames, several dependent actions have to be carried out in proper order. The »top-level job« serves the purpose to deliver the complete media frame into some external //data sink.// When the render engine operates //time-bound// (as for real-time playback), delivery of data is acceptable only during a limited //time window// -- it starts when the data buffer (from a double buffering or round-robin scheme) becomes „open“ and ready to receive data, and it ends with the next »frame clock tick« when this data must be posted to the external hardware output or receiver. Working backward from these external constraints, it is clear that the job actually must be dispatched earlier by some margin, since carrying out all these interactions requires some processing time. This additional time margin is known as ''Latency'' -- further structured into the //output latency// inherent to the presentation mechanism, and the //engine latency// due to the overhead of coordinating the render operations. In addition to the latency, the actual ''calculation effort'' must also be accounted for, and combining these figures gives the ''schedule deadline'', which is the latest point in (real wall-clock) time where a calculation may still possibly reach its goal.
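Expressed as a small, hedged calculation sketch (variable names and figures are purely illustrative, not engine API): the schedule deadline results from subtracting latency and expected calculation effort from the externally imposed frame tick.
{{{
#include <chrono>
#include <iostream>

using namespace std::chrono;

int main()
{
    // externally imposed: the frame must be posted at the next frame-clock tick
    auto frameTick      = milliseconds(40);   // e.g. frame #1 at 25fps, relative to playback origin

    // margins to be subtracted (illustrative figures)
    auto outputLatency  = milliseconds(5);    // inherent to the presentation mechanism
    auto engineLatency  = milliseconds(2);    // overhead of coordinating the render operations
    auto calcEffort     = milliseconds(15);   // expected calculation time

    auto scheduleDeadline = frameTick - (outputLatency + engineLatency + calcEffort);

    std::cout << "latest start: " << scheduleDeadline.count()
              << "ms after playback origin\n";
}
}}}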

Most calculations are not monolithic and cannot be performed in-memory in a single run -- rather, some ''prerequisites'' must be prepared, like e.g. loading source media data from disk and decoding this data through an external library. Often prerequisites are //I/O-bound// and //intermittent// in nature, and will thus be organised as separate ''prerequisite jobs''. Deadline calculations for these are coordinated by a similar scheme and lined up backwards from the dependent job's schedule deadline.
!!!Computation of Deadlines
Pre-planning this chain of schedules is only necessary for {{{TIMEBOUND}}} playback -- in the other cases, for //best effort// or //background// computations, just the sequence of dependencies must be observed, starting the next step only after all prerequisites are ready. And while the [[Scheduler]] actually has the capability to react on such logical dependencies, for precise timing the schedule deadlines will be prepared in the JobPlanningPipeline. Working backwards from the »top-level« frame job, prerequisites are discovered incrementally by a recursive depth-first »tree exploration«. Defined by an //exploration function// within an iterator-pipeline construct, further layers of child dependencies are discovered and pushed onto a stack. At some point, a »leaf prerequisite« is reached -- and at that point the complete chain of dependencies resides within this stack, each link represented as a {{{JobPlanning}}} data record. The computation is arranged such as to link these dependent planning contexts together, allowing to determine each dependent scheduling deadline by referring to the parent {{{JobPlanning}}} record, all the way up to the root record, which aligns to a time grid point externally set by the playback timings.
//Depth-first evaluation pipeline used in the FrameDispatcher to generate the next chunk of [[render jobs|RenderJob]]//
This is an implementation structure established by the [[Dispatcher|FrameDispatcher]] and operated by the RenderDrive core of each CalcStream, where it is assembled in a //builder notation// -- backed by the {{{IterExplorer}}} iterator pipeline framework. Besides the typical filter and transform operations, the latter offers an »expansion« mechanism to integrate a //monadic exhaustive depth-first search,// allowing to pick up all prerequisites of a given calculation, proceeding step by step within a local planning context (a toy sketch follows the stage listing below).

!Stages of the Job-planning pipeline
;frame-tick
:establish a sequence of frame start times
;selector
:choose a suitable ModelPort and associated JobTicket;
:this entails to select the actually active ProcNode pipeline
;expander
:monadic depth-first exploration of calculation prerequisites
;job-planning
:collect the complete planning context and determine time frame
:*especially add a {{{DataSink}}} specification to the JobFunctor closure
:*and evaluate the dependency structure from the //exploration step// to establish Job [[deadlines|JobDeadline]].
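The following toy sketch illustrates the depth-first »expansion« idea with plain integers standing in for jobs -- it is //not// the {{{IterExplorer}}} API, merely a picture of how each frame job is expanded into its chain of prerequisites within a local planning context:
{{{
#include <functional>
#include <iostream>
#include <stack>
#include <vector>

// Toy stand-in for the job-planning pipeline: frame "jobs" are expanded
// depth-first into prerequisite IDs, mimicking the »expansion« step.
std::vector<int> planChunk (std::vector<int> frames,
                            std::function<std::vector<int>(int)> expandPrerequisites)
{
    std::vector<int> plannedJobs;
    for (int frame : frames)
    {
        std::stack<int> pending;
        pending.push (frame);
        while (not pending.empty())                // exhaustive depth-first exploration
        {
            int job = pending.top(); pending.pop();
            plannedJobs.push_back (job);
            for (int prereq : expandPrerequisites (job))
                pending.push (prereq);
        }
    }
    return plannedJobs;
}

int main()
{
    // frame jobs 100,101 each need a "decode" prerequisite (id+1000),
    // which in turn needs an "I/O" prerequisite (id+2000)
    auto expander = [](int id) -> std::vector<int> {
        if (id < 1000)  return { id + 1000 };
        if (id < 2000)  return { id + 1000 };
        return { };                                // leaf prerequisite
    };
    for (int job : planChunk ({100,101}, expander))
        std::cout << job << " ";
    std::cout << std::endl;                        // prints: 100 1100 2100 101 1101 2101
}
}}}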
The actual media data is rendered by [[individually scheduled render jobs|RenderJob]]. All these calculations together implement a [[stream of calculations|CalcStream]], as demanded and directed by the PlayProcess. During the preparation of playback, a ''node planning phase'' is performed, to arrange for [[dispatching|FrameDispatcher]] the individual calculations per frame.  The goal of these //preparations//&nbsp; is to find out
* which [[model ports|ModelPort]] can be processed independently
* what prerequisites to prepare beforehand
* what parameters to provide
The result of this planning phase is the {{{JobTicket}}}, a complete ''execution plan''.
This planning is uniform for each [[segment|Segmentation]] and treated for all channels together, resulting in a nested tree structure of sub job tickets, allocated and stored alongside with the processing nodes and wiring descriptors to form the segment's data and descriptor network. Job tickets are //higher order functions:// entering a concrete frame number and //absolute nominal time point// of that frame into a given job ticket will produce an actual job descriptor, which in itself is again a function, to be invoked through the scheduler when it's time to trigger the actual calculations.
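A hedged sketch of this »higher order function« nature, using plain {{{std::function}}} stand-ins (the real JobTicket produces job descriptors bound to JobFunctor closures and the node wiring, not lambdas):
{{{
#include <functional>
#include <iostream>
#include <string>

// Toy model: a "ticket" is a function producing a job, which is itself a function.
using Job       = std::function<void()>;
using JobTicket = std::function<Job (long frameNr, double nominalTime)>;

int main()
{
    // planning phase: bake the execution plan for one segment into a ticket
    std::string modelPort = "video-master";
    JobTicket ticket = [modelPort](long frameNr, double nominalTime) -> Job
    {
        // entering concrete frame data yields the actual job descriptor...
        return [=]{
            std::cout << "render " << modelPort
                      << " frame #" << frameNr
                      << " @ " << nominalTime << "s\n";
        };
    };

    // later, the dispatcher materialises jobs frame by frame
    Job job = ticket (25, 1.0);
    job();            // ...to be invoked through the scheduler when it's time
}
}}}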

!Structure of the Render Jobs created
To be more precise: in the general case, invoking the ~JobTicket with a given frame number will produce //multiple jobs//  -- typically each frame rendering will require at least one further source media frame; and because Lumiera render jobs will //never block waiting on IO,// this source media access will be packaged as a separate [[resource retrieving job|ResourceJob]], to be treated specifically by the scheduler.

To support the generation of multiple dependent jobs, a ~JobTicket might refer to further ~JobTickets corresponding to the prerequisites. These prerequisite ~JobTickets are the result of a classical recursive descent call into the top level ProcNode, to perform the planning step. There is always a 1:1 relation between actual jobs generated and the corresponding tickets. More precisely, for each possible job to generate there is a suitable ''job closure'', representing exactly the context information necessary to get that job started. To stress that point: a ProcNode might be configured such as to perform a series of recursive invocations of other prerequisite nodes right away (especially when those recursive invocations correspond just to further CPU bound calculations) -- yet still there is only //one//&nbsp; ~JobTicket, because there will be only one Job to invoke the whole sequence. On the other hand, when there is a prerequisite requiring an operation to be scheduled separately, a corresponding separate {{{JobClosure}}} will be referenced.
A major //business interface// &mdash; used by the layers for interfacing to each other; also to be invoked externally by scripts.
&rarr; [[overview and technical details|LayerSeparationInterfaces]]
Lumiera uses a 3-layered architecture. Separation between layers is crucial. Any communication between the layers regarding the normal operation of the application should //at least be initiated// through ''Layer abstraction interfaces'' (Facade Interfaces). This is a low-impact version of layering, because, aside from this triggering, direct cooperation of parts within the single Lumiera process is allowed, under the condition that it is implemented using additional abstractions (interfaces with implementation level granularity). We stick to the policy of //disallowing direct coupling of implementations located in different layers.//

[>img[Anatomy of a Layer Separation Interface|uml/fig132869.png]]
The goal is for the interface to remain fairly transparent for the client and to get an automatic lifecycle management. 
To implement such a structure, each layer separation interface actually is comprised of several parts:
* a C Language Interface ("''CL Interface''") to be installed into the InterfaceSystem
* a ''facade interface'' defining the respective abstractions in terms of the client side impl. language (C or C++)
* a ''service implementation'' directly addressed by the implementation instantiated within the InterfaceSystem
* a ''facade proxy'' object on the client side, which usually is given inline alongside with the CL interface definition.

!opening and closing
Handling the lifecycle can be tricky, because client and service side need to carry out the opening and closing operations in sync, observing a specific call order, to ensure calls are blocked already on the client side unless the whole interface compound is really up and running. To add to this complexity, plugins and built-in interfaces are handled differently regarding the question of who is in charge of the lifecycle: interfaces are installed on the service side, whereas the loading of plugins is triggered from the client side by requesting the plugin from the loader.
Anyway, interfaces are resources which are best managed automatically. At least within the C++ part of the application we can ensure this by using the InstanceHandle template. This way the handling of plugins and interfaces can be unified, and the opening and closing of the facade proxy happens automatically.

The general idea is that each facade interface actually provides access to a specific service; there will always be a single implementation object somewhere, which can be thought of as acting as "the" service. This service-providing object will then contain the mentioned InstanceHandle (as sketched below); thus, its lifecycle becomes identical with the service lifecycle.
* when the service relies on a [[plugin|LumieraPlugin]], this service providing object (containing the InstanceHandle) needs to sit at the service accessing side (as the plugin may not yet be loaded and someone has to pull it up).
* otherwise, when the service is just an interface to an already loaded facility, the service providing object (containing the InstanceHandle) will live on the service providing side, not the client side. Then the ctor of the service providing object needs to be able to access an interface instance definition (&rarr; InterfaceSystem); the only difference to the plugin case is to create the InstanceHandle with a different ctor, passing the interface and the implementing instance.
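The following is a generic RAII sketch of that idea -- //not// the actual InstanceHandle template, whose parameters and constructors differ, and with a made-up interface name -- showing how embedding such a handle ties the interface lifecycle to the lifetime of the service-providing object:
{{{
#include <iostream>

// Generic RAII sketch (not the actual InstanceHandle template): the handle
// opens the interface in its constructor and closes it in its destructor,
// so the service lifecycle equals the lifetime of the service object.
class InstanceHandleSketch
{
public:
    explicit InstanceHandleSketch (const char* interfaceID)
    {
        // would register with the InterfaceSystem or load the plugin here
        std::cout << "open interface " << interfaceID << "\n";
    }
   ~InstanceHandleSketch()
    {
        std::cout << "close interface\n";          // automatic de-registration
    }
};

// the service implementation object simply embeds the handle
class DummyService
{
    InstanceHandleSketch instance_{"lumieraorg_DummyService"};   // name is made up
public:
    void doIt() { std::cout << "service operation\n"; }
};

int main()
{
    DummyService service;    // interface opened here
    service.doIt();
}                            // interface closed automatically when the service goes out of scope
}}}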

!outline of the major interfaces
|!Layer|>|!Interface |
|GUI|GuiFacade|UI lifecycle &rarr; GuiStart|
|~|GuiNotificationFacade|status/error messages, asynchronous object status change notifications, trigger shutdown|
|~|DisplayFacade|pushing frames to a display/viewer|
|Steam|SessionFacade|session lifecycle|
|~|EditFacade|edit operations, object mutations|
|~|PlayerDummy|player mockup, maybe move to Vault?|
|//Lumiera's major interfaces//|c


An implementation facility used by the session manager implementation to ensure a consistent lifecycle. A template-method-like skeleton of operations can be invoked by the session manager to go through the various lifecycle stages, while the actual implementation functionality is delegated to facilities within the session. The lifecycle advisor is assumed to be a single instance, executed single-threaded within a controlled environment.

---------------------
!Implementation notes
The goal on the implementation level is to get a consolidated view on the whole lifecycle.
Reading the source code should convey a complete picture about what is going on with respect to the session lifecycle.

!!entrance points
;close
: the shutdown sequence cleanly unwinds any contents and returns into //uninitialised state.//
;reset
: is comprised of two phases: shutdown and startup of an empty new session
: shutdown will be skipped automatically when uninitialised
: startup has to inject default session content
;load
: again, first an existing session is brought down cleanly
: the startup sequence behaves similarly to __reset__, but injects serialised content
Note that, while causing a short //freeze period,// __saving__ and __(re-)building__ aren't considered lifecycle operations

!!operation sequences
The ''pull up'' sequence performs basic initialisation of the session facilities and then, after the pImpl-switch, executes the various loading and startup phases. Any previous session switched away is assumed to unwind automatically; nothing is done specifically to disconnect such an existing session -- we simply require the session subsystem to be in a pristine state initially.
The ''shut down'' sequence does exactly that: halt processing and rendering, disconnect an existing session, if any, get back into initial state. It doesn't care for unwinding session contents, which is assumed to happen automatically when references to previous session contents go out of scope.
Opening and accessing media files on disk poses several problems, most of which belong to the domain of Lumiera's data Vault. Here, we focus on the questions related to making media data available to the session and the render engine. Each media will be represented by a MediaAsset object, which indeed could be a compound object (in case of MultichannelMedia). Building this asset object thus includes getting information from the real file on disk. For delegating this to the Vault, we use the following query interface (a speculative sketch follows below):
* {{{queryFile(char* name)}}} requests accessing the file and yields some (opaque) handle when successful.
* {{{queryChannel(fHandle, int)}}} will then be issued in sequence with ascending index numbers, until it returns {{{NULL}}}.
* the returned struct (pointer) will provide the following information:
** some identifier which can be used to create a name for the corresponding media (channel) asset
** some identifier characterizing the access method (codec) needed to get at the media data. This should be rather a high level description of the media stream type, e.g. "H264"
** some (opaque) handle usable for accessing this specific stream. When the render engine later on pulls data for this channel, it will pass this handle down to the Vault.

{{red{to be defined in more detail later...}}}
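Pending that definition, a purely speculative sketch of what the interface could look like, derived from the bullet points above -- all names, types and the stub behaviour are assumptions, not a committed design:
{{{
#include <cstdio>

// Speculative sketch of the Vault-side query interface implied by the bullets above.
typedef void* fHandle;            // opaque handle yielded by queryFile()

struct ChannelInfo
{
    const char* channelID;        // basis for naming the media (channel) asset
    const char* streamType;       // high-level stream type description, e.g. "H264"
    void*       accessHandle;     // opaque handle, later passed down to the Vault
};

// stub implementation: pretend every file exposes two channels
static ChannelInfo dummyChannels[2] = { {"video-0", "H264", nullptr},
                                        {"audio-0", "PCM",  nullptr} };

fHandle queryFile (const char* name)              { return (void*) name; }
const ChannelInfo* queryChannel (fHandle, int i)  { return i < 2 ? &dummyChannels[i] : nullptr; }

int main()
{
    fHandle file = queryFile ("example.mov");
    for (int i = 0; const ChannelInfo* ch = queryChannel (file, i); ++i)
        std::printf ("channel %s (%s)\n", ch->channelID, ch->streamType);
}
}}}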

&rarr; see "~MediaAccessFacade" for a (preliminary) interface definitioin
&rarr; see "~MediaAccessMock" for a mock/test implementaion
Used to actually implement the various kinds of [[Placement]] of ~MObjects. ~LocatingPin is the root of a hierarchy of different kinds of placing, constraining and locating a Media Object. Basically, this is an instance of the ''state pattern'': The user sees one Placement object with value semantics, but when the properties of the Placement are changed, actually a ~LocatingPin object (or rather a chain of ~LocatingPins) is changed within the Placement. Subclasses of ~LocatingPin implement different placing/constraining behaviour:
* {{{FixedLocation}}} places a MObject to a fixed temporal position and track {{red{wrong! it is supposed to be an output designation}}}
* {{{RelativeLocation}}} is used to attach the MObject to some other anchor MObject
* //additional constraints, placement objectives, range restrictions, pattern rules will follow...//
All sorts of "things" to be placed and manipulated by the user in the session. This interface abstracts the details and just supposes
* the media object has a duration
* it is always //placed// in some manner, i.e. it is always accessed via a [[Placement]]
* {{red{and what else?}}}

&rarr; [[overview of the MObject hierarchy|MObjects]]
''The Problem of referring to an [[MObject]]'' stems from the object //as a concept// encompassing a wider scope than just the current implementation instance. If the object were just a runtime entity in memory, we could use a simple (language) reference or pointer. Actually, this isn't sufficient, as the object reference will pass LayerSeparationInterfaces, will be handed over to code not written in the same implementation language, will be included in an ''UNDO'' record for the UndoManager, and thus will need to be serialised and stored permanently within the SessionStorage.
Moreover [[MObject instances|MObject]] have a 2-level structure: the core object holds just the properties in a strict sense, i.e. the properties which the object //owns.// Any properties due to putting the object into a specific context, i.e. all relation properties are represented as [[Placement]] of the object. Thus, when viewed from the client side, a reference to a specific ~MObject //instance,// actually denotes a //specific//&nbsp; Placement of this object into the Session.

!Requirements
* just the reference alone is sufficient to access the placement //and//&nbsp; the core object.
* the reference needs to be valid even after internal restructuring in the object store.
* there must be a way to pass these references through serialisation/deserialisation
* we need a plain-C representation of the reference, which ideally should be incorporated into the complete implementation
* references should either be integrated into memory management, or at least it should be possible to detect dangling references
* it should be possible to make handling of the references completely transparent, if the implementation language supports doing so.

!Two models
For the implementation of the object references, as linked to the memory management in general, there seem to be the following approaches
* the handle is a simple table index. Consequently we'll need a garbage collection to deal with deleted objects. This model has several flavours
** using a generation ID to tag the index handle, keeping a translation table on each GC that maps old indices to new ones, and raising an error when encountering an outdated handle
** use a specific datastructure allowing the handles to remain valid in case of cleanup/reorganisation. (Hashtable or similar)
** keep track of all index handles and rewrite them on GC
* handles are refcounting smart pointers. This solution is C++ specific, albeit more elegant and minimalistic. As all the main interfaces should be accessible via C bindings, we'd need to use a replacement mechanism on the LayerSeparationInterfaces, which could be mapped to or handled by a C++ smart-ptr. (We used a similar approach for the PlayerDummy design study)

Obviously, the second approach has quite some appeal &mdash; but, in order to use it, we'd have to mitigate its drawbacks: it bears the danger of creating a second, separate code path for C language based clients, presumably receiving lesser care, maintenance and stress testing. The solution worked out for the player mockup (1/2009) tries at least partially to integrate the C functionality: the actual implementation is derived from the C struct used as handle, allowing the same pointers to be used for both kinds of interface, and the de-allocation happens through a call to the C dtor function, which is installed as deleter function with the boost::smart_ptr used as base class of the handle. Conceptually, the interface on this handle is related to the actual implementation referred to by the handle (handle == smart_ptr) as if the latter were a subclass of the former. But on the level of the implementation, there is no inheritance relation between the handle and the referent; notably, this allows defining the handle's interface in terms of abstract interface types usable on the client side, while the referent's interface operates on the types of the service implementation. Thus, the drawback of using a C language interface is turned into the advantage of completely separating implementation and client.

!Implementation concept
Presumably, neither of the two models is usable as-is; rather we try to reconstruct the viable properties of both, starting out with the more elegant second model. Thus, basically the ''reference is a smart-ptr'' referring to the core object. Additionally, it incorporates a ''systematic ID denoting the location of the placement''. This ID without the smart-ptr part is used for the C implementation, making the full handle implementation a shortcut for an access sequence which first queries the placement from the Session, followed by dereferencing the placement to get at the core object. Thus, the implementation builds upon another abstraction, the &rarr; PlacementRef, which in turn relies on an index within the implementation of the [[session datastructure|SessionDataMem]] to track and retrieve the actual Placement.
[img[Structure of MObjectRef and PlacementRef|uml/fig136581.png]]


!using ~MObject references
~MObject references have a distinct lifecycle: usually, they are created //empty// (invalid ref, inactive state), followed by activating them by attachment to an existing placement within the session. Later on, the reference can be closed (detached, deactivated). Activation can be done either directly by a {{{Placement<MO>&}}}, or indirectly by any {{{Placement::ID}}} tag or even plain LUID denoting a placement known to the PlacementIndex embedded within the [[Session]]. Activation can fail, because the validity of the reference is checked. In this respect, they behave exactly like PlacementRef, and indeed are implemented on top of the latter. From this point onward, the referred ~MObject is kept alive by the reference &mdash; but note, this doesn't extend to the placement, which still may be modified or removed from the session without further notice. {{red{TODO investigate synchronisation guarantees or a locking mechanism}}} Thus, client code should never store direct references to the placement.

~MObject references have value semantics, i.e. they don't have an identity and can be copied freely. Dereferencing yields a direct (language) ref to the MObject, while the placement can be accessed by a separate function {{{getPlacement()}}}. Moreover, the MObjectRef instance provides a direct API to some of the most common query functions you could imagine to call on both the object and the placement (i.e. to find out about the start time, length, ....)
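To illustrate this lifecycle (created empty &rarr; activated against the placement index &rarr; dereferenced &rarr; closed), here is a self-contained toy model -- the names are illustrative and do not reproduce the real MObjectRef / PlacementIndex API:
{{{
#include <iostream>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

// Toy model: an "object ref" starts out empty, is activated against an index
// of placements, and from then on keeps the core object alive.
struct MObjectToy { std::string name; };

std::map<int, std::shared_ptr<MObjectToy>> placementIndex   // placement-ID -> core object
    { {42, std::make_shared<MObjectToy>(MObjectToy{"clip-1"})} };

class MObjectRefToy
{
    std::shared_ptr<MObjectToy> obj_;          // smart-ptr part: keeps the core object alive
    int placementID_ = -1;                     // systematic ID denoting the placement
public:
    bool isValid() const { return bool(obj_); }

    void activate (int placementID)            // may fail: validity is checked
    {
        auto pos = placementIndex.find (placementID);
        if (pos == placementIndex.end())
            throw std::invalid_argument("unknown placement");
        obj_ = pos->second;
        placementID_ = placementID;
    }
    MObjectToy& operator*() { return *obj_; }
    void close() { obj_.reset(); placementID_ = -1; }
};

int main()
{
    MObjectRefToy ref;                // created empty (inactive)
    ref.activate (42);                // attach to an existing placement
    std::cout << (*ref).name << "\n"; // dereferencing yields the core object
    ref.close();                      // detach again
    std::cout << std::boolalpha << ref.isValid() << "\n";
}
}}}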
The ~MObjects Subsystem contains everything related to the [[Session]] and the various Media Objects placed within. It is complemented by the Asset Management (see &rarr; [[Asset]]). Examples of [[MObjects|MObject]] (&rarr; def) are:
* audio/video clips
* [[effects and plugins|EffectHandling]]
* special facilities like mask and projector
* [[Automation]] sets
* labels and other (maybe functional) markup

This Design strives to achieve a StrongSeparation between the low-level Structures used to carry out the actual rendering and the high level Entities living in the session and being manipulated by the user. In this high level view, the Objects are grouped and located by [[Placements|Placement]], providing a flexible and open way to express different groupings, locations and ordering constraints between the Media Objects.
&rarr; EditingOperations
&rarr; PlacementHandling
&rarr; SessionOverview


[img[Classes related to the session|uml/fig128133.png]]
The HighLevelModel consists of MObjects, which are attached to one another through their [[Placement]]. While this is a generic scheme to arrange objects in a tree of [[scopes|PlacementScope]], some attachments are handled specifically and may trigger side-effects

{{red{drafted feature as of 6/2010}}}

* a [[binding|BindingMO]] attached to root is linked to a [[Timeline]]
* a [[Fork]] attached to root corresponds to a [[Sequence]]

&rarr; see ModelDependencies
''[[Lumiera|http://Lumiera.org/documentation]]''
[[Dev Notepad|CoreDevelopment]]
[[Session]]
[[Wiring]]
[[GUI|GuiTopLevel]]
[[Implementation|ImplementationDetails]]
[[Admin]]
Problem is: when removing an Asset, all corresponding MObjects need to disappear. This means, besides the obvious ~Ref-Link (MObject referring to an asset) we need backlinks or a sort of registry. And still worse: we need to remove the affected MObject from the object network in the session and rebuild the Fixture...

&rarr; for a general design discussion see [[Relation of Clip and Asset|RelationClipAsset]]

//Currently// Ichthyo considers the following approach:
* all references between MObjects and Assets are implemented as __refcounting__ boost::shared_ptr
* the opposite direction is also a __strong reference__, effectively keeping the clip-MO alive even if it is no longer in use in the session (note this is a cyclic dependency that needs to be actively broken on deletion). This design decision is based on logical considerations (&rarr; see "deletions, Model-2" [[here|RelationClipAsset]]). This back-link is implemented by a Placement stored internally within the asset::Clip; it is needed for clean deletion of Assets, for GUI search functions and for adding the Clip to the session (again after having been removed, or multiple times as if cloned).
* MObjects and Assets implement an {{{unlink()}}} function releasing any internal links causing circular dependencies. It is always implemented such as to drop //optional// associations while retaining those associations mandatory for fulfilling the objects contract.
* Instead of a delete, we call this unlink() function and let the shared_ptr handle the actual deletion. Thus, even if the object is already unlinked, it is still valid and usable as long as some other entity holds a smart-ptr. An ongoing render process for example can still use a clip asset and the corresponding media asset linked as parent, but this media asset's link to the dependent clip has already been cleared (and the media is no longer registered with the AssetManager of course).
* so the back-link from the dependent asset up to the parent asset is mandatory for the child asset to be functional, but preferably it should be {{{const}}} (only used for information retrieval)
* the whole hierarchy has to be crafted accordingly, but this isn't much of a limitation
* we don't use a registry, rather we model the real dependencies by individual dependency links. So a MediaAsset gets links to all Clips created from this Asset and by traversing this tree, we can handle the deletion
* after the deletion, the Fixture needs to be rebuilt.
* but any render processes still can have pointers to the Asset to be removed, and the shared_ptr will ensure, that the referred objects stay alive as long as needed.

{{red{let's see if this approach works...}}}
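The core of the approach -- strong links in both directions, with {{{unlink()}}} dropping only the optional back-link so the shared_ptr cycle can be reclaimed -- can be illustrated by this simplified, self-contained sketch (class names are stand-ins, not the actual Lumiera types):
{{{
#include <iostream>
#include <memory>

// Toy illustration: Asset and clip-MO hold strong (refcounting) links to each
// other; unlink() drops the optional back-link, breaking the cycle so that
// shared_ptr can reclaim both objects once nobody uses them any more.
struct ClipMO;

struct MediaAssetToy
{
    std::shared_ptr<ClipMO> dependentClip;      // back-link: asset -> clip (optional)
    void unlink() { dependentClip.reset(); }    // drop optional associations only
   ~MediaAssetToy() { std::cout << "asset destroyed\n"; }
};

struct ClipMO
{
    std::shared_ptr<MediaAssetToy> media;       // mandatory link: clip -> parent asset
   ~ClipMO() { std::cout << "clip destroyed\n"; }
};

int main()
{
    auto asset = std::make_shared<MediaAssetToy>();
    auto clip  = std::make_shared<ClipMO>();
    clip->media = asset;
    asset->dependentClip = clip;                // cyclic dependency established

    asset->unlink();                            // instead of delete: break the cycle
    asset.reset();                              // asset stays alive while the clip refers to it
    std::cout << "asset ref dropped, clip still holds it\n";
    clip.reset();                               // now both objects are reclaimed
}
}}}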
Contrary to the &rarr;[[Assets and MObjects|ManagementAssetRelation]], the usage pattern for [[render nodes|ProcNode]] is quite simple: all nodes are created together every time a new segment of the network is built and will be used together until this segment is re-built, at which point they can be thrown away altogether. While it would be easy to handle the nodes automatically by smart-ptr (creation is accessible only through the {{{NodeFactory}}} anyway), it //seems advisable to care for a bulk allocation/deallocation here.// The reason is not so much the amount of memory (which is expected to be moderate), but the fact that the build process can be triggered repeatedly, several times a second, when tweaking the session, which could lead to fragmentation and memory pressure.

__10/2008__: the allocation mechanism can surely be improved later, but for now I am going for a simple implementation based on heap-allocated objects owned by a vector of smart-ptrs. For each segment of the render node network, we have several families of objects, each of which will be maintained by a separate low-level memory manager (as said, for now implemented as a vector of smart-ptrs). Together, they form an AllocationCluster; all objects contained in such a cluster will be destroyed together.
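A minimal sketch of this interim scheme -- one object family owned by a vector of smart-ptrs, destroyed in bulk when the segment is discarded (the real AllocationCluster adds custom low-level allocation per type):
{{{
#include <iostream>
#include <memory>
#include <vector>

// Sketch: each object family within a segment is owned by a vector of smart-ptrs;
// dropping the cluster destroys everything belonging to that segment in one go.
struct ProcNodeToy
{
    int id;
    explicit ProcNodeToy(int i) : id(i) {}
   ~ProcNodeToy() { std::cout << "destroy node " << id << "\n"; }
};

class AllocationClusterToy
{
    std::vector<std::unique_ptr<ProcNodeToy>> nodes_;   // one "family" of objects
public:
    ProcNodeToy& createNode (int id)
    {
        nodes_.push_back (std::make_unique<ProcNodeToy>(id));
        return *nodes_.back();
    }
};

int main()
{
    {
        AllocationClusterToy segment;          // built when (re)building one segment
        segment.createNode (1);
        segment.createNode (2);
        std::cout << "segment in use...\n";
    }                                          // segment discarded: bulk deallocation
    std::cout << "all nodes of the segment gone\n";
}
}}}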
The Interface asset::Media is a //key abstraction.// It ties together several concepts and enables dealing with them on the interfaces in a uniform manner. Besides, like every Asset kind, it belongs rather to the bookkeeping view: an asset::Media holds the specific properties and parametrisation of the media source it stands for. Regarding the __inward interface__ &mdash; as used from within the [[model|HighLevelModel]] or the [[render nodes|ProcNode]] &mdash; it is irrelevant whether any given asset::Media object stands for a complete media source, just a clip taken from this source, or whether a placeholder version of the real media source is used instead.
[img[Asset Classess|uml/fig130437.png]]
{{red{NOTE 3/2010:}}} Considering to change that significantly. Especially considering to collapse clip-asset and clip-MO into a single entity with multiple inheritance
&rarr; regarding MultichannelMedia (see the notes at bottom)

&rarr; see also LoadingMedia
The ~Steam-Layer is designed such as to avoid unnecessary assumptions regarding the properties of the media data and streams. Thus, for anything which is not completely generic, we rely on an abstract [[type description|StreamTypeDescriptor]], which provides a ''Facade'' to an actual library implementation. This way, the fundamental operations can be invoked, like allocating a buffer to hold media data.

In the context of Lumiera and especially in the Steam-Layer, __media implementation library__ means
* a subsystem which allows to work with media data of a specific kind
* such as to provide the minimal set of operations
** allocating a frame buffer
** describing the type of the data within such a buffer
* and this subsystem or external library has been integrated to be used through Lumiera by writing adaptation code for accessing these basic operations through the [[implementation facade interface|StreamTypeImplFacade]]
* such a link to an type implementation is registered and maintained by the [[stream type manager|STypeManager]]

!Problem of the implementation data types
Because we deliberately won't make any assumptions about the implementation library (besides the ones imposed indirectly by the facade interface), we can't integrate the data types of the library as first-class citizens into the type system. All we can do is use marker types and rely on the builder to have checked the compatibility of the actual data beforehand.
It would be possible to circumvent this problem by requiring all supported implementation libraries to be known at compile time, because then the actual media implementation type could be linked to a facade type by generic programming. Indeed, Lumiera follows this route with regard to the possible kinds of MObject or [[Asset]] &mdash; but on the contrary, for the problem in question here, being able to include support for a new media data type just by adding a plugin by far outweighs the benefits of compile-time checked implementation type selection. So, as a consequence of this design decision, we //note the possibility that the media file type discovery code is misconfigured// and selects the //wrong implementation library at runtime.// Thus the render engine needs to be prepared for the source reading node of any pipe to flounder completely, and must protect the rest of the system accordingly.
Of course: Cinelerra currently leaks memory and crashes regularly. For the newly written code, besides retaining the same level of performance, a main goal is to use methods and techniques known to support the writing of quality code. So, besides the MultithreadConsiderations, a solid strategy for managing the ownership of allocated memory blocks is necessary right from the start.

!Problems
# Memory management needs to work correct in a //fault tolerant environment//. That means that we need to be prepared to //handle on a non-local scale// some sorts of error conditions (without aborting the application). To be more precise: some error condition arises locally, which leads to a local abort and just the disabling/failing of some subsystem without affecting the application as a whole. This can happen on a regular base (e.g. rendering fails) and thus is __no excuse for leaking memory__
# Some (not all) parts of the core application are non-deterministic. That means, we can't tie the memory management to any assumptions on behalf of the execution path

!C++ solution
First of all -- this doesn't concern //every// allocation. It rather means there are certain //dangerous areas// which need to be identified. Anyhow, instead of carrying the inherent complexities of the problem into the solution, we should rather look for common solution pattern(s) which help to factor out complexity.

For the case in question here, this seems to be the __R__esource __A__cquisition __I__s __I__nitialisation pattern (''RAII''), which basically boils down to never using bare pointers when concerned with ownership. Client code always gets to use a wrapper object, which cannot be obtained unless going through some well defined construction site. As an extension to the basic RAII pattern, C++ allows us to build //smart wrapper objects//, thereby delegating any kind of de-registration or resource freeing automatically. This usage pattern doesn't necessarily imply (and in fact isn't limited to) just ref-counting.

!!usage scenarios
# __existence is being used__: Objects just live for being referred to in an object network. In this case, use refcounting smart-pointers for every ref. (note: problem with cyclic refs)
# __entity bound ownership__: Objects can be tied to some long living entity in the program, which holds the smart-pointer
#* if the existence of this ref-holding entity can be //guaranteed// (as if by contract), then the other users can build an object network with conventional pointers
#* otherwise, when the ref-holding entity //can disappear// in a regular program state, we need weak-refs and checking (because by our postulate the controlled resource needs to be destructed immediately; otherwise we would have the first case, existence == being used)
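
Both scenarios map directly onto standard C++ smart pointers. The following minimal sketch (using invented placeholder types, not Lumiera classes) shows how the two cases differ in practice:
{{{
#include <memory>
#include <cassert>

struct Node { int value = 0; };                    // scenario 1: lives while referred to

struct Owner                                       // scenario 2: entity bound ownership
{
    std::shared_ptr<Node> owned = std::make_shared<Node>();
};

int main()
{
    // (1) existence == being used: every participant holds a shared_ptr
    std::shared_ptr<Node> a = std::make_shared<Node>();
    std::shared_ptr<Node> b = a;                   // refcount keeps the node alive
    a.reset();
    assert (b);                                    // still valid

    // (2) the owning entity may disappear: others hold weak refs and must check
    std::weak_ptr<Node> observer;
    {
        Owner owner;
        observer = owner.owned;
        assert (!observer.expired());
    }                                              // owner gone => resource destroyed immediately
    assert (observer.expired());                   // users detect this instead of dangling
    return 0;
}
}}}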

!!!dangerous uses
* the render nodes &rarr; [[detail analysis|ManagementRenderNodes]] {{red{TODO}}}
* the MObjects in the session &rarr; [[detail analysis|ManagementMObjects]] {{red{TODO}}}
* Asset - MObject relationship. &rarr; [[detail analysis|ManagementAssetRelation]] {{red{TODO}}}

!!!rather harmless
* Frames (buffers), because they belong to a given [[RenderProcess (=StateProxy)|StateProxy]] and are just passed into the individual [[ProcNode]]s. This can be handled consistently with conventional methods.
* each StateProxy belongs to one top-level call to the ~Controller-Facade
* similar for the builder tools, which belong to a build process. Moreover, they are pooled and reused.
* the [[sequences|Sequence]] and the defined [[assets|Asset]] belong together to one [[Session]]. If the Session is closed, this means an internal shutdown of the whole ProcLayer, i.e. closing of all GUI representations and terminating all render processes. If these calls are implemented as blocking operations, we can assert that as long as any GUI representation or any render process is running, there is a valid session and model.

!using Factories
And, last but not least, doing large scale allocations is the job of the Vault. Exceptions being long-lived objects, like the session or the sequences, which are created once and don't bear the danger of causing memory pressure. Generally speaking, client code shouldn't issue "new" and "delete" just because it seems convenient. Questions of setup and lifecycle should always be delegated, typically through the usage of some [[factory|Factories]], which might return the product conveniently wrapped into a RAII style handle. Memory allocation is crucial for performance, and needs to be adapted to the actual platform -- which is impossible unless abstracted and treated as a separate concern.
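
As a rough illustration of this guideline -- hypothetical names, not the actual Lumiera factory interfaces -- such a factory might look like this:
{{{
// Client code never writes "new"/"delete"; it asks a factory, which hides the
// allocation strategy and hands back the product in a RAII handle.
#include <memory>
#include <string>
#include <iostream>

class Session
{
    std::string name_;
    Session (std::string n) : name_(std::move(n)) {}   // construction only via factory
    friend class SessionFactory;
public:
    std::string const& name() const { return name_; }
};

class SessionFactory
{
public:
    // the concrete allocation scheme (pooling, arena, plain heap...) stays
    // an implementation detail behind this call
    std::unique_ptr<Session> operator() (std::string name) const
    {
        return std::unique_ptr<Session> (new Session (std::move (name)));
    }
};

int main()
{
    SessionFactory buildSession;
    auto session = buildSession ("default");   // RAII handle, released automatically
    std::cout << session->name() << '\n';
}
}}}
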
This category encompasses the various aspects of the way the application controls and manages its own behaviour. These assets relate to how the application behaves, as opposed to the way the edited data is structured and organised (which is the realm of [[structural assets|StructAsset]]) &rarr; {{red{Ticket #1156}}}
* StreamType &rarr; a type system for describing and relating media data streams
* ScaleGrid &rarr; to manage time scales and frame alignment
* {{{ErrorLog}}} &rarr; collect incident records

!accessing meta assets
It turns out that all meta assets follow a distinct usage pattern: //they aren't built as individual entities.// Either they are introduced into the system as part of a larger scale configuration activity, or they are //derived from a category.// The latter fits in with a prototype-like approach; initially, the individual entry just serves to keep track of a categorisation, while at some point, such a link into a describing category may evolve into a local differentiation of some settings.

Another distinct property of meta assets is to be just a generic front-end to some very specific data entry, which needs to be allocated and maintained and provided on demand. Consider for example configuration rules, which have both a textual and an AST representation and will be assembled and composed into an effective rule set, depending on usage scope. Another example would be the enormous amount of parameter data created by parameter automation in the session. While certainly the raw data needs to be stored and retrieved somehow, the purpose of the corresponding meta asset is to access and manipulate this data in a structured and specific fashion.

!!!self referential structure
These observations lead to a design relying on a self referential structure: each meta asset is a {{{meta::Descriptor}}}. In the most basic version -- as provided by the generic implementation {{{asset::Meta}}} -- this descriptor is just a link to another descriptor, which represents a category. Thus, meta assets are created or accessed by
* just an EntryID, which implicitly also establishes a type, the intent being "get me yet another of this kind"
* a descriptor and an EntryID, to get a new element with a more distinct characterisation.

!!!mutating meta assets
Meta assets are ''immutable'' -- but they can be //superseded.//
For each meta asset instance, initially a //builder// is created for setting up the properties; when done, the builder will "drop off" the new meta asset instance. The same procedure is used for augmenting or superseding an existing element.
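
A minimal sketch of this build-then-freeze cycle, with invented placeholder types (the real {{{asset::Meta}}} builder differs in detail):
{{{
// Properties are set on a builder; commit() "drops off" a new immutable
// descriptor, which supersedes the previous one.
#include <memory>
#include <string>
#include <utility>

struct MetaDescriptor
{
    std::string id;
    std::string category;
    int version;
};

class MetaBuilder
{
    MetaDescriptor draft_;
public:
    explicit MetaBuilder (MetaDescriptor base) : draft_(std::move(base)) {}

    MetaBuilder& category (std::string c) { draft_.category = std::move(c); return *this; }

    // finish the builder: yield a new immutable instance with bumped version
    std::shared_ptr<const MetaDescriptor> commit()
    {
        ++draft_.version;
        return std::make_shared<const MetaDescriptor> (draft_);
    }
};

int main()
{
    auto original   = std::make_shared<const MetaDescriptor> (MetaDescriptor{"timeGrid.pal", "TimeGrid", 1});
    auto superseded = MetaBuilder(*original).category("TimeGrid/25fps").commit();
    return (superseded->version == original->version + 1) ? 0 : 1;   // new immutable element supersedes the old one
}
}}}
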
Lumiera's Steam-Layer is built around //two interconnected models,// mediated by the [[Builder]]. Basically, the &rarr;[[Session]] is an external interface to the HighLevelModel, while the &rarr;RenderEngine operates the structures of the LowLevelModel.
Our design of the models (both [[high-level|HighLevelModel]] and [[low-level|LowLevelModel]]) relies partially on dependent objects being kept consistently in sync. Currently (2/2010), __ichthyo__'s assessment is that this topic is not important and pervasive enough to justify building a dedicated solution, like e.g. a central tracking and registration service. An important point to consider with this assessment is the fact that the session implementation is deliberately kept single-threaded. While this simplifies reasoning, we also lack one central place to handle this issue, and thus care has to be taken to capture and treat all the relevant individual dependencies properly at the implementation level.

!known interdependencies
[>img[Fundamental object relations used in the session|uml/fig136453.png]]
* the session API relies on two kinds of facade like assets: [[Timeline]] and [[Sequence]], linked to the BindingMO and [[Fork]] ("track") objects within the model respectively.
* conceptually, the DefaultsManagement and the AssetManager count as being part of the [[global model scope|ModelRootMO]], but, due to their importance, these facilities are accessible through a singleton interface.
* currently (as of 2/2010) the exact dependency of the automation calculation during the render process on the automation definitions within the HighLevelModel remains to be specified.

!!Timelines and Sequences
While implemented as StructAssets, we additionally need to ensure that every instance gets linked to the relevant parts of the model and registered with the session. Contrast this with other kinds of assets, which may just remain enlisted, but never actually used.

;the Session
:...is linked 1:1 with timelines and sequences. Registration and deregistration is directly tied to creation and destruction.
: __created__ &rArr; default timeline
: __destroy__ &rArr; discard all timelines, discard all sequences

;Timeline
:acts as facade and is implemented by a root-attached BindingMO. Can't exist in isolation.
: __created__ &rArr; create a dedicated new binding, either using an existing sequence, or a newly created empty sequence
: __destroy__ &rArr; remove binding, while the previously bound sequence remains in model.
;root-placed Binding
:while generally a Binding can just exist somewhere in the model, when attached to root, a Timeline will be created
: __created__ &rArr; an existing sequence might be given on creation, otherwise a default configured sequence is created
: __destroy__ &rArr; implemented by detaching from root (see below) prior to purging from the model.
: __attached__ to root &rArr; invoke Timeline creation
: __detached__ from root &rArr; will care to destroy the corresponding timeline

;Sequence
:is completely dependent on a root-scoped "track" (fork root); can optionally be bound into one or several timelines or a VirtualClip, or remain unbound
: __created__ &rArr; mandates specification of a track-MO, (necessarily) placed into root scope &mdash; {{red{TODO: what to do with non-root tracks?}}}
: __destroy__ &rArr; destroy any binding using this sequence, purge the corresponding track from model, if applicable, including all contents
: __querying__ &rArr; forwards to creating a root-placed track, unless the queried sequence exists already
;root-placed Track
:attachment of a track to root scope is detected magically and causes creation of a Sequence
: __attached__ to root &rArr; invoke sequence creation
: __detached__ from root &rArr; find and detach from corresponding sequence, which is then destroyed. //Note:// contents remain unaffected
: irrespective of whether the track exists, is empty, or gets purged entirely, only the connection to root scope counts; thus relocating a track by [[Placement]] might cause its scope with all nested contents to become a sequence of its own or become part of another sequence. As sequences aren't required to be bound into a timeline, they may be present in the model as invisible, passive containers

!!Magic attachments
While generally the HighLevelModel allows all kinds of arrangements and attachments, certain connections are [[detected automatically|ScopeTrigger]] and may trigger special actions, like the creation of Timeline or Sequence façade objects as described above. The implementation of such [[magic attachments|MagicAttachment]] relies on the PlacementIndex.
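
A very rough sketch of how such a trigger mechanism could be wired -- purely illustrative, not the actual PlacementIndex or ScopeTrigger code:
{{{
// The index notifies registered triggers whenever a placement enters the root
// scope; a trigger reacts by creating the corresponding facade object.
#include <functional>
#include <vector>
#include <string>
#include <iostream>

struct Placement { std::string type; };        // stand-in for Placement<MObject>

class ScopeTriggerRegistry
{
    std::vector<std::function<void(Placement const&)>> onRootAttach_;
public:
    void whenAttachedToRoot (std::function<void(Placement const&)> fun)
    {
        onRootAttach_.push_back (std::move (fun));
    }
    void attachToRoot (Placement const& p)     // invoked by the index on insertion
    {
        for (auto& trigger : onRootAttach_) trigger (p);
    }
};

int main()
{
    ScopeTriggerRegistry index;
    index.whenAttachedToRoot ([](Placement const& p)
        {
          if (p.type == "Binding") std::cout << "create Timeline facade\n";
          if (p.type == "Fork")    std::cout << "create Sequence facade\n";
        });
    index.attachToRoot (Placement{"Binding"});
    index.attachToRoot (Placement{"Fork"});
}
}}}
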
When it comes to addressing and distinguishing object instances, there are two different models of treatment, and usually any class can be related to one of these: An object with ''value semantics'' is completely defined through this "value", and not distinguishable beyond that. Usually, value objects can be copied, handled and passed freely, without any ownership. To the contrary, an object with ''reference semantics'' has a unique identity, even if otherwise completely opaque. It is rather like a facility, "living" somewhere, often owned and managed by another object (or behaving specially in some other way). Usually, client code deals with such objects through a reference token (which has value semantics). Care has to be taken with //mutable objects,// as any change might influence the object's identity. While this usually is acceptable for value objects, it is prohibited for objects with reference semantics. These are typically created by //factories// &mdash; and this fabrication is the only process to define the identity.

!Assets
Each [[Asset]] holds an identification tuple; the hash derived from this constant tuple is used as ~Asset-ID.
* the {{{Asset::Ident}}} tuple contains the following information
*# a __name-ID__, which is a human understandable but sanitised word
*# a tree-like classification of the asset's __category__, comprised of
*#* asset kind {{{{AUDIO, VIDEO, EFFECT, CODEC, STRUCT, META}}}}
*#* a path in a virtual classification tree. Some &raquo;folders&laquo; have magic meanings
*# an __origin-ID__ to denote the origin, authorship or organisation, acting like a namespace
*# a __version number__.
Of these, the tuple {{{(org,category,name)}}} defines the unique asset identity, and is hashed into the asset-ID. At most one asset with a given ident-tuple (including version) can exist in the whole system. Any higher version is supposed to be fully backwards compatible with all previous versions. Zero is reserved for internal purposes. {{red{1/10: shouldn't we extend the version to (major,minor), to match the plug-in versioning?}}} Thus, the version can be incremented without changing the identity, but the system won't allow co-existence of multiple versions.
* Assets are ''equality'' comparable (even ''ordered''), based on their //identity// &mdash; sans version.
* Categories are ''equality'' comparable value objects
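
The following sketch condenses this identity scheme; it is an illustrative reduction, not the actual {{{Asset::Ident}}} declaration:
{{{
// The (org, category, name) tuple is hashed into the asset-ID, while the
// version is excluded from identity and equality.
#include <string>
#include <functional>
#include <cstddef>
#include <cassert>

struct Ident
{
    std::string name;       // sanitised, human readable name-ID
    std::string category;   // e.g. "VIDEO/footage"
    std::string org;        // origin / authorship, acts like a namespace
    unsigned    version;    // backwards compatible increments; 0 is reserved
};

inline std::size_t assetID (Ident const& i)
{
    // combine the hashes of the identity-relevant fields only
    std::size_t h = std::hash<std::string>{}(i.org);
    h ^= std::hash<std::string>{}(i.category) + 0x9e3779b9 + (h<<6) + (h>>2);
    h ^= std::hash<std::string>{}(i.name)     + 0x9e3779b9 + (h<<6) + (h>>2);
    return h;
}

inline bool operator== (Ident const& a, Ident const& b)
{
    return assetID(a) == assetID(b);          // equality disregards the version
}

int main()
{
    Ident v1{"clip-1", "VIDEO/footage", "lumiera.org", 1};
    Ident v2 = v1;  v2.version = 2;
    assert (v1 == v2);                        // same identity, differing version
    return 0;
}
}}}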

!~MObjects
As of 1/10, MObjects are mostly placeholders or dummies, because the actual SessionLogic has still to be defined in detail.

The following properties can be considered as settled:
* reference semantics
* non-copyable, created by MObjectFactory, managed automatically
* each ~MObject is associated n:1 to an asset.
* besides that, it has an opaque instance identity, which is never made explicit.
* this identity springs from the way the object is created and manipulated. It is //persistent.//
* because of the ~MObject-identity's nature, there is //no point in comparing ~MObjects.//
* ~MObjects are always attached to the session by a placement, which creates kind-of an //instance//
* thus, because placements act as a subdivision of ~MObject identification, in practice it is always placements which get compared.

!Placements
[[Placements|Placement]] are somewhat special, as they mix value and reference semantics. First off, they are configuration values: copyable, and acting as smart-pointers referring to a primary subject (clip, effect, fork, label, binding,....). But, //by adding a placement to the session,// we create a unique instance-identity. This is implemented by copying the placement into the internal session store and thereby creating a new hash-ID, which is then registered within the PlacementIndex. Thus, a ''placement into the model'' has a distinct identity.
* Placements are ''equality'' comparable, based on this instance identity (hash-ID)
* besides, there is an equivalence relation regarding the "placement specification" contained in the [[locating pins|LocatingPin]] of the Placement.
** they can be compared for ''equivalent definition'': the contained definitions are the same and in the same order
** alternatively, they can be checked for ''effective equivalence'': both placements to be compared resolve to the same position
note: the placement equality relation carries over to ~PlacementRef and ~MObjectRef
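
A small sketch contrasting these comparison levels, using simplified hypothetical types rather than the real Placement implementation:
{{{
// Identity compares the instance hash-ID gained when added to the session,
// while the equivalence check compares the placement specification.
#include <vector>
#include <string>
#include <cstdint>
#include <cstddef>

struct LocatingPin { std::string spec; };     // e.g. "absolute(25s)" or "relative(clip-3)"

struct Placement
{
    uint64_t instanceID = 0;                  // hash-ID assigned when placed into the model
    std::vector<LocatingPin> pins;
};

// identity: only placements *in the model* with the same hash-ID are equal
bool operator== (Placement const& a, Placement const& b)
{
    return a.instanceID && a.instanceID == b.instanceID;
}

// equivalent definition: same locating pins in the same order
bool equivalentDefinition (Placement const& a, Placement const& b)
{
    if (a.pins.size() != b.pins.size()) return false;
    for (std::size_t i = 0; i < a.pins.size(); ++i)
        if (a.pins[i].spec != b.pins[i].spec) return false;
    return true;
}

int main()
{
    Placement p1{1, {{"absolute(25s)"}}};
    Placement p2{2, {{"absolute(25s)"}}};
    return (p1 == p2) ? 1                      // different instances...
         : !equivalentDefinition(p1,p2);       // ...yet equivalently defined => returns 0
}
}}}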

!Commands
{{red{WIP}}} For now, commands are denoted by a unique, human-readable ID, which is hard-coded in the source. We might add an LUID and a version numbering scheme later on.
Commands are used as ''prototype object'' &mdash; thus we face special challenges regarding the identity, which haven't yet been addressed.

!References and Handles
These are used as token for dealing with other objects and have no identity of their own. PlacementRef tokens embody a copy of the referred placement's hash-ID. MObjectRef handles are built on top of the former, additionally holding a smart-ptr to the primary subject.
* these reference handles simply reflect the equality relation on placements, by virtue of comparing the hash-ID.
* besides, we could build upon the placement locating chain equivalence relations to define a semantic equivalence on ~MObjectRefs

Any point where output possibly might be produced. Model port entities are located within the [[Fixture]] &mdash; model port as a concept spans the high-level and low-level view. A model port can be associated both to a pipe in the HighLevelModel but at the same time denotes a set of corresponding [[exit nodes|ExitNode]] within the [[segments|Segmentation]] of the render nodes network. As far as rendering is concerned, a port corresponds to a [[stream of calculations|CalcStream]] to produce a distinct output data stream, generated time-bound yet asynchronously.

A model port is rather derived than configured; it emerges when a pipe [[claims|WiringClaim]] an output destination, while some other entity at the same time actually //uses this designation as a target,// either directly or indirectly. This match of provision and usage is detected during the build process and produces an entry in the fixture's model port table. These model ports in the fixture are keyed by ~Pipe-ID, thus each model port has an associated StreamType.

Model ports are the effective, resulting outputs of each timeline; additional ports result from [[connecting a viewer|ViewConnection]]. Any render or display process happens at a model port. The //granularity// of the ports is defined by what data has to be delivered time-bound and together for any conceivable output activity.

!formal specification
Model port is a //conceptual entity,// denoting the possibility to pull generated data of a distinct (stream)type from a specific bus within the model -- any possible output produced or provided by Lumiera is bound to appear at a model port. The namespace of model ports is global, each being associated with a ~Pipe-ID. Media can be inherently multi-channel, and when it is delivered as a whole (with multiplexed or planar channels), then such a possible media output stream is represented by a single model port.

Model ports are represented by small non-copyable descriptor objects with distinct identity, which are owned and managed by the [[Fixture]]. Clients are mandated to resolve a model port on each usage, as configuration changes within the model might cause ports to appear and vanish. To stress this usage pattern, actually {{{ModelPort}}} instances are small copyable value objects (smart handles), which can be used to access data within an opaque registry. Each model port belongs to a specific Timeline and is aware of this association, as is the timeline, allowing to get the collection of all ports of a given timeline. Besides, within the Fixture each model port refers to a specific [[segmentation of the time axis|Segmentation]] (relevant for this particular timeline). Thus, with the help of this segmentation, a model port can yield an ExitNode to pull frames for a given time.

Model ports are conceptual entities, denoting the points where output might possibly be produced &rarr; see [[definition|ModelPort]].
But there is an actual representation, a collection of small descriptor objects managed by the Fixture and organised within the model port table datastructure. Because model ports are discovered during the build process, we need the ability to (re)build this table dynamically, finally swapping in the modified configuration with a transactional switch. Only the builder is allowed to perform such mutations, while for client code the model ports are immutable.

!supported operations
* get the model port by ~Pipe-ID
* a collection of model ports per timeline
* sub-grouping by media kind, possibly even by StreamType
* possibility to enumerate model ports in distinct //order,// defined within the timeline
* with the additional possibility to filter by media kind or suitable stream type.
* a way to express the fact of //not having a model port.//

!!!mutating and rebuilding
Model ports are added once and never changed. The corresponding timeline and pipe are known at setup time, but the overall number of model ports is determined only as a result of completing the build process. At that point, the newly built configuration is swapped in transactionally to become the current configuration.

!Implementation considerations
The transactional switch creates a clear partitioning in the lifespan of the model port table. //Before// that point, entries are just added, but not accessed in any way. //After// that point, no further mutation occurs, but lookup is frequent and happens in a variety of different configurations and transient orderings.

This observation leads to the idea of using //model port references// as frontend to provide all kinds of access, searching and reordering. These encapsulate the actual access by silently assuming reference to "the" global current model port configuration. This way the actual model port descriptors could be bulk allocated in a similar manner as the processing nodes and wiring descriptors. Access to stale model ports could be detected by the port references, allowing also for a {{{bool}}} checkable "has no port" information.

A model port registry, maintained by the builder, is responsible for storing the discovered model ports within a model port table, which is then swapped in after completing the build process. The {{{builder::ModelPortRegistry}}} acts as management interface, while client code accesses just the {{{ModelPort}}} frontend. A link to the actual registry instance is hooked into that frontend when bringing up the builder subsystem.
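
A condensed sketch of this access scheme -- a copyable handle resolving against the current registry on every use -- with a bare-bones stand-in for the registry (illustrative only, not the real {{{builder::ModelPortRegistry}}}):
{{{
// ModelPort is a small copyable handle keyed by Pipe-ID; every access resolves
// against the current registry, so stale handles can be detected.
#include <unordered_map>
#include <string>
#include <stdexcept>

struct PortDescriptor { std::string pipeID; std::string streamType; };

class PortRegistry
{
    std::unordered_map<std::string, PortDescriptor> table_;
public:
    void install (PortDescriptor d) { table_[d.pipeID] = d; }   // builder-only mutation
    PortDescriptor const* lookup (std::string const& pipeID) const
    {
        auto it = table_.find (pipeID);
        return it == table_.end() ? nullptr : &it->second;
    }
};

class ModelPort                                  // copyable value handle
{
    PortRegistry const* reg_;
    std::string pipeID_;
public:
    ModelPort (PortRegistry const& r, std::string id) : reg_(&r), pipeID_(std::move(id)) {}

    explicit operator bool() const { return reg_->lookup (pipeID_) != nullptr; }

    std::string streamType() const               // resolved anew on each usage
    {
        auto* d = reg_->lookup (pipeID_);
        if (!d) throw std::logic_error ("stale model port: " + pipeID_);
        return d->streamType;
    }
};

int main()
{
    PortRegistry registry;
    registry.install ({"video.master", "video/raw"});
    ModelPort port (registry, "video.master");
    ModelPort gone (registry, "audio.master");
    return port && !gone ? 0 : 1;
}
}}}
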
A special kind of MObject, serving as a marker or entry point at the root of the HighLevelModel. As any ~MObject, it is attached to the model by a [[Placement]]. And in this special case, this placement forms the ''root scope'' of the model, thus containing any other PlacementScope (e.g. forks, clips with effects,...)

This special ''session root object'' provides a link between the model part and the &raquo;bookkeeping&laquo; part of the session, i.e. the [[assets|Asset]]. It is created and maintained by the session (implementation level) &mdash; allowing to store and load the asset definitions as contents of the model root element. Beyond that, this //root scope representation// serves another, closely related purpose: conceptually, structures of the HighLevelModel are mapped into the UI, by virtue of the [[diff framework|TreeDiffModel]]. Corresponding to each relevant entity in the model, there is a widget or a controller in the UI to serve as [[tangible front-end|UI-Element]] to expose this entity. And in accordance to this regime, the session root object is mapped onto the [[InteractionDirector|UI-InteractionDirector]] (top-level controller), which, in conjunction with {{{UiManager}}} and UI-Bus, forms the leading structure of the UI-Layer.

__Note__: nothing within the PlacementIndex requires the root object to be of a specific type; the index just assumes a {{{Placement<MObject>}}} (or subclass) to exist as root element. And indeed, for several unit tests we create an Index mock with a tree of dummy ~MObjects and temporarily shadow the "real" PlacementIndex by this mock (&rarr; see SessionServices for the API allowing to access this //mock index// functionality)

Based on practical experience, Ichthyo tends to consider Multichannel Media as the base case, while counting media files providing just one single media stream as exotic corner cases. This may seem counter-intuitive at first sight; you should think of it as an attempt to avoid, right from the start, some of the common shortcomings found in many video editors, especially
* having to deal with keeping a "link" between audio and video clips
* silly limitations on the supported audio setups (e.g. "sound is mono, stereo or Dolby-5.1")
* unnecessary complexity when dealing with more realistic setups, esp. when working on dialogue scenes
* inability to edit stereoscopic (3D) video in a natural fashion

!Compound Media
Basically, each [[media asset|MediaAsset]] is considered to be a compound of several elementary media (tracks), possibly of various different media kinds. Adding support for placeholders (''proxy clips'') at some point in future will add still more complexity (because then there will be even dependencies between some of these elementary media). To handle, edit and render compound media, we need to impose some structural limitations. But anyhow, we try to configure as much as possible already at the "asset level" and make the rest of the Steam-Layer behave just according to the configuration given with each asset.
{{red{Note 1/2015}}}: various details regarding the model representation of multichannel media aren't fully settled yet. There is a placeholder in the source, which can be considered more or less obsolete

!!Handling within the Model
* from a Media asset, we can get a [[Processing Pattern (ProcPatt)|ProcPatt]] describing how to build a render pipeline for this media
* we can create a Clip (MObject) from each Media, which will be linked back to the media asset internally.
* media can be compound internally, but will be handled as a single unit as long as possible. Even for compound media, we get a single clip.
* within the assets, we get a distinct ChannelConfig asset to represent the structural assembly of compound media based on elementary parts.
* sometimes it's necessary to split and join the individual channels in processing, for technical reasons. Clearly the goal is to hide this kind of accidental complexity and treat it as an implementation detail. At HighLevelModel scope, conceptually there is just one "stream" for each distinct kind of media, and thus there is only one [[Pipe]]. But note, compound media can be structured beyond having several channels. The typical clip is a compound of image and sound. While still handled as one unit, this separation can't be concealed entirely; some effects can be applied to sound or image solely, and routing needs to separate sound and image at least when reaching the [[global pipes|GlobalPipe]] within the timeline.
* the Builder gets at the ProcPatt (descriptor) of the underlying media for each clip and uses this description as a template to build the render pipeline. That is, the ProcPatt specifies the codec asset and maybe some additional effect assets (deinterlace, scale) necessary for feeding media data corresponding to this clip/media into the render nodes network.

!!Handling within the engine
While initially it seems intuitive just to break down everything into elementary stream processing within the engine, unfortunately a more in-depth analysis reveals that this approach isn't viable. There are some crucial kinds of media processing which absolutely require having //all channels available at the actual processing step.// Notable examples being sound panning, reverb and compressors. The same holds true for processing of 3D (stereoscopic) images. In some cases we can get away with replicating identical processor nodes over the multiple channels and applying the same parameters to all of them. The decision which approach to take here is a tricky one, and requires much in-depth knowledge about the material to be processed -- typically the quality and the image or sound fidelity depend on these kinds of minute distinctions. Much otherwise fine existing software falls short in this domain. Making such fine points accessible through [[processing rules|Rules]] is one of the core goals of the Lumiera project.

As an immediate consequence of not being able to reduce processing to elementary stream processing, we need to introduce a [[media type system|StreamType]], allowing us to reason and translate between the conceptual unit at the model level, the compound of individual processing chains in the builder and scheduling level, and the still necessary individual render jobs to produce the actual data for each channel stream. Moreover it is clear that the channel configuration needs to be flexible, and not automatically bound to the existence of a single media with that configuration. And last but not least, through this approach, we also enable handling of nested sequences as virtual clips with virtual (multichannel) media.

&rArr; conclusions
* either the ~ClipMO refers directly to a ChannelConfig asset &mdash; or in case of the VirtualClip a BindingMO takes this role.
* as the BindingMO is also used to implement the top-level timelines, the treatment of global and local pipes is united
* every pipe (bus) should be able to carry multiple channels, //but with the limitation to only a single media StreamType//
* this "multichannel-of-same-kind" capability carries over to all entities within the build process, including ModelPort entries and even the OutputSlot elements
* in playback / rendering, within each "Feed" (e.g. for image and sound) we get [[calculation streams|CalcStream]] matching the individual channels
* thus, only as late as when allocating / "opening" an OutputSlot for actual rendering, we get multiple handles for plain single channels.
* the PlayProcess serves to join both views, providing a single PlayController front-end, while [[dispatching|FrameDispatcher]] to single channel processing.
//message on the UI-Bus to cause changes on the targeted UI-Element.//
The UI-Bus offers a dedicated API to direct ~MutationMessages towards {{{Tangible}}} elements, as designated by the given ID. Actually, such messages serve as //capsule to transport a [[diff-sequence|TreeDiffModel]]// -- since a diff sequence as such is always concrete and tied to a specific context, we cannot represent it directly as an abstract type at interface level. The receiver of a diff sequence must offer the ability to be reshaped through diff messages, which is expressed through the interface {{{DiffMutable}}}.

In the case at hand, the basic building block of the Lumiera UI, the {{{Tangible}}} offers this interface and thus the ability to construct a concrete TreeMutator, which in turn is bound to the internals of the actual UI-Element in question. Together this allows for a generic implementation of MutationMessage handling, where the designated UI-Element is reshaped by applying a concrete diff sequence embedded in the message with the help of a {{{DiffApplicator<DiffMutable>}}}, based on the TreeMutator exposed.
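
A highly simplified sketch of this mechanism; the verb set and class names are invented stand-ins, the real diff framework is considerably richer:
{{{
// A mutation message transports an opaque sequence of diff verbs towards an
// element, which exposes only the generic ability to be reshaped.
#include <vector>
#include <string>
#include <algorithm>
#include <iostream>

struct DiffVerb { std::string op; std::string arg; };          // e.g. {"ins", "clip-2"}

using MutationMessage = std::vector<DiffVerb>;                 // capsule for a diff sequence

struct DiffMutable                                             // ability to be reshaped by diff
{
    virtual ~DiffMutable() = default;
    virtual void applyVerb (DiffVerb const&) = 0;
};

struct TimelineWidget : DiffMutable                            // stand-in for a Tangible UI element
{
    std::vector<std::string> clips;
    void applyVerb (DiffVerb const& v) override
    {
        if (v.op == "ins") clips.push_back (v.arg);
        if (v.op == "del") clips.erase (std::remove (clips.begin(), clips.end(), v.arg), clips.end());
    }
};

void applyDiff (DiffMutable& target, MutationMessage const& msg)   // generic applicator
{
    for (auto const& verb : msg) target.applyVerb (verb);
}

int main()
{
    TimelineWidget widget;
    applyDiff (widget, MutationMessage{{"ins","clip-1"}, {"ins","clip-2"}, {"del","clip-1"}});
    std::cout << widget.clips.size() << '\n';                  // 1
}
}}}
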
//Service to navigate through the UI as generic structure.//
The Navigator is a component maintained by the InteractionDirector, and the actual implementation is backed by several facilities of the GuiTopLevel. It serves as foundation to treat the UI as a topological network of abstracted locations, represented as [[UI-Coordinates|UICoord]]. This design, together with the UI-Bus helps to reduce coupling within the UI implementation, since it enables to //get somewhere// and reach //some place// -- without the necessity to rely on concrete widget implementation structure.

!The problem of abstraction
This goal initially poses some challenge, since it aims beyond the conventional notion of UI programming, where it is sufficient just to wire some widgets and manipulate the receiver of an event notification. The usual UI widgets are just not meant to be treated in a more systematic, generic way -- and indeed, in most cases and for most purposes it is not a good idea to approach UI programming in an overly schematic way. User interfaces need to be tangible, something concrete, with lots of specifics and local traits. Yet this very nature of UI programming tends to turn some //cross-cutting concerns// into serious liabilities. So the deliberate ''decision to introduce a Navigator'', to avoid these future burdens and liabilities, is a decision of priorities when it comes to shaping the Lumiera UI.

This leaves us with the task of mapping a generic location scheme onto a load of implementation defined structures not exposing any kind of genericity, and of accomplishing this within an environment lacking meta information and support for self discovery beyond the most basic abstraction level. As a first step towards bridging this gap, we'll have to identify the actual //command-and-query operations// required to treat UI elements as a topological structure.

!!!Analysis of expected functionality
In order to build a navigation facility, we need to...
* follow a path
** which means to constitute a location
** and to discover child nodes at that location
* and we might want to extend (maybe also prune) the collection of children

!!!Use cases
In the current situation ({{red{10/2017}}}), before engaging into the actual implementation, we're able to identify two distinct use cases
;View [[specification|GuiComponentView]]
:locate a view based on a preconfigured placement
:* either to allocate a new view instance
:* or to get //just some instance// of a view identified by type
;WorkSite navigation
:move the Spot to some other place in the UI known by its [[UI-Coordinates|UICoord]]
!!!{{red{Update 2/2018:}}} changed responsibilities
Elaboration on the topic of »View Allocation« caused some minor architectural rearrangements.
* Navigator became a pure information service (read-only), working on an abstracted view of the UI through [[UI coordinates|UICoord]]
* the ViewLocator became service point for any high-level access to GuiComponentView elements

!!!Requirements clarified
From these use cases we conclude that the actual requirements for a Navigator component are less than one might expect.
In fact it is sufficient to keep //the actual element// entirely opaque, so the Navigator works on [[UI coordinates|UICoord]] solely. The result -- some other UI coordinates -- can then be used to accomplish some tasks implemented elsewhere, like allocating a new view or actually moving [[the Spot|Spot]] (&rarr; InteractionControl)

!Challenges of the implementation
Some tricky problems remain to be solved, though: since the Navigator works on UI coordinates, the fundamental problem remains how to acquire the initial coordinates to start navigation. This is a problem of //reverse lookup:// given a concrete element of the UI, find its UI coordinates. We should note, though, that it might not be necessary to "discover" coordinates, because in fact we may know them already -- either the element has to store them (on creation), or some lookup index table could be maintained to serve the same purpose. The actual access to low-level UI entities generates a host of further technicalities, which we attempt to stash away into a different low-level service, the [[gui:ctrl::ElmAccessDir|UILowLevelAccess]].

Moreover, the design of coordinate matching and resolving incurs a structure similar to [[render job planning|FrameDispatcher]] -- and the corresponding design problems remain to be solved in a satisfactory way &rarr; [[some notes...|AboutMonads]]

Various aspects of the individual [[render node|ProcNode]] are subject to configuration and may influence the output quality or the behaviour of the render process.
* the selection of //which// actual implementation (plugin) to use for a formally defined &raquo;[[Effect|EffectHandling]]&laquo;
* the intermediary/common StreamType to use within a [[Pipe]]
* the render technology (CPU, hardware accelerated {{red{&rarr; Future}}})
* the ScheduleStrategy (possibly subdividing the calculation of a single frame)
* if this node becomes a possible CachePoint or DataMigrationPoint in RenderFarm mode
* details of picking a suitable [[operation mode|RenderImplDetails]] of the node (e.g. utilising "in-place" calculation)
~NodeCreatorTool is a [[visiting tool|VisitorUse]] used as second step in the [[Builder]]. Starting out from a [[Fixture]], the builder first [[divides the Timeline into segments|SegmentationTool]] and then processes each segment with the ~NodeCreatorTool to build a render nodes network (Render Engine) for this part of the timeline. While visiting individual Objects and Placements, the ~NodeCreatorTool creates and wires the necessary [[nodes|ProcNode]]
!Problem of Frame identification

!Problem of Node numbering
In the most general case the render network is a DAG, not necessarily a tree. In particular, multiple exit points may lead down to the same node, and following each of these possible paths the node may be found at a different depth. This rules out a simple counter starting from the exit level, leaving us with the possibility of either employing a rather convoluted addressing scheme or using arbitrary ID numbers.{{red{...which is what we do for now}}}
{{red{⚠ In-depth rework underway as of 7/2024...}}}
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
The [[nodes|ProcNode]] are wired to form a "Directed Acyclic Graph"; each node knows its predecessor(s), but not its successor(s).  The RenderProcess is organized according to the ''pull principle'', thus we find an operation {{{pull()}}} at the core of this process. Meaning that there isn't a central entity to invoke nodes consecutively. Rather, the nodes themselves contain the detailed knowledge regarding prerequisites, so the calculation plan is worked out recursively. Yet still there are some prerequisite resources to be made available for any calculation to happen. So the actual calculation is broken down into atomic chunks of work, resulting in a 2-phase invocation whenever "pulling" a node. For this to work, we need the nodes to adhere to a specific protocol:
;planning phase
:when a node invocation is foreseeable to be required for getting a specific frame for a specific nominal and actual time, the engine has to find out the actual operations to happen
:# the planning is initiated by issuing a "get me output" request, finally resulting in a JobTicket
:# recursively, the node propagates "get me output" requests for its prerequisites
:# after retrieving the planning information for these prerequisites, the node encodes specifics of the actual invocation situation into a closure called StateAdapter <br/>{{red{TODO: why not just labeling this &raquo;~StateClosure&laquo;?}}}
:# finally, all this information is packaged into a JobTicket representing the planning results.
;pull phase
:now the actual node invocation is embedded within a job, activated through the scheduler to deliver //just in time.//
:# Node is pulled, with a StateProxy object as parameter (encapsulating BufferProvider for access to the required frames or buffers)
:# Node may now retrieve current parameter values, using the state accessible via the StateProxy
:# to prepare for the actual {{{process()}}} call, the node now has to retrieve the input prerequisites
:#* when the planning phase determined availability from the cache, then just these cached buffer(s) are now retrieved, dereferencing a BuffHandle
:#* alternatively the planning might have arranged for some other kind of input to be provided through a prerequisite Job. Again, the corresponding BuffHandle can now be dereferenced
:#* Nodes may be planned to have a nested structure, thus directly invoking {{{pull()}}} call(s) to prerequisite nodes without further scheduling
:# when input is ready prior to the {{{process()}}} call, output buffers will be allocated by locking the output [[buffer handles|BuffHandle]] prepared during the planning phase
:# since all buffers and prerequisites are available, the Node may now prepare a frame pointer array and finally invoke the external {{{process()}}} to kick off the actual calculations
:# finally, when the {{{pull()}}} call returns, "parent" state originating the pull holds onto the buffers containing the calculated output result.
{{red{WIP as of 9/11  -- many details here are still to be worked out and might change as we go}}}

{{red{Update  8/13  -- work on this part of the code base has stalled, but now the plan is to get back to this topic when coding down from the Player to the Engine interface and from there to the NodeInvocation. The design as outlined above was mostly coded in 2011, but never really tested or finished; you can expect some reworkings and simplifications, but basically this design looks OK}}}

some points to note:
* the WiringDescriptor is {{{const}}} and precalculated while building (remember another thread may call in parallel)
* when a node is "inplace-capable", input and output buffer may actually point to the same location
* but there is no guarantee for this to happen, because the cache may be involved (and we can't overwrite the contents of a cache frame)
* nodes in general may require N inputs and M output frames, which are expected to be processed in a single call
* some of the technical details of buffer management are encapsulated within the BufferTable of each invocation

&rarr; the [["mechanics" of the render process|RenderMechanics]]
&rarr; more fine grained [[implementation details|RenderImplDetails]]
The calculations for rendering and playback are designed with a base case in mind: calculating a linear sequence of frames consecutive in time.
But there are several important modes of playback, which violate that assumption...
* jump-to / skip
* looping
* pause
* reversed direction
* changed speed
** slow-motion
** fast-forward/backward shuffling 
* scrubbing
* free-wheeling (as fast as possible) 

!search for a suitable implementation approach {{red{WIP 1/2013}}}
The crucial point seems to be when we're picking a starting point for the planning related to a new frame. &rarr; {{{PlanningStepGenerator}}}
Closely related is the question when and how to terminate a planning chunk, what to dispose as a continuation, and when to cease planning altogether.

!requirement analysis
These non linear playback modes do pose some specific challenges on the overall control structure distributed over the various collaborators within the play and render subsystem. The following description treats each of the special modes in relation to this engine context
;jumping
:creates a discontinuity in //nominal time,// while the progress of real wall clock time deadlines remains unaffected
:we need to distinguish two cases
:* a //pre planned jump// can be worked into the frame dispatch just like normal progression. It doesn't create any additional challenge on timely delivery
:* to the contrary, a //spontaneous re-adjustment of playback position// deprives the engine of any delivery headroom, forcing us to catch up anew.<br/>&rarr; we should introduce a configurable slippage offset, to be added on the real time deadlines in this case, to give the engine some head start
:since each skip might create an output discontinuity, the de-clicking facility in the output sink should be notified explicitly (implying that we need an API, reachable from within the JobClosure)
;looping
:looped playback is implemented as repeated skip at the loop boundary; as such, this case always counts as pre planned jump.
;pausing
:paused playback represents a special state of the engine, where we expect playback to be able to commence right away, with minimal latency
:&rarr;we need to take several coordinated measures to make this happen
:* when going to paused state, previously scheduled jobs aren't cancelled, rather rescheduled to background rendering, but in a way which effectively pins the first frames within cache
:* additionally, the OutputSlot needs to provide a special mode where output is //frozen// and any delivery is silently blocked. The reason is, we can't cancel already triggered output jobs
:* on return to normal playback, we need to ensure that the availability of cached results will reliably prevent superfluous prerequisite jobs from being scheduled at all &rarr; we need conditional prerequisites
:The availability of such a pausing mechanism has several ramifications. We could conceive an auto-paused mode, switching to playback after sufficient background pre rendering to ensure the necessary scheduling headroom. Since the link to wall clock time and thus the real time deadlines are established the moment actual playback starts, we might even transition through auto paused mode whenever playback starts from scratch into some play mode.
;single stepping
:this can be seen as application of paused mode: we'd schedule a single play-out job, as if resuming from paused state, but we re-enter paused state immediately
;reversed play direction
:while basically trivial to implement, the challenge lies in possible naive implementation decisions assuming monotonic ascending frame times. Additionally, media decoders might need some hinting...
:however, reversed (and speed adjusted) sound playback is surprisingly tricky -- even the most simplistic solution forces us to insert an effect processor into the calculation path.
;speed variations
:the relation between nominal time and real wall clock time needs to include a //speed factor// (see the sketch after this list).
;fast cueing
:the purpose of cueing is to skip through a large amount of material to spot some specific parts. For this to work, the presented material needs to be still recognisable in some way. Typically this is done by presenting small continuous chunks of material interleaved with regular skips. For editing purposes, this method is often perceived as lacking, especially by experienced editors. The former mechanical editing systems, to the contrary, had the ability to run with actually increased frame rate, without skipping any material.
:&rarr; for one, this is a very specific configuration of the loop play mode.
:&rarr; it is desirable to improve the editor's working experience here. We might consider actually increasing the frame rate, given the increased availability of high-framerate capable displays. Another approach would be to apply some kind of granular synthesis, dissolving several consecutive segments of material. The latter would prompt to include a specific buffering and processing device not present in the render path for normal playback. Since we do dedicated scheduling for each playback mode, we're indeed in a position to implement such facilities.
;scrubbing
:the actual scrubbing facility is an interactive device, typically even involving some kind of hardware control interface. But the engine needs to support scrubbing, which translates into chasing a playback target which is re-adjusted regularly. The implementation facilities discussed thus far are sufficient to implement this feature, combined with the ability of live changes to the playback mode.
;free-wheeling
:at first sight, this feature is trivial to implement. All it takes is to ignore any real time scheduling targets, so it boils down to including a flag into the [[timing descriptor|Timings]]. But there is a catch. Since free-wheeling only makes sense for writing to a file-like sink, the engine might actually be overrunning the consumer. In the end, we get to choose between the following alternatives: do we allow the output jobs to block in that case, or do we want to implement some kind of flow regulation?
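
Picking up the speed factor and slippage offset mentioned above, a minimal sketch of the nominal-to-wall-clock mapping (plain arithmetic on microsecond ticks; the actual Timings implementation is more involved):
{{{
// Deadlines are derived from the nominal frame time via a speed factor,
// anchored at the wall clock moment playback was (re)started, plus an
// optional slippage offset granted after a spontaneous jump.
#include <cstdint>
#include <cassert>

struct Timings
{
    int64_t wallAnchor;      // wall clock time (µs) when playback was (re)started
    int64_t nominalAnchor;   // nominal timeline position (µs) at that moment
    double  speed;           // 1.0 normal, 0.5 slow motion, -1.0 reversed
    int64_t slippage;        // extra head start (µs) granted after a spontaneous jump

    // deadline (wall clock) at which the frame for a nominal time must be ready
    int64_t deadlineFor (int64_t nominalTime) const
    {
        return wallAnchor + slippage
             + int64_t ((nominalTime - nominalAnchor) / speed);
    }
};

int main()
{
    Timings normal  {1'000'000, 0,  1.0, 0};
    Timings reverse {1'000'000, 0, -1.0, 50'000};         // reversed, with 50ms head start

    assert (normal.deadlineFor (40'000)   == 1'040'000);  // 40ms ahead in nominal time
    assert (reverse.deadlineFor (-40'000) == 1'090'000);  // nominal time runs backwards
    return 0;
}
}}}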

!essential implementation level features
Drawing from this requirement analysis, we might identify some mandatory implementation elements necessary to incorporate these playback modes into the player and engine subsystem.
;for the __job planning stage__:
:we need a way to interact with a planning strategy, included when constituting the CalcStream
:* ability for planned discontinuities in the nominal time of the "next frame"
:* ability for placing such discontinuities repeatedly, as for looped playback
:* allow for interleaved skips and processed chunks, possibly with increased speed
:* ability to inject meta jobs under specific circumstances
:* placing callbacks into these meta jobs, e.g. for auto-paused mode
;for the __timings__:
:we need flexibility when establishing the deadlines
:* allow for an added offset when re-establishing the link between nominal and real time on replanning and superseding of planned jobs
:* flexible link between nominal and real time, allowing for reversed playback and changed speed
:* configurable slippage offset
;for the __play controller__:
:basically all changes regarding non linear playback modes are initiated and controlled here
:* a paused state
:* re-entering playback by callback
:* re-entering paused state by callback
:* a link to the existing feeds and calculation streams for superseding the current planning
:* use a strategy for fast-cueing (interleaved skips, increased speed, higher framerate, change model port on-the-fly to use a preconfigured granulator device)
;for the __scheduler interface__:
:we need some structural devices actually to implement those non-standard modes of operation
:* conditional prerequisites (prevent evaluation, re-evaluate later)
:* special "as available" delivery, both for free-wheeling and background
:* a special way of "cancelling" jobs, which effectively re-schedules them into background.
:* a way for hinting the cache to store background frames with decreasing priority, thus ensuring the foremost availability of the first frames when picking up playback again
;for the __output sinks__:
:on the receiver side, we need some support to generate smooth and error free output delivery
:* automated detection of timing glitches, leading to activation of the discontinuity handling (&raquo;de-click facility&laquo;)
:* low-level API for signalling discontinuity to the OutputSlot. This information pertains to the currently delivered frame -- this is necessary when this frame //is actually being delivered timely.//
:* high-level API to switch any ~OutputSlot into "frozen mode", disabling any further output, even in case of accidental delivery of further data by jobs currently in progression.
:* ability to detect and signal overload of the receiver, either through blocking or for flow-control
Cinelerra2 introduced OpenGL support for rendering previews. I must admit, I am very unhappy with this, because
* it just supports some hardware
* it makes building difficult
* it can't handle all color models Cinelerra is capable of
* it introduces a separate codepath including some complicated copying of video data into the textures (and back?)
* it can't be used for rendering

So my judgement would be: contrary to a realtime/gaming application, for quality video editing it is not worth the effort to implement OpenGL support in all details and with all its complexity. I would accept ~OpenGL as an option, if it could be pushed down into a library, so it can be handled and maintained transparently and doesn't bind our limited developer manpower.

But because I know the opinions on this topic are varying (users tend to be delighted if they hear "~OpenGL", because it seems to be linked to the notion of "speed" and "power" these days) &mdash; I try to integrate ~OpenGL as a possibility into this design of the Render Engine. But I insist on putting up the requirement that it //must not jeopardize the code structure.//

My proposed approach is to treat OpenGL as a separate video raw data type, requiring separate and specialized [[Processing Nodes|ProcNode]] for all calculations. Thus the Builder could connect OpenGL nodes if it is possible to cover the render path in whole or partially, or maybe even just for preview.
A low-level abstraction within the [[Builder]] &mdash; it serves to encapsulate the details of making multi-channel connections between the render nodes: In some cases, a node can handle N channels internally, while in other cases we need to replicate the node N times and wire each channel individually. As it stands, the OperationPoint marks the ''borderline between high-level and low-level model'': it is invoked in terms of ~MObjects and other entities of the high-level view, but internally it manages to create ProcNode and similar entities of the low-level model.

The operation point is provided by the current BuilderMould and used by the [[processing pattern|ProcPatt]] executing within this mould and conducting the current build step. The operation point's interface allows //to abstract//&nbsp; these details, as well as to //gain additional control//&nbsp; if necessary (e.g. addressing only one of the channels). The most prominent build instruction used within the processing patterns (which is the instruction {{{"attach"}}}) relies on the aforementioned //approach of abstracted handling,// letting the operation point determine automatically how to make the connection.

This is possible because the operation point has been provided (by the mould) with information about the media stream type to be wired, which, together with information accessible at the [[render node interface|ProcNode]] and from the [[referred processing assets|ProcAsset]], with the help of the [[connection manager|ConManager]] allows to figure out what's possible and how to do the desired connections. Additionally, in the course of deciding about possible connections, the PathManager is consulted to guide strategic decisions regarding the [[render node configuration|NodeConfiguration]], possible type conversions and the rendering technology to employ.
An ever recurring problem in the design of Lumiera's ~Steam-Layer is how to refer to output destinations, and how to organise them.
Wiring the flexible interconnections between the [[pipes|Pipe]] should take into account both the StreamType and the specific usage context ([[scope|PlacementScope]]) -- and the challenge is to avoid hard-linking of connections and tangling with the specifics of the target to be addressed and connected. This page, started __6/2010__ by collecting observations to work out the relations, arrives at defining a //key abstraction// of output management.

!Observations
* effectively each [[Timeline]] is known to expose a set of global Pipes
* when connecting a Sequence to a Timeline or a VirtualClip, we also establish a mapping
* this mapping translates possible media stream channels produced by the sequence into (real) output slots located in the enclosing entity
* uncertainty on who has to provide the global pipes, implementation wise &mdash;
** as Timeline is just a façade, BindingMO has to expose something which can be referred for attaching effects (to global pipes)
** when used as VirtualClip, there is somehow a channel configuration, either as asset, or exposed by the BindingMO
* Placements always resolve at least two dimensions: time and output. The latter means that a [[Placement]] can figure out to where to connect
* the resolution ability of Placements could help to overcome the problems in conjunction with a VirtualClip: missing output destination information could be inherited down....
* expanding on the basic concept of a Placement in N-dimensional configuration space, this //figuring out// would denote the ability to resolve the final output destination
* this resolution to a final destination is explicitly context dependent. We engage into quite some complexities to make this happen (&rarr; BindingScopeProblem)
* [[processing patterns|ProcPatt]] are used for creating nodes on the source network of a clip, and similarly for fader, overlay and mixing into a summation pipe
* in case the fork ("track tree") of a sequence doesn't contain specific routing advice, connections will be done directly to the global pipes in order and by matching StreamType (i.e. typically video to video master, audio to stereo audio master). When a monitor (viewer window) is attached to a timeline, similar output connections are made from the timeline's global pipes, i.e. the video display will take the contents of the first video (master) bus, and the first stereo audio pipe will be pulled and sent to system audio out.
* a mismatch between the system output possibilities and the stream type of a bus to be monitored should result in the same adaptation mechanism to kick in, as is used internally, when connecting an ~MObject to the next bus. Possibly we'll use separate rules in this case (allow 3D to flat, stereo to mono, render 5.1 into Ambisonics...)

!Conclusions
* there are //direct, indirect//&nbsp; and //relative//&nbsp; referrals to an output designation
* //indirect// means to derive the destination transitively (looking at the destination's output designation and so on)
* //relative// in this context means that we refer to "the N^^th^^ of this kind" (e.g. the second video out)
* we need to attach some metadata with an output; especially we need an associated StreamType
* the referral to an output designation can be observed on and is structured into several //levels://
** within the body of the model, mostly we address output destinations relatively
** at some point, we'll address a //subgroup// within the global pipes, which acts like a summation sink
** there might be //master pipe(s),// collecting the output of various subgroups
** finally, there is the hardware output or the distinct channel within the rendered result &mdash; never to be referenced explicitly
!!!relation to Pipes
in almost every case mentioned above, the output designation is identical with the starting point of a [[Pipe]]. This might be a global pipe or a ClipSourcePort. Thus it sounds reasonable to use pipe-~IDs directly as output designation. Pipes, as an accountable entity (=asset) just //leap into existence by being referred.// On the other hand, the //actual//&nbsp; pipe is a semantic concept, a specific structural arrangement of objects. Basically it means that some object acts as attachment point and thereby //claims//&nbsp; to be the entrance side of a pipe, while other processor objects chain up in sequence.
!!!system outputs
System level output connections seem to be an exception to the above rule. There is no pipe at an external port, and these externally visible connection points can appear and disappear, driven by external conditions. Yet the question is if system outputs shall be directly addressable at all as output designation? Generally speaking, Lumiera is not intended to be a system wide connection manager or a real time performance software. Thus it's advisable to refrain from direct referrals to system level connections from within the model. Rather, there should be a separate OutputManagement to handle external outputs and displays, both in windows or full screen. So these external outputs become rather a matter of application configuration &mdash; and for all the other purposes we're free to ''use pipe-~IDs as output designation''.
!!!consequences of mentioning an output designation
The immediate consequence is that a [[Pipe]] with the given ID exists as an accountable entity. Only if &mdash; additionally &mdash; a suitable object within the model //claims to root this pipe,//  is a connection to this designation wired, using an appropriate [[processing pattern|ProcPatt]]. A further consequence is that the mentioned output designation becomes a possible candidate for a ModelPort, an exit node of the built render node network. By default, only those designations without further outgoing connections actually become active model ports (but an existing and wired pipe exit node can be promoted to a model port explicitly).
&rarr; OutputManagement

!!Challenge: mapping of output designations
An entire sequence can be embedded within another sequence as a VirtualClip. While for a simple clip there is a Clip-~MObject placed into the model, holding a reference to a media asset, in the case of a virtual clip a BindingMO takes on the role of the clip object. Note that, within another context, BindingMO also acts as [[Timeline]] implementation &mdash; indeed even the same sequence might be bound simultaneously as a timeline and into a virtual clip. This flexibility creates a special twist, as the contents of this sequence have no way of finding out whether they're used as a timeline or as an embedded virtual clip. So parts of this sequence might mention an OutputDesignation which, when using the sequence as virtual clip, needs to be translated into a virtual media channel, which can be treated in the same manner as any channel (video, audio,...) found within real media. Especially, a new output designation has to be derived in a sensible way, which largely depends on how the original output designation has been specified.
&rarr; see OutputMapping regarding details and implementation of this mapping mechanism




!Output designation definition
OutputDesignation is a handle, denoting a target [[Pipe]] to connect.
It exposes a function to resolve to a Pipe, and to retrieve the StreamType of that resolved output. It can be ''defined'' either explicitly by ~Pipe-ID, or by an indirect or relative specification. The latter cases are resolved on demand only (which may happen later and might even change the meaning, depending on the context). It's done this way intentionally to gain flexibility and avoid hard wiring (= an explicit ~Pipe-ID).
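To make this more tangible, here is a minimal C++ sketch of such a handle. All type and member names ({{{PipeID}}}, {{{StreamKind}}}, the context operations) are illustrative assumptions, not the actual Lumiera interface; the real implementation differs in detail (see the notes below).
{{{
#include <variant>
#include <optional>

// Placeholder types standing in for the real Lumiera entities (assumed names)
struct PipeID { unsigned id; };
enum class StreamKind { VIDEO, AUDIO };

class OutputDesignation
{
    struct Absolute { PipeID target; };                    // explicitly given Pipe-ID
    struct Indirect { PipeID referred; };                  // follow that pipe's own designation
    struct Relative { StreamKind kind; unsigned seqNr; };  // "the N-th bus of this kind"

    using Spec = std::variant<Absolute,Indirect,Relative>;
    Spec spec_;

    explicit OutputDesignation (Spec s) : spec_{std::move(s)} { }

public:
    static OutputDesignation byPipe   (PipeID target)              { return OutputDesignation{Absolute{target}}; }
    static OutputDesignation indirect (PipeID referred)            { return OutputDesignation{Indirect{referred}}; }
    static OutputDesignation relative (StreamKind k, unsigned nr)  { return OutputDesignation{Relative{k,nr}}; }

    // resolution happens on demand only and is context dependent;
    // the context is modelled here as a lookup facility passed in by the caller
    template<class CON>
    std::optional<PipeID> resolve (CON const& context)  const
    {
        if (auto abs = std::get_if<Absolute>(&spec_)) return abs->target;
        if (auto ind = std::get_if<Indirect>(&spec_)) return context.followDesignation (ind->referred);
        auto rel = std::get<Relative>(spec_);
        return context.nthBusOfKind (rel.kind, rel.seqNr);
    }
};
}}}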

!!!Implementation notes
Thus the output designation needs to be a copyable value object, yet opaque beyond that. Because of the various ways to specify an output designation, hidden state arises regarding the partial resolution. The implementation solves that dilemma by relying on the [[State|http://en.wikipedia.org/wiki/State_pattern]] pattern in combination with an opaque in-place buffer.
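The following sketch illustrates that combination: a copyable value object placement-constructs one of several hidden state implementations into a small embedded buffer, so clients only ever see the opaque value type. Names, the buffer size and the single concrete state shown are simplifying assumptions.
{{{
#include <new>
#include <cstddef>
#include <optional>

struct PipeID { unsigned id; };     // placeholder (assumed)

class Designation                   // copyable value object, opaque for clients
{
    struct State                    // State pattern: hidden resolution strategy
    {
        virtual ~State() { }
        virtual std::optional<PipeID> resolve()          const =0;
        virtual void                  cloneInto (void*)  const =0;
    };

    static constexpr std::size_t SIZ = 4 * sizeof(void*);
    alignas(std::max_align_t) std::byte buf_[SIZ];      // opaque in-place buffer

    State*       state()       { return reinterpret_cast<State*>(buf_); }        // std::launder omitted for brevity
    State const* state() const { return reinterpret_cast<State const*>(buf_); }

    struct Absolute : State         // one concrete state: explicitly given Pipe-ID
    {
        PipeID target_;
        Absolute (PipeID t) : target_{t} { }
        std::optional<PipeID> resolve()           const override { return target_; }
        void                  cloneInto (void* b) const override { new(b) Absolute{*this}; }
    };
    static_assert (sizeof(Absolute) <= SIZ, "state object must fit into the in-place buffer");
    // further states (indirect or relative designation) would be placed into the same buffer

public:
    explicit Designation (PipeID target)  { new(buf_) Absolute{target}; }
    Designation (Designation const& o)    { o.state()->cloneInto (buf_); }
   ~Designation()                         { state()->~State(); }

    Designation& operator= (Designation const& o)
    {
        if (this != &o) { state()->~State(); o.state()->cloneInto (buf_); }
        return *this;
    }

    std::optional<PipeID> resolve()  const { return state()->resolve(); }
};
}}}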


!Use of output designations
While data frames are actually //pulled,// on a conceptual level data is assumed to "flow" through the pipes from source to output. This conceptual ("high level") model of data flow consists of the pipes (which are rather rigid) and flexible interconnections. The purpose of an output designation is to specify where the data should go, especially through these flexible interconnections. Thus, when reaching the exit point of a pipe, an output designation will be //queried.// Finding a suitable output designation involves several steps:
* first of all, we need to know //what to route// -- essentially the traits of the data. This is given by the //current pipe.//
* then we'll need to determine an output designation //suitable for this data.// This is given by a "Plug" (WiringRequest) in the placement, and may be derived.
* finally, this output designation will be //resolved// -- at least partially, resulting in a target pipe to be used for the wiring
As both of these specifications are given by [[Pipe]]-~IDs, the actual designation information may be reduced. Much can be inferred from the circumstances, because any pipe includes a StreamType, and an output designation for an incompatible stream type is irrelevant (e.g. an audio output when the pipe currently in question deals with video).
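A small sketch of that decision at a pipe's exit point; the types and field names here are placeholders invented for illustration.
{{{
#include <optional>

// Placeholder types (assumed, for illustration only)
enum class StreamKind { VIDEO, AUDIO };
struct PipeID  { unsigned id; StreamKind kind; };

struct WiringRequest                        // the "Plug" found in the placement
{
    std::optional<PipeID> designation;      // may already be (partially) resolved
};

// Decide the wiring at the exit point of the current pipe:
// the current pipe tells *what* to route, the plug tells *where to*.
inline std::optional<PipeID>
decideWiring (PipeID currentPipe, WiringRequest const& plug)
{
    if (plug.designation and plug.designation->kind == currentPipe.kind)
        return plug.designation;            // compatible stream kind: use this target
    return std::nullopt;                    // incompatible or unresolved: ignore here
}
}}}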
//writing down some thoughts//

* ruled out the system outputs as OutputDesignation.
* thus, any designation is a [[Pipe]]-ID.
* consequently, it is not immediately clear whether such a designation is the final exit point
* please note the [[Engine interface proposal|http://lumiera.org/documentation/devel/rfc_pending/EngineInterfaceOverview.html]]
* this introduces the notion of a ModelPort: //a point in the (high level) model where output can be produced//
* thus obviously we need an OutputManager element to track the association of OutputDesignation to OutputSlot

Do we get a single [[Fixture]]? &mdash; guess yes

From the implementation side, the only interesting exit nodes are the ones to be //actually pulled,// i.e. those immediately corresponding to the final output.
* __rendering__ happens immediately at the output of a GlobalPipe (i.e. at a [[Timeline]], which is top-level)
* __playback__ always happens at a viewer element

!Attaching and mapping of exit nodes
Initially, [[Output designations|OutputDesignation]] are typically just local or relative references to another OutputDesignation; yet after some resolution steps, we'll arrive at an OutputDesignation //defined absolute.// Basically, these are equivalent to a [[Pipe]]-ID chosen as target for the connection -- and they become //real//&nbsp; by some object //claiming to root this pipe.// The applicability of this pattern is figured out dynamically while building the render network, resulting in a collection of [[model ports|ModelPort]] as part of the current [[Fixture]]. A RenderProcess can be started to pull from these -- and only from these -- active exit points of the model. Besides, when the timeline enclosing these model ports is [[connected to a viewer|ViewConnection]], an [[output network|OutputNetwork]] //is built to allow hooking exit points to the viewer component.// Both cases encompass a mapping of exit nodes to actual output channels. Usually, this mapping just relies on relative addressing of the output sinks, starting to allocate connections with the //first of each kind// (e.g. "connect to the first usable audio output destination").

We should note that in both cases this [[mapping operation|OutputMapping]] is controlled, driven and constrained by the output side of the connection: A viewer has fixed output capabilities, and rendering targets a specific container format -- again with a fixed and predetermined channel configuration ({{red{TODO 9/11}}} when configuring a render process, it might be necessary to pre-compute the //possible kinds of output streams,// so as to provide a sensible pre-selection of possible output container formats for the user to select from). Thus, as a starting point, we'll create a default configured mapping, assigning channels in order. This mapping then should be exposed to modification and tweaking by the user. For rendering, this is part of the render options dialog, while in case of a viewer connection, a switch board is created to allow modifying the default mapping.


[>img[Output Management and Playback|uml/fig143877.png]]

!Connection to external outputs
External output destinations are never addressed directly from within the model. This is a design decision. Rather, model parts connect to an OutputDesignation, and these in turn may be [[connected to a viewer element|ViewConnection]]. At this point, related to the viewer element, there is a mapping to external destination(s): for images, a viewer typically has an implicit, natural destination (read: actually there is a corresponding viewer window or widget), while for sound we use a mapping rule, which could be overridden locally in the viewer.

Any external output sink is managed as a [[slot|DisplayerSlot]] in the ~OutputManager. Such a slot can be opened and allocated for a playback process, which allows the latter to push calculated data frames to the output. Depending on the kind of output, there might be various, often tight requirements on the timed delivery of output data, but any details are abstracted away &mdash; any slot implementation provides a way to handle time-outs gracefully, e.g. by just showing the last video frame delivered, or by looping and fading sound.
&rarr; the OutputManager interface describes handling this mapping association
&rarr; see also the PlayService


!the global output manager
Within the model, routing is mostly done just by referring to an OutputDesignation -- but at some point we finally need to map these abstract designations to real output capabilities. This happens at the //output managing elements.// This interface, OutputManager, exposes these mappings of logical to real outputs and allows managing and controlling them. Several elements within the application, most notably the [[viewers|ViewerAsset]], provide an implementation of this interface -- yet there is one primary implementation, the ''global output manager'', known as OutputDirector. It can be accessed through the {{{Output}}} façade interface and is the final authority when it comes to allocating and mapping real output possibilities. The OutputDirector tracks all the OutputSlot elements currently installed and available for output.

The relation between the central OutputDirector and the peripheral OutputManager implementations is hierarchical. Because output slots are usually registered at some peripheral output manager implementation, a direct mapping from OutputDesignation (i.e. global pipe) to these slots is created primarily at that peripheral level. Resolving a global pipe into an output slot is the core concern of any OutputManager implementation. Thus, when there is a locally preconfigured mapping, like e.g. for a viewer's video master pipe to the output slot installed by the corresponding GUI viewer element, then this mapping will be picked up first to resolve the video master output.

For a viewer widget in the GUI this yields exactly the expected behaviour, but in other cases, e.g. for sound output, we need more general, more globally scoped output slots. In these cases, when a local mapping is absent, the query for output resolution is passed on up to the OutputDirector, drawing on the collection of globally available output slots for that specific kind of media.
{{red{open question 11/11: is it possible to retrieve a slot from another peripheral node?}}}
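A hedged sketch of that delegation chain; the class and member names are assumptions for illustration, and the global OutputDirector is modelled simply as the parentless root of the chain.
{{{
#include <map>
#include <optional>

// Assumed placeholder types for illustration
struct PipeID     { unsigned id; };
struct OutputSlot { /* concrete output capability */ };

inline bool operator< (PipeID a, PipeID b) { return a.id < b.id; }

class OutputManagerSketch
{
    std::map<PipeID, OutputSlot*> localMapping_;   // locally preconfigured / registered slots
    OutputManagerSketch*          parent_;         // the global OutputDirector sits at the root

public:
    explicit OutputManagerSketch (OutputManagerSketch* parent =nullptr)
        : parent_{parent}
        { }

    void registerSlot (PipeID pipe, OutputSlot& slot) { localMapping_[pipe] = &slot; }

    // resolve a global pipe into an output slot:
    // prefer the local mapping, otherwise delegate up to the OutputDirector
    OutputSlot* resolve (PipeID pipe)
    {
        auto pos = localMapping_.find (pipe);
        if (pos != localMapping_.end()) return pos->second;
        return parent_? parent_->resolve (pipe) : nullptr;
    }
};
}}}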

!!!output modes
Most output connections and drivers embody some kind of //operation mode:// Display is characterised by resolution and colour depth, sound by number of channels and sampling rate, amongst others. There might be a mismatch with the output expectations represented by [[output designations|OutputDesignation]] within the model. Nonetheless we limit those actual operation modes strictly to the OutputManager realm. They should not leak out into the model within the session.
In practice, this decision might turn out to be rather rigid, but some additional mechanisms allow for more flexibility:
* when [[connecting|ViewConnection]] timeline to viewer and output, stream type conversions may be added automatically or manually
* since resolution of an output designation into an OutputSlot is initiated by querying an output manager, this query might include additional constraints, which //some// (not all) concrete output implementations might evaluate to provide a more suitably configured output slot variant.
The term &raquo;''Output Manager''&laquo; might denote two things: first, there is a {{{steam::play::OutputManager}}} interface, which can be exposed by several components within the application, most notably the [[viewer elements|ViewerAsset]]. And then, there is "the" global output manager, the OutputDirector, which finally tracks all registered OutputSlot elements and thus is the gatekeeper for any output leaving the application.

&rarr; see [[output management overview|OutputManagement]]
&rarr; see OutputSlot
&rarr; see ViewerPlayActivation

!Role of an output manager
The output manager interface describes an entity handling two distinct concerns, tied together within a local //scope.//
* a ''mapping'' to resolve a ModelPort (given as ~Pipe-ID) into an OutputSlot
* the ''registration'' and management of possible output slots, thereby creating a preferred local mapping for future connections

Note that an OutputSlot acts as a unit for registration and also for allocating / "opening" an output, while generally there might still be multiple physical outputs grouped into a single slot. This is relevant especially for sound output. A single slot is just the ability to allocate output ports up to a given limit (e.g. 2 for a stereo device, or 6 for a 5.1 device). These multiple channels are always connected following a natural channel ordering. Thus the mapping is a simple 1:1 association from pipe to slot, assuming that the media types are compatible (and this has been checked already on a higher level).

The //registration//&nbsp; of an output slot installs a functor or association rule, which later on allows claiming and connecting up to a preconfigured number of channels. This allocation or usage of a slot is exclusive (i.e. only a single client at a time can allocate a slot, even if not using all the possible channels). Each output manager instance may or may not be configured with a //fall-back rule:// when no association or mapping can be established locally, the connection request might be passed on to the global OutputDirector. Again, we can expect this to be the standard behaviour for sound, while video will likely be handled locally, e.g. within a GUI widget (but we're not bound to configure it exactly this way).
An output mapping serves to //resolve//&nbsp; [[output designations|OutputDesignation]].

!Mapping situations
;external outputs
:external outputs aren't addressed directly. Rather we set up a default (channel) mapping, which then can be overridden by local rules.
:Thus, in this case we query with an internal OutputDesignation as parameter and expect an OutputSlot
;viewer connections
:any Timeline produces a number of [[model ports|ModelPort]]. On the other hand, any viewer exposes a small number of output designations, representing the image and sound output(s).
:Thus, in this case we resolve similar to a bus connection, possibly overridden by already pre-existing or predefined connections.
;switch board
:a viewer might receive multiple outputs and overlays, necessitating a user operated control to select what's actually to be displayed
:Thus, in this case we need a backwards resolution at the lower end of the output network, to connect to the model port as selected through the viewer's SwitchBoard
;global pipes or virtual media
:when binding a Sequence as Timeline or VirtualClip, a mapping from output designations used within the Sequence to virtual channels or global pipes is required
:Thus, in this case we need to associate output designations with ~Pipe-IDs encountered in the context according to some rules &mdash; again maybe overridden by pre-existing connections

!Conclusions
All these mapping steps are listed here, because they exhibit a common pattern.
* the immediately triggering input key is a [[Pipe]]-ID (and thus provides a stream type); additional connection hints may be given.
* the mapped result draws from a limited selection of elements, which again can be keyed by ~Pipe-IDs
* the mapping is initialised based on default mapping rules acting as fallback
* the mapping may be extended by explicitly setting some associations
* the mapping itself is a stateful value object
* there is an //unconnected//&nbsp; state.

!Implementation notes
Thus the mapping is a copyable value object, using an associative array. It may be attached to a model object and persisted alongside. The mapping is assumed to run a defaults query when necessary. To allow for that, it should be configured with a query template (string). Frequently, special //default pipe// markers will be used at places where no distinct pipe-ID is specified explicitly. Besides that, invocations might supply additional predicates (e.g. {{{ord(2)}}} to point at "the second stream of this kind"), thereby hinting the defaults resolution. Moreover, the mapping needs a way to retrieve the set of possible results, allowing the results of the rules-based default to be filtered. Mappings might be defined explicitly. Instead of storing a //bottom value,// an {{{isDefined()}}} predicate might be preferable.

First and foremost, mapping can be seen as a //functional abstraction.// As it's used at implementation level, encapsulation of detail types isn't the primary concern, so it's a candidate for generic programming: For each of those use cases outlined above, a distinct mapping type is created by instantiating the {{{OutputMapping<DEF>}}} template with a specifically tailored definition context ({{{DEF}}}), which takes on the role of a strategy. Individual instances of this concrete mapping type may be default created and copied freely. This instantiation process includes picking up the concrete result type and building a functor object for resolving on the fly. Thus, in the way typical for generic programming, the more involved special details are moved out of sight, while still being in scope for the purpose of inlining. But there //is// a concern better encapsulated and concealed at the usage site, namely accessing the rules system. Thus mapping lends itself to the frequently used implementation pattern where there is a generic frontend as header, calling into opaque functions embedded within a separate compilation unit.
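The following is a much simplified sketch of that template shape, assuming hypothetical names; the real {{{OutputMapping}}} differs in detail (notably it keys by ~Pipe-ID and calls into the rules system instead of the plain functions shown here).
{{{
#include <map>
#include <string>
#include <optional>

// Assumed placeholder: within Lumiera, keys are Pipe-IDs; a string suffices for this sketch
using PipeKey = std::string;

// The definition context DEF acts as strategy: it names the result type
// and provides the rule-based default resolution.
template<class DEF>
class OutputMapping
{
public:
    using Result = typename DEF::Result;

private:
    std::map<PipeKey, Result> explicitMapping_;   // associations set up explicitly
    DEF def_;                                     // strategy: defaults query, result type

public:
    void define (PipeKey key, Result val)  { explicitMapping_[key] = val; }

    bool isDefined (PipeKey key)  const
    {
        return explicitMapping_.count (key) or bool(def_.queryDefault (key));
    }

    // resolve: an explicit association wins, otherwise fall back to the defaults query
    std::optional<Result> operator[] (PipeKey key)  const
    {
        auto pos = explicitMapping_.find (key);
        if (pos != explicitMapping_.end()) return pos->second;
        return def_.queryDefault (key);
    }
};

// Example definition context for the "viewer connection" use case (hypothetical)
struct ViewerDEF
{
    using Result = PipeKey;                               // here: target pipe within the viewer
    std::optional<Result> queryDefault (PipeKey)  const   // would invoke the rules system
    {
        return std::optional<Result>{"viewer-video-master"};
    }
};
}}}
An instantiation like {{{OutputMapping<ViewerDEF>}}} then behaves as a copyable value with an //unconnected// state for keys that are neither explicitly defined nor resolvable by the defaults query.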
Within the Lumiera player and output subsystem, actually sending data to an external output requires allocating an ''output slot''.
This is the central metaphor for the organisation of actual (system level) outputs; using this concept allows separating and abstracting the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities (video in GUI window, video fullscreen, using a beamer, sound, Jack, rendering to file) can be added and removed dynamically from various components (Vault, Stage), all using the same resolution and mapping mechanisms (&rarr; OutputManagement)

!Properties of an output slot
Each OutputSlot is a unique and distinguishable entity. It corresponds explicitly to an external output, or a group of such outputs (e.g. left and right soundcard output channels), or an output file or similar capability accepting media content. First off, an output slot needs to be provided, configured and registered, using an implementation for the kind of media data to be output (sound, video) and the special circumstances of the output capability (render a file, display video in a GUI widget, send video to a full screen display, establish a Jack port, just use some kind of "sound out"). An output slot is always limited to a single kind of media, and to a single connection unit, but this connection may still comprise multiple channels (stereoscopic video, multichannel sound).

In order to be usable as //output sink,// an output slot needs to be //allocated,// i.e. tied to and locked for a specific client. At any time, there may be only a single client using a given output slot this way. To stress this point: output slots don't provide any kind of inherent mixing capability; any adaptation, mixing, overlaying and sharing needs to be done within the nodes network producing the output data fed to the slot. (in special cases, some external output capabilities -- e.g. the Jack audio connection system -- may still provide additional mixing capabilities, but that's beyond the scope of the Lumiera application)

[>img[Outputslot implementation structures|uml/fig151685.png]]
Once allocated, the output slot returns a set of concrete ''sink handles'' (one for each physical channel expecting data). The calculating process feeds its results into those handles. Size and other characteristics of the data frames are assumed to be suitable, which typically won't be verified at that level anymore (but the sink handle provides a hook for assertions). Besides that, the allocation of an output slot reveals detailed ''timing expectations''. The client is required to comply with these [[timings|Timings]] when ''emitting'' data -- it is even required to provide a //current time specification//&nbsp; alongside the data. Based on this information, the output slot has the ability to handle timing failures gracefully; the concrete output slot implementation is expected to provide some kind of de-click or de-flicker facility, which kicks in automatically when a timing failure is detected.

!!!usage and implementation
Clients retrieve a reference to an output slot by asking a suitable OutputManager for an output possibility supporting a specific format. Usually, they just "claim" this slot by invoking {{{allocate()}}}, which behind the scenes causes the actual output connections and mechanisms to be built. For each such connection -- corresponding to a single channel within the media format handled by this ~OutputSlot -- the client gets a smart-handle {{{DataSink}}}. The concrete ~OutputSlot implementation performs operations quite specific to the kind of output and external interface in question -- thereby calling directly into a specific connection implementation.
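A hedged sketch of that client-side flow; the types here are bare placeholders, while the real {{{OutputSlot}}} and {{{DataSink}}} carry considerably more state and checks.
{{{
#include <vector>
#include <cstdint>

// Assumed placeholder types, illustrating the described usage only
struct Time   { std::int64_t micros; };
struct Buffer { /* frame data */ };

struct DataSink                        // smart-handle for one physical channel
{
    void emit (Time nominalTime, Buffer const& data) { /* hand over one calculated frame */ }
};

struct OutputSlot
{
    struct Allocation
    {
        std::vector<DataSink> sinks;   // one sink per channel of this connection
        // ...the timing expectations would be exposed here as well
    };
    Allocation allocate()              // exclusive claim; builds the actual connections
    {
        Allocation a;
        a.sinks.resize (1);            // e.g. a single video channel
        return a;
    }
};

// typical client-side flow within a playback or render process
inline void
playbackExample (OutputSlot& slot, std::vector<Buffer> const& frames)
{
    auto allocation = slot.allocate();
    Time t{0};
    for (auto const& frame : frames)
    {
        allocation.sinks[0].emit (t, frame);   // push each calculated frame, tagged with its nominal time
        t.micros += 40000;                     // e.g. 25 fps => 40ms frame spacing
    }
}
}}}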

!!!timing expectations
Besides the sink handles, allocation of an output slot defines some timing constraints, with which the client must comply. These [[Timings]] are detailed and explicit, including a grid of deadlines for each frame to deliver, plus a fixed //latency.// Within this context, &raquo;latency&laquo; means the requirement to be ahead of the nominal time by a certain amount, to compensate for the processing time necessary to propagate the media to the physical output pin. The output slot implementation itself is bound by external constraints to deliver data at a fixed framerate and aligned to an externally defined timing grid, plus the data needs to be handed over ahead of these time points by a time amount given by the latency. Depending on the data exchange model, there is an additional time window limiting the buffer management.

The assumption is for the client to have elaborate timing capabilities at its disposal. More specifically, the client is assumed to be a job running within the engine scheduler and thus can be configured to run //after// another job has finished, and to run within certain time limits. Thus the client is able to provide a //current nominal time// -- which is suitably close to the actual wall clock time. This enables the output slot implementation to work out from this time specification whether the call is timely or overdue -- and react accordingly.
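For illustration, a tiny sketch of such a timeliness check, assuming the grid, latency and nominal time are given as plain microsecond values; the actual [[Timings]] record is richer than this.
{{{
#include <cstdint>

// All values in microseconds; names are assumptions for illustration
struct Timings
{
    std::int64_t frameDuration;     // grid spacing, e.g. 40000 for 25 fps
    std::int64_t gridOrigin;        // nominal time of frame #0
    std::int64_t latency;           // required head start ahead of the nominal time

    // deadline by which frame #frameNr must have been handed over
    std::int64_t deadline (std::int64_t frameNr)  const
    {
        return gridOrigin + frameNr*frameDuration - latency;
    }
};

// The slot implementation can judge from the provided nominal time
// whether a hand-over is timely or already overdue.
inline bool
isTimely (Timings const& timings, std::int64_t frameNr, std::int64_t currentNominalTime)
{
    return currentNominalTime <= timings.deadline (frameNr);
}
}}}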

!!!output modes
some concrete output connections and drivers embody a specific operation mode (e.g. sample rate or number of channels). The decision and setup of this operational configuration is initiated together with the [[resolution|OutputMapping]] of an OutputDesignation within the OutputManager, finally leading to an output slot (reference), which can be assumed to be suitably configured before the client allocates this slot for active use. Moreover, an individual output sink (corresponding to a single channel) may just remain unused -- until there is an {{{emit()}}} call and successful data handover, this channel will just feature silence or remain black. (There are some output systems with additional flexibility, e.g. ~Jack-Audio, that allow generating an arbitrary number of output pins -- Lumiera will support this by allowing additional output slots to be set up and attaching this information to the current session &rarr; SessionConfigurationAttachment)

!!!Lifecycle and storage
The concrete OutputSlot implementation is owned and managed by the facility actually providing the output possibility in question. For example, the GUI provides viewer widgets, while some sound output backend provides sound ports. The associated OutputSlot implementation object is required to stay alive as long as it's registered with some OutputManager. It needs to be de-registered explicitly prior to destruction -- and this deregistration may block until all clients using this slot have terminated. Beyond that, an output slot implementation is expected to handle all kinds of failures gracefully -- preferably just emitting a signal (callback functor).
{{red{TODO 7/11: Deregistration is an unsolved problem....}}}

!!!unified data exchange cycle
The planned delivery time of a frame is used as an ID throughout this cycle (sketched below):
# within a defined time window prior to delivery, the client can ''allocate and retrieve the buffer'' from the BufferProvider.
# the client has to ''emit'' within a (short) time window prior to deadline
# now the slot gets exclusive access to the buffer for output, signalling the buffer release to the buffer provider when done.
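A compact sketch of these three steps from the perspective of a single render job; all names ({{{BufferProvider}}}, {{{Sink}}}, {{{lockBuffer}}}, ...) are simplifying assumptions, not the actual interfaces.
{{{
#include <cstdint>

// Assumed placeholder types for this protocol sketch
struct Buffer { /* frame data */ };

struct BufferProvider
{
    Buffer* lockBuffer (std::int64_t frameTime)  { return &buff_; }   // step (1): claim the buffer, keyed by frame time
    void    releaseBuffer (std::int64_t)         { }                  // step (3): the slot signals the release when done
private:
    Buffer buff_;
};

struct Sink
{
    // step (2): hand over within the time window; the frame time acts as ID throughout the cycle
    void emit (std::int64_t frameTime, Buffer*)  { /* the slot now gets exclusive access for output */ }
};

// one complete cycle for a single frame, as performed by a render job
inline void
dataExchangeCycle (BufferProvider& provider, Sink& sink, std::int64_t frameTime)
{
    Buffer* buff = provider.lockBuffer (frameTime);   // (1) allocate and retrieve the buffer
    /* ...calculate the frame contents into *buff... */
    sink.emit (frameTime, buff);                      // (2) emit prior to the deadline
    // (3) the slot pushes the data to output and then notifies the provider to release the buffer
}
}}}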

!!!lapses
This data exchange protocol operates on a rather low level; there is only limited protection against timing glitches
|  !step|!problem ||!consequences | !protection |
| (1)|out of time window ||interference with previous/later use of the buffer | prevent in scheduler! |
|~|does not happen ||harmless as such | emit ignored |
|~|buffer unavailable ||inhibits further operation | ↯ |
| (2)|out of time window ||harmless as such | emit ignored |
|~|out of order ||allowed, unless out of time | -- |
|~|does not happen ||frame treated as glitch | -- |
|~|buffer unallocated ||frame treated as glitch | emit ignored |
| (3)|emit missing ||frame treated as glitch | -- |
|~|fail to release buffer ||unable to use buffer further | mark unavailable |
|~|buffer unavailable ||serious malfunction of playback | request playback stop |

Thus there are two serious problem situations
* allocating the buffer out of the time window bears the danger of output data corruption; but the general assumption is for the scheduler to ensure each job start time remains within the defined window and all prerequisite jobs have terminated successfully. To handle clean-up, we additionally need special jobs which always run, in order, and are notified of prerequisite failures.
* failure to release a buffer in a timely fashion blocks any further use of that buffer; any further jobs in need of that buffer will die immediately. This situation can only be caused by a serious problem //within the slot, related to the output mechanism.// Thus there should be some kind of trigger (e.g. when this happens 2 times consecutively) to request aborting the playback or render as a whole.
&rarr; SchedulerRequirements
&rarr; OutputSlotDesign
&rarr; OutputSlotImpl
The OutputSlot interface describes a point where generated media data can actually be sent to the external world. It is expected to be implemented by adapters and bridges to existing drivers or external interface libraries, like a viewer widget in the GUI, ALSA or Jack sound output, rendering to file, using an external media format library. The design of this core facility was rather difficult and stretched out over quite some time span -- this page documents the considerations and the decisions taken.

!intention
OutputSlot is a metaphor to unify the organisation of actual (system level) outputs; using this concept allows separating and abstracting the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities (video in GUI window, video fullscreen, sound, Jack, rendering to file) can be added and removed dynamically from various components (Vault, Stage), all using the same resolution and mapping mechanisms (&rarr; OutputManagement)

!design possibilities
!!properties as a starting point
* each OutputSlot is a unique and distinguishable entity. It corresponds explicitly to an external output
* an output slot needs to be provided, configured and registered, using an implementation specifically tailored for the kind of media data
* an output slot is always limited to a single kind of media, and to a single connection unit, but this connection may still be comprised of multiple channels.
* in order to be usable as //output sink,// an output slot needs to be //allocated,// i.e. tied to and locked for a specific client.
* this allocation is exclusive: at any time, there may be only a single client using a given output slot.
* the calculating process feeds its results into //sink handles//&nbsp; provided by the allocated output slot.
* allocation of an output slot leads to very specific [[timing expectations|Timings]]
* the client is required to comply with these timings and operate according to a strictly defined protocol.
* timing glitches will be detected due to this protocol; the output slot provides mechanisms for failing gracefully in these cases

!!data exchange models
Data is handed over by the client invoking an {{{emit(time,...)}}} function on the sink handle. Theoretically there are two different models of how this data hand-over might be performed; both are sketched after the definitions below. This corresponds to the fact that in some cases our own code manages the output and the buffers, while in other situations we intend to use existing library solutions or even external server applications to handle output.
;buffer handover model
:the client owns the data buffer and cares for allocation and de-allocation. The {{{emit()}}}-call just propagates a pointer to the buffer holding the data ready for output. The output slot implementation in turn is obliged to copy or otherwise use this data within a given time limit.
;shared buffer model
:here the output mechanism owns the buffer. Within a certain time window prior to the expected time of the {{{emit()}}}-call, the client may obtain this buffer (pointer) to fill in the data. The slot implementation won't touch this buffer until the {{{emit()}}} handover, which in this case just provides the time and signals that the client is done with that buffer. If the data emitting handshake doesn't happen at all, it counts as late and is superseded by the next handshake.
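For illustration, a minimal contrast of the two models; the structs and member names are invented placeholders, not the actual sink interfaces.
{{{
#include <cstdint>

struct Buffer { /* frame data */ };     // placeholder

// Model A: "buffer handover" -- the client owns the buffer and just passes a pointer;
// the slot must copy or consume the data within a given time limit.
struct HandoverSink
{
    void emit (std::int64_t frameTime, Buffer const* clientOwnedBuffer)
    {
        /* copy or use the data before the deadline */
    }
};

// Model B: "shared buffer" -- the output mechanism owns the buffer; the client obtains it
// beforehand, fills it, and emit() merely signals "done" for that frame time.
struct SharedBufferSink
{
    Buffer* claimBuffer (std::int64_t frameTime) { return &outputOwned_; }   // within the allowed window
    void    emit (std::int64_t frameTime)        { /* the slot may now read outputOwned_ */ }
private:
    Buffer outputOwned_;
};
}}}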

!!relation to timing
Besides the sink handles, allocation of an output slot defines some timing constraints, which are binding for the client. These include a grid of deadlines for each frame to deliver, plus a fixed //latency.// The output slot implementation itself is bound by external constraints to deliver data at a fixed framerate and aligned to an externally defined timing grid, plus the data needs to be handed over ahead of these time points by a time amount given by the latency. Depending on the data exchange model, there is an additional time window limiting the buffer management.

The assumption is for the client to have elaborate timing capabilities at its disposal. More specifically, the client is assumed to be a job running within the engine scheduler and thus can be configured to run //after// another job has finished, and to run within certain time limits. Thus the client is able to provide a //current nominal time// -- which is suitably close to the actual wall clock time. The output slot implementation can be written so as to work out from this time specification whether the call is timely or overdue -- and react accordingly.

{{red{TODO 6/11}}}in this spec, both data exchange models exhibit a weakness regarding the releasing of buffers. At which time is it safe to release a buffer, when the handover didn't happen? Do we need an explicit callback, and how could this callback be triggered? This is similar to the problem of closing a network connection, i.e. the problem is generally unsolvable, but can be handled pragmatically within certain limits.
{{red{WIP 11/11}}}meanwhile I've worked out the BufferProvider interface in detail. There's now a detailed buffer handover protocol defined, which is supported by a little state engine tracking BufferMetadata. This mechanism provides a transition to {{{BLOCKED}}} state when order or timing constraints are being violated, which practically solves this problem. How to detect and resolve such a floundered state from the engine point of view still remains to be addressed.

!!!Lifecycle and storage
The concrete OutputSlot implementation is owned and managed by the facility actually providing the output possibility in question. For example, the GUI provides viewer widgets, while some sound output backend provides sound ports. The associated OutputSlot implementation object is required to stay alive as long as it's registered with some OutputManager. It needs to be de-registered explicitly prior to destruction -- and this deregistration may block until all clients using this slot have terminated. Beyond that, an output slot implementation is expected to handle all kinds of failures gracefully -- preferably just emitting a signal (callback functor).
{{red{TODO 7/11: Deregistration is an unsolved problem....}}}

-----
!Implementation / design problems
How to handle the selection of those two data exchange models!
-- Problem is, typically this choice isn't up to the client; rather, the concrete OutputSlot implementation dictates what model to use. But, as it stands, the client needs to cooperate and behave differently to a certain degree -- unless we manage to factor out a common denominator of both models.

Thus: Client gets an {{{OutputSlot&}}}, without knowing the exact implementation type &rArr; how can the client pick up the right strategy?
Solving this problem through //generic programming// -- i.e. coding both cases effectively different, but similar -- would require providing both implementation options already at //compile time!//

{{red{currently}}} I see two possible, yet quite different approaches...
;generic
:when creating individual jobs, we utilise a //factory obtained from the output slot.//
;unified
:extend and adapt the protocol such to make both models similar; concentrate all differences //within a separate buffer provider.//
!!!discussion
the generic approach looks like it becomes rather convoluted in practice. We'd need to hand over additional parameters to the factory, which passes them through to the actual job implementation created. And there would be a coupling between slot and job (the slot is aware it's going to be used by a job, and even provides the implementation). Obviously, a benefit is that the actual code path executed within the job is without indirections, and all written down in a single location. Another benefit is the possibility to extend this approach to cover further buffer handling models, since it doesn't pose any requirements on the structure of the buffer handling.
On the other hand, if we accept to retrieve the buffer(s) via an indirection, which we kind of do anyway //within the render node implementation// -- the unified model looks more like a clean solution. It's more like doing away with some local optimisations possible if we handle the models explicitly, so it's not much of a loss, given that the majority of the processing time will be spent within the inner pixel calculation loops for frame processing anyway. When following this approach, the BufferProvider becomes a third, independent partner, and the slot cooperates tightly with this buffer provider, while the client (processing node) still just talks to the slot. Basically, this unified solution works like extending the shared buffer model to both cases.
&rArr; __conclusion__: go for the unified approach!

!!!unified data exchange cycle
The planned delivery time of a frame is used as an ID throughout that cycle
# within a defined time window prior to delivery, the client can ''allocate and retrieve the buffer'' from the BufferProvider.
# the client has to ''emit'' within a (short) time window prior to deadline
# now the slot gets exclusive access to the buffer for output, signalling the buffer release to the buffer provider when done.

&rarr; OutputSlotImpl
OutputSlot is an abstraction, allowing unified treatment of various physical output connections from within the render jobs. The actual output slot is a subclass object, created and managed from the "driver code" for a specific output connection. Moreover, each output slot will be outfitted with a concrete BufferProvider to reflect the actual buffer handling policy applicable for this specific output connection. Some output connections might e.g. require delivery of the media data into a buffer residing on external hardware, while others work just fine when pointed to some arbitrary memory block holding generated data.

!operation steps
[>img[Sequence of output data exchange steps|uml/fig145157.png]]The OutputSlot class defines some standard operations as protected virtual functions. These represent the actual steps in the data exchange protocol corresponding to this output slot; a sketch of this interface follows the list below.
;lock()
:claims the specified buffer for exclusive use.
:The client may now deposit output data
;transfer()
:transfers control from the client to the actual output connection for feeding data.
:Triggered by the client invoking {{{emit()}}}
;pushout()
:request the actual implementation to push the data to output.
:Usually invoked from the {{{transfer()}}} implementation, alternatively from a separate thread.
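A hedged C++ sketch of that base class shape; the simplified signatures (plain integer frame time, raw buffer pointer) are assumptions, not the actual Lumiera declarations.
{{{
#include <cstdint>

struct Buffer { /* frame data */ };     // placeholder

class OutputSlotSketch
{
public:
    // the public hand-over entry point invoked by the client (via the sink handle)
    void emit (std::int64_t frameTime, Buffer* data)
    {
        transfer (frameTime, data);     // hand control over to the actual output connection
    }

protected:                              // the actual protocol steps, per concrete connection
    virtual ~OutputSlotSketch() { }

    // claim the specified buffer for exclusive use; the client may now deposit output data
    virtual Buffer* lock (std::int64_t frameTime) = 0;

    // transfer control from the client to the actual output connection for feeding data
    virtual void transfer (std::int64_t frameTime, Buffer* data) = 0;

    // request the concrete implementation to push the data to the physical output;
    // usually invoked from within transfer(), alternatively from a separate thread
    virtual void pushout (std::int64_t frameTime, Buffer* data) = 0;
};
}}}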


!buffer states
While the BufferProvider abstracts away the actual access to the output buffers and just hands out a ''buffer handle'', the server side (here the concrete output slot) is allowed to associate and maintain a ''state flag'' with each buffer. The general assumption is that writing this state flag is atomic, and that other facilities will care for the necessary memory barriers (that is: the output slot and the buffer provider will just access this state flag without much ado). The generic part of the output slot implementation utilises this buffer state flag to implement a state machine, which -- together with the timing constraints established with the [[help of the scheduler|SchedulerRequirements]] -- ensures sane access to the buffer without collisions.
|        !state||>| !lock()   |>| !transfer() |>| !pushout() |
|     {{{NIL}}}||↯| ·        |↯|             | ↯ |          |
|    {{{FREE}}}||✔|↷ locked  |✔|· (ignore)  | · | ·         |
|  {{{LOCKED}}}||↯|↷ emitted |✔|↷ emitted   | · |↷ free    |
| {{{EMITTED}}}||↯|↷ blocked |↯|↷ blocked   | · |↷ free    |
| {{{BLOCKED}}}||↯| ✗        |↯| ✗          | ∅ |↷ free    |
where · means no operation, ✔ marks the standard cases (OK response to caller), ↯ will throw and thus kill the calling job, ∅ will treat this frame as //glitch,// ✗ will request playback stop.
The rationale is for all out-of-order states to transition into the {{{BLOCKED}}} state eventually, which, when hit by the next operation, will request playback stop.
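As an illustration of one column of this table, a sketch of the transitions triggered by {{{transfer()}}}; the enum and error handling are simplified assumptions (the real implementation stores the flag atomically and signals the play process instead of merely throwing).
{{{
#include <stdexcept>

enum class BufferState { NIL, FREE, LOCKED, EMITTED, BLOCKED };

// Transitions caused by the transfer() step, corresponding to one column of the table above
inline void
onTransfer (BufferState& state)
{
    switch (state)
    {
        case BufferState::FREE:                                  // nothing was locked: ignore the emit
            return;
        case BufferState::LOCKED:                                // the regular case
            state = BufferState::EMITTED;
            return;
        case BufferState::EMITTED:                               // out-of-order use degrades into BLOCKED...
            state = BufferState::BLOCKED;
            throw std::logic_error{"transfer out of order"};     // ...and kills the calling job
        case BufferState::BLOCKED:                               // hit again: would request playback stop
        case BufferState::NIL:
        default:
            throw std::logic_error{"buffer not usable"};
    }
}
}}}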
Right from the start, it was clear that //processing// in the Lumiera application needs to be decomposed into various subsystems and can be separated into a low-level and a high-level part. At the low-level end is the [[Render Engine|OverviewRenderEngine]], which basically is a network of render nodes, whereas on the high-level side we find several different [[Media Objects|MObjects]] that can be placed into the session, edited and manipulated. This is complemented by the [[Asset Management|Asset]], which is the "bookkeeping view" of all the different "things" within each [[Session|SessionOverview]].

In our early design drafts, we envisioned //all processing// to happen within a middle Layer known as ProcLayer at that time, complemented by a »Backend« as adaptation layer to system-level processing. Over time, with a more mature understanding of the Architecture, the purpose and also the names have been adjusted:
* the ''Stage'' is the User Interface and serves to give a presentation of content to be handled and manipulated by the user
* in the ''Steam''-Layer the [[Session]] contents are maintained, evaluated and prepared for [[playback|Player]] and [[rendering|RenderProcess]]
* while the actual low-level processing is relegated to the ''Vault''-Layer, which coordinates and orchestrates external Libraries and System services.


Throughout the Architecture, there is a rather strong separation between high-level and low-level concepts &mdash; <br/>consequently you'll find the data maintained within the application to be organised in two different views, the [[»high-level-model«|HighLevelModel]] and the [[»low-level-model«|LowLevelModel]]
* from users (and GUI) perspective, you'll see a [[Session|SessionOverview]] with a timeline-like structure, where various [[Media Objects|MObjects]] are arranged and [[placed|Placement]]. By looking closer, you'll find that there are data connections and all processing is organized around processing chains or [[pipes|Pipe]], which can be either global (in the Session) or local (in real or [[virtual|VirtualClip]] clips)
* when dealing with the actual calculations in the Engine (&rarr; see OverviewRenderEngine), you won't find any Tracks, Media Objects or Pipes &mdash; rather you'll find a network of interconnected [[render nodes|ProcNode]] forming the low level model. Each structurally constant segment of the timeline corresponds to a separate node network providing an ExitNode corresponding to each of the global pipes; pulling frames from them means running the engine.
* it is the job of the [[Builder]] to create and wire up this render node network when provided with a given high-level model. So, actually the builder (together with the so-called [[Fixture]]) forms an isolation layer in the middle, separating the //editing part&nbsp;// from the //processing part.//

! Architecture Overview
Arrangement and interaction of components in the three Layers — {{red{envisioned as of 4/2023}}}
 &rarr; IntegrationSlice
<html>
<img title="Lumiera Architecture" src="draw/Lumi.Architecture-2.svg" style="width:90%;"/>
</html>
As can be seen on the [[Architecture Overview|Overview]], the functionality of the »[[Render Engine|RenderEngine]]« is implemented through the collaboration of various components, spanning the Steam-Layer and the Vault-Layer. The rendering is orchestrated by „performing“ the LowLevelModel, which is a //node graph//. This render node graph has been prepared by the [[Builder]] -- thereby reading and interpreting the HighLevelModel, which is maintained and edited as part of the [[Session]]. The actual rendering happens when the [[Scheduler]] invokes individual [[render jobs|RenderJob]], which call media processing functions configured and wired as [[Processing Nodes|ProcNode]], thus generating new media data into the [[processing buffers|BufferProvider]] for [[output|OutputManagement]].

Any kind of rendering or playback requires a RenderProcess, which is initiated and controlled by the [[Player]].

see also &rarr; [[Fixture]] &rarr; [[Player]] &rarr; EngineFaçade &rarr; [[Dispatcher|FrameDispatcher]] &rarr; [[Scheduler]] &rarr; RenderToolkit

{{red{TODO 4/23: create a new drawing to reflect current state of the design}}}

!Render Engine Integration
The Engine is unfinished and not in any usable shape {{red{as of 4/2023}}} -- currently an [[»integration slice«|PlaybackVerticalSlice]] is pursued, in an effort to complete foundation work done over the course of several years and achieve the first preliminary integration of rendering functionality.

Why is ''all of this so complicated''?
* the Lumiera application is envisioned as a very flexible setup
* we try to avoid //naive assumptions// -- e.g. that each video is comprised of a single video stream and two audio streams
* the actual rendering is delegated to existing libraries and frameworks, thereby remaining open for future developments
* we avoid hard wired decisions in favour of configuration by rules and default settings
* the application works asynchronously, and all functionality shall be invokable by scripting, without GUI.
<!--{{{-->
<div class='header' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
	<div class='headerShadow'>
		<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
		<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
	</div>
	<div class='headerForeground'>
		<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
		<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
	</div>
</div>
<!-- horizontal MainMenu -->
<div id='topMenu' refresh='content' tiddler='MainMenu'></div>
<!-- original MainMenu menu -->
<!-- <div id='mainMenu' refresh='content' tiddler='MainMenu'></div> -->
<div id='sidebar'>
	<div id='sidebarOptions' refresh='content' tiddler='SideBarOptions'></div>
	<div id='sidebarTabs' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea'>
	<div id='messageArea'></div>
	<div id='tiddlerDisplay'></div>
</div>
<!--}}}-->
A ParamProvider is the counterpart for (one or many) [[Parameter]] instances. It implements the value access function made available by the Parameter object to its clients.

To give a concrete example: 
* a Fade Plugin needs the actual fade value for Frame t=xxx
* the Plugin has a Parameter Object (from which we could query the information of this parameter being a continuous float function)
* this Parameter Object provides a getValue() function, which is internally linked (i.e. by configuration) to a //Parameter Provider//
* the actual object implementing the ParamProvider Interface could be an Automation MObject located somewhere in the session and would do Bezier interpolation on a given keyframe set.
* Param providers are created on demand; while building the Render Engine configuration actually at work, the Builder would have to set up a link between the Plugin Parameter Object and the ParamProvider; it can do so because it sees the link between the Automation MObject and the corresponding Effect MObject

!!ParamProvider ownership and lifecycle
Actually, ParamProvider is just an interface which is implemented either by a constant or an [[Automation]] function. In both cases, access is via direct reference, while the link to the ParamProvider is maintained by a smart-ptr, which &mdash; in the case of automation &mdash; may share ownership with the [[Placement]] of the automation data set.

&rarr; see the class diagram for [[Automation]]
&rarr; see EffectHandling
Parameters are all possibly variable control values used within the Render Engine. Contrast this with configuration values, which are considered to be fixed and need an internal reset of the application (or session) state to take effect.

A ''Parameter Object'' provides a descriptor of the kind of parameter, together with a function used to pull the //actual value// of this parameter. Here, //actual// has a two-fold meaning:
* if called without a time specification, it is either a global (but variable) system or session parameter or a default value for automated Parameters. (the intention is to treat these cases uniformly)
* if called with a time specification, it is the query for a &mdash; probably interpolated &mdash; [[Automation]] value at this absolute time. The corresponding ParamProvider should fall back transparently to a default or session value if no time-varying data is available (see the sketch below)
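A minimal sketch of this collaboration, assuming hypothetical names and a plain {{{double}}} value type; the real Parameter and ParamProvider interfaces are richer.
{{{
#include <memory>
#include <cstdint>
#include <utility>

struct Time { std::int64_t micros; };   // placeholder (assumed)

// ParamProvider interface: implemented either by a constant or by an Automation function
struct ParamProvider
{
    virtual ~ParamProvider() { }
    virtual double getValue ()        const =0;   // default / session value
    virtual double getValue (Time t)  const =0;   // possibly interpolated automation value at time t
};

struct ConstantProvider : ParamProvider
{
    double value_;
    explicit ConstantProvider (double v) : value_{v} { }
    double getValue ()      const override { return value_; }
    double getValue (Time)  const override { return value_; }   // no time variation: same value
};
// ...an Automation provider would instead interpolate a keyframe set in getValue(Time)

// The Parameter object seen by a plugin; the Builder sets up the link to the provider
class Parameter
{
    std::shared_ptr<ParamProvider> provider_;     // may share ownership with the Placement of the automation data
public:
    explicit Parameter (std::shared_ptr<ParamProvider> p) : provider_{std::move(p)} { }

    double getValue ()        const { return provider_->getValue(); }     // global / default value
    double getValue (Time t)  const { return provider_->getValue(t); }    // value at absolute time t
};
}}}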

{{red{TODO: define how Automation works}}}
/***
|<html><a name="Top"/></html>''Name:''|PartTiddlerPlugin|
|''Version:''|1.0.6 (2006-11-07)|
|''Source:''|http://tiddlywiki.abego-software.de/#PartTiddlerPlugin|
|''Author:''|UdoBorkowski (ub [at] abego-software [dot] de)|
|''Licence:''|[[BSD open source license]]|
|''TiddlyWiki:''|2.0|
|''Browser:''|Firefox 1.0.4+; InternetExplorer 6.0|
!Table of Content<html><a name="TOC"/></html>
* <html><a href="javascript:;" onclick="window.scrollAnchorVisible('Description',null, event)">Description, Syntax</a></html>
* <html><a href="javascript:;" onclick="window.scrollAnchorVisible('Applications',null, event)">Applications</a></html>
** <html><a href="javascript:;" onclick="window.scrollAnchorVisible('LongTiddler',null, event)">Referring to Paragraphs of a Longer Tiddler</a></html>
** <html><a href="javascript:;" onclick="window.scrollAnchorVisible('Citation',null, event)">Citation Index</a></html>
** <html><a href="javascript:;" onclick="window.scrollAnchorVisible('TableCells',null, event)">Creating "multi-line" Table Cells</a></html>
** <html><a href="javascript:;" onclick="window.scrollAnchorVisible('Tabs',null, event)">Creating Tabs</a></html>
** <html><a href="javascript:;" onclick="window.scrollAnchorVisible('Sliders',null, event)">Using Sliders</a></html>
* <html><a href="javascript:;" onclick="window.scrollAnchorVisible('Revisions',null, event)">Revision History</a></html>
* <html><a href="javascript:;" onclick="window.scrollAnchorVisible('Code',null, event)">Code</a></html>
!Description<html><a name="Description"/></html>
With the {{{<part aPartName> ... </part>}}} feature you can structure your tiddler text into separate (named) parts. 
Each part can be referenced as a "normal" tiddler, using the "//tiddlerName//''/''//partName//" syntax (e.g. "About/Features"). E.g. you may create links to the parts, use it in {{{<<tiddler...>>}}} or {{{<<tabs...>>}}} macros etc.

''Syntax:'' 
|>|''<part'' //partName// [''hidden''] ''>'' //any tiddler content// ''</part>''|
|//partName//|The name of the part. You may reference a part tiddler with the combined tiddler name "//nameOfContainerTidder//''/''//partName//.|
|''hidden''|When defined the content of the part is not displayed in the container tiddler. But when the part is explicitly referenced (e.g. in a {{{<<tiddler...>>}}} macro or in a link) the part's content is displayed.|
|<html><i>any&nbsp;tiddler&nbsp;content</i></html>|<html>The content of the part.<br>A part can have any content that a "normal" tiddler may have, e.g. you may use all the formattings and macros defined.</html>|
|>|~~Syntax formatting: Keywords in ''bold'', optional parts in [...]. 'or' means that exactly one of the two alternatives must exist.~~|
<html><sub><a href="javascript:;" onclick="window.scrollAnchorVisible('Top',null, event)">[Top]</sub></a></html>

!Applications<html><a name="Applications"/></html>
!!Referring to Paragraphs of a Longer Tiddler<html><a name="LongTiddler"/></html>
Assume you have written a long description in a tiddler and now you want to refer to the content of a certain paragraph in that tiddler (e.g. some definition.) Just wrap the text with a ''part'' block, give it a nice name, create a "pretty link" (like {{{[[Discussion Groups|Introduction/DiscussionGroups]]}}}) and you are done.

Notice this complements the approach to first writing a lot of small tiddlers and combine these tiddlers to one larger tiddler in a second step (e.g. using the {{{<<tiddler...>>}}} macro). Using the ''part'' feature you can first write a "classic" (longer) text that can be read "from top to bottom" and later "reuse" parts of this text for some more "non-linear" reading.

<html><sub><a href="javascript:;" onclick="window.scrollAnchorVisible('Top',null, event)">[Top]</sub></a></html>

!!Citation Index<html><a name="Citation"/></html>
Create a tiddler "Citations" that contains your "citations". 
Wrap every citation with a part and a proper name. 

''Example''
{{{
<part BAX98>Baxter, Ira D. et al: //Clone Detection Using Abstract Syntax Trees.// 
in //Proc. ICSM//, 1998.</part>

<part BEL02>Bellon, Stefan: //Vergleich von Techniken zur Erkennung duplizierten Quellcodes.// 
Thesis, Uni Stuttgart, 2002.</part>

<part DUC99>Ducasse, Stéfane et al: //A Language Independent Approach for Detecting Duplicated Code.// 
in //Proc. ICSM//, 1999.</part>
}}}

You may now "cite" them just by using a pretty link like {{{[[Citations/BAX98]]}}} or even more pretty, like this {{{[[BAX98|Citations/BAX98]]}}}.

<html><sub><a href="javascript:;" onclick="window.scrollAnchorVisible('Top',null, event)">[Top]</sub></a></html>

!!Creating "multi-line" Table Cells<html><a name="TableCells"/></html>
You may have noticed that it is hard to create table cells with "multi-line" content. E.g. if you want to create a bullet list inside a table cell you cannot just write the bullet list
{{{
* Item 1
* Item 2
* Item 3
}}}
into a table cell (i.e. between the | ... | bars) because every bullet item must start in a new line but all cells of a table row must be in one line.

Using the ''part'' feature this problem can be solved. Just create a hidden part that contains the cells content and use a {{{<<tiddler >>}}} macro to include its content in the table's cell.

''Example''
{{{
|!Subject|!Items|
|subject1|<<tiddler ./Cell1>>|
|subject2|<<tiddler ./Cell2>>|

<part Cell1 hidden>
* Item 1
* Item 2
* Item 3
</part>
...
}}}

Notice that inside the {{{<<tiddler ...>>}}} macro you may refer to the "current tiddler" using the ".".

BTW: The same approach can be used to create bullet lists with items that contain more than one line.

<html><sub><a href="javascript:;" onclick="window.scrollAnchorVisible('Top',null, event)">[Top]</sub></a></html>

!!Creating Tabs<html><a name="Tabs"/></html>
The built-in {{{<<tabs ...>>}}} macro requires that you define an additional tiddler for every tab it displays. When you want to have "nested" tabs you need to define a tiddler for the "main tab" and one for every tab it contains. I.e. the definition of a set of tabs that is visually displayed at one place is distributed across multiple tiddlers.

With the ''part'' feature you can put the complete definition in one tiddler, making it easier to keep an overview and maintain the tab sets.

''Example''
The standard tabs at the sidebar are defined by the following eight tiddlers:
* SideBarTabs
* TabAll
* TabMore
* TabMoreMissing
* TabMoreOrphans
* TabMoreShadowed
* TabTags
* TabTimeline

Instead of these eight tiddlers one could define the following SideBarTabs tiddler that uses the ''part'' feature:
{{{
<<tabs txtMainTab 
 Timeline Timeline SideBarTabs/Timeline 
 All 'All tiddlers' SideBarTabs/All 
 Tags 'All tags' SideBarTabs/Tags 
 More 'More lists' SideBarTabs/More>>
<part Timeline hidden><<timeline>></part>
<part All hidden><<list all>></part>
<part Tags hidden><<allTags>></part>
<part More hidden><<tabs txtMoreTab 
 Missing 'Missing tiddlers' SideBarTabs/Missing 
 Orphans 'Orphaned tiddlers' SideBarTabs/Orphans 
 Shadowed 'Shadowed tiddlers' SideBarTabs/Shadowed>></part>
<part Missing hidden><<list missing>></part>
<part Orphans hidden><<list orphans>></part>
<part Shadowed hidden><<list shadowed>></part>
}}}

Notice that you can easily "overwrite" individual parts in separate tiddlers that have the full name of the part.

E.g. if you don't like the classic timeline tab but only want to see the 100 most recent tiddlers you could create a tiddler "~SideBarTabs/Timeline" with the following content:
{{{
<<forEachTiddler 
 sortBy 'tiddler.modified' descending 
 write '(index < 100) ? "* [["+tiddler.title+"]]\n":""'>>
}}}
<html><sub><a href="javascript:;" onclick="window.scrollAnchorVisible('Top',null, event)">[Top]</sub></a></html>

!!Using Sliders<html><a name="Sliders"/></html>
Very similar to the built-in {{{<<tabs ...>>}}} macro (see above) the {{{<<slider ...>>}}} macro requires that you define an additional tiddler that holds the content "to be slid". You can avoid creating this extra tiddler by using the ''part'' feature

''Example''
In a tiddler "About" we may use the slider to show some details that are documented in the tiddler's "Details" part.
{{{
...
<<slider chkAboutDetails About/Details details "Click here to see more details">>
<part Details hidden>
To give you a better overview ...
</part>
...
}}}

Notice that putting the content of the slider into the slider's tiddler also has an extra benefit: When you decide you need to edit the content of the slider you can just doubleclick the content, the tiddler opens for editing and you can directly start editing the content (in the part section). In the "old" approach you would doubleclick the tiddler, see that the slider is using tiddler X, have to look for the tiddler X and can finally open it for editing. So using the ''part'' approach results in a much shorter workflow.

<html><sub><a href="javascript:;" onclick="window.scrollAnchorVisible('Top',null, event)">[Top]</sub></a></html>

!Revision history<html><a name="Revisions"/></html>
* v1.0.6 (2006-11-07)
** Bugfix: cannot edit tiddler when UploadPlugin by Bidix is installed. Thanks to José Luis González Castro for reporting the bug.
* v1.0.5 (2006-03-02)
** Bugfix: Example with multi-line table cells does not work in IE6. Thanks to Paulo Soares for reporting the bug.
* v1.0.4 (2006-02-28)
** Bugfix: Shadow tiddlers cannot be edited (in TW 2.0.6). Thanks to Torsten Vanek for reporting the bug.
* v1.0.3 (2006-02-26)
** Adapt code to newly introduced Tiddler.prototype.isReadOnly() function (in TW 2.0.6). Thanks to Paulo Soares for reporting the problem.
* v1.0.2 (2006-02-05)
** Also allow macros other than the "tiddler" macro to use the "." in the part reference (to refer to "this" tiddler)
* v1.0.1 (2006-01-27)
** Added Table of Content for plugin documentation. Thanks to RichCarrillo for suggesting.
** Bugfix: newReminder plugin does not work when PartTiddler is installed. Thanks to PauloSoares for reporting.
* v1.0.0 (2006-01-25)
** initial version
<html><sub><a href="javascript:;" onclick="window.scrollAnchorVisible('Top',null, event)">[Top]</a></sub></html>

!Code<html><a name="Code"/></html>
<html><sub><a href="javascript:;" onclick="window.scrollAnchorVisible('Top',null, event)">[Top]</a></sub></html>
***/
//{{{
//============================================================================
// PartTiddlerPlugin

// Ensure that the PartTiddler Plugin is only installed once.
//
if (!version.extensions.PartTiddlerPlugin) {



version.extensions.PartTiddlerPlugin = {
 major: 1, minor: 0, revision: 6,
 date: new Date(2006, 10, 7), 
 type: 'plugin',
 source: "http://tiddlywiki.abego-software.de/#PartTiddlerPlugin"
};

if (!window.abego) window.abego = {};
if (version.major < 2) alertAndThrow("PartTiddlerPlugin requires TiddlyWiki 2.0 or newer.");

//============================================================================
// Common Helpers

// Looks for the next newline, starting at the index-th char of text. 
//
// If there are only whitespaces between index and the newline 
// the index behind the newline is returned, 
// otherwise (or when no newline is found) index is returned.
//
var skipEmptyEndOfLine = function(text, index) {
 var re = /(\n|[^\s])/g;
 re.lastIndex = index;
 var result = re.exec(text);
 return (result && text.charAt(result.index) == '\n') 
 ? result.index+1
 : index;
}


//============================================================================
// Constants

var partEndOrStartTagRE = /(<\/part>)|(<part(?:\s+)((?:[^>])+)>)/mg;
var partEndTagREString = "<\\/part>";
var partEndTagString = "</part>";

//============================================================================
// Plugin Specific Helpers

// Parse the parameters inside a <part ...> tag and return the result.
//
// @return [may be null] {partName: ..., isHidden: ...}
//
var parseStartTagParams = function(paramText) {
 var params = paramText.readMacroParams();
 if (params.length == 0 || params[0].length == 0) return null;
 
 var name = params[0];
 var paramsIndex = 1;
 var hidden = false;
 if (paramsIndex < params.length) {
 hidden = params[paramsIndex] == "hidden";
 paramsIndex++;
 }
 
 return {
 partName: name, 
 isHidden: hidden
 };
}

// Returns the match to the next (end or start) part tag in the text, 
// starting the search at startIndex.
// 
// When no such tag is found null is returned, otherwise a "Match" is returned:
// [0]: full match
// [1]: matched "end" tag (or null when no end tag match)
// [2]: matched "start" tag (or null when no start tag match)
// [3]: content of start tag (or null if no start tag match)
//
var findNextPartEndOrStartTagMatch = function(text, startIndex) {
 var re = new RegExp(partEndOrStartTagRE);
 re.lastIndex = startIndex;
 var match = re.exec(text);
 return match;
}

//============================================================================
// Formatter

// Process the <part ...> ... </part> starting at (w.source, w.matchStart) for formatting.
//
// @return true if a complete part section (including the end tag) could be processed, false otherwise.
//
var handlePartSection = function(w) {
 var tagMatch = findNextPartEndOrStartTagMatch(w.source, w.matchStart);
 if (!tagMatch) return false;
 if (tagMatch.index != w.matchStart || !tagMatch[2]) return false;

 // Parse the start tag parameters
 var arguments = parseStartTagParams(tagMatch[3]);
 if (!arguments) return false;
 
 // Continue processing
 var startTagEndIndex = skipEmptyEndOfLine(w.source, tagMatch.index + tagMatch[0].length);
 var endMatch = findNextPartEndOrStartTagMatch(w.source, startTagEndIndex);
 if (endMatch && endMatch[1]) {
 if (!arguments.isHidden) {
 w.nextMatch = startTagEndIndex;
 w.subWikify(w.output,partEndTagREString);
 }
 w.nextMatch = skipEmptyEndOfLine(w.source, endMatch.index + endMatch[0].length);
 
 return true;
 }
 return false;
}

config.formatters.push( {
 name: "part",
 match: "<part\\s+[^>]+>",
 
 handler: function(w) {
 if (!handlePartSection(w)) {
 w.outputText(w.output,w.matchStart,w.matchStart+w.matchLength);
 }
 }
} )

//============================================================================
// Extend "fetchTiddler" functionality to also recognize "part"s of tiddlers 
// as tiddlers.

var currentParent = null; // used for the "." parent (e.g. in the "tiddler" macro)

// Return the match to the first <part ...> tag of the text that has the
// requested partName.
//
// @return [may be null]
//
var findPartStartTagByName = function(text, partName) {
 var i = 0;
 
 while (true) {
 var tagMatch = findNextPartEndOrStartTagMatch(text, i);
 if (!tagMatch) return null;

 if (tagMatch[2]) {
 // Is start tag
 
 // Check the name
 var arguments = parseStartTagParams(tagMatch[3]);
 if (arguments && arguments.partName == partName) {
 return tagMatch;
 }
 }
 i += tagMatch[0].length;
 }
}

// Return the part "partName" of the given parentTiddler as a "readOnly" Tiddler 
// object, using fullName as the Tiddler's title. 
//
// All remaining properties of the new Tiddler (tags etc.) are inherited from 
// the parentTiddler.
// 
// @return [may be null]
//
var getPart = function(parentTiddler, partName, fullName) {
 var text = parentTiddler.text;
 var startTag = findPartStartTagByName(text, partName);
 if (!startTag) return null;
 
 var endIndexOfStartTag = skipEmptyEndOfLine(text, startTag.index+startTag[0].length);
 var indexOfEndTag = text.indexOf(partEndTagString, endIndexOfStartTag);

 if (indexOfEndTag >= 0) {
 var partTiddlerText = text.substring(endIndexOfStartTag,indexOfEndTag);
 var partTiddler = new Tiddler();
 partTiddler.set(
 fullName,
 partTiddlerText,
 parentTiddler.modifier,
 parentTiddler.modified,
 parentTiddler.tags,
 parentTiddler.created);
 partTiddler.abegoIsPartTiddler = true;
 return partTiddler;
 }
 
 return null;
}

// Hijack the store.fetchTiddler to recognize the "part" addresses.
//

var oldFetchTiddler = store.fetchTiddler ;
store.fetchTiddler = function(title) {
 var result = oldFetchTiddler.apply(this, arguments);
 if (!result && title) {
 var i = title.lastIndexOf('/');
 if (i > 0) {
 var parentName = title.substring(0, i);
 var partName = title.substring(i+1);
 var parent = (parentName == ".") 
 ? currentParent 
 : oldFetchTiddler.apply(this, [parentName]);
 if (parent) {
 return getPart(parent, partName, parent.title+"/"+partName);
 }
 }
 }
 return result; 
};


// The user must not edit a readOnly/partTiddler
//

config.commands.editTiddler.oldIsReadOnlyFunction = Tiddler.prototype.isReadOnly;

Tiddler.prototype.isReadOnly = function() {
 // Tiddler.isReadOnly was introduced with TW 2.0.6.
 // For older version we explicitly check the global readOnly flag
 if (config.commands.editTiddler.oldIsReadOnlyFunction) {
 if (config.commands.editTiddler.oldIsReadOnlyFunction.apply(this, arguments)) return true;
 } else {
 if (readOnly) return true;
 }

 return this.abegoIsPartTiddler;
}

config.commands.editTiddler.handler = function(event,src,title)
{
 var t = store.getTiddler(title);
 // Edit the tiddler if it either does not exist as a real tiddler (e.g. it is a shadowTiddler)
 // or it is not a part tiddler (part tiddlers are read-only)
 if(!t || !t.abegoIsPartTiddler)
 {
 clearMessage();
 story.displayTiddler(null,title,DEFAULT_EDIT_TEMPLATE);
 story.focusTiddler(title,"text");
 return false;
 }
}

// To allow the "./partName" syntax in macros we need to hijack 
// the invokeMacro to define the "currentParent" while it is running.
// 
var oldInvokeMacro = window.invokeMacro;
function myInvokeMacro(place,macro,params,wikifier,tiddler) {
 var oldCurrentParent = currentParent;
 if (tiddler) currentParent = tiddler;
 try {
 oldInvokeMacro.apply(this, arguments);
 } finally {
 currentParent = oldCurrentParent;
 }
}
window.invokeMacro = myInvokeMacro;

// Scroll the viewer of the given tiddler so that the anchor anchorName becomes visible.
// When no tiddler is given, the tiddler containing the target of the given event is used.
window.scrollAnchorVisible = function(anchorName, tiddler, evt) {
 var tiddlerElem = null;
 if (tiddler) {
 tiddlerElem = document.getElementById(story.idPrefix + tiddler);
 }
 if (!tiddlerElem && evt) {
 var target = resolveTarget(evt);
 tiddlerElem = story.findContainingTiddler(target);
 }
 if (!tiddlerElem) return;

 var children = tiddlerElem.getElementsByTagName("a");
 for (var i = 0; i < children.length; i++) {
 var child = children[i];
 var name = child.getAttribute("name");
 if (name == anchorName) {
 var y = findPosY(child);
 window.scrollTo(0,y);
 return;
 }
 }
}

} // of "install only once"
//}}}

/***
<html><sub><a href="javascript:;" onclick="scrollAnchorVisible('Top',null, event)">[Top]</a></sub></html>

!Licence and Copyright
Copyright (c) abego Software ~GmbH, 2006 ([[www.abego-software.de|http://www.abego-software.de]])

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or other
materials provided with the distribution.

Neither the name of abego Software nor the names of its contributors may be
used to endorse or promote products derived from this software without specific
prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.

<html><sub><a href="javascript:;" onclick="scrollAnchorVisible('Top',null, event)">[Top]</a></sub></html>
***/
Facility guiding decisions regarding the strategy to employ for rendering or wiring up connections. The PathManager is queried through the OperationPoint when executing the connection steps within the Build process.
Pipes play a central role within the Steam-Layer, because for everything placed and handled within the session, the final goal is to get it transformed into data which can be retrieved at some pipe's exit port. Pipes are special facilities, rather like an inventory, kept separate and not treated like all the other objects.
We don't distinguish between "input" and "output" ports &mdash; rather, pipes are thought of as ''hooks for making connections to''. Following this line of thought, each pipe has an input side and an output side and is in itself something like a ''Bus'' or ''processing chain''. Other processing entities like effects and transitions can be placed (attached) at the pipe, which appends them to this chain. Likewise, we can place [[wiring requests|WiringRequest]] at the pipe, meaning we want it connected so that it sends its output to another destination pipe. The [[Builder]] may generate further wiring requests to fulfil the placement of other entities.
Thus //Pipes are the basic building blocks// of the whole render network. We distinguish ''globally available'' pipes, which are like the sum groups of a mixing console, and the ''local pipes'' or [[source ports|ClipSourcePort]] of the individual clips, which exist only within the duration of the corresponding clip. The design //limits the possible kinds of pipes// to these two types &mdash; thus we can build local processing chains at clips and global processing chains at the global pipes of the session, and that's all we can do. (Because of the flexibility which comes with the concept of [[placements|Placement]], this is no real limitation.)

Pipes are denoted and addressed by [[pipe IDs|PipeID]], implemented as the ID of a pipe asset. Besides explicitly named pipes, there are some generic placeholder ~IDs for a standard-configured pipe of a given type. This avoids creating a separate ~Pipe-ID for each and every clip to build.

While pipes are rather rigid building blocks -- and especially are limited to a single StreamType without conversions -- the interconnections or ''routing'' links, to the contrary, are more flexible. They are specified and controlled through the use of an OutputDesignation, which, when fully resolved, should again yield a target ~Pipe-ID to connect. The mentioned resolution of an output designation typically involves an OutputMapping.
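The following is a minimal sketch of this resolution idea. All names used here ({{{PipeID}}}, {{{OutputDesignation}}}, {{{OutputMapping}}}) are hypothetical stand-ins for illustration only; they do not mirror the actual Lumiera classes or their interfaces.
{{{
// Illustration only -- hypothetical types, not the actual Lumiera API.
// An output designation is resolved through a mapping and finally yields
// the pipe-ID of the target bus to connect to.
#include <string>
#include <map>
#include <optional>
#include <iostream>

using PipeID = std::string;                        // stand-in for the real pipe asset ID

struct OutputDesignation {                         // symbolic spec: "where should this go?"
    std::string symbol;                            // e.g. "master(video)" or an explicit pipe name
};

class OutputMapping {                              // translates symbolic designations into real pipes
    std::map<std::string, PipeID> table_;
public:
    void define (std::string symbol, PipeID target) { table_[std::move(symbol)] = std::move(target); }

    std::optional<PipeID> resolve (OutputDesignation const& spec) const {
        auto pos = table_.find (spec.symbol);
        if (pos == table_.end()) return std::nullopt;   // unresolved: defer or fall back to defaults
        return pos->second;                             // fully resolved: a concrete pipe to connect
    }
};

int main() {
    OutputMapping routing;
    routing.define ("master(video)", "pipe:video-master");

    OutputDesignation spec{"master(video)"};
    if (auto pipe = routing.resolve (spec))
        std::cout << "connect to " << *pipe << '\n';
}
}}}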

Similar structures and mechanisms are extended beyond the core model: The GUI can connect the viewer(s) to some pipe (and moreover can use [[probe points|ProbePoint]] placed like effects and connected to some pipe), and likewise, when starting a ''render'', we get the opportunity to specify the ModelPort (exit point of a GlobalPipe) to pull the data from. Pulling data from some pipe is the (only) way to activate the render nodes network reachable from this pipe.

&rarr; [[Handling of Tracks|TrackHandling]]
&rarr; [[Handling of Pipes|PipeHandling]]

!Identification
Pipes are distinct objects and can be identified by their asset ~IDs. Besides, as for all [[structural assets|StructAsset]], there are extended query capabilities, including a symbolic pipe-id and a media (stream) type id. Any pipe can accept and deliver exactly one media stream kind (which may be inherently structured though, e.g. spatial sound systems or stereoscopic video).

!creating pipes
Pipe assets are created automatically by being used and referred to. Each [[Timeline]] holds a collection of global pipes, attached to the BindingMO, which is the representation or attachment point of the Timeline within the HighLevelModel ([[Session]]) ({{red{todo: implementation missing as of 11/09}}}), and further pipes can be created by using a new pipe reference in some placement. Moreover, every clip has an (implicit) [[source port|ClipSourcePort]], which will appear as a pipe asset when first used (referred to) while [[building|BuildProcess]]. Note that creating a new pipe implies using a [[processing pattern|ProcPatt]], which will be queried from the [[Defaults Manager|DefaultsManagement]] (resulting in the use of some preconfigured pattern or maybe the creation of a new ProcPatt object if necessary).

!removal
Deleting a Pipe is an advanced operation, because it includes finding and "detaching" all references, otherwise the pipe would leap back into existence immediately. Thus, global pipe entries in the Session and pipe references in [[locating pins|LocatingPin]] within any placement have to be removed, while clips using a given source port will be disabled. {{red{todo: implementation deferred}}}

!using Pipes
there is not much you can do directly with a pipe asset. It is a point of reference, after all. Any connection or routing to a target pipe is done by a placement with a suitable WiringPlug in some part of the timeline (&rarr; OutputDesignation), so it isn't stored with the pipe. You can edit the (user visible) description and you can globally disable a pipe asset. The pipe's ID and media stream type of course are fixed, because any connection and referral (via the asset ID) is based on them. Later on, we should provide a {{{rewire(oldPipe, newPipe)}}} operation to search any reference to the {{{oldPipe}}} and try to rewrite it to use the {{{newPipe}}}, possibly with a new media stream type.
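To make the intent of such a {{{rewire()}}} operation concrete, here is a purely hypothetical sketch; the types ({{{WiringPlug}}}, {{{PlacementRecord}}}) are invented for illustration and rewiring is reduced to rewriting references, ignoring stream type compatibility checks.
{{{
// Hypothetical sketch of the envisioned rewire(oldPipe, newPipe) operation.
// None of these types mirror the actual Lumiera implementation; the point is
// only to show that rewiring means rewriting every reference to the old pipe.
#include <cstddef>
#include <string>
#include <vector>

using PipeID = std::string;

struct WiringPlug {          // a locating pin routing some object's output to a target pipe
    PipeID target;
};

struct PlacementRecord {     // minimal stand-in for a placement with its locating pins
    std::vector<WiringPlug> plugs;
};

// Rewrite every routing reference from oldPipe to newPipe;
// returns the number of references touched.
std::size_t rewire (std::vector<PlacementRecord>& session,
                    PipeID const& oldPipe, PipeID const& newPipe)
{
    std::size_t touched = 0;
    for (auto& placement : session)
        for (auto& plug : placement.plugs)
            if (plug.target == oldPipe) {
                plug.target = newPipe;     // a real implementation would also verify
                ++touched;                 // that the new pipe's stream type is compatible
            }
    return touched;
}
}}}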
Pipes are integrated with the [[management of defaults|DefaultsManagement]]. For example, any pipe implicitly uses some [[processing pattern|ProcPatt]] &mdash; it may default to the empty pattern. This way, any kind of standard wiring might be applied to the pipes (e.g. a fader for audio, similar to the classic mixing consoles). This //is// a global property of the pipe, but &mdash; contrary to the stream type &mdash; this pattern may be switched {{red{really? -- lots of open questions here as of 11/10}}}
A straight processing chain or ''Pipe'' is a core concept in Lumiera's HighLevelModel. Like most of the other building blocks in that model, pipes are represented in the asset view, which leads to having unique pipe-asset ~IDs for all pipes encountered within the model. Now, because of the special building pattern used within that model, which is comprised of pipes and flexible interconnections, the routing and wiring can be done in terms of pipe-~IDs, using them as a short representation of an OutputDesignation. As a consequence of these relations, the pipe-~IDs are promoted to a universal key for wiring, routing and output targets, usable from the high-level objects down to the low-level nodes within the engine.
* pipe-~IDs are used to label the global busses
* pipe-~IDs are used to denote output designations for routing
* pipe-~IDs are attached to the real output possibilities, available when resolving such a designation
* pipe-~IDs serve to denote the ExitNode corresponding to a global pipe within a single segment of the low-level model
* pipe-~IDs consequently can be used to address a ModelPort, corresponding both to such a global bus and the corresponding exit nodes
A Placement represents a //relation:// it is always linked to a //Subject// (this being a [[Media Object|MObject]]) and has the meaning to //place// this Subject in some manner, either relatively to other Media Objects, by some Constraint, or simply absolute at (time, output). The latter case is especially important for the build process and thus represented by a special [[Sub-Interface ExplicitPlacement|ExplicitPlacement]]. Besides these simple cases, Placements can also express more specific kinds of "locating" an object, like placing a sound source at a pan position or placing a video clip at a given layer (above or below another video clip).

So basically placements represent a query interface: you can always ask the placement to find out about the position of the related object in terms of (time, output), and &mdash; depending on the specific object and situation &mdash; also about these additional [[placement derived dimensions|PlacementDerivedDimension]] like sound pan or layer order or similar things which also fit into the general concept of "placing" an object.

The fact of being placed in the [[Session|SessionOverview]] is constitutive for all sorts of [[MObject]]s; without a Placement they make no sense. Thus &mdash; technically &mdash; Placements act as ''smart pointers''. Of course, there are several kinds of Placements and they are templated on the type of MObject they are referring to. Placements can be //aggregated// to increasingly constrain the resulting "location" of the referred ~MObject. See &rarr; [[handling of Placements|PlacementHandling]] for more details.

!Placements as instance
Effectively, the placement of a given MObject into the Session acts as setting up a concrete instance of this object. This way, placements exhibit a dual nature. When viewed on their own, like any reference or smart-pointer they behave like values. But, by adding a placement to the session, we again create a unique distinguishable entity with reference semantics: there could be multiple placements of the same object but with varying placement properties. Such a placement-bound-into-the-session is denoted by a generic placement-ID or (as we call it) &rarr; PlacementRef; behind the scenes there is a PlacementIndex keeping track of those "instances" &mdash; allowing us to hand out the PlacementRef (which is just an opaque id) to client code outside the Steam-Layer and generally use it as a shorthand, behaving as if it were an MObject instance.
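This dual nature (shared access to the subject, plus an own identity per placement) can be illustrated by a small sketch. It is a conceptual illustration under assumptions, not the actual Lumiera {{{Placement}}} class: the name {{{Placement}}}, the use of {{{shared_ptr}}} and the random 64-bit hash are all stand-ins.
{{{
// Conceptual sketch only (hypothetical code): a placement behaves like a
// smart-pointer value, yet carries an identity hash, so each copy can become
// a distinguishable "instance" once registered with the session.
#include <memory>
#include <random>
#include <cstdint>

struct MObject { virtual ~MObject() = default; };

class Placement {
    std::shared_ptr<MObject> subject_;   // the placed media object (shared, smart-pointer style)
    uint64_t id_;                        // random hash giving this placement its own identity

    static uint64_t newID() {
        static std::mt19937_64 rng{std::random_device{}()};
        return rng();
    }
public:
    explicit Placement (std::shared_ptr<MObject> subject)
        : subject_{std::move(subject)}, id_{newID()}
    { }
    Placement (Placement const& o)                 // copies share the subject, but get a
        : subject_{o.subject_}, id_{newID()}       // fresh identity (identity is not retained
    { }                                            // on copying, as described above)
    Placement& operator= (Placement const& o) { subject_ = o.subject_; id_ = newID(); return *this; }

    MObject& operator*()  const { return *subject_; }    // dereference: reach the "real" object
    MObject* operator->() const { return subject_.get(); }
    uint64_t id()         const { return id_; }          // the tag used when bound into the session
};
}}}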
For any [[media object|MObject]] within the session, we can always at least query the time (reference/start) point and the output destination from the [[Placement]] by which the object is being handled. But the simple act of placing an object in some way can &mdash; depending on the context &mdash; create additional degrees of freedom. To list some important examples:
* placing a video clip overlapping with other clips on other tracks creates the possibility for the clip to be above another clip or to be combined in various other ways with the other clips at the same time position
* placing a mono sound object plugged to a stereophonic output destination creates the freedom to define the pan position
The Placement interface allows querying for these additional //parameter values derived from the fact of being placed.//

!defining additional dimensions
probably a LocatingPin but... {{red{TODO any details are yet unknown as of 5/08}}}
!querying additional dimensions
basically you resolve the Placement, yielding an ExplicitPlacement... {{red{TODO but any details of how additional dimensions are resolved is still undefined as of 5/08}}}
[[Placement]]s are at the very core of all [[editing operations|EditingOperations]], because they act as handles (smart pointers) to access the [[media objects|MObject]] to be manipulated. Placements themselves are lightweight and can be handled with //value semantics//. But, when adding a Placement to the [[Session]], it gains a distinguishable identity and should be treated by reference from then on: changing the location properties of this placement has a tangible effect on the way the placed object appears in the context of the session. Besides the direct (language) references, there is a special PlacementRef type which builds on this registration of the placement within the session, can be represented as a POD and thus passed over external interfaces. Many editing tasks include finding some Placement in the session and passing a reference to it as a parameter. By acting //on the Placement object,// we can change parameters of the way the media object is placed (e.g. adjust an offset), while by //dereferencing//&nbsp; the Placement object, we access the "real" media object (e.g. for trimming its length). Placements are ''templated'' on the type of the actual ~MObject they refer to, thus defining the interface/methods usable on this object.

Actually, the way each Placement ties and locates its subject is implemented by one or several small LocatingPin objects, where subclasses of LocatingPin implement the various different methods of placing and resolving the final location. Notably, we can give a ~FixedLocation or we can attach to another ~MObject to get a ~RelativeLocation, etc. In the typical use case, these ~LocatingPins are added to the Placement, but never retrieved directly. Rather, the Placement acts as a ''query interface'' for determining the location of the related object. Here, "location" can be thought of as encompassing multiple dimensions at the same time. An object can be
* located at a specific point in time
* related to and plugged into a specific output or global bus
* defined to have a position within some [[context-dependant additional dimensions|PlacementDerivedDimension]] like
** the pan position, either on the stereophonic base, or within a fully periphonic (spatial) sound system
** the layer order and overlay mode for video (normal, additive, subtractive, masking)
** the stereoscopic window position (depth parameter) for 3D video
** channel and parameter selection for MIDI data

Placements have //value semantics,// i.e. we don't stress the identity of a placement object (~MObjects on the other hand //do have// a distinguishable identity): initially, you create a Placement parametrised to some specific kind by adding [[LocatingPin]]s (fixed, relative, ...), and possibly you use a subclass of {{{Placement<MObject>}}} to encode additional type information, say {{{Placement<Clip>}}}, but later on, you treat the placement polymorphically and don't care about its kind. The sole purpose of the placement's kind is to select some virtual function implementing the desired behaviour. There is no limitation to one single Placement per ~MObject; indeed we can have several different Placements of the same MObject (from a user's point of view, these behave like clones). Besides, we can ''aggregate'' additional [[LocatingPin]]s onto one Placement, resulting in their properties and constraints being combined to yield the actual position of the referred ~MObject.
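The following sketch illustrates what aggregating locating pins and resolving them into an explicit (time, output) position could look like. It is hypothetical throughout: the names {{{LocatingPin}}}, {{{FixedLocation}}} and {{{WiringPlug}}} are borrowed from the prose above, but the actual Lumiera classes and their interfaces differ; time and output are simplified to a number and a string.
{{{
// Hypothetical sketch: locating pins chained onto a placement, resolved in
// order until time and output are both defined; surplus pins are ignored.
#include <optional>
#include <string>
#include <vector>

struct Solution {                       // the (time, output) tuple an ExplicitPlacement needs
    std::optional<long>        time;    // time position, simplified to a plain number here
    std::optional<std::string> output;  // target pipe / output designation

    bool complete() const { return time && output; }
};

struct LocatingPin {                    // base: each kind of pin contributes what it knows
    virtual ~LocatingPin() = default;
    virtual void apply (Solution&) const = 0;
};

struct FixedLocation : LocatingPin {
    long at;
    explicit FixedLocation (long t) : at{t} {}
    void apply (Solution& s) const override { if (!s.time) s.time = at; }
};

struct WiringPlug : LocatingPin {
    std::string pipe;
    explicit WiringPlug (std::string p) : pipe{std::move(p)} {}
    void apply (Solution& s) const override { if (!s.output) s.output = pipe; }
};

// Traverse the chain of pins; stop as soon as the position is fully defined.
Solution resolve (std::vector<LocatingPin const*> const& chain)
{
    Solution s;
    for (auto const* pin : chain) {
        pin->apply (s);
        if (s.complete()) break;        // overconstraining is not an error: extra pins are ignored
    }
    return s;
}
}}}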

!design decisions
* the actual way of placing is implemented similar to the ''State Pattern'' by small embedded LocatingPin objects.
* these LocatingPin objects form a ''decorator'' like chain
* resolving into an ExplicitPlacement traverses this chain
* //overconstraining// a placement is not an error; we just stop traversing the chain (ignoring the remaining additional Placements) the moment the position is completely defined.
* placements can be treated like values, but incorporate an identity tag for the purpose of registering with the session.
* we provide subclasses to be able to form collections of e.g. {{{Placement<Effect>}}}, but we don't stress polymorphism here. &rarr; PlacementType
* Why was the question how to access a ~MObject subinterface a Problem?
*# we want the Session/Fixture to be a collection of Placements. This means, either we store pointers, or Placement needs to be //one// unique type!
*# but if Placement is //a single type//, then we can get only MObjects from a Placement.
*# then either we had to do everything by a visitor (which gets the concrete subtype dynamically), or we'd end up switching on type.

An implementation facility used to keep track of individual Placements and their relations.
Especially, the [[Session]] maintains such an index, allowing the use of the (opaque) PlacementRef tags for referring to a specific "instance" of an MObject, //placed// in a unique way into the current session. Moreover, this index allows one placement to refer to another placement, so as to implement a //relative// placement mode. Because there is an index behind the scenes, it is possible to actually access such a referral in the reverse direction, which is necessary for implementing the desired placement behaviour (if an object instance used as anchor is moved, all objects placed relatively to it have to move accordingly, which necessitates finding those other objects).

Besides searching, [[placement instances|Placement]] can be added to the index, thereby creating a copy managed by the backing data structure. Thus, the session's PlacementIndex is the backbone of the session data structure, and the session's contents are actually contained within it.

!rationale of the chosen implementation
What implementation approach to take for the index largely depends on the usage pattern. Generally speaking, the Lumiera [[Session]] is a collection of MObjects attached by [[Placement]]; any relations between these objects are established on a logical level, implemented as markers within the respective placements. For [[building the render nodes graph|BuildProcess]], a consolidated view of the session's effective contents is created ("[[Fixture]]"), then to be traversed while emitting the LowLevelModel. The GUI is expected to query the index to discover parts of the structure; the [[object reference tags|MObjectRef]] returned by these queries will be used as arguments to any mutating operation on the objects within the session.

Using a ''flat hashtable'' allows accessing a Placement denoted by ID in O(1). This way we get the Placement, but nothing more. So, additionally, we'd have to set up a data record holding additional information:
* the [[scope|PlacementScope]] containing this placement
* allowing to create a path "up" from this scope, which is used for resolving any queries
* (maybe/planned) relations to other placements

Alternatively, we could try to use a ''structure based index'', thereby avoiding the mentioned description record by folding any of the contained information into the surrounding data structure:
* the scope would be obvious from the index, resp. from the path used to resolve this index
* any other information, especially the relations, would be folded into the placement
* this way, the "index" could be reduced to being the session data structure itself.

//does a placement need to know its own ID?//&nbsp; Obviously, there needs to be a way to find out the ID for a given placement, especially in the following situations:
* for most of the operations above, when querying additional information from index for a given placement
* to create a PlacementRef (this is a variant of the "shared ptr from this"-problem)
On closer inspection, this problem turns out to be more involved, because either we have to keep a second index table for the reverse lookup (memory address &rarr; ID), or we have to tie the placement by a back-link when adding it to the index/session data structure, or (alternatively) it forces us to store a copy of the ID //within// the placement itself. The last possibility seems to create the least impact; but implementing it this way effectively gears the implementation towards a hashtable based approach.


!supported operations
The placement index is utilized by editing operations and by executing the build process. Besides these core operations it allows for resolving PlacementRef objects. This latter functionality is used by all kinds of relative placements and for dealing with them while building, but it is also used to resolve [[object reference tags|MObjectRef]], which possibly may have been handed out via an external API or may have crossed layer boundaries. From these use cases we derive the following main operations to support:
* find the actual [[Placement]] object for a given ID
* find the //scope//&nbsp; a given placement resides in. More specifically, find the [[placement defining this scope|PlacementScope]]
* find (any/all) other placements referring to a given placement ("within this scope")
* add a new placement to a scope given as parameter
* remove a placement from index
* (planned) move a placement to a different scope within the session

!!!Handling of Subtypes
While usually type relations don't carry over to smart-pointer-like types, in the case of Placement I used a special definition pattern to artificially create such type relations ({{{Placement<session::Clip>}}} is a subclass of {{{Placement<MObject>}}}). Now, as we're going to copy and maintain Placements within the storage backing the index, the question is: do we actually store subtypes (all having the same size btw) or do we use a vtable based mechanism to recover the type information on access?

Actually, the handling of placement types is quite flexible; the actual hierarchy of Placement types can be determined in the //usage context// &mdash; it is not really stored within the placement, and there is no point in storing it within the index. Only the type of the //pointee//&nbsp; can be checked with the help of Placement's vtable.

Thus, things just fall into place here, without the need of any additional implementation logic. The index stores {{{Placement<MObject>}}} instances. The usage context will provide a suitable meaning for more specifically typed placements, and as long as this is in line with the type relations on the pointee(s), as checked by the {{{Placement::isCompatible<TY>()}}} call, the placement relations will just work out right by the cast happening automatically on results retrieval.
&rarr; see PlacementType

!implementation
Consequently, we incorporate a random hash (implemented as {{{LUID}}}) into the individual placement, this way creating a distinguishable //placement identity,// which is //not retained on copying.// The actual ID tag is complemented by a compile-time type (template parameter), thus allowing additional context information to be passed on through API calls. Placements themselves use a vtable (and thus RTTI), allowing the exact type to be re-discovered at runtime. Any further relation information is contained within the placement's [[locating pins|LocatingPin]]; thus, any further description records can be avoided by storing the placements immediately //within the index.// To summarise, the implementation is comprised of the following parts (roughly sketched in code after this list):
* a main table resolving hash-ID to storage location
* information about the enclosing scope for each placement, stored within the main entry
* a reverse lookup table to find all placements contained within a given scope
* an instance holding and managing facility based on pooled allocation
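The sketch below shows only the table layout of such an index, under assumptions: the names ({{{PlacementIndexSketch}}}, {{{IndexEntry}}}, {{{StoredPlacement}}}) are invented, the ID is reduced to a plain integer, and the pooled allocation and removal logic (which would also have to clean the reverse table) is omitted.
{{{
// Minimal sketch of the index layout described above (hypothetical types):
// a main hashtable from placement-ID to the stored placement plus its scope,
// and a reverse table from scope-ID to the IDs contained in that scope.
#include <unordered_map>
#include <vector>
#include <cstdint>

using PlacementID = uint64_t;

struct StoredPlacement { /* the placement copy managed by the index */ };

struct IndexEntry {
    StoredPlacement placement;     // the index owns this copy
    PlacementID     scope;         // ID of the placement defining the enclosing scope
};

class PlacementIndexSketch {
    std::unordered_map<PlacementID, IndexEntry>               main_;     // O(1) lookup by ID
    std::unordered_map<PlacementID, std::vector<PlacementID>> byScope_;  // reverse lookup
public:
    void add (PlacementID id, StoredPlacement p, PlacementID scope) {
        main_[id] = IndexEntry{std::move(p), scope};
        byScope_[scope].push_back (id);
    }
    IndexEntry const* find (PlacementID id) const {
        auto pos = main_.find (id);
        return pos == main_.end() ? nullptr : &pos->second;
    }
    std::vector<PlacementID> const& contentsOfScope (PlacementID scope) const {
        static std::vector<PlacementID> const empty;
        auto pos = byScope_.find (scope);
        return pos == byScope_.end() ? empty : pos->second;
    }
};
}}}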
A generic reference mechanism for Placements, as added to the current session.
While this reference itself is not tied to the actual memory layout (meaning it's //not// a disguised pointer), the implementation relies on a [[placement index facility|PlacementIndex]] for tracking and retrieving the actual Placement implementation object. As a plus, this approach allows creating active interconnections between placements. We utilise this possibility to create a system of [[nested scopes|PlacementScope]]. The index facility allows the relation denoted by such a reference to be //reversed//, inasmuch as it is possible to retrieve all other placements referring to a given target placement. But for an (external) user, this link to an index implementation is kept transparent and implicit.

!implementation considerations
From the usage context it is clear that the PlacementRef needs to incorporate a simple ID as the only actual data in memory, so it can be downcast to a POD and passed as such via LayerSeparationInterfaces. And, of course, this ID tag should be the one used by PlacementIndex for organising the Placement index entries, thus enabling the PlacementRef to be passed immediately to the underlying index for resolution. Thus, this design decision is interconnected with the implementation technique used for the index (&rarr; PlacementIndex). From the requirement that the ID tag be contained in fixed-size storage, and also from the expected kinds of queries, Ichthyo (5/09) chose to incorporate a {{{LUID}}} as a random hash immediately into the placement and build the ID tag on top of it.

!using placement references
Placement references can be created directly from a given placement, or just from a {{{Placement::ID}}} tag. Creation and dereferencing can fail, because the validity of the reference is checked against the index. This implies accessing the //current session// behind the scenes. Placement references have value semantics. Dereferencing searches the denoted Placement via the index, yielding a direct (language) ref.

Placement references mimic the behaviour of a real placement, i.e. they proxy the placement API (actually the passive, query-related part of the placement's API functions), while directly forwarding calls to the pointee (~MObject or subclass) when using {{{operator->()}}}. They can be copied and especially allow a lot of assignments (from placement, placement-ID or even plain LUID), even including a dynamic downcast on the pointee. The bottom line is that a placement ref can pretty much be used in place of a language ref to a real placement, which is crucial for implementing MObjectRef.
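A rough sketch of this proxying idea follows. All names here are hypothetical ({{{PlacementRefSketch}}}, {{{lookupInSession}}}, the global registry); the real PlacementRef resolves through the session's PlacementIndex and proxies a much richer API.
{{{
// Hypothetical sketch of a placement reference: only the ID lives in the ref;
// dereferencing resolves it through an index lookup on every access.
#include <unordered_map>
#include <stdexcept>
#include <cstdint>

struct MObject { virtual ~MObject() = default; };
struct Placement { MObject* subject = nullptr; /* ... */ };

// trivial stand-in registry, just to make the sketch self-contained
std::unordered_map<uint64_t, Placement> sessionIndex;

Placement* lookupInSession (uint64_t id) {
    auto pos = sessionIndex.find (id);
    return pos == sessionIndex.end() ? nullptr : &pos->second;   // nullptr when the ID is stale
}

class PlacementRefSketch {
    uint64_t id_;                                   // the only data member: fits into a POD / LUID
public:
    explicit PlacementRefSketch (uint64_t id) : id_{id} {}

    bool isValid() const { return lookupInSession (id_) != nullptr; }

    Placement& operator*() const {                  // proxy the placement itself
        if (Placement* p = lookupInSession (id_)) return *p;
        throw std::runtime_error ("stale placement reference");
    }
    MObject* operator->() const {                   // forward directly to the pointee
        return (**this).subject;
    }
};
}}}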

MObjects are attached into the [[Session]] by adding a [[Placement]]. Because this especially includes the possibility of //grouping or container objects,// e.g. [[sequences|Sequence]] or [[forks ("tracks")|Fork]] or [[meta-clips|VirtualClip]], any placement may optionally define and root a scope, and every placement is contained in at least one encompassing scope &mdash; of course with the exception of the absolute top level, which can be thought of as being contained in a scope of handling rules.

Thus, while the [[sequences|Sequence]] act as generic containers holding a pile of placements, actually there is a more fine grained structure based on the nesting of the tracks, which especially in Lumiera's HighLevelModel belong to the sequence (they aren't a property of the top level timeline as one might expect). Building upon these observations, we actually require each addition of a placement to specify a scope. Consequently, for each Placement at hand it is possible to determine a //containing scope,// which in turn is associated with some Placement of a top-level ~MObject for this scope. The latter is called the ''scope top''. An example would be the {{{Placement<Track>}}} acting as the scope of all the clips placed onto this track. The //implementation//&nbsp; of this tie-to-scope is provided by the same mechanism as utilised for relative placements, i.e. a directional placement relation. Actually, this relation is implemented by the PlacementIndex within the current [[Session]].


[>img[Structure of Placement Scopes|draw/scopeStructure1.png]]
!Kinds of scopes
There is only a limited number of situations constituting a scope
* conceptually, the very top level is a scope of general rules.
* the next level is the link of [[binding|BindingMO]] of a [[Sequence]] into either a (top-level) [[Timeline]] or as virtual media into a VirtualClip. It is implemented through a {{{Placement<Binding>}}}.
* each sequence has at least one (mandatory) top-level placement holding its root track
* tracks may contain nested sub tracks.
* clips and (track-level) effects likewise are associated with an enclosing track.
* an important special case of relative placement is when an object is [[attached|AttachedPlacementProblem]] to another leading object, like e.g. an effect modifying a clip
__note__: attaching a Sequence in multiple ways &rarr; [[causes scoping problems|BindingScopeProblem]]

!Purpose of Placement scoping
Similar to the common mechanisms of object visibility in programming languages, placement scopes guide the search for and resolution of properties of a placement. Any such property //not defined locally// within the placement is queried by ascending through the sequence of nested scopes. Thus, global definitions can be shadowed by local ones.
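The lookup rule can be condensed into a tiny sketch. The representation is hypothetical (a parent pointer and a string-keyed property map); in Lumiera the scope path is derived via the PlacementIndex, not stored explicitly like this.
{{{
// Sketch (hypothetical types): a property query not answered by the placement
// itself ascends through the chain of enclosing scopes, so local definitions
// shadow more global ones.
#include <optional>
#include <map>
#include <string>

struct ScopeNode {
    ScopeNode const*                   parent = nullptr;   // nullptr at the top level (rule scope)
    std::map<std::string, std::string> properties;         // locally defined placement properties
};

std::optional<std::string>
resolveProperty (ScopeNode const& start, std::string const& key)
{
    for (ScopeNode const* scope = &start; scope; scope = scope->parent) {
        auto pos = scope->properties.find (key);
        if (pos != scope->properties.end())
            return pos->second;               // found locally: shadows any definition further up
    }
    return std::nullopt;                      // not defined anywhere along the scope path
}
}}}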
Placement is a smart-ptr. Usually smart-pointers are templated on the pointee type, but a type relation between different target types doesn't carry over into a type relation on the corresponding smart-pointers. Now, as a [[Placement]] or a PlacementRef is often used to designate a specific "instance" of an MObject placed into the current session, the type parametrisation plays a crucial role when it comes to processing the objects contained within the session, because the session deliberately has not much additional structure besides the structure created by [[scopes and aggregations|PlacementScope]] within the session's contents.

To this end, we're using a special definition pattern for Placements, so
* a placement can refer to a specific sub-Interface like Fork ("track"), Clip, Effect
* a specialised placement can stand-in for the more generic type.

!generic handling
Thus, ~MObject and Placement<~MObject> relate to the "everything is an object" view of affairs. More specific placements are registered, searched and retrieved within the session through this generic interface. In a similar vein, ~PlacementRef<~MObject> and MObjectRef are used on LayerSeparationInterfaces. This works, because it is possible to re-discover the more fine-grained target type.
* ''active type rediscovery'' happens when [[using visitors|VisitorUse]], which requires support by the pointee types (~MObject subclasses), so the visitor implementation is able to build a trampoline table to dispatch into a specifically typed context.
* ''passive type rediscovery'' is possible whenever the usage context //is already specifically typed,// because in this case we can check the type (by RTTI) and filter out any placement not convertible to the type requested within the given context (see the sketch after this list).
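The passive case boils down to an RTTI filter over generically stored placements, roughly what a check like {{{Placement::isCompatible<TY>()}}} provides. The code below is an assumption-laden illustration, not the Lumiera implementation; {{{GenericPlacement}}} and {{{filterByType}}} are invented names.
{{{
// Sketch of "passive type rediscovery" (hypothetical code): placements are
// stored generically; a typed query filters them by the pointee's RTTI.
#include <memory>
#include <vector>

struct MObject { virtual ~MObject() = default; };
struct Clip   : MObject { };
struct Effect : MObject { };

struct GenericPlacement { std::shared_ptr<MObject> subject; };

template<class TY>
std::vector<GenericPlacement>
filterByType (std::vector<GenericPlacement> const& scopeContents)
{
    std::vector<GenericPlacement> result;
    for (auto const& pl : scopeContents)
        if (dynamic_cast<TY*> (pl.subject.get()))   // RTTI check on the pointee
            result.push_back (pl);
    return result;
}

// usage: auto clips = filterByType<Clip> (contentsOfCurrentScope);
}}}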

!downcasting and slicing
Deliberately, all Placements have the same runtime size. Handling them value-like under certain circumstances is intended and acceptable. Of course then slicing on the level of the Placement will happen. But because the Placement actually is a smart-pointer, the pointee remains unaffected, and can be used later to re-gain the fully typed context.

On the other hand, care has to be taken when ''downcasting'' a placement. When possible, this should be preceded by a {{{Placement::isCompatible<TY>()}}}-call, which checks based on the pointee's RTTI. Client code is encouraged to avoid explicit casting and rather rely on the provided facilities:
* invoking one of the templated access functions of the PlacementIndex
* using the QueryFocus to issue a specifically typed {{{ScopeQuery<TY>}}} (which yields a suitable iterator)
* create a specifically typed MObjectRef and bind it to some reference source (~Placement-ID, LUID, Placement instance within the session)
* implementing a visitor (~BuilderTool)

//This page is a scrapbook for working out the implementation of how to (re)build the [[Fixture]]//
Structurally, (re)building the Fixture rather belongs to the [[Session]], but it is implemented very similarly to the render engine build process: by treating all ~MObjects found in the various [[sequences|Sequence]] with a common [[visiting tool|VisitorUse]], this tool collects a simplified view with everything made explicit, which can be pulled off as the Fixture, i.e. a special kind of sequence list, afterwards.
* there is a //gathering phase// and a //solving phase//, the gathering is done by visiting.
* during the gathering phase, there ''needs to be a lock'' preventing any other edit operation.
* the solving is delegated to the individual ~Placements. It is effectively a {{{const}}} operation creating an ExplicitPlacement (copy)
* thus the Fixture contains these newly created ~ExplicitPlacements, referring to ~MObjects shared with the original Placements within the sequences (a rough sketch of this two-phase approach follows below)
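Since this page is a scrapbook, the following is only a schematic sketch of the gather-then-solve split under assumptions: the types are placeholders, the visiting of the sequences is reduced to a copy, and {{{solve()}}} stands in for the placement's own resolution into an ExplicitPlacement.
{{{
// Sketch of the two phases outlined above (hypothetical types): gather all
// placements from the sequences while holding a lock, then solve each one
// into an ExplicitPlacement copy that goes into the new Fixture.
#include <algorithm>
#include <mutex>
#include <vector>

struct Placement         { /* as bound into the session */ };
struct ExplicitPlacement { long time = 0; /* fully resolved (time, output) */ };

// stand-in for the const resolution operation delegated to each placement
ExplicitPlacement solve (Placement const&) { return {}; }

std::vector<ExplicitPlacement>
rebuildFixture (std::vector<Placement> const& sessionContents, std::mutex& sessionLock)
{
    std::vector<Placement> gathered;
    {   // gathering phase: no concurrent edit operations allowed
        std::lock_guard<std::mutex> guard{sessionLock};
        gathered = sessionContents;                    // "visiting" the sequences, simplified to a copy
    }
    std::vector<ExplicitPlacement> fixture;            // solving phase: can run outside the lock
    fixture.reserve (gathered.size());
    for (auto const& pl : gathered)
        fixture.push_back (solve (pl));
    std::sort (fixture.begin(), fixture.end(),
               [](auto const& a, auto const& b){ return a.time < b.time; });
    return fixture;                                    // one sorted timeline of ExplicitPlacements
}
}}}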

!!!prerequisites
* Session and sequences exist.
* Pipes exist and are configured

!!!postconditions
* the Fixture contains one sorted timeline of ExplicitPlacement instances
* Anything in this list is actually to be rendered
* {{red{TODO: how to store and group the effects?}}}
* any meta-clips or other funny things have been resolved to normal clips with placement
* any multichannel clips have been broken down to elementary clips {{red{TODO: what is "elementary". e.g. stereo sound streams?}}}
* any globally or otherwise strangely placed effects have been attached either to a clip or to some pipe
* we have one unified list of tracks

<<tasksum start>>
<<taskadder below>>
<<task >> work out how to get the processing of effects chained to some clip right
<<task >> work out how to handle multichannel audio (and stereo video)

!gathering phase
!!preparing
<<task>>what data collections to build?

!!treating a Track
<<task>>work out how to refer to pipes and do other config
<<task>>get some unique identifier and get relevant properties

!!treating a {{{Placement<Clip>}}}
<<task>>check the direct enablement status
<<task>>assess the compound status, maybe process recursively

!!treating a {{{Placement<Effect>}}}
<<task>>find out the application point {{red{really?}}}

!solving phase
<<task>>trigger solving on all placements
<<task>>sort the resulting ~ExplicitPlacements

<<tasksum end>>
//This page is a scrapbook for working out the implementation of the builder//

* NodeCreatorTool is a [[visiting tool|VisitorUse]]
* the render engine to be built is contained as state within this tool object while it is passed around
!!!prerequisites
* Session and sequences exist.
* Pipes exist and are configured
* Fixture contains ExplicitPlacement for every MObject to be rendered, and nothing else

<<tasksum start>>
<<taskadder>>

!!preparing
We need a way of addressing existing [[pipes|Pipe]]. Besides, as the Pipes and Tracks are referred to by the Placements we are processing, they are guaranteed to exist.

!!treating a Pipe
<<task>>get the [[processing pattern|ProcPatt]] of the pipe by accessing the underlying pipe asset.
<<task>>process this ProcPatt recursively

!!treating a processing pattern
<<task>>{{red{finally go ahead and define what a ProcPatt needs to be...}}}

!!treating a {{{Placement<Clip>}}}
<<task>>get the ProcPatt of the underlying media (asset)
<<task>>process the ProcPatt recursively
<<task>>access the ClipSourcePort (which may be created on-the-fly)
<<task>>enqueue a WiringRequest for connecting the source pipeline to the source port 
<<task>>process the clip's render pipe recursively (thus adding the camera etc.)
<<task>>enqueue a WiringRequest for any placement to some pipe for this clip.
* __note__: we suppose
** all wiring requests will be done after the processing of entities
** all effects placed to this clip will be processed after this clip (but before the wiring requests)

!!treating a {{{Placement<Effect>}}}
<<task>>{{red{how to assure that effects are processed after clips/pipes??}}}
<<task>>find out the application point
<<task>>build a transforming node for the effect and insert it there

!!postprocessing
<<task>>sort and group the assembled list of [[wiring requests|WiringRequest]] by pipes

<<tasksum end>>
With //play process//&nbsp; we denote an ongoing effort to calculate a stream of frames for playback or rendering.
The play process is a conceptual entity linking together several activities in the Vault-Layer and the RenderEngine. Creating a play process is the central service provided by the [[player subsystem|Player]]: it maintains a registration entry for the process to keep track of associated entities, allocated resources and, as a consequence, the calls [[planned|FrameDispatcher]] and [[invoked|RenderJob]]; and it wires and exposes a PlayController to serve as an interface and information hub.

''Note'': the player is in no way engaged in any of the actual calculation and management tasks necessary to make this [[stream of calculations|CalcStream]] happen. The play process code contained within the player subsystem is largely comprised of organisational concerns and not especially performance critical.
* the [[engine backbone|RenderBackbone]] is responsible for [[dispatching|FrameDispatcher]] the [[calculation stream|CalcStream]] and preparing individual calculation jobs
* the [[Scheduler]] at the [[engine core|RenderEngine]] has the ability to trigger and carry out individual [[frame calculation jobs|RenderJob]].
* the OutputSlot exposed by the [[output manager|OutputManagement]] is responsible for accepting timed frame delivery

[>img[Anatomy of a Play Process|uml/fig144005.png]]
!Anatomy of a Play Process
The Controller is exposed to the client and acts as frontend handle, while the play process body groups and manages all the various parts cooperating to generate output. For each of the participating global pipes we get a [[feed|Feed]] to drive that pipeline to deliver media of a specific kind.

Right within the play process, there is a separation into two realms, relying on different programming paradigms. Obviously the play controller is a state machine, and similarly the body object (play process) has a distinct operation state. Moreover, the current collection of individual objects hooked up at any given instant is a stateful variable. To the contrary, when we enter the realm of actual processing, operations are carried out in parallel, relying on stateless descriptor objects, wired into individual calculation jobs, to be scheduled as non-blocking units of operation. For each series of consecutive frames to be calculated, there is a descriptor object, the CalcStream, which also links to a specifically tailored dispatcher table, allowing the individual frame jobs to be scheduled. Whenever the controller determines a change in the playback plan (speed change, skip, scrubbing, looping, ...), a new CalcStream is created, while the existing one is just used to mark any not-yet processed job as superseded.
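The "new CalcStream per plan change" idea can be sketched as follows. This is a deliberately reduced illustration with hypothetical names ({{{PlaybackPlan}}}, {{{PlayProcessSketch}}}); the real CalcStream carries the dispatcher table and much more, and superseding is coordinated with the scheduler rather than through a bare flag.
{{{
// Hypothetical sketch: the CalcStream descriptor itself stays immutable;
// a plan change only creates a new descriptor and flags the old one, so
// not-yet-processed jobs of the old stream can bail out as superseded.
#include <atomic>
#include <memory>
#include <vector>

struct PlaybackPlan { double speed = 1.0; long startFrame = 0; };

struct CalcStream {
    PlaybackPlan      plan;                  // stateless description of the frame series
    std::atomic<bool> superseded{false};     // checked by each job before doing real work

    explicit CalcStream (PlaybackPlan p) : plan{p} {}
};

class PlayProcessSketch {
    std::vector<std::shared_ptr<CalcStream>> streams_;
public:
    std::shared_ptr<CalcStream> changePlan (PlaybackPlan newPlan) {
        if (!streams_.empty())
            streams_.back()->superseded = true;          // pending jobs will see the flag and skip
        streams_.push_back (std::make_shared<CalcStream> (newPlan));
        return streams_.back();                          // schedule new jobs against this descriptor
    }
};
}}}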

&rarr; for overview see also OutputManagement
The [[Player]] is an independent [[Subsystem]] within Lumiera, located at Steam-Layer level. A more precise term would be "rendering and playback coordination subsystem". It provides the capability to generate media data, based on a high-level model object, and send this generated data to an OutputDesignation, creating a continuous and timing controlled output stream. Clients may utilise this functionality through the ''play service'' interface.

!subject of performance
Every play or render process will perform a part of the session. This part can be specified in various ways, but in the end, every playback or render boils down to //performing some model ports.// While the individual model port as such is just an identifier (actually implemented as a ''pipe-ID''), it serves as a common identifier used at various levels and tied into several related contexts. For one, by querying the [[Fixture]], the ModelPort leads to the actual ExitNode -- the stuff actually producing data when being pulled. Besides that, the OutputManager used for establishing the play process is able to resolve onto a real OutputSlot -- which, as a side effect, also yields the final data format and data implementation type to use for rendering or playback.
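The pairing of these two lookups can be pictured as a tiny sketch. Everything here is a placeholder ({{{Fixture}}}, {{{OutputManager}}}, {{{Feed}}}, the map-based lookups); the point is only that one pipe-ID keys both the exit node to pull and the output slot to deliver to.
{{{
// Illustration (hypothetical types): performing a model port means pairing
// two lookups keyed by the same pipe-ID -- the Fixture yields the ExitNode
// to pull, and the OutputManager yields the OutputSlot accepting the frames.
#include <map>
#include <optional>
#include <string>

using PipeID = std::string;

struct ExitNode   { /* low-level node producing data when pulled */ };
struct OutputSlot { /* accepts timed frame delivery */ };

struct Fixture {                                   // grossly simplified: one exit node per port
    std::map<PipeID, ExitNode> exitNodes;
    ExitNode* exitNodeFor (PipeID const& id) {
        auto pos = exitNodes.find (id);
        return pos == exitNodes.end() ? nullptr : &pos->second;
    }
};
struct OutputManager {                             // resolves the designation onto a real slot
    std::map<PipeID, OutputSlot> slots;
    OutputSlot* allocateSlotFor (PipeID const& id) {
        auto pos = slots.find (id);
        return pos == slots.end() ? nullptr : &pos->second;
    }
};

struct Feed { ExitNode* producer; OutputSlot* sink; };

std::optional<Feed>
connectFeed (Fixture& fixture, OutputManager& outputs, PipeID const& port)
{
    ExitNode*   node = fixture.exitNodeFor (port);
    OutputSlot* slot = outputs.allocateSlotFor (port);
    if (!node || !slot) return std::nullopt;       // port unknown or no output available
    return Feed{node, slot};
}
}}}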


!provided services
* creating a PlayProcess
* managing existing play processes
* convenience short-cuts for //performing//&nbsp; several kinds of high-level model objects

!creating a play process
This is the core service provided by the player subsystem. The purpose is to create a collaboration between several entities, creating media output
;data producers
:a set of ModelPort elements to ''pull'' for generating output. They can either be handed in directly, or resolved from a set of OutputDesignation elements
;data sinks
:to be able to create output, the PlayProcess needs to cooperate with [[output slots|OutputSlot]]
:physical outputs are never handled directly, rather, the playback or rendering needs an OutputManager to resolve the output designations into output slots
;controller
:when provided with these two prerequisites, the play service is able to build a PlayProcess.
:for clients, this process can be accessed and maintained through a PlayController, which acts as (copyable) handle and front-end.
;engine
:the actual processing is done by the RenderEngine, which in itself is a compound of several services within Vault-Layer and Steam-Layer
:any details of this processing remain opaque for the clients; even the player subsystem just accesses the EngineFaçade
//Integration effort to promote the development of rendering, playback and video display in the GUI//
This IntegrationSlice was started in {{red{2023}}} as [[Ticket #1221|https://issues.lumiera.org/ticket/1221]] to coordinate the completion and integration of various implementation facilities planned, drafted and built during the last years; this effort marks the return of development focus to the lower layers (after years of focussed UI development) and will implement the asynchronous and time-bound rendering coordinated by the [[Scheduler]] in the [[Vault|Vault-Layer]].

<html>
<img title="Components participating in the »Playback Vertical Slice«" src="draw/VerticalSlice.Playback.svg" style="width:90%;"/>
</html>


!Ascent
__12.Apr.23__: At the start, this is a dauntingly complex effort, demanding the reconciliation of several unfinished design drafts from years ago, unsuccessful attempts at that time towards a first »breakthrough«: these include a first run-up towards node invocation, the drafts regarding BuilderMechanics and FixtureDatastructure, a complete but never actually implemented OutputManagement concept, and the groundwork pertaining to the [[Player]]. At that time, it occurred to me that the planning of render jobs exhibits structures akin to the //Monads// known from functional programming -- seemingly a trending topic. Following this blueprint, it was indeed straightforward to hook up all functional dependencies into a working piece of code -- a piece of code, however, that turned out almost impenetrable after completion, since while it can be //verified// step by step, it does not support understanding or convey meaning. This experience (and a lot of similar ones) makes me increasingly wary of the self-proclaimed superiority of functional programming. Especially the Monads might be considered an anti-pattern: something superficially compelling that lures into fostering unhealthy structures.
&rarr; see the critical review in AboutMonads

So the difficulties in understanding my own (finished, working) code after several years compelled me to attempt a [[#1276|https://issues.lumiera.org/ticket/1276#comment:1]] refactoring of the FrameDispatcher, which I use as an entrance point into the implementation of this //vertical slice//. This time I will approach the task as an //on-demand processing pipeline// with //recursive expansion// -- attempting to segregate better what the monadic approach tended to interweave.

__May.23__: taking a //prototyping approach// now, since further development was hampered by incomplete requirements analysis interlocked with incomplete implementation drafts. Based on a rough preconception (drafted &rarr; [[here|FrameDispatcher]] and in my //Mindmap//), a hierarchy of mocked data structures will be built up, which can then support erecting the remoulded internals of the dispatcher. After these are back to a working state, the focus will move downwards, thereby step by step replacing the mocked structures by real structures -- only then will it be possible to assess the new design. So presumably the next steps will be
* ✔ augment the {{{DummyJob}}} to allow tracing Job invocations in tests
* ✔ build a {{{MockJobTicket}}} on top, implemented as subclass of the actual JobTicket
* ✔ build a {{{MockSegmentation}}} to hold onto ~JobTickets, which can be created as Mock
* ✔ define a simple specification language (based on the existing {{{GenNode}}}-DSL) to define segments, tickets and prerequisite jobs
* ✔ implement a »~Split-Splice« algorithm for &rarr; SegmentationChange, rigged accordingly to generate a mocked Segmentation for now
* ✔ create a testbed to assemble a JobPlanningPipeline step by step (&rarr; [[#920|https://issues.lumiera.org/ticket/920]] and [[#1275|https://issues.lumiera.org/ticket/1275]])

__June.23__: building upon this prototyping approach, the dispatcher pipeline could be rearranged in the form of a pipeline builder, allowing the originally used implementation scheme based on »Monads« to be retracted. The implementation of the Dispatcher is complete, yet the build-up of the [[»Render Drive« #1301|https://issues.lumiera.org/ticket/1301]] could not reasonably be completed, due to the lack of a clearly shaped ''Scheduler interface''.

__July.23__: this leads to a shift of work focus towards implementing the [[Scheduler]] itself.
The Scheduler will be structured into two layers, where the lower layer is implemented as a //priority queue// (using the STL). So the most tricky part to solve is the representation of //dependencies// between jobs, with the possible extension to handling IO operations asynchronously. Analysis and planning of the implementation indicate that the [[scheduler memory management|SchedulerMemory]] can be based on //Extents//, which are interpreted as »Epochs« with a deadline (a rough sketch of the queue idea follows after the list below). These considerations imply what steps to take next for building up the Scheduler functionality and memory management required for processing a simple job
* ✔ build a first working draft for the {{{BlockFlow}}} allocation scheme [[#1311|https://issues.lumiera.org/ticket/1311]]
* ✔ define and cover the basic [[Activities|RenderActivity]] necessary to implement a plain-simple-Job (without dependencies)
* ✔ pass such an Activity through the two layers of the Scheduler
* ✔ establish a baseline for //performance measurements//
* ⌛ adapt the [[job planning pipeline|JobPlanningPipeline]] implemented thus far to produce the appropriate {{{Activity}}} records for the scheduler
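The sketch below condenses the lower-layer idea to its bare minimum: a priority queue ordered by start time, with a deadline per job. All names are hypothetical, and the real Scheduler additionally handles dependencies, the worker pool, load control and the Epoch-based memory scheme, none of which appear here.
{{{
// Very condensed sketch of the lower Scheduler layer described above:
// a priority queue ordered by nominal start time (hypothetical names).
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

struct ScheduledJob {
    int64_t               start;       // nominal start time (e.g. microseconds)
    int64_t               deadline;    // after this point the job is obsolete
    std::function<void()> work;
};

struct EarlierStart {
    bool operator() (ScheduledJob const& a, ScheduledJob const& b) const {
        return a.start > b.start;      // min-heap on the start time
    }
};

class SchedulerQueueSketch {
    std::priority_queue<ScheduledJob, std::vector<ScheduledJob>, EarlierStart> queue_;
public:
    void post (ScheduledJob job) { queue_.push (std::move(job)); }

    // called by a worker pulling work: run every job that is due, drop obsolete ones
    void pullWork (int64_t now) {
        while (!queue_.empty() && queue_.top().start <= now) {
            ScheduledJob job = queue_.top();
            queue_.pop();
            if (now <= job.deadline)   // past-deadline jobs are silently discarded
                job.work();
        }
    }
};
}}}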

__December.23__: building the Scheduler required time and dedication, including some related topics like a [[suitable memory management scheme|SchedulerMemory]], rework and modernisation of the [[#1279 thread handling framework|https://issues.lumiera.org/ticket/1279]], using a [[worker pool|SchedulerWorker]] and developing the [[foundation for load control|SchedulerLoadControl]]. This amounts to the creation of a considerable body of new code; some &rarr;[[load- and stress testing|SchedulerTest]] helps to establish &rarr;[[performance characteristics and traits|SchedulerBehaviour]].

__April.24__: after completing an extended round of performance tests for the new Scheduler, development focus now shifts upwards to the [[Render Node Network|ProcNode]], where Engine activity is carried out. This part was addressed at the very start of the project, and again later -- yet could never be completed, due to a lack of clear reference points and technical requirements. Hope for a breakthrough now rests on this integration effort.
__June.24__: assessment of the existing code indicated some parts not well suited to the expected usage. Notably the {{{AllocationCluster}}}, the custom allocator used by the render node network, was reworked and simplified. Moreover, a new custom container was developed to serve as //link to connect the nodes.// Beyond that, an in-depth review validated the existing design for the render nodes, while also implying some renaming and rearrangements
* 🗘 establish a test setup for developing render node functionality
* 🗘 build, connect and invoke some dummy render nodes directly in a test setup
* ⌛ introduce a middle layer for linking the JobTicket to the actual invocation
* ⌛ rework and complete the existing node invocation code

!Decisions
;Scheduler
:is understood as a high-level Service, not a bare-bones implementation mechanism
:* shall support concerns of process- and memory management
:* thus needs to //understand Job dependencies//
:* will be decomposed into several implementation layers
:SchedulerMemory will be managed by an //Extent scheme.//
;Asynchronous IO
:is treated as a responsibility and may block dependent planned activities
:a dependency check is operationalised as an activity primitive and is also used to keep allocations alive
:operational control and data management are //performed by the workers,// interspersed with render activities
:however, only one worker at any time is allowed to perform these meta tasks, avoiding further synchronisation
;Workers
:are active agents in the Lumiera ~Render-Engine and drive the processing collaboratively
:there is no central »manager« or »dispatcher« thread; rather, work is //pulled// and management work is handled alongside (see the sketch below)
:Load and capacity management is [[handled stochastically|SchedulerLoadControl]] -- workers „sample the timeline“
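
The following free-standing sketch merely illustrates this work-pulling scheme with a single »management token«; all names are hypothetical and not part of the actual SchedulerWorker implementation.
{{{
// Illustrative sketch only: workers pull work themselves; at most one worker
// at a time acquires the (hypothetical) management token for meta tasks.
#include <atomic>
#include <thread>
#include <vector>
#include <iostream>

std::atomic<bool> managementToken{false};   // grants exclusive access to meta tasks
std::atomic<int>  workCounter{12};          // crude stand-in for the real work queue

void workerLoop(int id) {
    while (true) {
        if (!managementToken.exchange(true)) {   // try to become the managing worker
            // ...queue maintenance / load sampling would happen here...
            managementToken.store(false);
        }
        int item = workCounter.fetch_sub(1);     // »pull« the next piece of work
        if (item <= 0) return;
        std::cout << "worker " << id << " processes item " << item << '\n';
    }
}

int main() {
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(workerLoop, i);
    for (auto& t : pool) t.join();
}
}}}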
Within Lumiera, &raquo;Player&laquo; is the name for a [[Subsystem]] responsible for organising and tracking //ongoing playback and render processes.// &rarr; [[PlayProcess]]
The player subsystem does not perform or even manage any render operations, nor does it handle the outputs directly.
Yet it addresses some central concerns:

;uniformity
:all playback and render processes are on equal footing, handled in a similar way.
;integration
:the player cares for the necessary integration with the other subsystems
:it consults the OutputManagement, retrieves the necessary information from the [[Session]] and coordinates the forwarding of Vault-Layer calls.
;time quantisation
:the player translates continuous time values into discrete frame counts.
:to perform this [[quantisation|TimeQuant]], the player requires the session's help to build a TimeGrid for each output channel.


!{{red{WIP 4/2023}}} still not finished
The design of the Player subsystem was settled several years ago, together with a draft implementation of the FrameDispatcher and some details regarding [[render jobs|RenderJob]] and [[processing nodes|ProcNode]]. The implementation could not be finished at that time, since too many details in other parts of the engine were not yet settled. After focussing on the GUI for several years, a new effort towards [[integration of rendering|PlaybackVerticalSlice]] has been started...
&rarr; [[Rendering]]
__Joelholdsworth__ and __Ichthyo__ created this player mockup in 1/2009 to find out about the implementation details regarding integration and collaboration between the layers. There is no working render engine yet, thus we use a ~DummyImageGenerator for creating fake YUV frames to display. Within the GUI, there is a ~PlaybackController hooked up with the transport controls on the timeline pane.
# first everything was contained within ~PlaybackController, which spawns a thread for periodically creating those dummy frames
# then, a ~PlayerService was factored out, now implemented within ~Steam-Layer (later to delegate to the emerging real render engine implementation).<br/>A new LayerSeparationInterface called ''~DummyPlayer'' was created and set up as a [[Subsystem]] within main().
# the next step was to support multiple playback processes going on in parallel. Now, the ~PlaybackController holds a smart-handle to the ~PlayProcess currently generating output for this viewer, and invokes the transport control functions and the pull-frame call on this handle.
# then, also the tick generation (and thus the handling of the thread which pulls the frames) was factored out and pushed down into the mentioned ~PlayProcess. For this to work, the ~PlaybackController now makes a display slot available on the public GUI DisplayFacade interface, so the ~PlayProcessImpl can push up the frames for display within the GUI
[img[Overview to the dummy player operation|draw/playerArch1.png]]

!when playing...
As a prerequisite, a viewer has to be prepared within the GUI. An XV video display widget is wired up to a sigc++ signal slot, using the Glib::Dispatcher to forward calls from the play process thread to the GTK main event loop thread. All of this wiring is actually encapsulated as a DisplayerSlot, created and registered with the DisplayService.

When starting playback, the display slot handle created by these preparations is used to create a ~PlayProcess on the ~DummyPlayer interface. Actually
* a ~PlayProcessImpl object is created down within the player implementation
* this uses the provided slot handle to actually //allocate// the display slot via the Display facade. Here, //allocating// means registering and preparing it for output by //one single// ~PlayProcess. For the latter, this allocation yields an actually opened display handle.
* moreover, the ~PlayProcessImpl acquires a TickService instance, which is initially throttled (not invoking the periodic callback)
* probably, a real player would at this point initiate a rendering process, so it can fetch the actual output frames periodically.
* on the "upper" side of the ~DummyPlayer facade, a lib::Handle object is created to track and manage this ~PlayProcesImpl instance
The mentioned handle is returned to the ~PlaybackController within the GUI, which uses this handle for all further interactions with the Player. The handle is ref-counting and has value semantics, so it can be stored away, passed as parameter and so on. All handles corresponding to one ~PlayProcess form a family; when the last goes out of scope, the ~PlayProcess terminates and deallocates any resources. Conceptually, this corresponds to pushing the "stop" button. Handles can be deliberately disconnected by calling {{{handle.close()}}} &mdash; this has the same effect as deleting a handle (when all are closed or deleted, the process ends).
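
A reduced sketch of this handle idea, assuming hypothetical names (the real {{{lib::Handle}}} differs in detail): copies share one reference count, and when the last copy is closed or destroyed, the process terminates.
{{{
// Sketch of a ref-counting, value-semantic handle (hypothetical, simplified):
// all copies refer to one shared implementation object; when the last copy is
// closed or goes out of scope, the associated play process terminates.
#include <memory>
#include <iostream>

struct PlayProcessImpl {                    // stand-in for the real implementation object
    ~PlayProcessImpl() { std::cout << "play process terminated\n"; }
    void pause()       { std::cout << "tick frequency set to 0\n"; }
};

class ProcessHandle {
    std::shared_ptr<PlayProcessImpl> impl_;
public:
    explicit ProcessHandle(std::shared_ptr<PlayProcessImpl> impl) : impl_{std::move(impl)} {}
    void pause() { if (impl_) impl_->pause(); }
    void close() { impl_.reset(); }         // deliberately disconnect this handle
};

int main() {
    ProcessHandle h1{std::make_shared<PlayProcessImpl>()};
    ProcessHandle h2{h1};                   // handles form a »family« sharing the process
    h1.close();                             // process keeps running: h2 still refers to it
    h2.pause();
}                                           // h2 goes out of scope -> process terminates
}}}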

All the other play control operations are simply forwarded via the handle and the ~PlayProcessImpl. For example, "pause" corresponds to setting the tick frequency to 0 (thus temporarily discontinuing the tick callbacks). When allocating the display slot in the course of creating the ~PlayProcessImpl, the latter only accesses the Display facade. It can't access the display or viewer directly, because the GUI lives within a plugin; lower layers aren't allowed to call GUI implementation functions directly. Thus, within the Display facade a functor (proxy) is created to represent the output sink. This (proxy) Displayer can be used within the implementation of the periodic callback function. As usual, the implementation of the (proxy) Displayer can be inlined and doesn't create runtime overhead. Thus, each frame output call has to pass through two indirections: the function pointer in the Display facade interface, and the Glib::Dispatcher.

!rationale
There can be multiple viewer widgets, to be connected dynamically to multiple play-controllers. (the latter are associated with the timeline(s)). Any playback can require multiple playback processes to work in parallel. The playback controller(s) should not be concerned with managing the play processes, which in turn should neither care for the actual rendering, nor manage the display frequency and synchronisation issues. Moreover, the mentioned parts live in different layers and especially the GUI needs to remain separated from the core. And finally, in case of a problem within one play process, it should be able to unwind automatically, without interfering with other ongoing play processes.
!!!follow-up work
This architecture experiment highlighted various problematic aspects, which became the starting point for the actual [[Player]] design
* the activity of //rendering// and //delivering frames// will be at the centre of a new //player subsystem//
* the ~DisplayService from the dummy player design moves down into Steam-Layer and becomes the OutputManager
* likewise, the ~DisplayerSlot is transformed into the interface OutputSlot, with various implementations to be registered with the OutputManager
* one core idea is retained: a play controller acts as the frontend and handle connected to an autonomous PlayProcess.
!!!!new entities to build {{red{#805}}}
* the ~DisplayService needs to be generalised into an OutputManager interface
* we need to introduce a new kind of StructAsset, the ViewerAsset.
* then, of course, the PlayService interface needs to be created
* several interfaces need to be shaped: the ''engine façade'' and a ''scheduler frontend''.
* the realms of player and rendering are interconnected within the [[dispatch operation|FrameDispatcher]]
* as a temporary solution, a ''~DummyPlayConnection'' will be created hard-wired, within the ~DummyPlayer service. This dummy container is a test setup, explicitly creating an OutputManager implementation, possibly a data generator (thus omitting the not-yet-implemented Builder) and the corresponding ModelPort. This way, the ~DummyPlayer service can be evolved gradually to use the real PlayService for creating the dummy generated output data. It will then pass back the resulting PlayController to the existing GUI setup.
Within Lumiera, we distinguish between //model state// and //presentation state.// Any conceivable UI is stateful and reshapes itself through interaction -- but the common UI toolkits just give us this state as //transient state,// maybe with some means to restore state. For simple CRUD applications this might be sufficient, as long as the data is self-contained and the meaning of the data is self-evident. But a work environment, like the NLE we're building here, layers additional requirements on top of mere data access. To be able to work, you not only need tools, you need //enablement.// Which, in a nutshell, means that things-at-hand need to be at hand. Sounds simple, yet it is a challenge still not adequately met by contemporary computer-based interfaces and environments.

A fundamental design decision in Lumiera is to distinguish between engine, model and working environment. The //model// is oriented towards the practicalities of film making, not towards the practicalities of implementing video processing. Also, the model needs to be self contained to the degree that //anything with a tangible influence on the final rendered result// needs to be represented within the model. So the task of editing a film is the task of building such a model. This is an ongoing effort over an extended period of time, which in fact turns this effort into a project. Yet, as such, any project also pertains to the way you are working within that project. These might be fundamental decisions about what material to use, about formats and technologies, but also some set of conventions emerging under way. Beyond that, any project develops habits of handling matters, which, over time, turn into rules. And even beyond that, over time, you expect to find common things at common places. The latter is what presentation state is all about.

!dealing with presentation state
The backbone of the Lumiera UI is arranged such as to produce a data feed with state transition notifications relevant for presentation state. This is not the raw data feed of UI signals, like a click on this or that button (such is kept confined within the realm of the UI toolkit, here GTK). Rather, it is a custom made, pre-filtered stream of messages on the UI-Bus, organised, structured and outfitted with meaning by the [[tangible interface elements|UI-Element]]. Technically speaking, these messages are known as ''state mark messages'', and they are handled through a specific set of API functions on the bus interface. It is a bidirectional data exchange protocol, encompassed by
;state notifications
:whenever a UI-Element deems a state transition ''relevant for persistent presentation state'', a state mark message will be emitted
:* the originating element, as designated by its ID, is evident from the API call
:* the message itself is a GenNode
:** the ~ID-symbol of this message element was chosen by the emitting entity; it has distinctive meaning for this entity, e.g. »expand«, »reOrder«,...
:** the payload of the message is a data element, sufficient for the originator to restore itself to precisely the state as represented through this state mark.
;state marking
:whenever the [[state manager|PresentationStateManager]] deems some state eligible to be restored, it casts the relevant state mark back at its originator
:* so the originator needs to be discernible by its ID, and this ID needs to be fabricated in a way that is reproducible within a later editing session
:* the UI-Bus allows just to cast some state mark message at "someone" -- this is also used by the lower layers for notifications and error results
:* the receiver is assumed to "understand" the actual meaning of that message and reshape itself to comply
//The PresentationStateManager creates the ability to build persistent PresentationState//
Run as part of the UiCoreServices, it is attached to the UI-Bus and listens to all ''state mark messages'' to distil the notion of relevant current state.
On a basic level, the task is to group and store those messages in such a way as to overwrite previous messages with new updates at the level of individual properties. Beyond that, there is the sensitivity to context, presentation perspective and work site, which means imposing an additional structure on this basic ''state snapshot'', to extract and replicate structured sets of state information.
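
A minimal sketch of this grouping idea, with hypothetical names only: messages are keyed by originating element and property symbol, so a newer state mark simply overwrites the previously recorded value of that property.
{{{
// Sketch (hypothetical types): the basic »state snapshot« keeps the latest
// state mark per (element ID, property symbol); newer marks overwrite older ones.
#include <map>
#include <string>
#include <utility>
#include <iostream>

using ElementID = std::string;              // stand-in for the real element identity
using Property  = std::string;              // e.g. "expand", "reOrder"
using Payload   = std::string;              // stand-in for the GenNode payload

class StateSnapshot {
    std::map<std::pair<ElementID,Property>, Payload> state_;
public:
    void record(ElementID id, Property prop, Payload val) {
        state_[{std::move(id), std::move(prop)}] = std::move(val);   // overwrite previous value
    }
    void replay() const {                   // cast each retained mark back at its originator
        for (auto& [key, val] : state_)
            std::cout << key.first << " : " << key.second << " = " << val << '\n';
    }
};

int main() {
    StateSnapshot snapshot;
    snapshot.record("timeline.track1", "expand", "true");
    snapshot.record("timeline.track1", "expand", "false");   // supersedes the first mark
    snapshot.replay();
}
}}}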
Open issues, Things to be worked out, Problems still to be solved... 

!!Parameter Handling
The requirements are not quite clear; obviously Parameters are the foundation for getting automation right and for providing effect editing interfaces, so it seems to me we need some sort of introspection, i.e. Parameters need to be discovered, enumerated and described at runtime. (&rarr; see [[tag:automation|automation]])

''Automation Type'': Directly connected is the problem of handling the //type// of parameters sensibly, including the value type of automation data. My first (somewhat naive) approach was to "make everything a double". But this soon leads to many of the same problems haunting the automation solution implemented in the current Cinelerra codebase. What makes the issue difficult is that we need both static diversity and dynamic flexibility. Usually, when combining hierarchies and templates, one has to be very careful; so I just note the problem down at the moment and will revisit it later, when I have a clearer understanding of the demands put onto the [[ProcNode]]s
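
To illustrate the tension between »make everything a double« and properly typed parameters, a naive sketch with a tagged value type follows; names and the chosen set of value types are hypothetical.
{{{
// Naive sketch (hypothetical): instead of forcing every parameter to double,
// a tagged value type retains static diversity while allowing runtime discovery.
#include <variant>
#include <string>
#include <vector>
#include <iostream>

using ParamValue = std::variant<double, int, bool, std::string>;

struct ParamDescriptor {                    // just enough metadata for introspection
    std::string name;
    ParamValue  value;
};

int main() {
    std::vector<ParamDescriptor> params{{"gain", 0.8}, {"channels", 2}, {"bypass", false}};
    for (auto& p : params)                  // enumerate parameters discovered at runtime
        std::visit([&](auto&& v){ std::cout << p.name << " = " << v << '\n'; }, p.value);
}
}}}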

!!Treatment of Time (points) and Intervals
At the moment we have no clear picture what is needed and what problems we may face in that domain.
From experience, mainly with other applications, we can draw the following conclusions
* drift and rounding errors are dangerous, because time in our context usually is understood as a fixed grid (Frames, samples...)
* fine grained time values easily get very large
* Cinelerra currently uses the approach of simply counting natural values for each media type separately. In an environment mixing several different media types freely, this seems a bit too simplistic (because it actually brings in the danger of rounding errors, just think of drop-frame TC)

!!Organizing of Output Channels
How to handle the simultaneous rendering of several output streams (video, audio channels). Shall we treat the session as one entity containing different output channels, or should it rather be seen as a composite of several sub-sessions, each for only one output channel? This decision will be reflected in the overall structure of the network of render nodes: We could have a list of channel-output generating pipelines in each processor (for every segment), or we could have independently segmented lists of Processors for every output channel/type. The problem is, //it is not clear what approach to prefer at the moment//  because we are just guessing.

!!Tracks, Channels, Layers
Closely related to this is the not-so-obvious problem how to understand the common global structures found in most audio and video editing applications. Mostly, they stem from imitating hardware recording and editing solutions, thus easing the transition for professionals grown up with analogue hardware based media. But as digital media are the de-facto standard nowadays, we could rethink some of this accidental complexity introduced by sticking to the hardware tool metaphor.
* is it really necessary to have fixed global tracks?
* is it really helpful to feed "source tracks" into global processing busses/channels?
Users accustomed to modern GUI applications typically expect that //everything is an object// and can be pulled around and manipulated individually. This seems natural at first, but raises the problem of providing an efficient workflow for handling larger projects and editing tasks. So, if we don't have a hard-wired multitrack+bus architecture, we need some sort of templating to get the standard editing use case done efficiently.

!!Compound and Multiplicity
Simple relations can be hard wired. But, on the contrary, it would be as naive to define a Clip as having a Video track and two audio tracks, as it would be naive to overlook the problem of holding video and corresponding audio together. And, moreover, the default case has to be processed in a //straight forward// fashion, with as few tests and decisions as possible. So, basically each component participating in getting the core processing done has to mirror the structure pattern of the other parts, so that  processing can be done without testing and forking. But this leaves us with the problem where to put the initial knowledge about the structural pattern used for building up the compound structures and &mdash; especially &mdash; the problem how to treat different kinds of structural patterns, how to detect the pattern to be applied and how to treat multiple instances of the same structural pattern.

One example of this problem is the [[handling of multichannel media|MultichannelMedia]]. Following the above reasoning, we end up with a [["structural processing pattern"|ProcPatt]], typically one video stream with MPEG decoder and a pair of audio streams which need either to be routed to some "left" and "right" output pipes, or have to be passed through a panning filter accordingly. Now the problem is: //create a new instance of this structure for each new media, or detect which media to subsume under an existing pattern instance.//

!!Parallelism
We need to work out guidelines for dealing with operations going on simultaneously. Certainly, this will divide the application in several different regions. As always, the primary goal is to avoid multithread problems altogether. Typically, this can be achieved by making matters explicit: externalizing state, make the processing subsystems stateless, queue and schedule tasks, use isolation layers.
* the StateProxy is a key for the individual render processes state, which is managed in separate [[StateFrame]]s in the Vault. The [[processing network|ProcNode]] is stateless.
* the [[Fixture]] provides an isolation layer between the render engine and the Session / high-level model
* all EditingOperations are intentionally not threadsafe, because they are [[scheduled|ProcLayerScheduler]]

!!the perils of data representation
In software development, there is a natural inclination to cast "reality" into data, the structure of which has to be nailed down first. Then, everyone might "access reality" and work on it. Doing so might sound rational, natural, even self-evident and sound, yet as compelling as it may be, this approach is fundamentally flawed. It is known to work well only for small, "handsome" projects, where you clearly know up-front what you're up to: namely to get away, after being paid, before anyone realises the fact you've built something that looks nice but does not fit.
So the challenge of any major undertaking in software construction is //not to build a universal model of truth.// Rather, we want to arrive at something that can be made to fit.
Which can be remoulded, over and over again, without breaking down.

More specifically, we start building something, and while under way, our understanding sharpens, and we learn that actually we want something entirely different. Yet still we know what we need, and we don't want just something arbitrary. There is a constant core to what we're aiming at, and we need the ability to //settle matters.// We need a backbone to work against, a skeleton to support us with its firmness, while also offering joints and links, to be bent and remoulded without breakage. The distinctive idea to make this possible is the principle of ''Subsidiarity''. The links and joints between autonomous centres can be shaped to be in fact an exchange, a handover based on a common understanding of the //specific matters to deal with,// at that given joint.
All Assets of kind asset::Proc represent //processing algorithms// in the bookkeeping view. They enable loading, browsing and maybe even parametrizing all the Effects, Plugins and Codecs available for use within the Lumiera Session.

Besides, they provide an __inward interface__ for the [[ProcNode]]s, enabling them to dispatch the actual processing call while rendering. Actually, this interface is always accessed via an ~Effect-MObject; mostly it is investigated and queried in the build process when creating the corresponding processor nodes. &rarr; see EffectHandling for details

{{red{todo: the naming scheme??}}}

[img[Asset Classess|uml/fig131077.png]]
{{red{Note 3/2010}}} it is very unlikely we'll organise the processing nodes as a class hierarchy. Rather it looks like we'll get several submodules / special capabilities configured in by the Builder
The middle Layer in the Lumiera Architecture plan was initially called »Proc Layer«, since it was conceived to perform //the processing.// Over time, while elaborating the Architecture, the components and roles were clarified step by step. It became apparent that Lumiera is not so much centred around //media processing.// The focus is rather about building and organising the film edit -- which largely is a task of organising and transforming symbolic representations and meta information.

In 2018, the middle Layer was renamed to &rarr; Steam-Layer

//A data processing node within the Render Engine.// Its key feature is the ability to pull a [[Frame]] of calculated data -- which may involve the invocation of //predecessor nodes// and the evaluation of [[Parameter Providers|ParamProvider]]. Together, those nodes form a »node-network«, a ''D''irected ''A''cyclic ''G''raph (DAG) known as the LowLevelModel, which is //explicit// to the degree that it can be //performed// to generate or process media. As such it is an internal structure and //compiled// by evaluating and interpreting the HighLevelModel within the [[Session]], which is the symbolic or logical representation of »the film edit« or »media product« established and created by the user.

So each node and all interconnections are //oriented.// Calculation starts from the output side and propagates to acquire the necessary inputs, thereby invoking the predecessor nodes (also known as »leads«). This recursive process leads to performing a sequence of calculations -- just to the degree necessary to deliver the desired result. And since the Render Node Network is generated by a compilation step, which is conducted repeatedly and on-demand by the part of the Lumiera application known as the [[Builder]], all node connectivity can be assumed to be adequate and fitting, each predecessor can be assumed to deliver precisely the necessary data into buffers with the correct format to be directly consumed for the current calculation step. The actual processing algorithm working on these buffers is provided by an ''external Media-processing-library'' -- the LowLevelModel can thus be seen as a pre-determined organisation and orchestration structure to enact externally provided processing functionality.
&rarr; [[Description of the render process|RenderProcess]]

!Structure of a Render Node
A {{{steam::engine::ProcNode}}} is an arrangement of descriptors, connectivity and {{{Port}}} objects, residing at a fixed memory location. It holds an //identity record// (the {{{NodeID}}}) and exposes an interface to »pull« and thus invoke calculations, abstracted by a layer of indirection (a virtual method call). A reduced structural sketch follows the list below.
;Lead
:each node holds a (possibly empty) collection of //predecessor nodes// known as »leads« to provide prerequisite source data;
:however, a node can be a //generator// (data producer or data source), in which case it has no leads.
;Port
:while each node offers a single well-delineated functionality, the actual calculation can (optionally) be provided in several flavours or configuration variants, typically related to data format. E.g. a sound filtering node may be able to process stereo sound, but also deliver the filtering on the left or right channel in isolation. Or video image processing can be provided to work on various image resolutions, each requiring a different buffer layout. Any such processing variants are exposed as »ports« of the node. Since the node network is pre-canned, individual port entries are only allocated when actually required for the currently expected rendering functionality; ports are designated by number, and conceivably, in practice, many nodes will just expose a single port-#0
;Turnout
:abstracted behind the virtual call interface, the {{{Turnout}}} instance is the actual implementation of the interface {{{Port}}}. Only at this level is the //actual processing function// known, as well as the number and arrangement of input- and output-buffers. The Turnout thus encapsulates a »blue print« for the invocation, especially how to retrieve or provide buffers, how to invoke the external library function and where to drop off the result data. Notably the Turnout embeds a sequence of descriptors to designate //which leads// are actually to be pulled in //what sequence// and especially the number of the port to use on each lead node, which obviously varies to accommodate the actual data format requirements.
;Feed Manifold
:when actually invoked, the Turnout creates transient data structures on the call stack, before jumping into recursive invocation of lead ports. A series of input- and output connections is prepared, actually represented as a pointer to some data buffer. The »input slots« are populated with the //result buffers// retrieved from the recursive {{{pull()}}} calls to the lead ports, while the »output slots« are then allocated from a {{{BufferProvider}}} (and will be passed-up the recursive invocation structure when done)
;Invocation Adapter
:each Media-processing-library entails a very specific protocol how to invoke processing functionality, which becomes embedded into an individual adaptor object, including the {{{FeedManifold}}}. Typically, the actual invocation will be bound into a λ-closure, and thus the whole compound can be aggressively optimised.
;Weaving Pattern
:while each processing variant is materialised into an individual {{{Turnout}}} instance, some more generic wiring- and invocation schemes come into play when adapting the processing functions of any given Media-processing-library. The {{{WeavingPattern}}} thus defines a distinct scheme for how to arrange and connect inputs and outputs and how to actually invoke the processing function. Notably, some operations allow for in-place processing, where the input-buffer is also the output-buffer and data is directly manipulated in-place, while other libraries require some elaborate layout scheme for the data buffers to work efficiently. And while Lumiera provides a set of //building blocks for weaving patterns,// each Media-processing-library actually must be adapted through a Lumiera plug-in, which entails using those building blocks or even completely different yet compatible weaving patterns as appropriate. In the end, each instantiation of such a weaving pattern becomes embedded into the render node as a {{{Turnout}}} instance and can be invoked through the {{{Port}}} interface.
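
The following is a strongly simplified, free-standing sketch with hypothetical member names; it only demonstrates the recursive »pull« across leads and ports, not the actual buffer handling, Turnout or weaving mechanics.
{{{
// Strongly simplified sketch (hypothetical names): a port pulls its leads
// recursively through the designated port numbers, then runs its own function.
#include <vector>
#include <utility>
#include <functional>
#include <iostream>

struct Buffer { std::vector<float> data; };

struct Node;                                // forward declaration

struct Port {                               // one processing flavour of a node
    std::vector<std::pair<Node*,unsigned>> leads;   // lead node + port number to pull there
    std::function<Buffer(std::vector<Buffer> const&)> process;
    Buffer pull() const;
};

struct Node {                               // a render node exposes one or more ports
    std::vector<Port> ports;
};

Buffer Port::pull() const {
    std::vector<Buffer> inputs;
    for (auto& [node, portNr] : leads)      // recursive invocation of the lead ports
        inputs.push_back(node->ports[portNr].pull());
    return process(inputs);                 // invoke the (externally provided) function
}

int main() {
    Node source, gain;
    source.ports.push_back(Port{ {}, [](auto&){ return Buffer{{1.f, 2.f, 3.f}}; } });
    gain.ports.push_back(Port{ { {&source, 0u} },
                               [](auto& in){ Buffer b = in[0];
                                             for (float& f : b.data) f *= 2;   // double each sample
                                             return b; } });
    std::cout << "result[0] = " << gain.ports[0].pull().data[0] << '\n';
}
}}}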

!Building a Render Node
!The render invocation
{{red{⚠ In-depth rework underway as of 10/2024...}}}
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^

!! {{red{open questions}}}
* how to address a node
* how to type them
* how to discover the number and type of the ports
* how to discover the possible parameter ports
* how to define and query for additional capabilities

&rarr; see also the [[open design process draft|http://www.pipapo.org/pipawiki/Lumiera/DesignProcess/DesignRenderNodesInterface]]
&rarr; see [[mem management|ManagementRenderNodes]]
&rarr; see RenderProcess
This special type of [[structural Asset|StructAsset]] represents information on how to build some part of the render engine's processing node network. Processing patterns can be thought of as a blueprint or micro program for construction. Most notably, they are used for creating nodes reading, decoding and delivering source media material to the render network, and they are used for building the output connection via faders, summation or overlay nodes to the global pipes (busses). Each [[media Asset|MediaAsset]] has associated processing patterns describing the codecs and other transformations needed to get at the media data of this asset (and because media assets are typically compound objects, the referred ~ProcPatt will be compound too). Similarly, for each stream kind, we can retrieve a processing pattern for making output connections. Obviously, the possibilities opened by using processing patterns go far beyond that.

Technically, a processing pattern is a list of building instructions, which will be //executed// by the [[Builder]] on the render node network under construction. This implies the possibility to define further instruction kinds when needed in future; at the moment the relevant sorts of instructions are
* attach the given sequence of nodes to the specified point
* recursively execute a nested ~ProcPatt
More specifically, a sequence of nodes is given by a sequence of prototypical effect and codec assets (and from each of them we can create the corresponding render node). And the point to attach these nodes is given by an identifier &mdash; in most cases just "{{{current}}}", denoting the point the builder is currently working at, when treating some placed ~MObject which in turn yielded this processing pattern in question.
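
A possible in-memory representation of such building instructions, sketched with hypothetical names only (the real ~ProcPatt assets are organised differently):
{{{
// Hypothetical sketch: a processing pattern as a list of building instructions,
// which the Builder would »execute« against the node network under construction.
#include <string>
#include <variant>
#include <vector>

struct AttachNodes {                        // attach a sequence of prototype assets...
    std::string locationID;                 // ...at the designated point, e.g. "current"
    std::vector<std::string> nodePrototypes;
};
struct NestedPattern {                      // recursively execute another ProcPatt
    std::string patternName;
};
using BuildInstruction = std::variant<AttachNodes, NestedPattern>;

struct ProcPatt {
    std::vector<BuildInstruction> instructions;
};

int main() {
    ProcPatt mpegVideo{ { AttachNodes{"current", {"demux", "mpeg-decoder"}},
                          NestedPattern{"stream(video)"} } };
    return mpegVideo.instructions.size() == 2 ? 0 : 1;
}
}}}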

Like all [[structural assets|StructAsset]], ~ProcPatt employs a special naming scheme within the asset name field, which directly mirrors its purpose and allows binding to existing processing pattern instances when needed. {{red{TODO: that's just the general idea, but really it will rather use some sort of tags. Yet undefined as of 5/08}}} The idea is to let all assets in need of a similar processing pattern refer to one shared ~ProcPatt instance. For example, within an MPEG video media asset, at some point there will be a ~ProcPatt labeled "{{{stream(mpeg)}}}". In consequence, all MPEG video will use the same pattern of node wiring. And, of course, this pattern could be changed, either globally, or by binding a single clip to some other processing pattern (to make a deliberate exception to the general rule)

!!defining Processing Patterns
The basic working set of processing patterns can be expected to be just there (hard-wired or default configuration, plus a mechanism for creating a sensible fallback). Besides, the idea is that new processing patterns can be added via rules in the session and then be referred to by other rules controlling the build process. Any processing pattern is assembled by adding individual build instructions, or by including another (nested) processing pattern.

!!retrieving a suitable Processing Pattern
For a given situation, the necessary ProcPatt can be retrieved by issuing a [[configuration query|ConfigQuery]]. This query should include the needed capabilities in predicate form (technically this query is a Prolog goal), but it can leave out some pieces of information by just requesting "the default" &rarr; see DefaultsManagement

!!how does this actually work?
Any processing pattern needs the help of a passive holder tool suited for a specific [[building situation|BuilderPrimitives]]; we call these holder tools [[building moulds|BuilderMould]]. Depending on the situation, the mould has been armed up by the builder with the involved objects to be connected and extended. So, just by issuing the //location ID// defined within the individual build instruction (in most cases simply {{{"current"}}}), the processing pattern can retrieve the actual render object to use for building from the mould it is executed in.

!!errors and misconfiguration
Viewed as a micro program, the processing patterns are ''weakly typed'' &mdash; thus providing the necessary flexibility within an otherwise strongly typed system. Consequently, the builder assumes they are configured //the right way// &mdash; and will just bail out when this isn't the case, marking the related part of the high-level model as erroneous.
&rarr; see BuilderErrorHandling for details

The Quantiser implementation works by determining the grid interval containing a given raw time.
These grid intervals are denoted by ordinal numbers (frame numbers), with interval #0 starting at the grid's origin and negative ordinals allowed.

!frame quantisation convention
Within Lumiera, there is a fixed convention how these frame intervals are to be defined (&rArr; [[time handling RfC|http://lumiera.org/documentation/devel/rfc/TimeHandling.html]])
[img[Lumiera's frame quantisation convention|draw/framePositions1.png]]
Especially, this convention is agnostic of the actual zero-point of the scale and allows direct length calculations and seamless sequences of intervals.
The //nominal coordinate// of an interval is also its starting point -- for automation key frames we'll utilise special provisions.
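
The arithmetic behind this convention can be shown with a tiny, self-contained example (a hypothetical helper, not the actual Quantiser API): the frame ordinal is the floor of the offset from the origin divided by the frame duration, so negative ordinals fall out naturally for times before the origin.
{{{
// Tiny illustration of the quantisation convention (not the real Quantiser API):
// frame #n covers the half-open interval [origin + n*duration, origin + (n+1)*duration)
#include <cstdint>
#include <iostream>

int64_t frameOrdinal(int64_t rawTime, int64_t origin, int64_t frameDuration) {
    int64_t offset = rawTime - origin;
    int64_t frame  = offset / frameDuration;
    if (offset < 0 && offset % frameDuration != 0)
        --frame;                            // floor division: times before origin yield negative ordinals
    return frame;
}

int main() {
    const int64_t dur = 40000;              // e.g. 25 fps, assuming micro-tick (µs) resolution
    std::cout << frameOrdinal(    0, 0, dur) << '\n';   // -> 0   (start of frame #0)
    std::cout << frameOrdinal(39999, 0, dur) << '\n';   // -> 0   (still within frame #0)
    std::cout << frameOrdinal(   -1, 0, dur) << '\n';   // -> -1  (just before the origin)
}
}}}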

!range limitation problems
Because times are represented as 64bit integers, the time points addressable within a given scale grid can be limited, compared with time points addressable through raw (internal) time values. As an extreme example, consider a time scale with origin at {{{Time::MAX}}} -- such a scale is unable to represent any of the original scale's values above zero, because the resulting coordinates would exceed the range of the 64bit integer. Did I mention that 64bit micro ticks can represent about 300000 years?

Now the actual problem is that using 64bit integers already means pushing to the limit. There is no easy escape hatch, like using a larger scale data type for intermediaries -- it //is// the largest built-in type. Basically we're touching the topic of ''safe integer arithmetics'' here, which is frequently discussed as a security concern. The situation is as follows:
* every programmer promises "I'll do the checks when necessary" -- only to forget doing so in practice.
* for C programming, the situation is hopeless. Calling functions for simple arithmetics is outright impractical -- that won't happen voluntarily
* it is possible to build a ~SafeInt datatype in C++ though. While theoretically fine, in practice this also creates a host of problems:
** actually detecting all cases and coding the checks is surprisingly hard and intricate.
** the attempt to create a smooth integration with the built-in data types drives us right into one of the most problematic areas of C++
** the performance hit is considerable (factor 2 - 4)  -- again luring people into "being clever".

There is an existing [[SafeInt class by David LeBlanc|http://safeint.codeplex.com/]], provided by Microsoft through the ~CodePlex platform. It is part of the libraries shipped with ~VisualStudio and extensively used in Office 2010 and Windows, but also provided under a somewhat liberal license (you may use it but any derived work has to reproduce copyright and usage terms). Likely this situation also hindered the further [[development|http://thread.gmane.org/gmane.comp.lib.boost.devel/191010]] of a comparable library in boost (&rarr; [[vault|https://svn.boost.org/trac/boost/wiki/LibrariesUnderConstruction#Boost.SafeInt]]).

!!!solution possibilities
;abort operation
:an incriminating calculation needs to be detected somehow (see below).
:the whole usage context gets aborted by exception in case of an alarm, similar to an out-of-memory condition...
:thus immediate corruption is avoided, but the user has to realise and avoid the general situation.
;limit values
:provide a limiter to kick in after detecting an alarm (see below).
:time values will just stick to the maximum/minimum boundary value....
:the rest of the application needs to be prepared for timing calculations to return degenerate results
;~SafeInt
:base ~TimeValue on a ~SafeInt<gavl_time_t>
:this way we could easily guarantee detection of any such //situation.//
:of course the price is the performance hit on all timing calculations.
:Moreover -- if we want to limit values instead of raising an exception, we'd need to write our own ~SafeInt.
;check some operations
:time values are mostly immutable, thus it would likely be sufficient to check only some strategically important points
:#parsing or de-serialising
:#calculations in ~TimeVar
:#quantisation
:#build Time from framecount
;limit time range
:because the available time range is so huge, it wouldn't hurt to reduce it by one or two decimals
:this way both the implementation of the checks could be simplified and the probability of an overflow reduced
;ignore the problem
:again, because of the huge time range, the problem could generally be deemed irrelevant
:we could apply a limiter to enforce a reduced time range just in the quantisation and evaluation of timecodes

A combination of the last approaches seems to be most appropriate here:
Limit the officially allowed time range and perform simple checks when quantising a time value and when applying a scale factor.
We limit {{{Time::MAX}}} by a factor of 1/30 and define the minimum //symmetrically to this.// This still leaves us with an ''allowed time range of ±9700 years''.
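
A possible shape of such checks, sketched with hypothetical constants (the actual limits and the {{{gavl_time_t}}} handling live in the Lumiera time library):
{{{
// Sketch only (hypothetical constants): restrict the official time range to
// 1/30 of the full 64bit range and either reject or clamp values at the boundary.
#include <cstdint>
#include <limits>
#include <stdexcept>
#include <algorithm>

using RawTime = int64_t;                    // stand-in for gavl_time_t

constexpr RawTime TIME_MAX = std::numeric_limits<RawTime>::max() / 30;
constexpr RawTime TIME_MIN = -TIME_MAX;     // minimum defined symmetrically

RawTime checked(RawTime t) {                // e.g. when parsing or de-serialising
    if (t < TIME_MIN || t > TIME_MAX)
        throw std::out_of_range("time value outside the allowed range");
    return t;
}

RawTime limited(RawTime t) {                // alternative: silently stick to the boundary
    return std::clamp(t, TIME_MIN, TIME_MAX);
}

int main() {
    return limited(std::numeric_limits<RawTime>::max()) == TIME_MAX ? 0 : 1;
}
}}}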
{{red{WIP as of 10/09}}}...//brainstorming about the first ideas towards a query subsystem//

!use case: discovering the contents of a container in the HighLevelModel
In the course of shaping the session API, __joel__ and __ichthyo__ realised that we're moving towards some sort of discovery or introspection. This gives rise to the quest for a //generic// pattern how to issue and run these discovery operations. The idea is to understand such a discovery as running a query &mdash; using this specific problem to shape the foundation of a query subsystem to come.
* a ''query'' is a polymorphic, noncopyable, non-singleton type; a query instance corresponds to one distinctly issued query
* issuing a query yields a result set, which is hidden within the concrete query implementation.
* the transactional behaviour needs still to be defined: how to deal with concurrent modifications? COW?
* the query instance remains property of the entity exposing the query capability.
* client code gets a result iterator, which can be explored //only once until exhaustion.//
* the handed-out result iterator is used to manage the allocation of the query result set by side-effect (smart handle) &rarr; Ticket #353 -- see the sketch after this list
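
A reduced sketch of the intended ownership scheme (hypothetical types, simplified): the iterator handed out to the client holds a smart handle on the hidden result set, so the result storage lives exactly as long as someone still explores it.
{{{
// Sketch (hypothetical types): the result iterator keeps the concrete result set
// alive through a shared handle; the query facility itself can forget about it.
#include <cstddef>
#include <memory>
#include <vector>
#include <string>
#include <iostream>

struct ResultSet {                          // hidden within the concrete query implementation
    std::vector<std::string> hits;
};

class ResultIterator {
    std::shared_ptr<ResultSet> results_;    // side-effect: manages the result allocation
    std::size_t pos_ = 0;
public:
    explicit ResultIterator(std::shared_ptr<ResultSet> r) : results_{std::move(r)} {}
    bool exhausted() const { return pos_ >= results_->hits.size(); }
    std::string const& operator*() const { return results_->hits[pos_]; }
    ResultIterator& operator++() { ++pos_; return *this; }   // explore once, until exhaustion
};

ResultIterator issueQuery() {               // the query facility fabricates and hands off results
    auto results = std::make_shared<ResultSet>(ResultSet{{"track-1", "track-2"}});
    return ResultIterator{std::move(results)};
}

int main() {
    for (auto it = issueQuery(); !it.exhausted(); ++it)
        std::cout << *it << '\n';
}
}}}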

For decoupling the query invocation from the facility actually processing the query, we need to come up with a common pattern. In 10/09, there is an immediate demand for such a solution pattern for implementing the QueryFocus and PlacementScope framework, which is crucial for contents discovery in general on the session interface. &rarr; QueryResolver was shaped to deal with this situation, but has the potential to evolve into a general solution for issuing queries.

!use case: retrieving object to fulfil a condition
The requirement is to retrieve one (or multiple) objects of a specific kind, located within a scope, and fulfilling some additional condition. For example: find a sub-track with {{{id(blubb)}}}.
This is a special case of the general discovery (described in the previous use case), but it is also a common situation in a general ConfigQuery (&rArr; an object of a specific type and with additional capabilities...). The tricky question seems to be how to specify and resolve these additional conditions or capabilities.
* in a generic query handled by a resolution engine, we might represent these capabilities as a nested //goal.//
* if we approach the problem as a filter pipeline, then the condition becomes a functor (or closure). &rarr; QueryResolver
On second consideration, these two approaches don't contradict each other, because they live in different contexts and levels of abstraction. Performance-wise, both are bad and degenerate on large models, because both effectively cause a full table scan. Only specialised search functions for hardcoded individual properties could improve that situation, by backing them with an additional sub-index.
__Conclusion__: no objection against providing the functor/filter solution right now, even on the QueryFocus API &mdash;
notwithstanding the fact we need a better solution later.

----
See also the notes on
&nbsp; &rarr; QueryImplProlog
&nbsp; &rarr; QueryRegistration
While there are various //specialised queries// to be issued and resolved efficiently, as a common denominator we use a common ''syntactic representation'' of queries based on predicate logic. This allows for standardised processing when applicable, since a //generic query// can be used as a substitute of any given specialised kind of query. Moreover, using the predicate logic format as common definition format allows for programmatic extension, reshaping, combining and generic meta processing of queries within the system.

!the textual syntactic form
{{red{WIP 12/2012}}} for the moment we don't use any kind of  "real" resolution engine, thus the syntactic representation is kind of a design draft.
The plan is to use a syntax which can be fed directly to a Prolog interpreter or similar existing rules based systems.

!the necessity of using an internal representation
Since the intention is to use queries pervasively as a means of orchestrating the interplay of the primary controlling facilities, the internal representation of this syntactic exchange format might become a performance bottleneck. The typical usage pattern is just to create and issue a query to retrieve some result value -- thus, in practice, we rarely need the syntactic representation, while being bound to create this representation for sake of consistency.

!textual vs parsed-AST representation
For the initial version of the implementation, just storing a string in Prolog syntax is enough to get us going. But on the long run, for the real system, a pre-parsed representation in the style of an __A__bstract __S__yntax __T__ree seems more appropriate: through the use of some kind of symbol table, actual queries can be tagged with the common syntactic representation just by attaching some symbol numbers, without the overhead of creating, attaching and parsing a string representation in most cases
There is a common denominator amongst all queries: they describe a pattern of relations, which can be //satisfied// by providing a //solution.// But queries can be used in a wide variety of situations, each binding them to a more specific meaning and each opening the possibility to use a specialised resolution mechanism. For example, a query //might// actually mean to fetch and filter the contents of some sub-section of the session model. Or it might cause the fabrication and registration of some new model content object. In order to tie those disjoint kinds of queries together, we need a mechanism to exchange one kind of query with another one, or to derive one from another. This ''exchange of queries'' actually is a way of ''remoulding'' and rebuilding a query -- which is based on a [[common syntactic representation|QueryDefinition]] of all queries, cast into terms of predicate logic. Besides, there is a common, unspecific, generic resolution mechanism ({{red{planned 11/12}}}), which can delegate to more specialised resolvers when applicable.

!handling of queries
Queries are always exposed or revealed at some point or facility that allows posing queries. This facility remains the owner of the query instances and knows their concrete flavour (type). But queries can be referred to and exchanged through the {{{Query<TY>}}}-abstraction.
When querying contents of the session or sub-containers within the session, the QueryFocus follows the current point-of-query. As such queries can be issued to explore the content of container-like objects holding other MObjects, the focus is always attached to a container, which also acts as [[scope|PlacementScope]] for the contained objects. QueryFocus is an implicit state (the current point of interest). This state especially remembers the path down from the root of the HighLevelModel which was used to access the current scope. Because this path constitutes a hierarchy of scopes, it can be relevant for querying and resolving placement properties. (&rarr; SessionStructureQuery)

!provided operations
* shift to a given scope-like object. Causes the current focus to //navigate//
* open a new focus, thereby pushing the existing focus onto a [[focus stack|QueryFocusStack]]
* return (pop) to the previous focus
* get the current scope, represented by the "top" Placement of this scope
* get the current ScopePath from root (session globals) down to the current scope
* (typed) content discovery query on the current scope
[>img[Scope Locating|uml/fig136325.png]]
&rarr; [[more|SessionContentsQuery]] regarding generic scope queries
!!!relation to Scope
There is a tight integration with PlacementScope through the ScopeLocator, which establishes the //current focus.// But while the [[scope|PlacementScope]] just decorates the placement defining a scope (called //&raquo;scope top&laquo;//), QueryFocus is more of a //binding// &mdash; it links or focusses the current state into a specific scope with a ScopePath in turn depending on this current state. Thus, while Scope is just a passive container allowing to locate and navigate, QueryFocus by virtue of this binding allows to [[Query]] at this current location.

!implementation notes
we provide a static access API, meaning that there is a singleton (the ScopeLocator) behind the scenes, which holds the mentioned scope stack. The current focus stack top, i.e. the current ScopePath, is managed through a ref-counting handle embedded into each QueryFocus instance. Thus, effectively, QueryFocus is a frontend object for accessing this state. Moreover, embedded into ScopeLocator, there is a link to the current session. But this link is kept opaque; it works by the current session exposing a [[query service|QueryResolver]], while QueryFocus doesn't rely on knowledge about the session, allowing the focus to be unit tested.

The stack of scopes must not be confused with the ScopePath. Each single frame on the stack can be seen and accessed as a QueryFocus and as such relates to a current ScopePath. The purpose of the stack is to make the scope handling mostly transparent; especially this stack allows to write dedicated query functions directed at a given object: they work by pushing and then navigating to the object to use as starting point for the query, i.e. the //current scope.//

!!!simplifications
The full implementation of this scope navigation is tricky, especially when it comes to determining the relation of two positions. It should be ''postponed'' and replaced by a ''dummy'' (no-op) implementation for the first integration round.
The ScopeLocator uses a special stack of ScopePath &raquo;frames&laquo; to maintain the //current focus.//
What is the ''current'' QueryFocus and why is it necessary? There is a state-dependent part involved, inasmuch as the effective ScopePath depends on how the invoking client has navigated the //current location// down into the HighLevelModel structures. Especially when a VirtualClip is involved, there can be discrepancies between the resulting paths when descending along different routes. (See &rarr; BindingScopeProblem).

Thus, doing something with the current location, and especially descending or querying adjacent scopes, can modify this current path state. We therefore need a means of invoking a query without interfering with the current path state; otherwise we wouldn't be able to provide side-effect free query operations accessible on individual objects within the model.

!maintaining the current QueryFocus
As long as client code is just interested to use the current query location, we can provide a handle referring to it. But when a query needs to be run without side effect on the current location, we //push//&nbsp; it aside and start using a new QueryFocus on top, which starts out at a new initial location. Client code again gets a handle (smart-ptr) to this location, and additionally may access the new //current location.// When all references are out of scope and gone, we'll drop back to the focus put aside previously.

!implementation of ref-counting and clean-up
Actually, client code should use QueryFocus instances as frontend to access this &raquo;current focus&laquo;. Each ~QueryFocus instance incorporates a smart-ptr. But as in this case we're not managing objects allocated somewhere, we use a {{{boost::intrusive_ptr}}} and maintain the ref-count immediately within the target objects to be managed. These target objects are ScopePath instances living within the QueryFocusStack, which in turn is managed by the ScopeLocator singleton (see the UML diagram &rarr;[[here|QueryFocus]]). We use a (hand-written) stack implementation to ensure the memory locations of these ScopePath &raquo;frames&laquo; remain valid (and also to help with providing strong exception guarantees). The stack is aware of these ref-counts and takes them into account when performing the {{{pop_unused()}}} operation: any unused frame on top will be evicted, stopping at the first frame still in use (which may even be just the old top). This cleanup also happens automatically when accessing the current top, re-initialising a potentially empty stack with a default-constructed new frame if necessary. This way, just accessing the stack top always yields the ''current focus location'', which thereby is //defined as the most recently used focus location still referred to.//
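
A minimal sketch of just this eviction rule (hypothetical and much simplified compared with the real QueryFocusStack): frames carry their own use-count, and cleanup pops unused frames from the top until it hits one still referred to, re-creating a default frame on demand.
{{{
// Much simplified sketch (hypothetical names): stack frames carry an intrusive
// use-count; popUnused() evicts unused frames from the top, stopping at the
// first frame still in use, and top() never leaves the stack empty.
#include <deque>
#include <cassert>

struct PathFrame {                          // stand-in for a ScopePath »frame«
    int useCount = 0;                       // intrusive ref-count, bumped by each handle
};

class FocusStack {
    std::deque<PathFrame> frames_;          // a deque keeps existing frame locations stable
public:
    PathFrame& top() {                      // accessing the top re-establishes the invariant
        popUnused();
        if (frames_.empty()) frames_.emplace_back();   // re-initialise with a default frame
        return frames_.back();
    }
    void push() { frames_.emplace_back(); }
    void popUnused() {                      // evict unused frames, stop at first one in use
        while (!frames_.empty() && frames_.back().useCount == 0)
            frames_.pop_back();
    }
};

int main() {
    FocusStack stack;
    PathFrame& current = stack.top();       // default frame created on demand
    ++current.useCount;                     // a QueryFocus handle now refers to it
    stack.push();                           // a temporary focus, never actually referred to
    assert(&stack.top() == &current);       // cleanup drops the unused frame again
    --current.useCount;
}
}}}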

!concurrency
This concept deliberately ignores parallelism. But, as the current path state is already encapsulated (and ref-counting is in place), the only central access point is to reach the current scope. Instead of using a plain-flat singleton here, this access can easily be routed through thread local storage.
{{red{As of 10/09 it is not clear if there will be any concurrent access to this discovery API}}} &mdash; but it seems not unlikely to happen...
//obviously, getting this one to work requires quite a lot of technical details to be planned and implemented.// This said...
The intention is to get much more readable ("declarative") and changeable configuration as by programming the decision logic literately within the implementation of some object.

!Draft
As an example, specifying how a Track can be configured for connecting automatically to some "mpeg" bus (=pipe)
{{{
resolve(O, Cap) :- find(O), capabilities(Cap).
resolve(O, Cap) :- make(O), capabilities(Cap).
capabilities(Q) :- call(Q).

stream(T, mpeg) :- type(T, track), type(P, pipe), resolve(P, stream(P,mpeg)), placed_to(P, T).
}}}

Then, running the goal {{{:-resolve(T, stream(T,mpeg)).}}} would search a Track object, try to retrieve a pipe object with stream-type=mpeg and associate the track with this pipe. This relies on a predicate "stream(P,mpeg)" implemented (natively) for the pipe object. So, "Cap" is the query issued from calling code &mdash; here {{{stream(T,mpeg)}}}, the type guard {{{type(T, track)}}} will probably be handled or inserted automatically, while the predicate implementations for find/1, make/1, stream/2, and placed_to/2 are to be provided by the target types.
* __The supporting system__ has to combine several code snippets into one rule system to be used for running queries, with some global base rules, rules injected by each individual participating object kind and finally user provided rules added by the current session. The actual query is bound to "Cap" (and consequently run as a goal by {{{call(Q)}}}). The implementation needs to provide a symbol table associating variable terms (like "T" or "P") to C/C++ object types, enabling the participating object kinds to register their specific predicate implementations. This is crucial, because there can be no general scheme of object-provided predicates (for each object kind different predicates make sense, e.g. [[pipes|PipeHandling]] have other possibilities than [[wiring requests|WiringRequest]]). Basically, a query issues a Prolog goal, which in turn evaluates domain specific predicates provided by the participating objects and thus calls back into C/C++ code. The supporting system maintains the internal connection (via the "type" predicate) such that from the Prolog viewpoint it looks as if we were binding Variables directly to object instances. (there are some nasty technical details because of the backtracking nature of Prolog evaluations which need to be hidden away)
* Any __participating object kind__ needs a way to declare domain specific predicates, thus triggering the registration of the necessary hooks within the supporting system. Moreover, it should be able to inject further Prolog code (as shown in the example above with the {{{stream(T, mpeg)}}} predicate). For each of these new domain specific predicates, there needs to be a functor which can be invoked when the C implementation of the predicate is called from Prolog (in some cases even later, when the final solution is "executed", e.g. a new instance has been created and now some properties need to be set).

!!a note on Plugins
In the design of the Lumiera ~Steam-Layer done thus far, we provide //no possibility to introduce a new object kind// into the system via plugin interface. The system uses a fixed collection of classes intended to cover all needs (Clip, Effect, Track, Pipe, Label, Automation, ~Macro-Clips). Thus, plugins will only be able to provide new parametrisations of existing classes. This should not be any real limitation, because the whole system is designed to achieve most of its functionality by freely combining rather basic object kinds. As a plus, it plays nicely with any plain-C based plugin interface. For example, we will have C++ adapter classes for the most common sorts of effect plugin (pull system and synchronous frame-by-frame push with buffering) with a thin C adaptation layer for the specific external plugin systems used. Everything beyond this point can be considered "configuration data" (including the actual plugin implementation to be loaded)
//Querying for some suitable element,// instead of relying on hard wired dependencies, is considered a core pattern within Lumiera.
But we certainly can't expect to subsume every query situation to one single umbrella interface, so we need some kind of ''query dispatch'' and a registration mechanism to support this indirection. This could lead to a situation, where the term &raquo;query&laquo; is just a vaguely defined umbrella, holding together several disjoint subsistems. Yet, while in fact this describes the situation in the code base as of this writing, {{red{11/2012}}}, actually all kinds of queries are intended to share a //common semantic space.// So, in the end, we need a mechanism to [[exchange queries|QueryExchange]] with one another, and this mechanism ought to be based on the common //syntactic representation through terms of predicate logic.//
Common definition format &rarr; QueryDefinition
Closely related is the &rarr; TypedQueryProblem

!Plans and preliminary implementation
As of 6/10, the intention is just to be able to //pose queries// eventually. Behind the scenes, a suitable QueryResolver should then be picked to process the query and yield a result set. Thus the {{{Goal}}} and {{{Query<TY>}}} interfaces are to become the access point to a generic dispatching service and a bundle of specialised resolution mechanisms.

But implementing this gets a bit involved, for several reasons:
* we don't know the kinds of queries, their frequency and performance requirements
* we don't know the exact usage pattern with respect to memory management of the resultsets.
* we can't assess the relevance of //lock contention,// created by using a central dispatcher facility.
We might end up with a full blown subsystem, and possibly with a hierarchy of dispatchers.

But for now the decision is to proceed with isolated and specialised QueryResolver subclasses, and to pass a basically suitable resolver to the query explicitly when it comes to retrieving results. This resolver should be obtained by some system service suitable for the concrete usage situation, like e.g. the ~SessionServiceExploreScope, which exposes a resolver to query session contents through the PlacementIndex. Nonetheless, the actual dispatch mechanism is already implemented (by using a ~MultiFact instance), and each concrete resolution mechanism is required to perform a registration within its ctor, by calling the inherited {{{QueryResolver::installResolutionCase(..)}}}.
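The following condensed sketch illustrates this registration and dispatch idea; names and signatures are simplified stand-ins, not the actual declarations from query-resolver.hpp:
{{{
#include <functional>
#include <map>
#include <typeindex>
#include <typeinfo>

// simplified sketch -- the actual interfaces in query-resolver.hpp differ in detail
class Goal { };                                          // type-erased representation of a query

class QueryResolver
  {
    using ResolutionFun = std::function<void(Goal const&)>;
    std::map<std::type_index, ResolutionFun> dispatch_;  // stand-in for the MultiFact dispatcher table
    
  protected:
    template<class TY>
    void
    installResolutionCase (ResolutionFun resolve)        // to be called from the ctor of each subclass
      {
        dispatch_.emplace (typeid(TY), std::move (resolve));
      }
  };

struct Clip { };                                         // hypothetical result type

class SessionContentResolver                             // e.g. the resolver behind SessionServiceExploreScope
  : public QueryResolver
  {
  public:
    SessionContentResolver()
      {
        installResolutionCase<Clip> ([](Goal const&)
                                       {
                                         // traverse the PlacementIndex and enumerate matching results...
                                       });
      }
  };
}}}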
Within the Lumiera Steam-Layer, there is a general preference for issuing [[queries|Query]] over hard wired configuration (or even mere table based configuration). This creates the demand to expose a //possibility to issue queries// &mdash; without actually disclosing many details of the facility implementing this service. For example, for shaping the general session interface (in 10/09), we need a means of exposing a hook to discover HighLevelModel contents, without disclosing how the model is actually organised internally (namely by using a PlacementIndex).

!Analysis of the problem
The situation can be decomposed as follows.[>img[QueryResolver|uml/fig137733.png]]
* first off, we need a way to state //what kind of query we want to run.// This includes stipulations on the type of the expected result set contents
* as the requirement is to keep the facility actually implementing the query service hidden behind an interface, we're forced to erase specific type information and pass on an encapsulated version of the query
* providing an iterator for exploring the results poses the additional constraint of having a fairly generic iterator type, while still being able to communicate with the actual query implementation behind the interface.


!!!Difficulties
*the usage pattern is not clear &mdash; mostly it's just //planned//
*# client might create a specific {{{Query<TY>}}} and demand resolution
*# client might create just a goal, which is then translated into a specific query mechanism behind the invocation interface
*# client issues a query and expects it just to be handled by //some//&nbsp; suitable resolver
* thus it's difficult to determine //what// part of the issued query needs automatic management. More specifically, is it possible for the client to dispose of the query after issuing it, while keeping and exploring the iterator obtained as result of the query?
* and then there is the notorious problem of re-gaining the specifically typed context //behind//&nbsp; the invocation interface. Especially, the facility processing the query needs to know both the expected result type and details about the concrete query and its parametrisation. <br/>&rarr; TypedQueryProblem

!!!Entities and Operations
The //client// &nbsp;(code using query-resolver.hpp) either wants a ''goal'' or ''query'' to be resolved; the former is just implicitly typed and usually given in predicate logic form ({{red{planned as of 11/09}}}), while the latter may be a specialised subclass templated to yield objects of a specific type as results. A ''query resolver'' is an (abstracted) entity capable of //resolving//&nbsp; such a goal. Actually, behind the scenes there is somehow a registration of the concrete resolving facilities, which are assumed to decide about their ability to handle a given goal. Issuing a goal or query yields a ''resolution'' &mdash; practically speaking, a set of individual solutions. These individual solution ''results'' can be explored by ''iteration'', thereby moving an embedded ''cursor'' through the ''result set''. Any result can be retrieved at most once &mdash; after that, the resolution is ''exhausted'' and will be released automatically when the exploration iterator goes out of scope.
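To illustrate the intended interplay of these entities from the client's perspective, a hypothetical usage sketch (illustrative types and names only, not the literal interface from query-resolver.hpp):
{{{
#include <string>
#include <vector>

// hypothetical sketch of the intended client-side usage -- not the literal Lumiera interface
struct Clip { std::string name; };

template<class TY>
struct Query { std::string predicate; };                 // typed query, given in predicate-logic form

struct Resolution                                        // one concrete result set
  {
    std::vector<Clip> hits;
    
    auto begin() { return hits.begin(); }                // exploring the results by iteration
    auto end()   { return hits.end(); }
  };

Resolution resolve (Query<Clip> const& q)                // stand-in for the QueryResolver service
  {
    return Resolution{ { Clip{"clip-1"}, Clip{"clip-2"} } };
  }

int main()
  {
    Query<Clip> query{"clip(C), colour(C, blue)"};
    for (Clip& c : resolve (query))                      // each result can be retrieved (at most) once;
      { /* ...use c... */ }                              // the resolution is exhausted afterwards
  }
}}}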

!!!Decisions
* while, in the use case currently at hand, the query instance is created by the client on the stack, the possibility of managing the queries internally is deliberately kept open. Because otherwise we would have to commit to a specific way of obtaining results, for example by assuming always to use an embedded STL iterator.
* we acknowledge that utmost performance is less important than clean separation and extensibility. Thus we accept accessing the current position pointer through a reference, and we use a ref-counting mechanism alongside the iterator handed out to the client
* the result set is not tied to the query &mdash; at least not by design. The query can be discarded while further exploring the result set.
* for dealing with the TypedQueryProblem, we require the concrete resolving facilities to register with a system startup hook, to build a dispatcher table on the implementation side. This allows us to downcast to the concrete Cursor type on iteration and results retrieval.
* the intention is to employ a mix of generic processing (through a common generic [[syntactic query representation|QueryDefinition]]) and optimised processing of specialised queries relying on concrete query subtypes. The key for achieving this goal is the registration mechanism, which could evolve into a generic query dispatch system -- but right now the exact balance of these two approaches remains a matter of speculation...

/***
|''Name:''|RSSReaderPlugin|
|''Description:''|This plugin provides a RSSReader for TiddlyWiki|
|''Version:''|1.1.1|
|''Date:''|Apr 21, 2007|
|''Source:''|http://tiddlywiki.bidix.info/#RSSReaderPlugin|
|''Documentation:''|http://tiddlywiki.bidix.info/#RSSReaderPluginDoc|
|''Author:''|BidiX (BidiX (at) bidix (dot) info)|
|''Credit:''|BramChen for RssNewsMacro|
|''License:''|[[BSD open source license|http://tiddlywiki.bidix.info/#%5B%5BBSD%20open%20source%20license%5D%5D ]]|
|''~CoreVersion:''|2.2.0|
|''OptionalRequires:''|http://www.tiddlytools.com/#NestedSlidersPlugin|
***/
//{{{
version.extensions.RSSReaderPlugin = {
	major: 1, minor: 1, revision: 1,
	date: new Date("Apr 21, 2007"),
	source: "http://TiddlyWiki.bidix.info/#RSSReaderPlugin",
	author: "BidiX",
	coreVersion: '2.2.0'
};

config.macros.rssReader = {
	dateFormat: "DDD, DD MMM YYYY",
	itemStyle: "display: block;border: 1px solid black;padding: 5px;margin: 5px;", //useed  '@@'+itemStyle+itemText+'@@'
	msg:{
		permissionDenied: "Permission to read preferences was denied.",
		noRSSFeed: "No RSS Feed at this address %0",
		urlNotAccessible: " Access to %0 is not allowed"
	},
	cache: [], 	// url => XMLHttpRequest.responseXML
	desc: "noDesc",
	
	handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		var desc = params[0];
		var feedURL = params[1];
		var toFilter = (params[2] ? true : false);
		var filterString = (toFilter?(params[2].substr(0,1) == ' '? tiddler.title:params[2]):'');
		var place = createTiddlyElement(place, "div", "RSSReader");
		wikify("^^<<rssFeedUpdate "+feedURL+" [[" + tiddler.title + "]]>>^^\n",place);
		if (this.cache[feedURL]) {
			this.displayRssFeed(this.cache[feedURL], place, feedURL, desc, toFilter, filterString);
		}
		else {
			var r = loadRemoteFile(feedURL,config.macros.rssReader.processResponse, [place, desc, toFilter, filterString]);
			if (typeof r == "string")
				displayMessage(r);
		}
		
	},

	// callback for loadRemoteFile 
	// params : [place, desc, toFilter, filterString]
	processResponse: function(status, params, responseText, url, xhr) { // feedURL, place, desc, toFilter, filterString) {	
		if (window.netscape){
			try {
				if (document.location.protocol.indexOf("http") == -1) {
					netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
				}
			}
			catch (e) { displayMessage(e.description?e.description:e.toString()); }
		}
		if (xhr.status == httpStatus.NotFound)
		 {
			displayMessage(config.macros.rssReader.msg.noRSSFeed.format([url]));
			return;
		}
		if (!status)
		 {
			displayMessage(config.macros.rssReader.msg.noRSSFeed.format([url]));
			return;
		}
		if (xhr.responseXML) {
			// response is interpreted as XML
			config.macros.rssReader.cache[url] = xhr.responseXML;
			config.macros.rssReader.displayRssFeed(xhr.responseXML, params[0], url, params[1], params[2], params[3]);
		}
		else {
			if (responseText.substr(0,5) == "<?xml") {
				// response exists but not return as XML -> try to parse it 
				var dom = (new DOMParser()).parseFromString(responseText, "text/xml"); 
				if (dom) {
					// parsing successful so use it
					config.macros.rssReader.cache[url] = dom;
					config.macros.rssReader.displayRssFeed(dom, params[0], url, params[1], params[2], params[3]);
					return;
				}
			}
			// no XML display as html 
			wikify("<html>" + responseText + "</html>", params[0]);
			displayMessage(config.macros.rssReader.msg.noRSSFeed.format([url]));
		}
	},

	// explore down the DOM tree
	displayRssFeed: function(xml, place, feedURL, desc, toFilter, filterString){
		// Channel
		var chanelNode = xml.getElementsByTagName('channel').item(0);
		var chanelTitleElement = (chanelNode ? chanelNode.getElementsByTagName('title').item(0) : null);
		var chanelTitle = "";
		if ((chanelTitleElement) && (chanelTitleElement.firstChild)) 
			chanelTitle = chanelTitleElement.firstChild.nodeValue;
		var chanelLinkElement = (chanelNode ? chanelNode.getElementsByTagName('link').item(0) : null);
		var chanelLink = "";
		if (chanelLinkElement) 
			chanelLink = chanelLinkElement.firstChild.nodeValue;
		var titleTxt = "!![["+chanelTitle+"|"+chanelLink+"]]\n";
		var title = createTiddlyElement(place,"div",null,"ChanelTitle",null);
		wikify(titleTxt,title);
		// ItemList
		var itemList = xml.getElementsByTagName('item');
		var article = createTiddlyElement(place,"ul",null,null,null);
		var lastDate;
		var re;
		if (toFilter) 
			re = new RegExp(filterString.escapeRegExp());
		for (var i=0; i<itemList.length; i++){
			var titleElm = itemList[i].getElementsByTagName('title').item(0);
			var titleText = (titleElm ? titleElm.firstChild.nodeValue : '');
			if (toFilter && ! titleText.match(re)) {
				continue;
			}
			var descText = '';
			var descElem = itemList[i].getElementsByTagName('description').item(0);
			if (descElem){
				try{
					for (var ii=0; ii<descElem.childNodes.length; ii++) {
						descText += descElem.childNodes[ii].nodeValue;
					}
				}
				catch(e){}
				descText = descText.replace(/<br \/>/g,'\n');
				if (desc == "asHtml")
					descText = "<html>"+descText+"</html>";
			}
			var linkElm = itemList[i].getElementsByTagName("link").item(0);
			var linkURL = linkElm.firstChild.nodeValue;
			var pubElm = itemList[i].getElementsByTagName('pubDate').item(0);
			var pubDate;
			if (!pubElm) {
				pubElm = itemList[i].getElementsByTagName('date').item(0); // for del.icio.us
				if (pubElm) {
					pubDate = pubElm.firstChild.nodeValue;
					pubDate = this.formatDateString(this.dateFormat, pubDate);
					}
					else {
						pubDate = '0';
					}
				}
			else {
				pubDate = (pubElm ? pubElm.firstChild.nodeValue : 0);
				pubDate = this.formatDate(this.dateFormat, pubDate);
			}
			titleText = titleText.replace(/\[|\]/g,'');
			var rssText = '*'+'[[' + titleText + '|' + linkURL + ']]' + '' ;
			if ((desc != "noDesc") && descText){
				rssText = rssText.replace(/\n/g,' ');
				descText = '@@'+this.itemStyle+descText + '@@\n';				
				if (version.extensions.nestedSliders){
					descText = '+++[...]' + descText + '===';
				}
				rssText = rssText + descText;
			}
			var story;
			if ((lastDate != pubDate) && ( pubDate != '0')) {
				story = createTiddlyElement(article,"li",null,"RSSItem",pubDate);
				lastDate = pubDate;
			}
			else {
				lastDate = pubDate;
			}
			story = createTiddlyElement(article,"div",null,"RSSItem",null);
			wikify(rssText,story);
		}
	},
	
	formatDate: function(template, date){
		var dateString = new Date(date);
		// template = template.replace(/hh|mm|ss/g,'');
		return dateString.formatString(template);
	},
	
	formatDateString: function(template, date){
		var dateString = new Date(date.substr(0,4), date.substr(5,2) - 1, date.substr(8,2)
			);
		return dateString.formatString(template);
	}
	
};

config.macros.rssFeedUpdate = {
	label: "Update",
	prompt: "Clear the cache and redisplay this RssFeed",
	handler: function(place,macroName,params) {
		var feedURL = params[0];
		var tiddlerTitle = params[1];
		createTiddlyButton(place, this.label, this.prompt, 
			function () {
				if (config.macros.rssReader.cache[feedURL]) {
					config.macros.rssReader.cache[feedURL] = null; 
			}
			story.refreshTiddler(tiddlerTitle,null, true);
		return false;});
	}
};

//}}}
What is the Role of the asset::Clip and how exactly are Assets and (Clip)-MObjects related?

First of all: ~MObjects are the dynamic/editing/manipulation view, while Assets are the static/bookkeeping/searching/information view of the same entities. Thus, the asset::Clip contains the general configuration, the ref to the media and descriptive properties, while all parameters being "manipulated" belong to the session::Clip (MObject). Besides that, the practical purpose of asset::Clip is that you can save and remember some selection as a Clip (Asset), maybe even attach some information or markup to it, and later be able to (re)create an editable representation in the Session (the GUI could implement this by allowing to drag from the asset::Clip GUI representation to the timeline window).

!!dependencies
The session::Clip (frequently called "clip-MO", i.e. the MObject) //depends on the Asset.// It can't exist without the Asset, because the Asset is needed for rendering. The other direction is different: the asset::Clip knows that there is a dependent clip-MO, there could be //at most one// such clip-MO depending on the Asset, but the Asset can exist without the clip-MO (it gives the possibility to re-create the clip-MO).

!!deletions
When the Asset or the corresponding asset::Media is deleted, the dependent clip-MO has to disappear. And the opposite direction?
* Model-1: asset::Clip has a weak ref to the clip-MO. Consequently, the clip-MO can go out of scope and disappear, so the asset::Clip has to maintain the information of the clip's dimensions (source position and length) somewhere. Because of MultichannelMedia, this is not as simple as it may look at first sight.
* Model-2: asset::Clip holds a smart ptr to the clip-MO, thus effectively keeping it alive. __obviously the better choice__
In either case, we have to solve the ''problem of clip asset proliferation''.
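For illustration, the two models roughly correspond to the following ownership sketch (using standard smart pointers as stand-ins; the actual code uses [[Placement]] and its own ref-counting):
{{{
#include <memory>

// illustration only -- the real code uses Placement instead of bare std smart pointers
struct ClipMO { };                  // session::Clip, the editable MObject

struct ClipAsset_Model1             // Model-1: weak link, clip-MO may disappear
  {
    std::weak_ptr<ClipMO> clipMO;   //  => asset must duplicate position/length data itself
  };

struct ClipAsset_Model2             // Model-2: asset keeps the clip-MO alive (the preferred choice)
  {
    std::shared_ptr<ClipMO> clipMO; //  => deleting the asset makes the clip-MO disappear as well
  };
}}}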

!!multiplicity and const-ness
The link between ~MObject and Asset should be {{{const}}}, so the clip can't change the media parameters. Because of separation of concerns, it would be desirable that the Asset can't //edit// the clip either (meaning {{{const}}} in the opposite direction as well). But unfortunately the asset::Clip has the power to delete the clip-MO and, moreover, hands out a smart-ptr ([[Placement]]) referring to the clip-MO, which can (and should) be used to place the clip-MO within the session and consequently to manipulate it...

At first sight the link between asset and clip-MO is a simple logical relation between entities, but it is not strictly 1:1 because typical media are [[multichannel|MultichannelMedia]]. Even if the media is compound, there is //only one asset::Clip//, because in the logical view we have only one "clip-thing". On the other hand, in the session, we have a compound clip ~MObject comprised of several elementary clip objects, each of which will refer to its own sub-media (channel) within the compound media (and don't forget, this structure can be tree-like)
{{red{open question:}}} do the clip-MO's of the individual channels refer directly to asset::Media? Does this mean the relation is different from the top level, where we have a relation to an asset::Clip??

{{red{Note 1/2015}}} several aspects regarding the relation of clips and single/multichannel media are not yet settled. There is a preliminary implementation in the code base, but it is not yet decided how multichannel media will actually be modelled. Currently, we tend to treat the channel multiplicity rather as a property of the involved media, i.e. we have //one// clip object.
//Render Activities define the execution language of the render engine.//
The [[Scheduler]] maintains the ability to perform these Activities, in a time-bound fashion, observing dependency relations; activities allow for notification of completed work, tracking of dependencies, timing measurements, re-scheduling of other activities -- and last but not least the dispatch of actual [[render jobs|RenderJob]]. Activities are what is actually enqueued with priority in the scheduler implementation; they are planned for a »µ-tick slot«, activated once when the activation time is reached, and then forgotten. Each Activity is a //verb//, but can be inhibited by conditions and carry operation object data. Formally, activating an Activity equates to a predication, and the subject of that utterance is »the render process«.

!catalogue of Activities
;invoke
:dispatches a JobFunctor into an appropriate worker thread, based on the job definition's execution spec
:no further dependency checks; Activities attached to the job are re-dispatched after the job function's completion
;workstart
:signal start of some processing -- for the purpose of timing measurement, but also to detect crashed tasks
;workstop
:correspondingly signal end of some processing
;notify
:push a message to another Activity or process record
;gate
:probe a launch window [start…deadline[ and check a count-down latch ⟹ activate next Activity | else re-schedule @self into the future
;feed
:supply additional payload data for a preceding Activity
;post
:post a message providing a chain of further time-bound Activities
;hook
:invoke a callback, passing invocation information. Intended for testing.
;tick
:internal engine »heart beat« -- invoke internal maintenance hook(s)

!Data organisation
Activities are processed within a //performance critical part of the application// -- and are thus subject to [[Scheduler memory management|SchedulerMemory]].
While Activities are logically polymorphic, they are implemented as »POD with constructor« -- meaning that they are classes with [[standard layout|https://en.cppreference.com/w/cpp/named_req/StandardLayoutType]] and a [[trivial destructor|https://en.cppreference.com/w/cpp/language/destructor#Trivial_destructor]], allowing to just place them into a memory block and forget about them (without need to call any non-trivial functions). The ''Argument data'' depends on the //actual verb// and is thus placed into a union, with access functions to ensure proper usage of the data fields (while it is always possible to access the data fields directly). Since Activities are allocated in large numbers, a small memory footprint is favoured, and thus some significant information -- notably the //time window// for activation of each Activity -- is defined //contextually.//
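Roughly, this data layout could be pictured as follows -- a condensed sketch with illustrative field names, not the actual definition of {{{vault::gear::Activity}}}:
{{{
#include <cstdint>
#include <type_traits>

// condensed sketch of the Activity record -- field names illustrative, actual layout differs in detail
struct Activity
  {
    enum Verb : uint8_t { INVOKE, WORKSTART, WORKSTOP, NOTIFY, GATE, FEED, POST, HOOK, TICK };
    
    Verb      verb;
    Activity* next;                 // chaining == sequencing of Activities
    
    union ArgumentData              // argument data depends on the actual verb
      {
        struct { void* jobFunctor; uint64_t invocationKey; } invocation;    // INVOKE
        struct { int32_t latch; int64_t deadline; }           condition;    // GATE
        struct { Activity* target; }                          notification; // NOTIFY
        struct { uint64_t arg1, arg2; }                       feed;         // FEED
      };
    ArgumentData data;
  };

static_assert (std::is_standard_layout<Activity>::value
               and std::is_trivially_destructible<Activity>::value,
               "POD-like layout: place into a memory block and forget");
}}}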

Activities are organised into ''chains'', allowing to express relations based on their respective verbs.
There are //standard usage patterns,// hard coded into the {{{ActivityLang}}} and expected by the {{{SchedulerCommutator}}}, to express all relevant [[patterns of operational logic|RenderOperationLogic]] necessary to represent time-bound and dependent playback and render tasks.

!The Activity Language
While the Activities are low-level primitives and can be handled directly by the scheduler, any actual rendering invocation must arrange several Activities into a suitable chain of operations. Thus the actual rendering invocation can be seen as a //sentence of the Activity Language.// Formally speaking, it is a //symbolic term.// Not every possible term (and thus sentence) leads to semantically sound behaviour, and thus the ''Scheduler Interface Setup'' is organised in the form of a //builder notation to construct viable Activity terms.// {{{vault::gear::ActivityLang}}} provides the framework for such builder invocations, and allows to create such terms as transient objects -- connected to the durable {{{Activity}}} records allocated into the [[»BlockFlow« memory manager|SchedulerMemory]] backing the Scheduler operation. The language term is thus a front-end, and exposes suitable extension and configuration points for the JobPlanningPipeline to instruct the necessary Scheduler operations in order to enact a specific [[render Job|RenderJob]].
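To convey the flavour of such a builder notation, here is a purely illustrative sketch; the actual {{{ActivityLang}}} interface differs in names and detail:
{{{
#include <cstdint>

// purely illustrative builder sketch -- the actual ActivityLang interface differs
struct Activity;                                    // durable records, as sketched above
struct JobFunctor { };
struct Time { int64_t micros; };

class ActivityTerm                                  // transient front-end object (the »term«)
  {
  public:
    ActivityTerm& post  (Time start, Time deadline) { return *this; }  // anchor the pattern in time
    ActivityTerm& gate  (int prerequisites)         { return *this; }  // count-down latch + deadline check
    ActivityTerm& work  ()                          { return *this; }  // WORKSTART ... WORKSTOP bracket
    ActivityTerm& invoke(JobFunctor& job, uint64_t frameNr) { return *this; }
    ActivityTerm& notify(Activity* dependentGate)   { return *this; }
  };

// building the »sentence« for a frame render job, following the catalogue above
ActivityTerm frameRenderTerm (JobFunctor& renderJob, Activity* followUp)
  {
    return ActivityTerm{}
             .post  (Time{1000}, Time{5000})
             .gate  (2)
             .work  ()
             .invoke(renderJob, 42)
             .notify(followUp);
  }
}}}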

The //meaning// of Activities can be understood on two levels. For one, there is the abstract, //conceptual level:// Each Activity represents a verb to express something //performed by »the render process«// -- which in turn appears as a combination and connection of these elementary expressions. Activity verbs can be linked together in a limited number of ways
* chaining means sequencing -- first //this// Activity, followed by //that// Activity
* guarding means inhibiting and releasing -- once the prerequisites of a {{{GATE}}} have been ticked off, and unless the deadline is violated
* notification implies a trigger -- an impulse, either to //decrement// a {{{GATE}}} or to //activate// another Activity
* posting anchors a pattern -- a complete structure of further Activities is put underway, possibly with an explicit //start time// and //deadline.//
However, the //meaning// of Activities can also be understood at the //operational level:// This is what the [[operational logic of Activity execution|RenderOperationLogic]] entails, what is implemented as interpretation of the Activity Language and what is actually backed and implemented by the [[internals of the Scheduler|SchedulerProcessing]].
//the active core of each [[calculation stream|CalcStream]]//
When establishing a CalcStream, a suitable {{{Dispatcher}}} implementation is retrieved (indirectly) from the EngineFaçade -- behind the scenes this Dispatcher is wired to the [[Segmentation]] data structure in the [[Fixture]] and thus allows to ''build'' the JobPlanningPipeline. The latter is an iterator and allows to pull a sequence of [[frame render jobs|RenderJob]], ready for hand-over to the [[Scheduler]]

Besides housing the planning pipeline, the RenderDrive is also a JobFunctor for (re)invocation of //the next job planning chunk// -- it re-activates itself repeatedly and thereby drives the stream of calculation and thus the rendering ahead.
Conceptually, the Render Engine is the core of the application. But &mdash; surprisingly &mdash; we don't even have a distinct »~RenderEngine« component in our design. Rather, the engine is formed by the cooperation of several components spread out over two layers (Vault and ~Steam-Layer): The [[Builder]] creates a network of [[render nodes|ProcNode]], the [[Scheduler]] triggers individual [[calculation jobs|RenderJob]], which in turn pull data from the render nodes, thereby relying on the [[Vault services|Vault-Layer]] for data access and using plug-ins for the actual media calculations.
&rarr; OverviewRenderEngine
&rarr; EngineFaçade 
&rarr; [[Rendering]]
&rarr; [[Player]]
{{red{⚠ In-depth rework underway as of 10/2024...}}}
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
The [[Render Engine|Rendering]] only carries out the low-level and performance critical tasks. All configuration and decision concerns are to be handled by [[Builder]] and [[Dispatcher|SteamDispatcher]]. While the actual connection of the Render Nodes can be highly complex, basically each Segment of the Timeline with uniform characteristics is handled by one Processor, which is a graph of [[Processing Nodes|ProcNode]] discharging into an ExitNode. The Render Engine Components as such are //stateless// themselves; for the actual calculations they are combined with a StateProxy object generated by and connected internally to the Controller {{red{really?? 2018}}}, while at the same time holding the Data Buffers (Frames) for the actual calculations.

{{red{🗲🗲 Warning: what follows is an early draft from 2009, obsoleted by actual plans and development as of 10/2024 🗲🗲}}}
Currently the Render/Playback is being targeted for implementation; almost everything in this diagram will be implemented in a slightly different way....
[img[Entities comprising the Render Engine|uml/fig128389.png]]
{{red{⚠ In-depth rework underway as of 7/2024...}}}
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
Below are some notes regarding details of the actual implementation of the render process and processing node operation. In the description of the [[render node operation protocol|NodeOperationProtocol]] and the [[mechanics of the render process|RenderMechanics]], these details were left out deliberately.
{{red{WIP as of 9/11 -- need to mention the planning phase more explicitly}}}

!Layered structure of State
State can be seen as structured like an onion. All the [[StateAdapter]]s in one call stack are supposed to be within one layer: they all know of a "current state", which in turn is a StateProxy (and thus may refer yet to another state, maybe across the network or in the backend or whatever). The actual {{{process()}}} function "within" the individual nodes just sees a single StateAdapter and thus can be thought of as a layer below.

!Buffer identification
For the purpose of node operation, Buffers are identified by a [[buffer-handle|BuffHandle]], which contains both the actual buffer pointer and an internal index and classification of the source providing the buffer; the latter information is used for deallocation. Especially for calling the {{{process()}}} function (which is supposed to be plain C) the node invocation needs to prepare and provide an array containing just the output and input buffer pointers. Typically, this //frame pointer array//&nbsp; is allocated on the call stack.
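As an illustration of this calling convention, a hypothetical sketch (the real BuffHandle carries more bookkeeping information than shown here):
{{{
#include <cstddef>

// hypothetical illustration -- the real BuffHandle carries more bookkeeping information
struct BuffHandle
  {
    void*  buffer;        // the actual buffer pointer handed to process()
    size_t sourceID;      // index/classification of the providing source (used for deallocation)
  };

using ProcessFun = void(*)(void** buffers);           // plain-C processing function

void invokeNode (ProcessFun process, BuffHandle* out, size_t nOut, BuffHandle* in, size_t nIn)
  {
    void* frames[8];                                  // frame pointer array on the call stack
    size_t idx = 0;                                   // (the sketch assumes nOut + nIn <= 8)
    for (size_t i=0; i<nOut; ++i) frames[idx++] = out[i].buffer;
    for (size_t i=0; i<nIn;  ++i) frames[idx++] = in[i].buffer;
    process (frames);                                 // node sees only the raw output+input pointers
  }
}}}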

!Problem of multi-channel nodes
Some data processors simply require to work on multiple channels simultaneously, while others work just on a single channel and will be replicated by the builder for each channel involved. Thus, we are stuck with the nasty situation that the node graph may go through some nodes spanning the chain of several channels. Now the decision is //not to care for this complexity within a single chain calculating a single channel.// We rely solely on the cache to avoid duplicated calculations. When a given node happens to produce multiple output buffers, we are bound to allocate them for the purpose of this node's {{{process()}}} call, but afterwards we're just "letting go", releasing the buffers not needed immediately for the channel actually to be processed. For this to work, it is supposed that the builder has wired in a caching, and that the cache will hit when we touch the same node again for the other channels.

Closely related to this is the problem how to number and identify nodes and thus to be able to find calculated frames in cache (&rarr; [[here|NodeFrameNumbering]])

!Configuration of the processing nodes
[>img[uml/fig132357.png]]
Every node is actually decomposed into three parts
* an interface container of a ProcNode subclass
* a {{{const}}} WiringDescriptor, which is actually parametrised to a subtype encoding details of how to carry out the intended operation
* the Invocation state created on the stack for each {{{pull()}}} call. It is comprised of references to a StateAdapter object and the current overall process state, the WiringDescriptor, and finally a table of suitable buffer handles
Thus, the outer container can be changed polymorphically to support the different kinds of nodes (large-scale view). The actual wiring of the nodes is contained in the WiringDescriptor, including the {{{process()}}} function pointer. Additionally, this WiringDescriptor knows the actual type of the operation Strategy, and this actual type has been chosen by the builder so as to select details of the desired operation of this node, for example caching / no caching, or maybe ~OpenGL rendering, or the special case of a node pulling directly from a source reader. Most of this configuration is done by selecting the right template specialisation within the builder; thus in the critical path most of the calls can be inlined
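Schematically, this three-part decomposition might be pictured as follows (condensed sketch, not the actual class definitions):
{{{
// condensed sketch of the three-part node decomposition -- not the actual class definitions
struct BuffHandle;
struct State;                                      // current overall process state

using ProcessFun = void(*)(void** buffers);

struct WiringDescriptor                            // const, built once by the Builder
  {
    ProcessFun process;                            // the actual calculation function
    unsigned   nrIn, nrOut;
    // the concrete subtype additionally encodes the operation Strategy
    // (caching / no caching, in-place capability, OpenGL, source reading ...)
  };

class ProcNode                                     // polymorphic interface container
  {
  protected:
    WiringDescriptor const& wiring_;
  public:
    ProcNode (WiringDescriptor const& w) : wiring_(w) { }
    virtual ~ProcNode() = default;
    virtual BuffHandle* pull (State& currentProcess, unsigned outChannel)  =0;
  };

struct Invocation                                  // created on the stack for each pull() call
  {
    State&                  state;
    WiringDescriptor const& wiring;
    BuffHandle*             buffTab;               // table of suitable buffer handles
  };
}}}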

!!!! composing the actual operation Strategy
As shown in the class diagram to the right, the actual implementation is assembled by chaining together the various policy classes governing parts of the node operation, like Caching, in-place calculation capability, etc. (&rarr; see [[here|WiringDescriptor]] for details). The rationale is that the variable part of the Invocation data is allocated at runtime directly on the stack, while a precisely tailored call sequence for "calculating the predecessor nodes" can be defined out of a bunch of simple building blocks. This helps avoid "spaghetti code", which would be especially dangerous and difficult to get right because of the large number of different execution paths. Additionally, a nice side effect of this implementation technique is that a good deal of the implementation is eligible for inlining.
We //do employ//&nbsp; some virtual calls for the buffer management in order to avoid coupling the policy classes to the actual number of in/out buffers. (As of 6/2008, this is mainly a precaution to be able to control the number of generated template instances. If we ever get in the region of several hundred individual specialisations, we'd need to separate out further variable parts to be invoked through virtual functions.)

!Rules for buffer allocation and freeing
* only output buffers are allocated. It is //never necessary//&nbsp; to allocate input buffers!
* buffers are to be allocated as late as possible, typically just before invoking {{{process()}}}
* buffers are always allocated by activating a [[buffer handle|BuffHandle]], preconfigured already during the planning phase
* {{{pull()}}} returns a handle at least for the single output requested by this call, allowing the caller to retrieve the result data
* any other buffers filled with results during the same {{{process()}}} call can be released immediately before returning from {{{pull()}}}
* similarly, any input buffers are to be released immediately after the {{{process()}}} call, but before returning from this {{{pull()}}}
* while any handle contains the necessary information for releasing or "committing" this buffer, this has to be triggered explicitly.
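The gist of these rules is an ordering within {{{pull()}}} roughly like the following schematic sketch (all helper names are hypothetical stand-ins):
{{{
// schematic ordering of buffer handling within a pull() call -- all names hypothetical
struct BuffHandle
  {
    void* buffer = nullptr;
    void release () { /* hand back to cache or allocator */ }
  };

BuffHandle pullFromPredecessor ()  { return BuffHandle{}; }   // stand-in: recursive downcall
BuffHandle activateOutputHandle () { return BuffHandle{}; }   // stand-in: activate pre-planned handle
void       process (BuffHandle&, BuffHandle&) { }             // stand-in: plain-C calculation

BuffHandle pull ()
  {
    BuffHandle in  = pullFromPredecessor();    // inputs are never allocated here, only retrieved
    BuffHandle out = activateOutputHandle();   // allocate output as late as possible, right before process()
    
    process (out, in);
    
    in.release();                              // input released immediately after process(),
    return out;                                // before returning from this pull(); the caller
  }                                            // must explicitly release or "commit" the output
}}}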

@@clear(right):display(block):@@
A unit of operation, to be [[scheduled|Scheduler]] for calculating media frame data just in time.
Within each CalcStream, render jobs are produced by the associated FrameDispatcher, based on the corresponding JobTicket used as blue print (execution plan).

!Anatomy of a render job
Basically, each render job is a //closure// -- hiding all the prepared, extended execution context and allowing the scheduler to trigger the job as a simple function.
When activated, by virtue of this closure, the concrete ''node invocation'' is constructed, which is a private and safe execution environment for the actual frame data calculations. This (mostly stack based) environment embodies a StateProxy, acting as communication hub for accessing anything possibly stateful within the larger scope of the currently ongoing render process. The node invocation sequence is what actually implements the ''pulling of data'': on exit, all calculated data is expected to be available in the output buffers. Typically (but not necessarily) each node embodies a ''calculation function'', holding the actual data processing algorithm.
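Conceptually, such a job thus boils down to a small closure plus a plain trigger function -- sketched here with illustrative names (the actual Job/JobFunctor definitions differ):
{{{
#include <cstdint>

// sketch with illustrative names -- the actual Job / JobFunctor definitions differ
struct JobClosure                              // prepared, quasi-static execution context
  {
    virtual ~JobClosure() = default;
    virtual void invoke (int64_t nominalTime, uint64_t invocationID)  =0;
  };

struct Job                                     // what the Scheduler actually triggers
  {
    JobClosure* closure;                       // shared per Segment, possibly used concurrently
    int64_t     nominalTime;                   // effective nominal frame time
    uint64_t    invocationID;                  // invocation instance ID
    
    void trigger ()  { closure->invoke (nominalTime, invocationID); }
  };
}}}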

!{{red{open questions 2/12}}}
* what are the job's actual parameters?
* how is prerequisite data passed?  &rarr; maybe by an //invocation key?//

!Input and closure
Each job gets only the bare minimum information required to trigger the execution: the really variable part of the node's invocation. The job uses these pieces of information to re-activate a pre-calculated closure, representing a wider scope of environment information. Yet the key point is for this wider scope information to be //quasi static.// It is shared by a whole [[segment|Segmentation]] of the timeline in question, and it will be used and re-used, possibly concurrently. From the render job's point of view, the engine framework just ensures the availability and accessibility of all this wider scope information.

Prerequisite data for the media calculations can be considered just part of that static environment, as far as the node is concerned. Actually, this prerequisite data is dropped off by other nodes, and the engine framework and the builder ensure the availability of this data just in time.

!observations
* the job's scope represents a strictly local view
* the job doesn't need to know about its output
* the job doesn't need to know anything about the frame grid or frame number
* all it needs to know is the ''effective nominal time'' and an ''invocation instance ID''
{{red{⚠ In-depth rework underway as of 7/2024...}}}
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
While the render process, with respect to the dependencies, the builder and the processing function, is sufficiently characterized by referring to the ''pull principle'' and by defining a [[protocol|NodeOperationProtocol]] each node has to adhere to &mdash; to actually get it coded, we have to take care of some important details, especially //how to manage the buffers.// It may well be that the length of the code path necessary to invoke the individual processing functions is finally not so important, compared with the time spent in the inner pixel loop within these functions. But my guess is (as of 5/08) that the overall number of data moving and copying operations //will be//&nbsp; of importance.
{{red{WIP as of 9/11 -- need to mention the planning phase more explicitly}}}

!requirements
* operations should be "in place" as much as possible
* because caching necessitates a copy, the points where this happens should be controllable.
* buffers should accommodate automatically to provide the necessary space without clipping the image.
* the type of the media data can change while passing through the network, and so does the type of the buffers.
On the other hand, the processing function within the individual node needs to be shielded from these complexities. It can expect to get just //N~~I~~// input buffers and //N~~O~~// output buffers of the required type. And, moreover, as the decision how to organize the buffers certainly depends on non-local circumstances, it should be preconfigured while building.

!data flow
[>img[uml/fig131973.png]]
Not everything can be preconfigured though. The pull principle opens the possibility for the node to decide on a per-call basis which predecessor(s) to pull (if any). This decision may rely on automation parameters, which thus need to be accessible prior to requesting the buffer(s). Additionally, in a later version we plan to have the node network calculate some control values for adjusting the cache and backend timings &mdash; and of course at some point we'll want to utilize the GPU, resulting in the need to feed data from our processing buffers into some texture representation.

!buffer management
{{red{NOTE 9/11: the following is partially obsolete and needs to be rewritten}}} &rarr; see the BufferTable for details regarding new buffer management...

Besides the StateProxy representing the actual render process and holding a couple of buffer (refs), we employ a lightweight adapter object in between. It is used //for a single {{{pull()}}}-call// &mdash; mapping the actual buffers to the input and output port numbers of the processing node and for dealing with the cache calls. While the StateProxy manages a pool of frame buffers, this interspersed adapter allows us to either use a buffer retrieved from the cache as an input, possibly use a new buffer located within the cache as output, or (in case no caching happens) to just use the same buffer as input and output for "in-place"-processing. The idea is that most of the configuration of this adapter object is prepared in the wiring step while building the node network.

The usage pattern of the buffers can be stack-like when processing nodes require multiple input buffers. In the standard case, which also is the simplest case, a pair of buffers (or a single buffer for "in-place" capable nodes) suffices to calculate a whole chain of nodes. But &mdash; as the recursive descent means depth-first processing &mdash; in case multiple input buffers are needed, we may encounter a situation where some of these input buffers already contain processed data, while we have to descend into yet another predecessor node chain to pull the data for the remaining buffers. Care has to be taken //to allocate the buffers as late as possible,// otherwise we could end up holding onto a buffer almost for each node in the network. Effectively this translates into the rule to allocate output buffers only after all input buffers are ready and filled with data; thus we shouldn't allocate buffers when //entering// the recursive call to the predecessor(s), rather we have to wait until we are about to return from the downcall chain.
Besides, these considerations also show we need a means of passing on the current buffer usage pattern while calling down. This usage pattern not only includes a record of what buffers are occupied, but also the intended use of these occupied buffers, especially if they can be modified in-place, and at which point they may be released and reused.
__note__: this process outlined here and below is still a simplification. The actual implementation has some additional [[details to care for|RenderImplDetails]]

!!Example: calculating a 3 node chain
# Caller invokes calculation by pulling from exit node, providing the top-level StateProxy
# node1 (exit node) builds StateAdapter and calls retrieve() on it to get the desired output result
# this StateAdapter (ad1) knows he could get the result from Cache, so he tries, but it's a miss
# thus he pulls from the predecessor node2 according to the [[input descriptor|ProcNodeInputDescriptor]] of node1
# node2 builds its StateAdapter and calls retrieve()
# but because StateAdapter (ad2) is configured to directly forward the call down (no caching), it pulls from node3
# node3 builds its StateAdapter and calls retrieve()
# this StateAdapter (ad3) is configured to look into the Cache...
# this time producing a Cache hit
# now StateAdapter ad2 has input data, but needs an output buffer location, which it requests from its //parent state// (ad1)
# and, because ad1 is configured for Caching and is "in-place" capable, it's clear that this output buffer will be located within the cache
# thus the allocation request is forwarded to the cache, which provides a new "slot"
# now node2 has both a valid input and a usable output buffer, thus the process function can be invoked
# and after the result has been rendered into the output buffer, the input is no longer needed
# and can be "unlocked" in the Cache
# now the input data for node1 is available, and as node1 is in-place-capable, no further buffer allocation is necessary prior to calculating
# the finished result is now in the buffer (which happens to be also the input buffer and is actually located within the Cache)
# thus it can be marked as ready for the Cache, which may now provide it to other processes (but isn't allowed to overwrite it)
# finally, when the caller is done with the data, it signals this to the top-level State object
# which forwards this information to the cache, which in turn may now do with the released Buffer as he sees fit.
[img[uml/fig132229.png]]

@@clear(right):display(block):@@
__see also__
&rarr; the [[Entities involved in Rendering|RenderEntities]]
&rarr; additional [[implementation details|RenderImplDetails]]
&rarr; [[Memory management for render nodes|ManagementRenderNodes]]
&rarr; the protocol [[how to operate the nodes|NodeOperationProtocol]]
//The operational logic of Activity execution is the concrete service provided by the [[Scheduler]] to implement interwoven [[render Activities|RenderActivity]] and [[Job  execution|RenderJob]].//
* logically, each {{{Activity}}} record represents a //verb// to describe some act performed by »the render process«
* the {{{ActivityLang}}} provides a //builder notation// to build „sentences of activities“ and it sets the framework for //execution// of Activities
* the ''Scheduler Layer-2'' ({{{SchedulerCommutator}}}) backs this //language execution// with the elementary //effects// to setup the processing
* the ''Scheduler Layer-1'' ({{{SchedulerInvocation}}}) provides the low-level coordination and invocation mechanics to launch [[render Jobs|RenderJob]].

!Framework for Activity execution
The individual {{{Activity}}} records serve as atomic execution elements; an Activity shall be invoked once, either by time-bound trigger in the Scheduler's priority queue, or by receiving an activation message (directly when in //management mode,// else indirectly through the invocation queue). The data structure of the {{{Activity}}} record (&rarr; [[description|RenderActivity]]) is maintained by the [[»block flow« memory allocation scheme|SchedulerMemory]] and can be considered stable and available (within the logical limits of its definition, which means until the overarching deadline has passed). The ''activation'' of an Activity causes the invocation of a hard-wired execution logic, taking into account the //type field// of the actual {{{Activity}}} record to be »performed«. This hard-wired logic however can be differentiated into a //generic// part (implemented directly in {{{class Activity}}}) and a //contextual// part, which is indirected through a ''λ-binding'', passed as ''execution context'', yet actually implemented by functions of ''Scheduler Layer-2''.
!!!execution patterns
Since the render engine can be considered performance critical, only a fixed set of //operational patterns// is supported, implemented with a minimum of indirections and thus with limited configurability. It seems indicated to confine the scope of this operational logic to a finite low-level horizon, assuming that //all relevant high-level render activities// can actually be //expressed in terms of these fundamental patterns,// in combination with an opaque JobFunctor.
;Frame Render Job
:Invocation of a ~CPU-bound calculation function working in-memory to generate data into output buffers
:* {{{POST}}} : defines the ''contextual timing parameters'', which are the //start time// and the //deadline//
:** causes a chain of Activities to be put in action
:** these Activities are chained-up (hooked onto the {{{next}}} pointer)
:** depending on the //invocation context...//
:*** in ''grooming mode'' (i.e. the current worker holds the {{{GroomingToken}}}) the follow-up activation happens synchronously
:*** in ''work mode'' (i.e. {{{GroomingToken}}} has been dropped) the Scheduler internals //must not be altered;// the chain thus has to be dispatched
:** ⟹ the Activity Language invokes the ''λ-post''.
:* {{{GATE}}} : provides a check-point to ensure the preconditions are met
:** the current //scheduler-time// (≈ system time) is checked against the //deadline//
:** moreover, a //prerequisite count// is checked, allowing passage only if the internal countdown latch has been ticked off to zero;
:** while surpassing the deadline simply obliterates all chained Activities, unmet prerequisites cause a „spinning“ delay-and-recheck
:** the count-down of prerequisites is caused by receiving a ''notification'' (either externally, or from a {{{NOTIFY}}}-Activity)
:** receiving such a notification causes re-evaluation of the condition and possibly activation of the chain
:** ⟹ the Activity Language evaluates the condition {prerequisite ≡ 0 and before deadline}.
:*** when still blocked, the {{{GATE}}} is re-issued through invocation of ''λ-post'' with a re-check delay
:*** otherwise the chained Activities are activated
:* {{{WORKSTART}}} : mark the start of ''render calculations'' and transition ''grooming mode'' ⟼ ''work mode''
:** the current time and a payload argument are emitted as a message -- for self-regulation of the render engine
:** the {{{GroomingToken}}} is dropped, allowing other [[workers|SchedulerWorker]] to enter grooming mode and to retrieve further jobs.
:** ⟹ the Activity Language invokes the ''λ-work''.
:* {{{INVOKE}}} : actually invoke the JobFunctor given as immediate argument
:** a fixed number of further {{{uint64_t}}} arguments are retrieved from the {{{FEED}}}-Activities following next
:** all currently foreseeable render operations invoked from here do not need more than 4 fixed arguments (the rest being prepared by the Builder)
:**  ⟹ the Activity Language directly invokes the ''~JobFunctor'', in the current thread, which thus might be blocked indefinitely.
:* {{{FEED}}} : used to transport argument data for render operations
:**  ⟹ the Activity Language skips this {{{Activity}}} record, passing activation
:* {{{WORKSTOP}}} : mark the end of ''render calculations''
:** ⟹ the Activity Language invokes the ''λ-done''.
:** the current time and a payload argument are emitted as a message -- for self-regulation of the render engine
:** logically, the current worker is now free for further work -- which is attained implicitly
:* {{{NOTIFY}}} (optional) : pass a trigger event to some other Activity dependent on the current render operation's results
:** the actual trigger is //double dispatched// -- considering the target Activity given as argument
:**  ⟹ the Activity Language ...
:*** for a {{{GATE}}} as target ⟹ //triggers// the gate, which implies to decrement the dependency counter, possibly activating the chain
:*** ┉┉ //further special trigger cases may be {{red{added later}}}// ┉┉
:*** for any other target Activity ⟹ invoke the ''λ-post'' ⟹ contend for acquiring the {{{GroomingToken}}} to re-enter ''grooming mode''
;Media Reader Job
:Retrieve raw media data from external storage, using ''asynchronous IO''
:this task is encoded in two separate Activity chains, where the second chain shall be activated by an asynchronous callback
:* {{{POST}}} : similar to Frame Rendering, this is the entrance point and defines ''contextual timing parameters''
:* {{{GATE}}} (optional) : can be inserted to hold back the data loading until some further prerequisites are met or to enforce a deadline
:* {{{WORKSTART}}} : mark the start of ''IO activity'' and enter ''work mode'' (including ramifications for error handling)
:* {{{INVOKE}}} : in this case implies launching a system call to post the asynchronous IO request
:** the coordinates of the media to be accessed are not given directly; they are rather encoded into the render nodes by the Builder
:** yet the actual frame number is retrieved as argument from the following {{{FEED}}}
:** typically, a block of data is loaded, while subsequent Media Reading Jobs will re-use this block (without actually causing IO)
:**  ⟹ the Activity Language directly invokes the ''~JobFunctor'', in the current thread, typically non-blocking, but possibly causing failure
:* {{{FEED}}} : argument data package for the preceding invocation, otherwise ignored and passed
;IO Callback
:Entrance point activated from an external callback closure after asynchronous operations
:the external callback is required somehow to include the specific {{{Activity}}} record starting this chain...
:* {{{WORKSTOP}}} : mark the end of ''IO activities''
:* {{{NOTIFY}}} : pass the trigger event to signal availability of the retrieved data for processing
:**  ⟹ the {{{Activity}}} record given as receiver represents the follow-up calculation (or IO) task to commence
;Planning Job
:Periodically establish the next chunk of ~Activity-Terms for a continuously ongoing [[stream of calculations|CalcStream]]
:notably this kind of Job entails enqueueing further {{{Activity}}} records and thus must be performed entirely in ''grooming mode''
:* {{{POST}}} : similar to the other cases, this is the entrance point and defines ''contextual timing parameters''
:* {{{INVOKE}}} : invokes a special ''~JobFunctor'' acting as //continuation closure//
:** here the further arguments are used to link back to a {{{RenderDrive}}} maintained as part of a running CalcStream
:** further continuation information is embedded into the JobPlanningPipeline enclosed therein
:**  ⟹ the Activity Language directly invokes the ''~JobFunctor'', in this case to perform the planning and scheduling definition
:* {{{FEED}}} : argument data package for the preceding planning task
;Tick
:Internal periodical maintenance duty cycle
:* {{{TICK}}} : a special marker Activity, which re-inserts itself on each activation and performs an internal hook of the Render Engine
:**  ⟹ the Activity Language invokes the ''λ-tick''

!!!Post and dispatch
An Activity is //performed// by invoking its {{{activate(now, ctx)}}} function -- however, there is a twist: some Activities require interaction with the Scheduler queue or may even alter this queue -- and such Activities must be performed //in »management mode«// (single threaded, holding the {{{GroomingToken}}}). These requirements can be fulfilled by //dispatching// an Activity through the ''λ-post'', which attempts to acquire the {{{GroomingToken}}} to proceed directly -- and otherwise sends the action through the //dispatch queue.// Moreover, some follow-up activities need to happen //later// -- and this can be encoded by using a {{{POST}}}-Activity, which both defines a time window of execution and causes its chain-Activity to be sent through ''λ-post''. All the Lambda functions mentioned here are part of the //execution context,// which represents and abstracts the scheduler environment.

In a similar vein, ''dependency notifications'' also need to happen decoupled from the activity chain from which they originate; thus the Post-mechanism is also used for dispatching notifications. Yet notifications are to be treated specially, since they are directed towards a receiver, which in the standard case is a {{{GATE}}}-Activity and will respond by //decrementing its internal latch.// Consequently, notifications will be sent through the ''λ-post'' -- which operationally re-schedules a continuation as a follow-up job. Receiving such a notification may cause the Gate to become opened; in this case the trigger leads to //activation of the chain// hooked behind the Gate, which at some point typically enters into another calculation job. Otherwise, if the latch (in the Gate) is already zero (or the deadline has passed), nothing happens. Thus the implementation of the state transition logic ensures the chain behind a Gate can only be //activated once.//
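The state transition guarding such a {{{GATE}}} can be summarised in a few lines of logic (simplified sketch, not the actual implementation):
{{{
#include <cstdint>

// simplified sketch of the GATE state transition logic
struct Gate
  {
    int     latch;                           // count of outstanding prerequisites
    int64_t deadline;
    
    // returns true if the chain hooked behind the Gate shall be activated now
    bool receiveNotification (int64_t now)
      {
        if (now >= deadline) return false;   // passing the deadline obliterates the chained Activities
        if (latch <= 0)      return false;   // latch already at zero: nothing happens (activate only once)
        --latch;                             // tick off the prerequisite just fulfilled
        return latch == 0;                   // all prerequisites met -> activate the chain
      }
  };
}}}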
At a high level, the Render Process is what „runs“ a playback or render. Using the EngineFaçade, the [[Player]] creates a descriptor for such a process, which notably defines a [[»calculation stream«|CalcStream]] for each individual //data feed// to be produced. To actually implement such an //ongoing stream of timed calculations,// a series of data frames must be produced, for which some source data has to be loaded and then individual calculations will be scheduled to work on this data and deliver results within a well defined time window for each frame. Thus, on the implementation level, a {{{CalcStream}}} comprises a pipeline to define [[render jobs|RenderJob]], and a self-repeating re-scheduling mechanism to repeatedly plan and dispatch a //chunk of render jobs// to the [[Scheduler]], which takes care to invoke the individual jobs in due time.

This leads to an even more detailed description at implementation level of the ''render processing''. Within the [[Session]], the user has defined the »edit«, i.e. the definition of the media product as a collection of media elements placed and arranged into a [[Timeline]]. A repeatedly-running, demand-driven, compiler-like process (in Lumiera known as [[the Builder|Builder]]) consolidates this [[»high-level definition«|HighLevelModel]] into a [[Fixture]] and a [[network of Render Nodes|LowLevelModel]] directly attached below. The Fixture hereby defines a [[segment for each part of the timeline|Segmentation]], which can be represented as a distinct and non-changing topology of connected render nodes. So each segment spans a time range, quantised into a range of frames -- and the node network attached below this segment is capable of producing media data for each frame within the definition range, when given the actual frame number and some designation of the actual data feed required at that point. Yet it depends on the circumstances what this »data feed« //actually is;// as a rule, anything which can be produced and consumed as a compound will be represented as //a single feed.// The typical video will thus comprise a video feed and a stereo sound feed, while another setup may require to deliver individual sound feeds for the left and right channel, or whatever channel layout the sound system has, and it may require two distinct beamer feeds for the two channels of stereoscopic video. However -- as a general rule of architecture -- the Lumiera Render Engine is tasked to perform //all of the processing work,// up to and including any adaptation step required to reach the desired final result. Thus, for rendering into a media container, only a single feed is required, which can be drawn from an encoder node, which in turn consumes several data feeds for its constituents.

To summarise this break-down of the rendering process defined thus far, the [[Scheduler]] ''invokes'' individual [[frame render jobs|RenderJob]], each of which defines a set of »coordinates«:
* an ID of the calculation stream allowing to retrieve the output sink
* a frame number relative to the timeline, encoding a //nominal frame time//
* a specification of the Segment and the ExitNode to pull


{{red{⚠ In-depth rework underway as of 7/2024...}}}
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
For each segment (of the effective timeline), there is a Processor holding the exit node(s) of a processing network, which is a "Directed Acyclic Graph" of small, preconfigured, stateless [[processing nodes|ProcNode]]. This network is operated according to the ''pull principle'', meaning that the rendering is just initiated by "pulling" output from the exit node, causing a cascade of recursive downcalls or prerequisite calculations to be scheduled as individual [[jobs|RenderJob]]. Each node knows its predecessor(s), thus the necessary input can be pulled from there. Consequently, there is no centralized "engine object" which may invoke nodes iteratively or table driven &mdash; rather, the rendering can be seen as a passive service provided for the Vault, which may pull from the exit nodes at any time, in any order (?), and possibly multithreaded.
All State necessary for a given calculation process is encapsulated and accessible by a StateProxy object, which can be seen as the representation of "the process". At the same time, this proxy provides the buffers holding data to be processed and acts as a gateway to the Vault to handle the communication with the Cache. In addition to this //top-level State,// each calculation step includes a small [[state adapter object|StateAdapter]] (stack allocated), which is pre-configured by the builder and serves the purpose of isolating the processing function from the details of buffer management.


__see also__
&rarr; the [[Entities involved in Rendering|RenderEntities]]
&rarr; the [[mechanics of rendering and buffer management|RenderMechanics]]
&rarr; the protocol [[how to operate the nodes|NodeOperationProtocol]]
The rendering of input sources to the desired output ports happens within the &raquo;''Render Engine''&laquo;, which can be seen as a collaboration of Steam- and ~Vault-Layer together with external/library code for the actual data manipulation. In preparation of the RenderProcess, the [[Builder]] has wired up a network of [[processing nodes|ProcNode]] called the ''low-level model'' (in contrast to the high-level model of objects placed within the session). Generally, this network is a "Directed Acyclic Graph" starting at the //exit nodes// (output ports) and pointing down to the //source readers.// In Lumiera, rendering is organized according to the ''pull principle'': when a specific frame of rendered data is requested from an exit node, a recursive calldown happens, as each node asks its predecessor(s) for the necessary input frame(s). This may include pulling frames from various input sources and for several time points, thus pull rendering is more powerful (but also more difficult to understand) than push rendering, where the process would start out with a given source frame.

Rendering can be seen as a passive service available to the Vault, which remains in charge of what to render and when. Render processes may run in parallel without any limitation. All of the storage and data management falls into the realm of the Vault. The render nodes themselves are ''completely stateless'' -- state is translated into structure, or attached at the level of individual invocations in the form of invocation parameters, automation and input data.
A special kind of [[render job|RenderJob]], used to retrieve input data relying on external IO.
Since a primary goal for the design of Lumiera's render engine is to use the available resources effectively, we try to avoid the situation where a working thread gets blocked waiting for external IO to deliver data. Thus we create special marker jobs to keep track of any prerequisites and start the actual render calculations only when all these prerequisites are fulfilled and all the required input data is available in RAM.
A distinct property of the Lumiera application is to rely on a rules-based approach rather than on hard-wired logic. When it comes to deciding and branching, a [[Query]] is issued, resulting either immediately in a {{{bool}}} result, or in creating a //binding// for the variables used within the query. Commonly, there is more than one solution for a given query, allowing the result set to be enumerated.




!current state {{red{WIP as of 10/09}}}
We are still working to settle the overall outline of the application.
For now, the above remains in the status of a general concept and typical solution pattern: ''create query points instead of hard wiring things''.
Later on we expect a distinct __query subsystem__ to emerge, presumably embedding a YAP Prolog interpreter.
A facility allowing the Steam-Layer to work with abstracted [[media stream types|StreamType]], linking (abstract or opaque) [[type tags|StreamTypeDescriptor]] to a [[library|MediaImplLib]], which provides functionality for actually dealing with data of this media stream type. Thus, the stream type manager is a kind of registry of all the external libraries which can be bridged and accessed by Lumiera (for working with media data, that is). The most basic set of libraries is installed here automatically at application start, most notably the [[GAVL]] library for working with uncompressed video and audio data. //Later on, when plugins introduce further external libraries, these will need to be registered here too.//
A scale grid controls the way of measuring and aligning a quantity the application has to deal with. The most prominent example is the way to handle time in fixed atomic chunks (''frames'') addressed through a fixed format (''timecode''): while internally the application uses time values of sufficiently fine-grained resolution, the actually visible timing coordinates of objects within the session are ''quantised'' to some predefined and fixed time grid.

&rarr; QuantiserImpl
//Invoke and control the dependency and time based execution of [[render jobs|RenderJob]]//
The Scheduler acts as the central hub in the implementation of the RenderEngine and coordinates the //processing resources// of the application. Regarding architecture, the Scheduler is located in the Vault-Layer, and //running// the Scheduler is equivalent to activating the »Vault Subsystem«. An EngineFaçade acts as the entrance point, providing high-level render services to other parts of the application: [[render jobs|RenderJob]] can be activated under various timing and dependency constraints. Internally, the implementation is organised into two layers:
;Layer-2: Coordination
:maintains a network of interconnected [[activities|RenderActivity]], tracks dependencies and observes timing constraints
:coordinates a [[pool of active Workers|SchedulerWorker]] to dispatch the next activities
;Layer-1: Invocation
:operates a low-level priority scheduling mechanism for time-bound execution of [[activities|RenderActivity]]

!Event based vs. time based operational control
Time bound delivery of media data is an important aspect of editing and playback -- yet other concerns are of similar importance: the ability to make optimum use of scarce resources and to complete extended processing in reasonable time, while retaining some overall responsiveness of the system. And, especially for the //final render,// it is paramount to produce 100% correct results without any glitches -- a goal that cannot be reconciled with the demand for perfect timing. These considerations alone are sufficient to indicate that strict adherence to a pre-established calculation plan is not enough to build a viable render engine. As far as media processing is concerned, the limited I/O bandwidth is the most precious resource -- reading and writing data from persistent storage requires a lot of time, with generally unpredictable timings, necessitating asynchronous processing.

This leads to the observation that every render or playback process has to deal with rather incompatible processing patterns and trends: for one, processing has to start as soon as the event of a completed I/O operation is published, yet on the other hand, limited computational resources must be distributed and prioritised in a way suitable to deliver the completed data as close as possible to a pre-established timing deadline, under the constraint of limited in-memory buffer capacity. The //control structure// of such a render engine is thus not only a time based computation plan -- first and foremost it should be conceived as an asynchronous messaging system, with, however, the ability to prioritise some messages based on urgency or approaching deadlines.

The time-based ordering and prioritisation of [[render activities|RenderActivity]] is thus used as a //generic medium and agent// to support and implement complex interwoven computational tasks. On the Layer-1 mentioned above, a lock-free dispatch queue is used, feeding into a single-threaded priority queue organised by temporal deadlines. Most render activities are lightweight and entail quick updates to some state flags, while certain activities are extremely long-running -- and those are shifted into worker threads based on priority.
&rarr; [[the Activity-Language|RenderActivity]]
&rarr; [[implementing Activities|RenderOperationLogic]]
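To make the Layer-1 mechanics described above more tangible, the following sketch models the basic idea -- a concurrent entrance queue drained into a priority queue which is only ever touched single-threaded. All names are illustrative assumptions; the mutex-guarded deque merely stands in for the actual lock-free dispatch queue.
{{{
// Rough sketch of the Layer-1 idea; all names are illustrative and the
// mutex-guarded deque merely stands in for the actual lock-free dispatch queue.
#include <chrono>
#include <deque>
#include <mutex>
#include <optional>
#include <queue>
#include <vector>

using TimePoint = std::chrono::steady_clock::time_point;

struct Activation
{
    TimePoint start;      // earliest time this entry may be dispatched
    TimePoint deadline;   // past this point the entry is obsolete
    // ... link to the Activity chain to perform
};

struct MoreUrgent         // order the priority queue by temporal urgency
{
    bool operator() (Activation const& a, Activation const& b) const
    { return a.start > b.start; }
};

class InvocationLayer
{
    std::mutex entrance_;                 // stand-in for the lock-free queue
    std::deque<Activation> instructed_;
    std::priority_queue<Activation, std::vector<Activation>, MoreUrgent> priority_;

public:
    void instruct (Activation a)          // may be called from any thread
    {
        std::lock_guard<std::mutex> lock{entrance_};
        instructed_.push_back (a);
    }

    // to be invoked single-threaded only, while holding the GroomingToken
    std::optional<Activation> pullHead (TimePoint now)
    {
        {
            std::lock_guard<std::mutex> lock{entrance_};
            for (auto& a : instructed_) priority_.push (a);
            instructed_.clear();
        }
        while (!priority_.empty() && priority_.top().deadline < now)
            priority_.pop();              // silently drop obsolete entries
        if (priority_.empty() || priority_.top().start > now)
            return std::nullopt;          // nothing due yet
        Activation head = priority_.top();
        priority_.pop();
        return head;
    }
};
}}}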
!!!Managing the Load
At large, these considerations hint at a diverse and unspecific range of usages -- necessitating reliance on organisational schemes able to adapt to a wide array of load patterns, including sudden and drastic changes. Moreover, the act of scheduling involves operating on time scales spanning several orders of magnitude, from dispatching and cascading notifications working on the µs scale up to a planning horizon for render calculations reaching out into a dimension of several seconds. And while the basic construction of the scheduler can be made to //behave correctly// and accommodate without failure, it seems far more challenging to establish an arrangement of functionality able to yield good performance on average. Certainly this goal cannot be achieved in one step, nor by a single layer of implementation. Rather, a second layer of operational control may be necessary to keep the machinery within suitable limits, possibly even some degree of dynamic optimisation of working parameters.
&rarr; [[Load management and operational control|SchedulerLoadControl]]

!Usage pattern
The [[Language of render activities|RenderActivity]] forms the interface to the scheduler -- new activities are defined as //terms// and handed over to the scheduler. This happens as part of the ongoing job planning activities -- and thus will be performed //from within jobs managed by the scheduler.// Consequently, access to the scheduler happens almost entirely from within the scheduler's realm itself, and is governed by the usage scheme of the [[Workers|SchedulerWorker]].

These ''Worker Threads'' will perform actual render activities most of the time (or be idle). However -- idle workers contend for new work, and in doing so, they //also perform the internal scheduler management activities.// As a consequence, all Scheduler coordination and [[memory management|SchedulerMemory]] is ''performed non-concurrently'': only a single Worker can acquire the {{{GroomingToken}}} and will then perform management work until the next render activity is encountered at the top of the //priority queue.//

This leads to a »chaotic«, event-driven processing style without a central managing entity -- the scheduler does not „run the shop“, but rather fulfils the role of a bookkeeping and balancing guide. Operationally, the internals of the Scheduler are grouped into several components and arranged towards two entrance lanes: defining a new schedule entry and retrieving work.
&rarr; SchedulerProcessing

!!!Performance
Processing is not driven by a centralised algorithm, but rather stochastically by workers becoming available for further work. This allows for adaptation to a wide array of load patterns, but turns assessment of the Scheduler's capabilities and limitations into a challenging task. The speed and throughput of the processing are largely defined by the actual scheduler -- up to the point where the Scheduler is overloaded permanently and the //schedule slips away.// Based on that observation, a framework for [[Scheduler Load- and Performance-testing|SchedulerTest]] was^^{{red{12/2023 rather is about to be}}}^^ established.

!!!Instructing the Scheduler
The Scheduler is now considered an implementation-level facility with an interface specifically tailored to the JobPlanningPipeline: the [[»Render Activity Language«|RenderActivity]]. This //builder-style// setup allows constructing an ''~Activity-Term'' to model all the structural properties of an individual rendering invocation -- it is comprised of a network of {{{Activity}}} records, which can be directly handled by the Scheduler.

!!!!Discussion of further details
&rarr; [[Activity|RenderActivity]]
&rarr; [[Memory|SchedulerMemory]]
&rarr; [[Workers|SchedulerWorker]]
&rarr; [[Internals|SchedulerProcessing]]
&rarr; [[Behaviour|SchedulerBehaviour]]
&rarr; [[Testing|SchedulerTest]]
//Characteristic behaviour traits of the [[Scheduler]] implementation//
The design of the scheduler was chosen to fulfil some fundamental requirements:
* flexibility to accommodate a wide array of processing patterns
* direct integration of some notion of //dependency//
* ability to be re-triggered by external events (IO)
* reactive, but not over-reactive response to load peaks
* ability to withstand extended periods of excessive overload
* roughly precise timing with a margin of ≈ ''5''ms
The above list immediately indicates that this scheduler implementation is not oriented towards high throughput or extremely low latency, and thus can be expected to exhibit certain //traits of response and behaviour.// These traits were confirmed and further investigated in the efforts for [[stress testing|SchedulerTest]] of the new Scheduler implementation. {{red{As of 4/2024}}} it remains to be seen whether the characteristics of the chosen approach are beneficial or even detrimental to the emerging actual usage -- it may well turn out that some adjustments must be made, or even a complete rewrite of the Scheduler may be necessary.

!Randomised capacity distribution
The chosen design relies on //active workers// and a //passive scheduling service.// In line with this approach, the //available capacity// is not managed actively to match a given schedule -- rather, workers perform slightly randomised sleep cycles, guided by the current distance to the next Activity on schedule. These wait-cycles are delegated to the OS scheduler, which is known to respond flexibly, yet with some leeway depending on the current situation and the given wait duration, typically around some 100 microseconds. Together this implies that start times can be ''slightly imprecise'', while deadlines will be decided at the actual time of schedule. Assuming some pre-roll, the first entry on the schedule will be matched within some 100µs, while further follow-up activities depend on available capacity, which can be scattered within the »work horizon« amounting to 5 milliseconds. A ramp-up to full concurrent capacity thus typically requires 5ms -- and can take up to 20ms after an extended period of inactivity. The precise temporal ordering of entries on the schedule however will be observed strictly; when capacity becomes available, it is directed towards the first entry in line.
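As an illustration of these randomised sleep cycles, the following sketch derives a scattered wait duration from the distance to the next scheduled Activity; the constants and the function name are assumptions, not the actual {{{Scheduler::scatteredDelay()}}} implementation.
{{{
// Illustrative sketch (not the actual Scheduler::scatteredDelay()): an idle worker
// sleeps for a randomised span, bounded by the distance to the next scheduled
// Activity, so that worker call-backs scatter behind the schedule head.
#include <algorithm>
#include <chrono>
#include <random>
#include <thread>

using std::chrono::microseconds;

microseconds scatteredDelaySketch (microseconds distanceToHead)
{
    static thread_local std::mt19937 rng{std::random_device{}()};
    auto cap = std::clamp (distanceToHead, microseconds(200), microseconds(5000));
    std::uniform_int_distribution<long long> scatter (0, cap.count());
    return microseconds (scatter (rng));
}

void idleWait (microseconds distanceToHead)
{
    std::this_thread::sleep_for (scatteredDelaySketch (distanceToHead));
}
}}}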

!Worker stickiness and downscaling
The allocation of capacity strongly favours active workers. When returning from active processing, a worker gets precedence when looking for further work -- yet before looking for work, any worker must first acquire the {{{GroomingToken}}}. Workers failing those hurdles will retry, but step down on repeated failure. Together this (deliberately) creates a strong preference for keeping ''only a small number of workers active'', while putting excess workers into sleep cycles first, and removing them from the work force altogether after some seconds of compounded inactivity. When a schedule is light, activity will rather be stretched out to fill the available time. It takes //a slight overload,// with schedule entries becoming overdue repeatedly for about 5 milliseconds, to flip this preference and scale up to the full workforce. Notably, a ''minimum job length'' is also required to keep an extended work force in active processing state. The [[stress tests|SchedulerTest]] indicate that it takes a seamless sequence of 2ms-jobs to bring more than four workers into sustained active work state.

To put this into proportion, the tests (with debug build) indicate a typical //turnover time// of 100µs, spent on acquiring the {{{GroomingToken}}}, reorganising the queue and processing through the administrative activities up to the point where the next actual {{{JobFunctor}}} is invoked. Taking into account the inherent slight randomness of timings, it thus takes a »window« of several 100µs to get yet another worker reliably into working state. This implies the theoretical danger of clogging the Scheduler with tiny jobs, leading to a build-up of congestion and eventually failed deadlines, while most of the worker capacity remains in sleep state. Based on the expected requirements for media processing, however, this is not considered a relevant threat. {{red{As of 4/2024}}} more practical experience is required to confirm this assessment.

!Limitations
The [[memory allocation scheme|SchedulerMemory]] is tied to the deadlines of planned activities. To limit the memory pool size and keep search times for allocation blocks within reasonable bounds, this arrangement imposes a ''hard limit'' for planning ''deadlines into the future'', set at roughly 20 seconds from //current time.// It is thus necessary to break computations down into manageable chunks, and to perform schedule planning as an ongoing effort, packaged into ''planning jobs'' scheduled alongside the actual work processing. Incidentally, the mentioned limitation relates to setting deadlines (and by extension also to defining start times) -- no limitation whatsoever exists on //actual run times,// other than availability of work capacity. Once started, a job may run for an hour, but it is not possible to schedule a follow-up job in advance for such an extended span into the future.

A similar and related limitation stems from the handling of internal administrative work. A regular »tick« job is repeatedly scheduled +50ms into the future. This job is declared with a deadline of some 100 milliseconds -- if a load peak happens to cause an extended slippage beyond that tolerance frame, a ''Scheduler Emergency'' is triggered, discarding the complete schedule and pausing all current calculation processes. The Scheduler thus allows for ''local and rather limited overload only''. The job planning -- which must be performed as an ongoing process with continuation -- is expected to be reasonably precise and must include capacity management on a higher level. The Scheduler is not prepared for handling a ''workload of unknown size''.
The scheduling mechanism //requires active control of work parameters to achieve good performance on average.//
In a nutshell, the scheduler arranges planned [[render activities|RenderActivity]] onto a time axis -- and is complemented by an [[active »work force«|SchedulerWorker]] to //pull and retrieve// the most urgent next task when free processing capacity becomes available. This arrangement shifts focus from the //management of tasks// towards the //management of capacity// -- which seems more adequate, given that capacity is scarce while tasks are abundant, yet limited in size and processed atomically.

This change of perspective however implies that the time axis is not in control; rather, processing is driven by the workers, which happen to show up in a pattern assumed to be //essentially random.// This leads to the conclusion that capacity may not be available at the point where it's needed most, and some overarching yet slowly adapting scheme of redistribution is required to make the render engine run smoothly.

!Maintaining work capacity
Capacity as such is an abstracted and statistical entity, emerging from the random pattern of worker pulls. The workers themselves can be in a subdued //sleeping state// -- causing the underlying OS thread to be sent into a time-bound sleeping wait state, thereby freeing up the used computation »core« for other unrelated work. This kind of state transition involves a context switch to the OS kernel, which obviously bears some overhead to be minimised. ''Contemporary OS scheduling'' does not //rely on a fixed time slice model any more.// The kernel provides a high-resolution time source and employs an elaborate wake-up scheme, balancing the ability for //soft-realtime operation// with the imperative to //limit energy consumption// of the system. The software control-flow within an individual thread can thus make no reliable assumptions regarding uninterrupted processing, since pre-emption may happen anywhere, any time, irregularly and spanning a wide array of frequencies. When instructed for wake-up after a given time period, the actual response time can be much larger, yet //on average// a given wake-up goal is met with latencies ranging down into the low µs range. However, the actual response provided by the OS scheduler largely depends on the current load situation, and how this load is distributed over the available »cores«. A power management governor may prefer to concentrate all load on a small number of cores to allow stepping down excess computation resources. The consequence is: whenever several tasks are competing for active use of a core, the work time will be divided in accordance with demand -- striving to achieve the desired balance in the realm of 1...10 milliseconds. Effectively, nothing can be taken for granted below 10 milliseconds.

This has ramifications for the Render Engine scheduling strategy.<br/>The goals are set as follows:
* achieve optimal use of resources on average
* adapt to a wide array of given circumstances
Notably the use of special purpose hardware or special OS privileges is //not required,// yet the system should be able to exploit such abilities when available. Taking into account the nature of contemporary OS scheduling, as outlined above, the ''fundamental strategy'' is to constantly produce //maximum demand for computation// on precisely that number of concurrent threads //matching the number// of »cores« allocated to rendering. By default, the number indicated as {{{std::thread::hardware_concurrency()}}} by the C++ runtime system will be used, not more. Assuming that all other threads and processes are mostly sleeping in wait for events, producing such a load pattern will coerce the OS scheduler to allocate each of these demand-drawing worker threads to its own »core«, thereby granting the majority of available computation capacity to this single worker on each »core«.
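A minimal sketch of this default sizing decision, assuming a simple fallback for the case where the C++ runtime cannot detect the hardware concurrency:
{{{
// Minimal sketch: size the worker pool by the detected hardware concurrency,
// with a conservative fallback when detection is not possible (returns 0).
#include <thread>

unsigned defaultWorkerCount()
{
    unsigned cores = std::thread::hardware_concurrency();
    return cores? cores : 2;     // assumed fallback value
}
}}}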

This fundamental strategy translates into the goal to provide each worker with the next piece of work as soon as the thread returns from previous work. Putting this scheme into practice however faces major obstacles stemming from //work granularity// and //timing patterns.// Threads will call for work at essentially random time points, in no way related to the calculation schedule laid out in advance based on reasoning about dependencies, resource allocation and deadlines. This is a problem hard to overcome at the level of individual operations. And thus, in accordance with the goals set out above -- which demand optimal use of resources //on average// -- it is indicated to adopt a //statistical perspective// and to interpret the worker call as a //capacity event.//
!!!The statistical flow of capacity
By employing this changed perspective, the ongoing train of worker calls translates into a flow of capacity. And this capacity can be grouped and classified. The optimal situation -- which should be achieved on average -- is to be able to provide the next piece of work //immediately// for each worker calling in. On the other hand, target times set out for scheduling should be observed, and the optimum state can thus be characterised as »being slightly overdue«. Ideally, capacity should thus flow in //slightly behind schedule.// At the operational level, since „capacity flow“ is something generated stochastically, the desired state translates into adjustments to re-distribute the worker callback time points into some focal time zone, located slightly behind the next known [[render activity|RenderActivity]] to attend to. (As an aside, an //alternative approach// -- likewise considered for the scheduler design -- would be to pre-attribute known capacity to the next round of known scheduling events; this solution was rejected however, as it would presumably lead to excessively fine-grained and unnecessarily costly computation, disregarding the overall probabilistic nature of the rendering process.)

Drawing upon these principles, the treatment of worker calls can be aligned to the upcoming schedule. On average, the planning of this schedule can be assumed to happen in advance with a suitable safety margin (notwithstanding the fact that re-scheduling and obliteration of an established schedule can happen at any time, especially in response to user interaction). The next known »head element« of the schedule will thus play a //pivotal role,// when placed into the context of the //current time of request.// On average, this head element will be the next activity to consider, and the distance-of-encounter can be taken as an indicator of the //current density of work.// This allows establishing a priority of concerns to consider when answering a worker pull call, thereby transitioning the capacity-at-hand into a dedicated segment of the overall capacity:
# ideally, activities slightly overdue can be satisfied by //active capacity//
# spinning-wait is adequate for imminent events below scheduling latency
# next priority to consider is a head element located more into the future
# beyond that, capacity can be redirected into a focussed zone behind the head
# and if the head is far away, capacity is transitioned into the reserve segment
Worker capacity events can be further distinguished into //incoming capacity// and //outgoing capacity.// The latter is the most desirable transition, since it corresponds to a worker just returning from a computation task, while incoming capacity stems from workers calling back after a sleep period. Thus a high priority is placed on re-assigning the outgoing capacity immediately back to further active work, while for incoming capacity there is a preference to send it back to sleep. A worker placed into the sleeping reserve for an extended stretch of time can be considered excess capacity and will be removed from the [[work force|SchedulerWorker]]. This kind of asymmetry creates a cascading flow, and in the end allows synthesising an average load factor, which in turn can be used to regulate the Engine on a global level. If scheduling is overloaded, increasing the slip of planned timings, more capacity might be added back when available, or the planning process should be throttled accordingly.
!!!Capacity redistribution scheme
The effective behaviour results from the collaboration of {{{Scheduler::scatteredDelay()}}} with the {{{LoadController}}}. Redistribution can happen on //incoming capacity// (before the actual dispatch) and on //outgoing capacity//. After a »targeted sleep«, the thread is sent back with {{{PASS}}} and will thus immediately re-enter the capacity decision logic. The actual decision is then based on the current distance to the scheduler's »head element«:
|!next head|!|>|!incoming|>|!outgoing|
|~|!distance|!tend-next|!regular|!tend-next|!regular|
|unknown                 | ∞| IDLEWAIT | IDLEWAIT | WORKTIME | WORKTIME |
|sleep-horizon | >20ms| IDLEWAIT | IDLEWAIT | TENDNEXT | WORKTIME |
|work-horizon   | >5ms| IDLEWAIT | IDLEWAIT | TENDNEXT | WORKTIME |
|near-horizon  | >50µs| TENDNEXT | NEARTIME | TENDNEXT | NEARTIME |
|imminent         | >now| SPINTIME | SPINTIME | SPINTIME | SPINTIME |
|past                 | <now| DISPATCH | DISPATCH| DISPATCH | DISPATCH |
|case matrix for capacity redistribution|c
&rarr; please have a look at {{{SchedulerLoadControl_test::classifyCapacity}}}
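To clarify how the case matrix above might translate into code, the following sketch mimics the decision cascade; the thresholds, names and the handling of the »unknown head« case are illustrative simplifications, not the actual {{{LoadController}}} logic.
{{{
// Sketch of the decision cascade encoded in the case matrix (compare
// SchedulerLoadControl_test::classifyCapacity). Thresholds and names are
// illustrative; the ∞ / »unknown head« case is folded into the horizon cases.
#include <chrono>

using std::chrono::microseconds;

enum class Capacity { DISPATCH, SPINTIME, NEARTIME, TENDNEXT, WORKTIME, IDLEWAIT };

Capacity classifyCapacitySketch (microseconds toHead, bool outgoing, bool tendNext)
{
    constexpr auto NEAR = microseconds{50};       // near-horizon
    constexpr auto WORK = microseconds{5'000};    // work-horizon

    if (toHead <= microseconds::zero())           // past: dispatch immediately
        return Capacity::DISPATCH;
    if (toHead < NEAR)                            // imminent: spin-wait
        return Capacity::SPINTIME;
    if (toHead < WORK)                            // near-horizon
        return tendNext? Capacity::TENDNEXT : Capacity::NEARTIME;
                                                  // work/sleep-horizon and beyond:
    if (outgoing)                                 // keep outgoing capacity around...
        return tendNext? Capacity::TENDNEXT : Capacity::WORKTIME;
    return Capacity::IDLEWAIT;                    // ...while incoming capacity sleeps
}
}}}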

!!!Playback and processing priority
Not all tasks are on equal footing though. Given the diverse nature of functions fulfilled by the render engine, several kinds of render processes can be distinguished:
;Free-wheeling
:Final quality rendering must be completed flawlessly, //whatever it takes.//
:Yet there is no temporal limit; rather, it is paramount to use resources efficiently.
;Time-bound
:Media playback has to observe clear temporal constraints and is obsolete when missing deadlines.
:Making progress when possible is more important than completing every chain of activities
;Background
:Various service tasks can be processed by the render engine when there is excess capacity to use.
:For the likes of UI audio visualisation or preview caching there is typically no deadline at all, as long as they do not jeopardise important work.
These distinct »processing classes« seem to impose almost contradictory calculation patterns and goals, hard to fit into a single organisational scheme. Yet the //statistical view angle// outlined above offers a way to reconcile such conflicting trends -- assuming that there is a pre-selection on a higher organisational level to ensure the overall processing load can be handled with the given resources. Indeed it is pointless even to start three real-time playback processes in a situation where it is blatantly clear that the system can handle only two. Yet when a rough guess of expected load indicates that a real-time playback will only consume half of the system's processing capacity, even overprovisioning of background processes to some degree is feasible, as long as the real-time playback gets the necessary headroom. And all these different processing patterns can be handled within the same scheduling framework, as long as they can be translated into a //probability to acquire the resources.//

A task placed into the scheduling timeline basically represents some //»cross-section of probability«.// If work capacity happens to „land“ in its definition realm and no further overdue task is scheduled before it, the task can use the available capacity. Yet when the deadline of the task has passed without any free capacity „hitting“ its scope, its chances are wasted and the scheduler will discard the entry. By exploiting these relationships, it is possible to craft a scheme of time slot allocation that translates an abstract processing priority into probabilistic access to some fraction of capacity. Even more so, since it is possible to place multiple instances of the same task into the scheduling timeline, relying on the [[»Gate« mechanism|RenderOperationLogic]] of the [[Activity-Language|RenderActivity]], to spread out a small level of background processing probability over an extended period of time.
//The Scheduler uses an »Extent« based memory management scheme known as {{{BlockFlow}}}.//
The organisation of rendering happens in terms of [[Activities|RenderActivity]], which may be bound by //dependencies// and limited by //deadlines.// From the operational point of view this implies that a sequence of allocations must be able to „flow through the Scheduler“ -- in fact, only references to these {{{Activity}}}-records are passed, while the actual descriptors reside at fixed memory locations. This is essential to model the dependencies and conditional execution structures efficiently. At some point however, any {{{Activity}}}-record will either be //performed// or //obsoleted// -- and this leads to the idea of managing the allocations in //extents// of memory, here termed »Epochs« (a conceptual sketch follows the list below):
* a new Activity is planted into a suitable //Epoch,// based on its deadline
* it is guaranteed to sit at a fixed memory location for as long as it can potentially be activated
* based on the deadlines, at some point a complete slate of activities can reasonably be considered //obsolete//
* this allows discarding a complete Extent //without any further checks and processing// (assuming trivial destructors!)
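The following conceptual sketch (not the actual {{{BlockFlow}}} code) illustrates the Epoch idea: Activity records are allotted within fixed-size extents grouped by deadline, and a whole Epoch is dropped in one step once its deadline has passed. All names and sizes are assumptions.
{{{
// Conceptual sketch of the »Epoch« idea (not the actual BlockFlow code):
// Activity records live in fixed-size extents grouped by deadline, and a
// whole Epoch is discarded in one step once its deadline has passed.
#include <array>
#include <chrono>
#include <cstddef>
#include <list>

using TimePoint = std::chrono::steady_clock::time_point;

struct Activity { /* opaque, trivially destructible record */ };

struct Epoch
{
    static constexpr std::size_t SIZE = 64;   // illustrative extent capacity
    TimePoint deadline;                       // all contained Activities expire here
    std::array<Activity,SIZE> slots;
    std::size_t used = 0;

    Activity* allot()                         // fixed location until the Epoch dies
    { return used < SIZE? &slots[used++] : nullptr; }
};

class BlockFlowSketch
{
    std::list<Epoch> epochs_;                 // ordered by ascending deadline

public:
    Activity* allocate (TimePoint deadline)
    {
        for (auto& e : epochs_)               // place into the first suitable Epoch
            if (deadline <= e.deadline)
                if (Activity* a = e.allot())
                    return a;
        epochs_.push_back (Epoch{deadline});  // otherwise open a new Epoch further out
        return epochs_.back().allot();
    }

    void discardBefore (TimePoint now)        // e.g. driven by the »tick« job
    {
        while (!epochs_.empty() && epochs_.front().deadline < now)
            epochs_.pop_front();              // whole extent dropped, no per-record work
    }
};
}}}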

!Safeguards
This is a rather fragile composition, chosen here for performance reasons; while activities are interconnected, their memory locations are adjacent, improving cache locality. Moreover, most of the dependency processing and managing of activities happens single-threaded, while some [[worker|SchedulerWorker]] holds the {{{GroomingToken}}}; so most of the processing is local and does not require memory barriers.

Unfortunately this tricky arrangement also implies that many safety barriers of the C++ language are circumvented. A strict processing regime must be established, with clear rules as to when activities may, or may no longer be accessed.
* each »Epoch« gets an associated //deadline//
* when the next [[job|RenderJob]] processed by a worker starts //after this Epoch's deadline//, the worker //has left the Epoch.//
* when all workers have left an Epoch, only ''pending async IO tasks'' need to be considered, since IO can possibly be delayed for an extended period of time.<br/>For an IO task, buffers //need to be kept available,// and those buffers are indirectly tied to the job depending on them.
* ⟹ thus a count of pending IO activities must be maintained //for each Epoch// -- implemented by the same mechanism also employed for dependencies between render jobs, which is a notification message causing a local counter to be decremented.<br/>All Epochs ''following the blocked one must be blocked as well'' -- since the callback from IO may immediately pass control there; only later, when the execution logic detects a passed deadline, is it possible to side-step further activities; this is achieved by inserting {{{GATE}}}-[[Activity records|RenderActivity]] before any render job invocation bound by a deadline.

!operational capabilities
The memory management for the scheduler is arranged into three layers...
* raw memory is allocated in large blocks of {{{Extent}}} size -- {{red{currently as of 7/23}}} claimed from regular heap storage
* a low-level allocation scheme, the {{{ExtentFamily}}} uses a //pool of extents cyclically,// with the ability to claim more extents on-demand
* the high-level {{{BlockFlow}}} allocation manager is aware of scheduler semantics and dresses up those extents as {{{Epoch}}}
For each new RenderActivity, API usage through the {{{ActivityLang}}} is required to designate a ''deadline'' -- which can be used to associate the corresponding {{{Activity}}}-records with a suitable {{{Epoch}}}. The //temporal spacing// of epochs, as well as the number of active epochs (=extents), must be managed dynamically to accommodate varying levels of load. This bears the danger of control oscillations, and more fine tuning and observations under real-world conditions are indicated {{red{as of 7/23}}}, see [[#1316|https://issues.lumiera.org/ticket/1316]]. When the reserved allocation for an epoch turns out to be insufficient (i.e. the underlying extent has been filled up prior to maturity), further {{{Activity}}} records will be //„borrowed“// from the next epoch, while reducing the epoch spacing for compensation. Each {{{Epoch}}} automatically maintains a specifically rigged »''~EpochGuard''«-{{{Activity}}}, always located in the first »slot« of the epoch storage. This guard models the deadline and additionally allows blocking deallocation with a count-down latch, which can be tied to pending IO operations.

The auto-regulation of this allocation scheme is based on //controlling the Epoch duration// dynamically. As an immediate response to sudden load peaks, the Epoch stepping is reduced eagerly while excess allocations are shifted into Epochs with a later deadline; the underlying {{{Extent}}} allocation pool is increased to satisfy additional demand. Overshooting regulation needs to be limited however, which is achieved by watching the fill level of each Epoch at the later point in time when it //is discarded.// This generates a signal to counteract the eager increase of capacity, and can be used to regulate the load factor to be close to 90%. Yet this quite precise accounting is only possible with some delay (the limited lifetime of the Epochs is a fundamental trait of this allocation scheme); this second steering signal is thus passed through an exponential moving average and applied with considerable damping, albeit with higher binding force than the eager capacity increase.
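A hedged sketch of this damped steering signal, assuming illustrative names and constants: the fill level observed when an Epoch is discarded feeds an exponential moving average, which then nudges the Epoch duration towards the ≈90% fill target.
{{{
// Sketch of the damped regulation signal (illustrative names and constants)
#include <algorithm>

struct EpochRegulator
{
    double avgFill = 0.90;     // smoothed fill-level observation (EMA)
    double alpha   = 0.10;     // small smoothing factor → considerable damping
    double target  = 0.90;     // desired fill level at the time an Epoch is discarded

    // called when an Epoch is discarded and its final fill level becomes known;
    // returns a factor to stretch (>1) or shorten (<1) the Epoch duration
    double adjustFactor (double observedFill)
    {
        avgFill += alpha * (observedFill - avgFill);   // exponential moving average
        return target / std::max (avgFill, 0.01);      // guard against degenerate input
    }
};
}}}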

!!!performance considerations
It should be noted that {{{BlockFlow}}} provides very specific services, which are more elaborate than just a custom memory allocator. By leveraging possible scale factors, it is nonetheless possible to bring the amortised effort for a single {{{Activity}}} allocation down into the same order of magnitude as a standard heap allocation, which equates to roughly 30ns on contemporary machines. For context, an individually managed allocation-deallocation pair with a ref-counting {{{std::shared_ptr}}} has a typical performance cost of ~100ns.

The primary scaling effects exploited to achieve this level of performance are the combined de-allocation of a complete Epoch, and the combination of several allocations tied to a common deadline. However -- since the Epoch duration is chosen dynamically, performance can potentially //degrade drastically// once the scheme is put under pressure, because each new allocation has to search through the list of active Epochs. Parameters are tuned so as to ensure this list remains very short (about 5 Epochs) under typical operational conditions.

&rarr; [[Scheduler performance testing|SchedulerTest]]
At first sight, the internals of [[Activity|RenderActivity]] processing may seem overwhelmingly complex -- especially since there is no active »processing loop« which might serve as a starting point for understanding. It is thus necessary to restate the working mode of the Scheduler: it is an //accounting and direction service// for the //active// [[render workers|SchedulerWorker]]. Any processing happens stochastically and is driven by various kinds of events --
* a //worker// becoming ready to perform further tasks
* an external //IO event// {{red{12/23 only planned yet}}}
* a //planning job// to add new elements to the schedule
The last point highlights a //circular structure:// the planning job itself was part of the schedule, picked up by a worker and activated; this way, the system feeds itself.

!Participants and Realms
From the outside, the Scheduler appears as a service component, exposing two points of access: Jobs can be added to the schedule (planned), and a worker can retrieve the next instruction, which means either performing an (opaque) computation or sleeping for some given short period of time. Jobs are further distinguished into processing tasks, IO activities and meta jobs, which are related to the self-regulation of the scheduling process. On a closer look however, several distinct realms can be identified, each offering a unique perspective of operation.

!!!The Activity Realm
[[Activities|RenderActivity]] are part of an //activity language// to describe patterns of processing, and will be arranged into activity terms; what is scheduled is actually the entrance point to such a term. Within this scope, it is assumed abstractly that there is some setup and arrangement to accomplish the //activation// of activities. At the level of the activity language, this is conceptualised and represented as an {{{ExecutionCtx}}}, providing a set of functions with defined signature and abstractly defined semantics. This allows defining and even implementing the entirety of activity processing without any recourse to implementation structures of the scheduler. Rather, the functions in the execution-context will absorb any concern not directly expressed in relations of activities (i.e. anything not directly expressed as a language term).
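The following sketch conveys the idea of such an execution context as a small set of abstract hooks which the Scheduler later binds to its own implementation functions; the signatures shown are illustrative assumptions, not the actual {{{ExecutionCtx}}} definition.
{{{
// Hedged sketch of the execution-context idea: a small set of abstract hooks
// against which the Activity language is defined, later bound by the Scheduler
// to its own implementation functions. Signatures are illustrative assumptions.
#include <chrono>
#include <functional>

struct Activity;                                       // opaque Activity record
using TimePoint = std::chrono::steady_clock::time_point;

struct ExecutionCtxSketch
{
    // dispatch an Activity chain (the »λ-post« mentioned in the text)
    std::function<void(Activity&, TimePoint start, TimePoint deadline)> post;

    // invoke the actual JobFunctor embedded in a render-job Activity
    std::function<void(Activity&)> work;

    // signal completion, e.g. to decrement a Gate's prerequisite count
    std::function<void(Activity&)> done;

    // the Scheduler's own time source
    std::function<TimePoint()> getSchedTime;
};
}}}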

While the possibility to reason about activities and verify their behaviour in isolation is crucial to allow for the //openness// of the activity language (its ability to accommodate future requirements not foreseeable at this time), it also creates some kind of »double bottom« -- which can be challenging when it comes to reasoning about the processing steps within the scheduler. While all functions of the execution-context are in fact wired into internals of the scheduler for the actual usage, it is indispensable to keep both levels separate and to ensure each level is logically consistent in itself.

One especially relevant example is the handling of a notification from the activity chain; this may highlight how these two levels are actually interwoven, yet must be kept separate for understanding the functionality and the limitations. A notification is optionally added to a chain, typically to cause further activities in another chain after completion of the first chain's {{{JobFunctor}}}. On the //language level,// the logical structure of execution becomes clear: the {{{NOTIFY}}}-Activity is part of the chain in an activity term, and thus will be activated eventually. For each kind of Activity, it is well defined what activation entails, as can be seen in the function {{{Activity::activate}}} (see at the bottom of [[activity.hpp|https://issues.lumiera.org/browser/Lumiera/src/vault/gear/activity.hpp]]) &rArr; the activity will invoke the λ-post in the execution-context, passing the target of notification. The formal semantics of this hook is that a posted activity-term will be //dispatched;// and the dispatch of an Activity is defined likewise, depending on what kind of Activity is posted. The typical case will be to have a {{{GATE}}}-Activity as target, and dispatching towards such a gate causes the embedded condition to be checked, possibly passing the activation forward to the chain behind the gate. The condition in the gate check entails looking at the gate's deadline and a pre-established prerequisite count, which is decremented when receiving a notification. So this clearly describes operational semantics, albeit on a conceptual level.

Looking into the bindings within the Scheduler implementation, however, adds a further layer of meaning. When treated within the scope of the scheduler, any Activity passed at some point through the λ-post translates into an //activation event// -- which is the essence of what can be placed into the scheduler queue. As such, it defines a start point and a deadline, adding an entirely different perspective of planning and time-bound execution. And the execution-context takes on a very specific meaning here, as it defines a //scope// to draw inferences, as can be seen in {{{ExecutionCtx::post()}}} in the Scheduler itself. The context carries contextual timing information, which is combined with timing information from the target and timing information given explicitly. While the activity language only contains the notion of //sequencing,// within the Scheduler implementation actual timing constraints become relevant: in the end, the notification actually translates into another entry placed into the queue, at a starting time derived contextually from its target.

!!!Queues and Activation Events
The core of the Scheduler implementation is built up from two layers of functionality
;Layer-1
:The scheduler-invocation revolves around ''Activation Events'', which are //instructed,// fed into //prioritisation// and finally //retrieved for dispatch.//
;Layer-2
:The scheduler-commutator adds the notions of ''planning'' and ''dispatch'', and is responsible for coordinating activations and ''concurrency''
Activation Events are the working medium within the Scheduler implementation. They are linked to a definition of a processing pattern given as Activity-chain, which is then qualified further through a start time, a deadline and other metadata. The Scheduler uses its own //time source// (available as a function in the execution-context) -- which is in fact implemented by access to the high-resolution timer of the operating system. Relying on time comparisons and a priority queue, the Scheduler can decide if and when an Activation Event actually should „happen“. These decisions and the dispatch of each event are moreover embedded into the execution-context, which (for performance reasons) is mostly established on-the-fly within the current call graph, starting from an //anchor event// and refining the temporal constraints on each step. In addition, the {{{ExecutionCtx}}} object holds a link to the Scheduler implementation, allowing the λ-hooks -- as required by the Activity-language -- to be bound to Scheduler implementation functions.

The Scheduler has two entrance avenues, and the first one to mention is the ''Job planning''. The interface to this part takes on the form of a builder-DSL, class {{{ScheduleSpec}}}; its purpose is to build an Activity-term and to qualify the temporal aspects of its performance. Moreover, several such terms can be connected through //notification links// (see above for a description of the notification functionality). This interface is not meant to be „really“ public, insofar as it requires an understanding of the Activity language and the formal semantics of dispatch to be used properly. For a successful execution, the proper kind of execution scheme or {{{Term::Template}}} must be used, the given timing data should match up and be located sufficiently far into the future; the planning must ensure not to attach notification links from activities which could possibly have been dispatched already, and it //must never touch// an Activity past its deadline (since the memory of such an Activity can be reused for other Activities without further notice).

A lot can be expressed by choosing the start time, deadline, connectivity and further metadata of an Activity -- and it is assumed that the job planning will involve some kind of capacity management and temporal windowing to allocate the processing work on a higher level. Moreover, each term can be marked with a ''manifestation ID'', and the execution of already planned Activities for some specific manifestation ID can be disabled globally, effectively making those planned Activities disappear instantaneously from the schedule. This feature is provided as a foundation to allow some //amending of the schedule// after the fact -- which obviously must be done based on a clear understanding of the current situation, as the Scheduler itself has no means to check for duplicate or missing calculation steps. A similar kind of //expressive markup// is the ability to designate ''compulsory'' jobs. If such a job is encountered past its deadline, a »Scheduler Emergency« will ensue, bringing all ongoing calculations to a halt. The expectation is that some higher level of coordination will then cause a clean re-boot of the affected calculation streams.

!!!The Worker Aspect
A second avenue of calling into the Scheduler is related to an altogether different perspective: the organisation and coordination of work. In this design, the Scheduler is a //passive service,// supporting a pool of workers which //actively ask for further work// when ready. At the implementation level, a worker is outfitted with a {{{doWork()}}} λ-function. So the individual worker is actually not aware of what kind of work is performed -- its further behaviour is defined as a reaction to the //sequencing mark// returned from such a »work-pull«. This sequencing mark is a state code of type {{{Activity::Proc}}} and describes the expected follow-up behaviour, which can be to {{{PASS}}} on and continue pulling, enter a {{{WAIT}}} cycle, be {{{KICK}}}ed aside due to contention, or an instruction to {{{HALT}}} further processing and quit the pool.
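A minimal sketch of a worker's outer loop reacting to these sequencing marks; the enum values follow the text, while the loop structure and back-off behaviour are illustrative assumptions.
{{{
// Minimal sketch of a worker's outer loop; the Proc codes follow the text,
// while the loop structure and back-off behaviour are illustrative assumptions.
#include <chrono>
#include <functional>
#include <thread>

enum class Proc { PASS, WAIT, KICK, HALT };

void workerLoop (std::function<Proc()> doWork)
{
    using namespace std::chrono_literals;
    for (;;)
        switch (doWork())                          // »work-pull«: get next instruction
        {
            case Proc::PASS:                       // keep pulling immediately
                continue;
            case Proc::KICK:                       // kicked aside due to contention
                std::this_thread::yield();
                continue;
            case Proc::WAIT:                       // enter a (short) sleep cycle
                std::this_thread::sleep_for (1ms);
                continue;
            case Proc::HALT:                       // leave the worker pool
                return;
        }
}
}}}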

Following this avenue further down reveals an intricate interplay between the //essentially random event// of a worker asking for work, and the current timing situation in relation to the //Scheduler queue head,// which is the next Activation Event to be considered. Moreover, based on the call context, the processing of these pull-calls is treated either as //incoming capacity// or //outgoing capacity.// In this context, a pull-call initiated by the worker is considered »incoming«, since the availability of the worker's capacity stems from the worker itself, its current state and behaviour pattern (which, of course, is a result of the previous history of work-pulling). Quite to the contrary, when the call-graph of the »pull« has just returned from processing a {{{JobFunctor}}}, the worker again equates to available capacity -- a capacity however, which is directly tied to the processing situation currently managed by the scheduler, and is about to move out of the scheduler's direct control, hence the term »outgoing«. By virtue of this distinction, the Scheduler is able to enact a //strong preference// towards //keeping active workers active// -- even while this might at times be detrimental to the goal of increasing usage of available capacity: whenever possible, the Scheduler attempts to utilise outgoing capacity immediately again, while, on the other hand, incoming capacity needs to „fit into the situation“ -- it must be able to acquire the {{{GroomingToken}}}, which implies that all other workers are currently either engaged in work processing or have receded into a sleep cycle. The reasoning behind this arrangement is that the internal and administrative work is small in comparison to the actual media processing; each worker is thus utilised along the way to advance the Scheduler's internal machinery through some steps, until hitting the next actual processing task.

Any internal and administrative work, and anything possibly changing the Scheduler's internal state, must be performed under the proviso of holding the {{{GroomingToken}}} (which is implemented as an //atomic variable// holding the respective worker's thread-ID). Internal work includes //feeding// newly instructed Activation Events from the entrance queue into the priority queue, reorganising the priority queue when retrieving the »head element«, but also state tracking computations related to the [[capacity control|SchedulerLoadControl]]. Another aspect of internal processing is the »meta jobs« used to perform the ''job planning'', since these jobs typically cause the addition of further Activity terms into the schedule. And besides that, there is also the »''Tick''«, which is a special duty-cycle job, re-inserted repeatedly during active processing every 50 milliseconds. Tasks to be fulfilled by the »tick« are hard-wired in code, encompassing crucial maintenance work to clean up the memory manager and reclaim storage of Activities past their deadline; a load state update hook is also included here -- and when the queues have fallen empty, the »tick« will cease to re-insert itself, sending the Scheduler into paused state. After some idle time, the {{{WorkForce}}} (worker pool) will scale down autonomously, effectively placing the Render Engine into standby.

!!!Process Control
Seen as a service and a central part of the Lumiera Render Engine, the Scheduler has a distinct operational state of its own, with the need to regulate some specifics of its processing modalities, and the ability to transition from one mode of operation to another. The most obvious and most consequential distinction regards whether the Scheduler is //running,// //in stand-by// or even //disabled.// Starting the discussion with the latter: the Scheduler as such cannot be disabled directly, but confining the {{{WorkForce}}} to size zero will effectively render the Scheduler defunct; no processing whatsoever can take place then, even after adding a job to the schedule. Taking this approach is mostly of relevance for testing however, since having such a crippled service in the core of the application is not commendable, as it breaks various implicit assumptions regarding cause and effect of media processing. In a similar vein, it is not clear if the engine can be run successfully on a worker pool with just a single worker. In theory it seems this should be possible though.

For use within the application it can thus be assumed that the {{{WorkForce}}} is somehow configured with a //maximum capacity// -- which should relate to the number of independent (virtual) cores available on the hardware platform. The active processing then needs to be //seeded// by injecting the initial ''planning job'' for the desired stream-of-calculations; this in turn will have to establish the initial schedule and then re-add itself as a //continuation// planning job. Adding a seed-job this way causes the Scheduler to »ignite« -- which means to pass a signal for scale-up to the worker pool and then immediately to enter the first duty-cycle (»tick«) processing directly. This serves the purpose of updating and priming the state of the [[memory manager|SchedulerMemory]] and of any state pertaining to operational load control. From this point on, the »tick« jobs will automatically reproduce themselves every 50ms, until some »tick« cycle detects an empty scheduler queue. As discussed above, such a finding will discontinue the »tick« and later cause the {{{WorkForce}}} to scale down again. Notably, no further regulation signals should be necessary, since the actual Scheduler performance is driven by the workers' »pull« calls. Workers will show up with some initial randomisation, and the stochastic capacity control will serve to re-distribute their call-back cycles in accordance with the current situation at the Scheduler queue head.

A special operational condition mentioned above is the ''Scheduler Emergency''. This condition arises when timing constraints and capacity limitations lead to breaking the logical assumptions of the Activity Language, without actually breaking the low-level operational state within the implementation (meaning all queue and state management data structures remain in good health). This condition can not be cured by the Scheduler alone, and there is the serious danger that such a situation will reproduce itself, unless adjustments and corrective actions are taken on a higher level of application state control.

!!!!💡 see also
&rarr; [[Behaviour traits|SchedulerBehaviour]]
&rarr; [[Stress testing|SchedulerTest]]
&rarr; [[Rendering]]
The [[Scheduler]] is responsible for getting the individual [[render jobs|RenderJob]] to run. The basic idea is that individual render jobs //should never block// -- and thus the calculation of a single frame might be split into several jobs, including resource fetching. This, together with the data exchange protocol defined for the OutputSlot, and the requirements of storage management (especially the releasing of superseded render nodes &rarr; FixtureStorage), leads to certain requirements to be ensured by the scheduler:
;ordering of jobs
:the scheduler has to ensure all prerequisites of a given job are met
;job time window
:when it's not possible to run a job within the defined target time window, it should be silently dropped
;event propagation
:when some job is done, or due to an external event, a //notification// must be propagated to dependent jobs
;guaranteed execution
:some jobs are marked as »compulsory«. These need to run reliably, even when prerequisite jobs fail -- and this failure state needs to be propagated
;superseding groups of jobs
:jobs can be marked as belonging to some group or stratum (called the »manifestation«); such a group can be masked out at a central registry, causing all the affected jobs to be //silently dropped//.

!detecting termination
The way other parts of the system are built requires us to obtain guaranteed knowledge of a job's termination. It is possible to obtain that knowledge with some limited delay, but propagation of this information needs to be absolutely reliable (violations lead either to a segfault or to memory leaks). The requirements stated above assume this can be achieved through a combination of //compulsory jobs// and //notifications.// Alternatively we could consider using a separate messaging system within the engine (the »EngineObserver«), which shall also be used for timing observations, higher-level capacity management and for signalling exceptional operational failures.
With the Scheduler testing effort [[#1344|https://issues.lumiera.org/ticket/1344]], several goals are pursued
* by exposing the new scheduler implementation to excessive overload, its robustness can be assessed and defects can be spotted
* with the help of a systematic, calibrated load, characteristic performance limits and breaking points can be established
* when performed in a reproducible way, these //stress tests// can also serve to characterise the actual platform
* a synthetic load, emulating standard use cases, allows watching complex interaction patterns and optimising the implementation

!A synthetic load for performance testing
In December 2023, a load generator component was developed [[#1346|https://issues.lumiera.org/ticket/1346]] as a foundation for Scheduler testing.
The {{{TestChainLoad}}} is inspired by the idea of a blockchain, and features a //graph// of //processing steps (nodes)// to model complex interconnected computations. Each //node// in this graph stands for one render job to be scheduled, and it is connected to //predecessors// and //successors//. The node is able to compute a chained hash based on the hash values of its predecessors. When configured with a //seed value//, this graph has the capability to compute a well-defined result hash, which involves the invocation of every node in proper order. For verification, this computation can be done sequentially, by a single thread performing a linear pass over the graph. The actual test however is to traverse the graph to produce a schedule of render jobs, each linked to a single node and defining prerequisites in accordance with the graph's structure. It is up to the scheduler then to uphold dependency relations and work out a way to pass on the node activation, so that the correct result hash will be reproduced at the end.
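The chained-hash idea can be sketched as follows (illustrative, not the actual {{{TestChainLoad}}} code): every node folds the hashes of its predecessors into its own hash, so only a complete traversal in proper dependency order reproduces the expected result hash.
{{{
// Sketch of the chained-hash scheme (illustrative, not the actual test code):
// a node folds its predecessors' hashes into its own value, so the final result
// hash is only reproduced when every node is computed in proper dependency order.
#include <cstdint>
#include <functional>
#include <vector>

struct Node
{
    std::uint64_t hash = 0;                  // seed value for seed nodes, else 0
    std::vector<Node*> pred;                 // predecessor nodes (dependencies)

    void calculate()                         // invoked from the corresponding render job
    {
        std::uint64_t h = std::hash<std::uint64_t>{} (hash);
        for (Node* p : pred)                 // mix in each predecessor's result
            h = std::hash<std::uint64_t>{} (h ^ (p->hash + 0x9e3779b97f4a7c15ULL + (h << 6)));
        hash = h;
    }
};
}}}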

The Scheduler thus follows the connectivity pattern, and defining a specific pattern characterises the load to produce. Patterns are formed by connecting (or re-connecting) nodes based on the node's hash -- which implies that patterns emerge through generation. The generation process, and thus the emerging form, is guided and defined by a small number of rules. Each rule establishes a probability mapping; when fed with a node hash, some output parameter value is generated randomly yet deterministically.
The following //topology control rules// are recognised:
;seeding rule
:when non-zero, the given number of new start (seed) nodes is injected.
:all seed nodes hold the same, preconfigured fixed seed value and have successors only, but no predecessors
;expansion rule
:when non-zero, the current node forks out to the given additional number of successor nodes
;reduction rule
:when non-zero, the current node will join the given number of additional predecessor nodes
;pruning rule
:when non-zero, the current node will be an exit node without successor, and terminate the current chain
:when all ongoing chains happen to be terminated, a new seed node is injected automatically
;weight rule
:when non-zero, a computational weight with the given degree (multiplier) is applied on invocation of the node as render job
:the //base time// of that weight can be configured (in microseconds) and is roughly calibrated for the current machine
* //without any rule,// the default is to connect linear chains with one predecessor and one successor
* the node graph holds a fixed preconfigured number of nodes
* the //width// is likewise preconfigured, i.e. the number of nodes on the same »level«, typically to be started at the same time
* generation terminates automatically by placing an exit node on all remaining chains
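To illustrate how such a rule can map a node hash deterministically onto a topology parameter, the following C++ sketch shows one conceivable implementation; the {{{Rule}}} struct and its numeric details are illustrative assumptions, not the actual {{{TestChainLoad}}} code.
{{{
#include <cstddef>
#include <cstdint>

// Sketch (assumption): a rule yields a small parameter value -- e.g. the number
// of successors to fork out -- derived deterministically from the node hash.
struct Rule
{
    double probability;   // chance that the rule fires at all
    size_t maxValue;      // largest parameter value the rule may yield

    size_t operator() (uint64_t nodeHash)  const
    {
        if (maxValue == 0) return 0;
        // reproducible pseudo-random number in [0,1), derived from the hash
        double rnd = double((nodeHash * 2654435761ull) % 1000000) / 1000000.0;
        if (rnd >= probability)
            return 0;                     // rule does not fire for this node
        // spread the remaining range evenly over 1 ... maxValue
        return 1 + size_t(rnd / probability * maxValue);
    }
};

// usage (hypothetical): an expansion rule firing with 20% chance, forking up to 3 successors
//   Rule expansion{0.2, 3};
//   size_t forkCount = expansion (node.hash);
}}}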

! Scheduler Stress Testing
The point of departure for any stress testing is to show that the subject is resilient and will break in controlled ways only. Since the Scheduler relies on the limited computational resources available in the system, overload is easy to provoke by adding too many render jobs; delay will build up until some job's deadline is violated, at which point the Scheduler will drop this job (and any follow-up jobs with unmet dependencies). Much more challenging, however, is to establish //the boundary of regular scheduler operation.// The domain of regular operation can be defined by the ability of the scheduler to follow and conform to the timings set out explicitly in the schedule. Obviously, short and localised load peaks can be accommodated, yet once a persistent backlog builds up, the schedule starts to slip and the calculation process flounders.

A method to determine such a »''breaking point''« in a systematic way relies on the //synthetic calculation load// mentioned above, the {{{TestChainLoad}}}. Using a simplified model of expected computation expense, a timing grid for scheduling can be established, which is then //scaled and condensed// to provoke loss of control in the Scheduler -- a condition that can be determined by statistical observation: since the process of scheduling contains an element of //essential randomness,// persistent overload will be indicated by an increasing variance of the overall runtime, and a departure from the nominal runtime of the executed schedule. This consideration leads to a formalised condition, which can be used to control a //binary search// to find the breaking point in terms of a //critical stress factor.//
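The following sketch outlines the kind of binary search implied here; the predicate {{{scheduleBroken}}} stands for the statistical criterion described above and is an assumed placeholder, as is the convention that stress factors below 1 condense the schedule.
{{{
#include <functional>

// Sketch (assumption): scan for the critical stress factor. The predicate runs the
// scaled schedule repeatedly and reports whether variance and runtime indicate that
// the Scheduler lost control; factors below 1 are assumed to condense the schedule.
double
findBreakingPoint (std::function<bool(double)> scheduleBroken
                  ,double lo =0.1, double hi =2.0, double eps =0.01)
{
    while (hi - lo > eps)
    {
        double mid = (lo + hi) / 2;
        if (scheduleBroken (mid))
            lo = mid;     // breaks when condensed this far -> critical factor is higher
        else
            hi = mid;     // schedule still followed -> condense further
    }
    return (lo + hi) / 2; // critical stress factor
}
}}}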

Observing this breaking point in correlation with various load patterns will unveil performance characteristics and weak spots of the implementation.
&rarr; [[Scheduler behaviour traits|SchedulerBehaviour]]

Another, quite different avenue of testing is to investigate a ''steady full-load state'' of processing. Contrary to the //breaking point// technique discussed above, this method requires a fluid, homogeneous schedule, and effects of scaffolding, ramp-up and load adaptation should be minimised. By watching a constant flow of back-to-back processing, in a state of //saturation,// the boundary capabilities for throughput and parallelisation can be derived, ideally expressed as a model of processing efficiency. A viable setup for this kind of investigation is to challenge the scheduler with a massive load peak of predetermined size: a set of jobs //without any further interdependencies,// which are scheduled effectively instantaneously, so that the scheduler is immediately in a state of total overload. The actual measurement entails watching the time until this work load peak is completed, together with the individual job activation times during that period; the latter can be integrated to account for the //effective parallelism// and the amount of time in //impeded state,// where at most single-threaded processing is observed. Challenging the Scheduler with a random series of such homogeneous load peaks makes it possible to build a correlation table and to compute a linear regression model.
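As an illustration of how such activation times can be integrated, the following sketch computes a time-weighted average concurrency and the time spent in impeded (at most single-threaded) state from recorded job intervals; the data layout and names ({{{integrateActivations}}}, {{{ConcurrencyStats}}}) are assumptions made for this example.
{{{
#include <algorithm>
#include <utility>
#include <vector>

struct ConcurrencyStats { double avgConcurrency; double impededTime; };

// Sketch (assumption): jobs are given as (start,end) times in milliseconds.
ConcurrencyStats
integrateActivations (std::vector<std::pair<double,double>> const& jobs)
{
    std::vector<std::pair<double,int>> events;          // (time, +1 start / -1 end)
    for (auto const& [start,end] : jobs)
    {
        events.emplace_back (start, +1);
        events.emplace_back (end,   -1);
    }
    std::sort (events.begin(), events.end());

    double busy=0, weighted=0, impeded=0;
    double prev = events.empty()? 0.0 : events.front().first;
    int active = 0;
    for (auto const& [time,delta] : events)
    {
        double span = time - prev;
        if (active > 0)
        {
            busy     += span;                           // time with any job active
            weighted += span * active;                  // integrate concurrency over time
            if (active <= 1) impeded += span;           // at most single-threaded
        }
        active += delta;
        prev = time;
    }
    return {busy > 0? weighted/busy : 0.0, impeded};
}
}}}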

! Observations
!!!Breaking Point and Stress
Several investigations to determine the »breaking point« of a schedule were conducted with the load topology depicted to the right. This load pattern is challenging on various levels. There are dependency chains leading from the single start node to the three exit nodes, and thus the order of processing must be strictly observed. Moreover, several nodes bear //no weight,// and so the processing for those jobs returns immediately, producing mostly administrative overhead. Some nodes however are attributed with a weight up to factor 3.

<html><img title="Load topology with 64 nodes joined into a dependency chain, used for »breaking point« search"  src="dump/2024-04-08.Scheduler-LoadTest/Topo-10.svg"  style="float:right; margin-left:2ex"/></html>For execution, this weight is loaded with a base time, for example ''500''µs. An //adapted schedule// is generated based on the //node layers,// and using a simplified heuristic to account both for the accumulated node weight found within a given level, and the ability for speed-up through concurrency. Nodes without a weight are assumed to take no time (a deliberate simplification), while possible parallelisation is applied solely as factor based on the node count, completely disregarding any concerns of »optimal stacking«. This leads to the following schedule:

|! Level|!stepFac|!Schedule |
| 0|  0.000|  0.000ms|
| 1|  0.000|  0.000ms|
| 2|  0.000|  0.000ms|
| 3|  2.000|  1.000ms|
| 4|  2.000|  1.000ms|
| 5|  2.000|  1.000ms|
| 6|  2.000|  1.000ms|
| 7|  3.000|  1.500ms|
| 8|  5.000|  2.500ms|
| 9|  7.000|  3.500ms|
| 10|  8.000|  4.000ms|
| 11|  8.000|  4.000ms|
| 12|  8.000|  4.000ms|
| 13|  9.000|  4.500ms|
| 14| 10.000|  5.000ms|
| 15| 12.000|  6.000ms|
| 16| 12.000|  6.000ms|
| 17| 13.000|  6.500ms|
| 18| 16.000|  8.000ms|
| 19| 16.000|  8.000ms|
| 20| 20.000| 10.000ms|
| 21| 22.500| 11.250ms|
| 22| 24.167| 12.083ms|
| 23| 26.167| 13.083ms|
| 24| 28.167| 14.083ms|
| 25| 30.867| 15.433ms|
| 26| 32.200| 16.100ms|

The tests were typically performed with ''4 workers''. 
The computed schedule takes this into account, but only approximately: it considers the number of nodes in each layer, but neither their dependencies on predecessors nor their possibly differing weight (the simulated computation duration of a node).

<html><div style="clear: both"/></html>
To explain this calculation: at the beginning of Level-3, for example, a factor 2 is added as increment, since Node-2 is attributed with weight-factor ≔ 2; in the .dot-diagram above, the weight is indicated as suffix attached behind the node hash mark, in this case ({{{2: 95.2}}}). With 500µs as base time, the node(s) in Level-3 will thus be scheduled at t ≔ 1ms. To discuss another, more interesting example, Level-19 holds 5 nodes with weight-factor ≔ 2. Scheduling for 4 workers allows 4 of these nodes to be parallelised, but requires another round for the remaining node; thus an increment of +4 is added at the beginning of Level-20, scheduling the two following nodes ({{{34: D8.3}}}) and ({{{35: B0.2}}}) at t ≔ 10ms. Given their weight factors, a combined weight ≡ 5 is found in Level-20, and the two nodes are assumed to be parallelised, thus (in a simplified manner) allocating an offset of +5/2 · 500µs. The following Level-21 is therefore placed at t ≔ 11.25ms.

This heuristic time allocation leads to a schedule which considers the weight distribution to some degree, yet is deliberately unrealistic, since it neither accounts for any base effort nor fully respects the limited worker pool size. At the beginning, the Scheduler will thus be partially idle, while towards the end a massive short overload peak is created, further exacerbated by the dependency constraints. It should be recalled that the objective of this test is to tighten or stretch this schedule by a constant ''stress factor'', and to search for the point at which a cascading, catastrophic slippage can be observed.

Such a well defined breaking point can indeed be determined reliably. However, initial experiments placed this event at a stress-factor slightly above 0.5 -- far from any expectation. The point in question is not the absolute amount of slippage, which is expectedly strong due to the overload peak at the end; rather, we are looking for the point at which the scheduler is unable to follow even the general outline of this schedule. The expectation would be that this happens close to the nominal schedule, i.e. at stress-factor ≡ 1. Further //instrumentation// was thus added to capture invocations of the actual processing function; values integrated from these events allowed conclusions about various aspects of the actual behaviour, especially:
* the actual job run times consistently showed a significant drift towards longer run times (slower execution) than calibrated
* the effective average concurrency deviates systematically from the average speed-up factor assumed in the schedule generation; notably this deviation is stable over a wide range of stress factors, but obviously depends strongly on the actual load graph topology and the worker pool size; it can thus be interpreted as a ''form factor'' describing the topological ability to map a given local node connectivity onto a scheduling situation.
By virtue of the instrumentation, both effects can be determined //empirically// during the test run, and compounded into a correction factor, applied to the scale of the stress-factor value determined as result. With this correction in place, the observed »breaking point« moved ''very close to 1.0''. This is considered an //important result;// both corrected effects relate to circumstances considered external to the Scheduler implementation -- which, beyond that, seems to handle timings and the control of the processing path //without significant further overhead.//

!!!Overload
Once the Scheduler is //overloaded,// the actual schedule does not matter much any more -- within the confines of one important limitation: the deadlines. The schedule may slip, but when a given job is pushed beyond its deadline, the general promise of the schedule is //broken.// This rests on an important distinction: deadlines are hard, while start times can be shifted to accommodate load. As long as the timings stay within the overall confines defined by the deadlines, the scheduling is able to absorb short load peaks. In this mode of operation, no further timing waits are performed; rather, jobs are processed in the order defined by their start times, and when done with one job, a worker immediately retrieves the next one, which -- in a state of overload -- is likewise overdue. This setup thus allows observing the efficiency of the „mechanics“ of job invocation.
<html><img title="Load Peak with 8ms"  src="dump/2024-04-08.Scheduler-LoadTest/Graph13.svg"  style="float:right; width: 80ex; margin-left:2ex"/></html>
The measurement shown to the right used a pool of ''8 workers'' to process a //load peak of jobs,// each loaded with a processing function calibrated to run for ''8''ms. Workers are thus occupied with processing the job workload for a significant amount of time -- and so the probability of workers accidentally asking for work at precisely the same time is rather low. Since the typical {{{ActivityTerm}}} for [[regular job processing|RenderOperationLogic]] entails dropping the {{{GroomingToken}}} prior to invocation of the actual {{{JobFunctor}}}, another worker can access the queues in the meantime and handle Activities up to the invocation of the next {{{JobFunctor}}}.
With such a work load, the //worker-pull processing// plays out its full strength: since there is no »manager« thread, the administrative work is distributed evenly over all workers and performed, on average, without imposing any synchronisation overhead on other workers. Especially with longer load peaks, the concurrency was observed to converge towards the theoretical maximum of 8 (on this machine) -- as can be seen from the light blue vertical bars in the secondary diagram below (where the left Y scale displays the average concurrency). The remaining headroom can be linked (by investigation of trace dumps) to the inevitable ramp-up and tear-down; the work capacity shows up with some random delay, and thus it typically takes several milliseconds until all workers have got their first task. Moreover, there is the overhead of the management work, which happens outside the measurement bracket inserted around the invocation of the job function -- even though in this load scenario the management work is also done concurrently to the other workers' payload processing, it is not captured as part of the payload effort, and thus reduces the accounted average concurrency.

The dependency of processing time on load size is clearly linear, with a very high correlation (0.98). A ''linear regression model'' indicates a gradient very close to the expected value of 1ms/job (8ms nominal job time distributed over 8 cores). The slight deviation is due to the fact that //actual job times// (&rarr; dark green dots) tend to run higher than calibrated, an effect consistently observed on this machine throughout this scheduler testing effort. An explanation might be that the calibration of the work load is done in a tight loop (single-threaded or multi-threaded does not make much of a difference here), while in the actual processing within the scheduler some global slowdown is generated by cache misses, pipeline stalls and the coordination overhead caused by accessing the atomic {{{GroomingToken}}} variable. Moreover, the linear model indicates a fixed base overhead (its intercept), which can largely be attributed to the ramp-up / tear-down phase, where -- inevitably -- not all workers can be put to work. In accordance with that theory, this base overhead indeed increases with larger job times. The probabilistic capacity management employed in this Scheduler implementation adds a further base overhead; the actual start of work depends on workers pulling further work, which, depending on the circumstances, happens more or less randomly at the beginning of each test run. Incidentally, there is further significant scaffolding overhead which is not accounted for in the numbers presented here: at start, the worker pool must be booted, the jobs for the processing load must be planned, and a dependency on a wake-up job is maintained, prompting the controlling test thread to collect the measurement data and to commence with the next run in the series.
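The following sketch shows the kind of least-squares fit underlying such a linear model, relating the number of jobs in a load peak to the observed run time; the names {{{fitLinear}}} and {{{LinearModel}}} are assumptions, and this is a plain ordinary-least-squares regression, not the actual measurement code.
{{{
#include <cmath>
#include <cstddef>
#include <vector>

struct LinearModel { double gradient, intercept, correlation; };

// Sketch: ordinary least squares over (x = jobs in peak, y = runtime in ms);
// the gradient approximates time-per-job, the intercept the fixed base overhead.
LinearModel
fitLinear (std::vector<double> const& x, std::vector<double> const& y)
{
    size_t n = x.size();
    double sx=0, sy=0, sxx=0, syy=0, sxy=0;
    for (size_t i=0; i<n; ++i)
    {
        sx  += x[i];       sy  += y[i];
        sxx += x[i]*x[i];  syy += y[i]*y[i];  sxy += x[i]*y[i];
    }
    double varX  = n*sxx - sx*sx;
    double varY  = n*syy - sy*sy;
    double covXY = n*sxy - sx*sy;
    return { covXY / varX                          // gradient
           , (sy - covXY/varX * sx) / n            // intercept ("base overhead")
           , covXY / std::sqrt (varX*varY)         // Pearson correlation
           };
}
}}}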

Note also that all these investigations were performed with ''debug builds''.
<html><div style="clear: both"/></html>
<html><img title="Load Peak with 8ms"  src="dump/2024-04-08.Scheduler-LoadTest/Graph15.svg"  style="float:right; width: 80ex; margin-left:2ex"/></html>
However, ''very short jobs'' cause ''significant loss of efficiency''.
The example presented to the right uses a similar setup (''8 workers''), but reduces the calibrated job time to only ''200''µs. This comes close to the general overhead required for retrieving and launching the next job, which amounts to a further 100µs in a debug build, including the ~30µs required solely to access, set and drop the {{{GroomingToken}}}. Consequently, only a small number of //other workers// get a chance to acquire the {{{GroomingToken}}} and work through the Activities up to the next Job invocation before the first worker already returns -- causing a significant amount of ''contention'' on the {{{GroomingToken}}}. Now, the handling of work-pull requests in the Scheduler implementation is arranged to prefer workers just returning from active processing. Thus (intentionally) only a small subset of the workers is able to pull work repeatedly, while the other workers encounter a series of »contention {{{KICK}}}« events; an inbuilt //contention mitigation scheme// responds to such repeated pull failures by interspersing sleep cycles, thereby effectively throttling down the {{{WorkForce}}} until contention events return to a tolerable level. This mitigation is important, since contention, even just on an atomic variable, can cause a significant global slow-down of the system.

As a net effect, most of the load peaks are handled by just two workers, especially for larger load sizes; most of the available processing capacity remains unused for such short-running payloads. Moreover, on average a significant amount of time is spent in partially blocked or impeded operation (&rarr; light green circles), since administrative work must be done non-concurrently. Depending on the perspective, the behaviour exposed in this test can be seen as a weakness -- or as the result of a deliberate trade-off made by relying on active work-pulling and a passive Scheduler.

The actual average in-job time (&rarr; dark green dots) is offset significantly here, closer to 400µs -- which is also confirmed by the gradient of the linear model (0.4ms / 2 threads ≙ 0.2ms/job). With shorter load sizes below 90 jobs, increased variance can be observed, and the measurements can no longer be subsumed under a single linear relation -- in fact, the data points seem to fall into several groups with differing, yet mostly linear correlation, which also explains the negative intercept of the overall computed model; using only the data points with > 90 jobs would yield a model with a slightly lower gradient but a positive offset of ~2ms.
<html><div style="clear: both"/></html>
Further measurement runs with other parameter values fit well in between the two extremes presented above. It can be concluded that this Scheduler implementation strongly favours larger job sizes, starting at several milliseconds, when it comes to working through an extended homogeneous work load without many job interdependencies. Such larger lot sizes can be handled efficiently and close to the expected limits, while very small jobs massively degrade the available performance. This can be attributed both to the choice of a randomised capacity distribution and to pull processing without a central manager.

!!!Stationary Processing
The ultimate goal of //load and stress testing// is to establish a notion of //full load// and to demonstrate adequate performance under //nominal load conditions.// Thus, after investigating overheads and the breaking point of a complex schedule, a measurement setup was established with load patterns deemed „realistic“ -- based on knowledge of typical media processing demands encountered in video editing. Such a setup entails small dependency trees of jobs loaded with computation times around 5ms, interleaving several such challenges up to the available level of concurrency. To determine viable parameter bounds, the //breaking-point// measurement method can be applied to an extended graph of this structure, to find out at which level the computations tax the system's abilities to such a degree that it cannot move along any faster.
<html><img title="Load topology for stationary processing with 8 cores"  src="dump/2024-04-08.Scheduler-LoadTest/Topo-20.svg"  style="width:100%"/></html>
This pattern can be processed
* with 8 workers in overall 192ms
* processing 256 Nodes, each loaded with 5ms
* since this graph has 35 levels, ∅ 5.6ms are required per level
* on average, concurrency reaches 5.4 (some nodes have to wait for dependencies)
This research again revealed the tendency of the given Scheduler implementation to ''scale down capacity unless overloaded''. Using the breaking-point method with such a fine-grained and rather homogeneous schedule can thus be problematic, since a search for the limit will inevitably involve running several probes //below the limit// -- which can cause the scheduler to reduce the number of workers used to a level that just fills the available time. Depending on the path taken, the search can thus find a breaking point corresponding to a throttled capacity, while a search path through parameter ranges of overload will reveal the ability to follow a much tighter schedule. While this is an inherent problem of this measurement approach, it can be mitigated to some degree by limiting the empirical adaptation of the parameter scale to the initial phase of the measurement, and by ensuring this initial phase starts from overload territory.
<html><img title="Load topology for stationary processing with 8 cores"  src="dump/2024-04-08.Scheduler-LoadTest/Topo-21.svg"   style="float:right; width: 80ex; margin-left:2ex"/></html>

For comparison, another, similar load pattern was used, which however consists entirely of interleaved 4-step linear chains. Each level can thus be handled with a maximum of 4 workers; actually there are 66 levels — with ∅ 3.88 Nodes/Level, due to the ramp-up and ramp-down towards the ends.
| !observation|!4 workers|!8 workers|
| breaking point|stress 1.01 |stress 0.8 |
| run time|340ms |234ms |
| ≙ per level|5.15ms |3.5ms |
| avg.conc|3.5 |5.3 |
These observations indicate ''adequate handling without tangible overhead''.
When limited to ''4 workers'', the concurrency of ∅ 3.5 is only slightly below the average number of 3.88 Nodes/Level, and the time per level is near optimal, taking into account the fact (established by the overload measurements) that the actual job load tends to be slightly above the calibrated value of 5ms. The setup with ''8 workers'' shows that further workers can be used to accommodate a tighter schedule, but then the symptoms of //breaking the schedule// are already reached at a nominally lower stress value, and only 5.3 of 8 workers will be active on average — this graph topology simply does not offer more work locally, since 75% of all Nodes have a predecessor.


@@font-size:1.4em; 🙛 -- 🙙@@

Building upon these results, an extended probe with 64k nodes was performed -- this time with planning activities interwoven for each chunk of 32 nodes -- leading to a run time of 87.4 seconds (5.3ms per node) and a concurrency ≡ 3.95.
This confirms the ''ability for steady-state processing at full load''.
<html><div style="clear: both"/></html>
The Scheduler //maintains a ''Work Force'' (a pool of workers) to perform the next [[render activities|RenderActivity]] continuously.//
Each worker runs in a dedicated thread; the Activities are arranged so as to avoid blocking those worker threads:
* IO operations are performed asynchronously {{red{planned as of 9/23}}}
* the actual calculation job is started only when all prerequisites are available
!Workload and invocation scheme
Using a pool of workers to perform small isolated steps of work atomically and in parallel is a well-established pattern in high-performance computing. However, the workload for rendering media is known to have some distinctive traits, calling for a slightly different approach than an operating-system scheduler or a load balancer. Notably, the demand for resources is high, often using „whatever it takes“ -- driving the system into load saturation. The individual chunks of work, which can be computed independently, are comparatively large and must often be computed in a constrained order. For real-time performance, it is desirable to compute data as late as possible, to avoid blocking memory with computed results. And for the final quality render, for the same reason, it is advisable to proceed in data-dependency order, to keep as much data as possible in memory and avoid writing temporary files.

This leads to a situation where it is more adequate to //distribute the scarce computation resources// to the tasks //sequenced in temporal and dependency order//. The computation tasks must be prepared and ordered -- but beyond that, there is not much that can be »managed« about a computation task. For this reason, the Scheduler in the Lumiera Render Engine uses a pool of workers, each providing one unit of computation resource (a »core«), and these workers ''pull work'' actively -- in contrast to the usual pattern of distributing, queuing and dispatching tasks towards a passive set of workers.

Moreover, the actual computation tasks, which can be parallelised, are more expensive than any administrative work for sorting tasks, checking dependencies and maintaining process state, by at least an order of magnitude. This leads to a scheme where a worker first performs some »management work«, until encountering the next actual computation job, at which point the worker leaves the //management mode// and transitions into //concurrent work mode//. All workers are expected to be in work mode almost all of the time, and thus we can expect little contention between workers performing »management work« -- which allows confining this management work to //single-threaded operation,// thereby drastically reducing the complexity of management data structures and memory allocation.
!!!Regulating workers
The behaviour of individual workers is guided solely by the return-value flag of the work-functor. Consequently, no shared flags and no direct synchronisation whatsoever are required //within the {{{WorkForce}}} implementation// -- notwithstanding the fact that the implementation //within the work-functor// obviously needs some concurrency coordination to produce these return values, since the whole point is to invoke this functor concurrently. The following aspects of worker behaviour can be directed:
* returning {{{activity::PASS}}} instructs the worker to re-invoke the work-functor in the same thread immediately
* returning {{{activity::WAIT}}} requests an //idle-wait cycle//
* returning {{{activity::KICK}}} signals //contention,// causing a short back-off
* any other value, notably {{{activity::HALT}}}, causes the worker to terminate
* likewise, an exception from anywhere within the worker shall terminate the worker and activate a »disaster mode«
Essentially this implies that //the workers themselves// (not some steering master) perform the management code leading to the aforementioned state-directing return codes.
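A minimal sketch of such a worker loop is given below; the enum values follow the codes named above, while the functor signature, parameter names and the yield-based back-off are assumptions made for this illustration.
{{{
#include <chrono>
#include <functional>
#include <thread>

namespace activity { enum Verb { PASS, WAIT, KICK, HALT }; }   // codes as named above

// Sketch (assumption): the worker is steered solely by the work-functor's return value.
void
workerLoop (std::function<activity::Verb()> doWorkCycle
           ,std::chrono::microseconds idleSleep, unsigned maxIdleCycles)
{
    unsigned idleCycles = 0;
    while (true)
        switch (doWorkCycle())
        {
            case activity::PASS:                    // more work: immediately pull again
                idleCycles = 0;
                break;
            case activity::WAIT:                    // idle-wait cycle
                if (++idleCycles > maxIdleCycles)
                    return;                         // scale down: terminate this worker
                std::this_thread::sleep_for (idleSleep);
                break;
            case activity::KICK:                    // contention: short back-off
                std::this_thread::yield();
                break;
            default:                                // HALT (or error): terminate
                return;
        }
}
}}}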

The ''idle-wait cycle'' involves some degree of capacity control. First off, a worker receiving {{{activity::WAIT}}} should perform a sleep (via the OS scheduler) for some small period (configurable as a Render Engine parameter). However, if a worker encounters an extended, ongoing sequence of idle-wait cycles beyond a (likewise configurable) threshold, the //worker shall terminate.// This way, the {{{WorkForce}}} gradually scales down when no longer used. Scaling up, on the other hand, is performed stepwise or in one jump, and is initiated by an [[external control call|SchedulerLoadControl]], typically issued by the {{{Scheduler}}} in response to a signal from the {{{Player}}} indicating newly added workload.

!!!Synchronisation and data consistency
As pointed out above, the code controlling the worker behaviour must ensure data consistency when using shared state. However, in the actual usage scenario of scheduling media computations, the amount of »management work« is smaller by orders of magnitude than the actual media computations. For that reason, //all management work// involving shared data structures can be performed //exclusively single-threaded.// This is achieved by acquiring the {{{GroomingToken}}} within the Scheduler control layer (Layer-2). After mutating the queues to determine the next task, this token is dropped again, prior to engaging in the actual media computations.

Following the rules of [[C++ atomic Release-Acquire ordering|https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering]], with this usage pattern it is sufficient to rely on a single atomic variable (used as the implementation of the {{{GroomingToken}}}) to ensure concurrent data consistency. All other internal management data (notably the priority queue and the memory manager) can be held in ordinary memory accessible by all threads. The important part is to actually establish a »synchronises-with« relation (see the sketch after the following steps):
# management data is processed in one thread holding the {{{GroomingToken}}}
# an atomic store is performed with {{{memory_order_release}}} (thereby dropping the token)
# now another thread can perform an atomic load with {{{memory_order_acquire}}} -- but it is essential to check and branch on the loaded value
# and only when the loaded value indicates that the load is //synchronised-with// the preceding store, then the other management data can be processed
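The following sketch illustrates this Release-Acquire pattern with a single atomic; apart from the class name, the member functions and the use of {{{std::thread::id}}} are assumptions made for this example, not the actual Lumiera implementation.
{{{
#include <atomic>
#include <thread>

// Sketch (assumption): a single atomic guards all »management work«;
// Release-Acquire ordering makes the management data written by the
// previous holder visible to the thread that acquires the token next.
class GroomingToken
{
    std::atomic<std::thread::id> holder_{};

public:
    bool
    tryAcquire()
    {   // only on success is this thread synchronised-with the previous release
        std::thread::id nobody{};
        return holder_.compare_exchange_strong (nobody, std::this_thread::get_id()
                                               ,std::memory_order_acquire);
    }

    void
    release()
    {   // publishes all preceding writes to the management data
        holder_.store (std::thread::id{}, std::memory_order_release);
    }
};

// usage (hypothetical): groom the queues only while holding the token,
// then drop it before invoking the actual JobFunctor
//   if (token.tryAcquire()) { /* mutate queues */ token.release(); /* run job */ }
}}}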

&rarr; [[Internals of processing in the Scheduler|SchedulerProcessing]]
A link to relate a compound of [[nested placement scopes|PlacementScope]] to the //current// session and the //current//&nbsp; [[focus for querying|QueryFocus]], and to explore the structure. ScopeLocator is a singleton service allowing to ''explore'' a [[Placement]] as a scope, i.e. to discover any other placements within this scope, and to locate the position of this scope by navigating up the ScopePath, finally reaching the root scope of the HighLevelModel.

In the general case, this user-visible high-level-model of the [[objects|MObject]] within the session allows for more than tree-like associations, since a given [[Sequence]] might be bound into multiple [[timelines|Timeline]]. Effectively, this makes the ScopePath context-dependent. The ScopeLocator is the point where the strictly tree-like hierarchy of placements is connected to this more elaborate scope and path structure. To this end, the ScopeLocator maintains a QueryFocusStack to keep track of the current location in focus, in cooperation with the QueryFocus objects used by client code.
&rarr; see BindingScopeProblem
&rarr; see TimelineSequences

!!a note about concurrency
While there //is// a "current state" involved, the effect of concurrent access deliberately remains unspecified, because access is expected to be serialised on a higher level. If this assumption were to break, then probably the ScopeLocator would involve some thread local state.
The sequence of nested [[placement scopes|PlacementScope]] leading from the root (global) scope down to a specific [[Placement]] is called the ''scope path''. Ascending this path yields all the scopes to search or query, in the proper order, when resolving some attribute of a placement. Placements use visibility rules comparable to the visibility of scoped definitions in common programming languages or in cascading style sheets, where a local definition can shadow a global one. In a similar way, properties not defined locally may be resolved by querying up the sequence of nested scopes.

A scope path is a sequence of scopes, where each scope is implemented by a PlacementRef pointing to the &raquo;scope top&laquo;, i.e. the placement in the session //constituting this scope.// Each Placement is registered with the session as belonging to a scope, and each placement can contain other placements and thus form a scope. Thus, the ''leaf'' of this path can be considered the current scope. In addition to some search and query functions, a scope path has the ability to ''navigate'' to a given target scope, which must be reachable by ascending and descending into the branches of the overall tree or DAG. Navigating changes the current path. ({{red{WIP 11/09}}} navigation to scopes outside the current path and the immediate children of the current leaf is left out for now. We'll need it later, when actually implementing meta-clips. &rarr; see BindingScopeProblem)

!access path and session structure
ScopePath represents an ''effective scoping location'' within the model &mdash; it is not necessarily identical to the storage structure (&rarr; PlacementIndex) used to organise the session. While similar in most cases, binding a sequence into multiple timelines or meta-clips will cause the internal and the logical (effective) structure to diverge (&rarr; BindingScopeProblem). An internal singleton service, the ScopeLocator, is used to constitute the logical (effective) position for a given placement (which in itself defines a position in the session data structure). This translation involves a [[current focus|QueryFocus]] remembering the access path used to reach the placement in question. Actually, this translation is built on top of the //navigation// operation of ScopePath, which thus forms the foundation for providing such a logical view on the "current" location.

!Operations
* the default scope path contains just the root (of the implicit PlacementIndex, i.e. usually the root of the model in the session)
* a scope path can be created starting from a given scope. This convenience shortcut uses the ScopeLocator to establish the position of the given start scope. This way, effectively the PlacementIndex within the current session is queried for parentship relations until reaching the root of the HighLevelModel.
* paths are ''copyable value objects'' without identity on their own
* there is a special //invalid//&nbsp; path token, {{{ScopePath::INVALID}}}
* length, validity and empty check
* paths are equality comparable
* relations
** if a scope in question is contained in the path
** if a scope in question is at the leaf position of the path
** if a path in question is prefix (contained) in the given path
** if two paths share a common prefix
** if two paths are disjoint (only connected at root)
* navigation
** move up one step
** move up to the root
** navigate to a given scope
** clear a path (reset to default)
An implementation mechanism used within the PlacementIndex to detect some specific kinds of object connections (&raquo;MagicAttachment&laquo;), which then need to trigger special handling.

{{red{planned feature as of 6/2010}}}

//It is not clear yet, if that's just an implementation facility, or something which is exposed through the ConfigRules.//

!preliminary requirements
We need to detect attaching and detaching of
* root &harr; BindingMO
* root &harr; [[Fork]]
//Segmentation of timeline// denotes a data structure and a step in the BuildProcess.
When [[building the fixture|BuildFixture]], ~MObjects -- as handled by their Placements -- are grouped below each timeline using them; the Placements are then resolved into [[explicit Placements|ExplicitPlacement]], resulting in a single, well-defined time interval for each object. This makes it possible to cut this effective timeline into slices of constant wiring structure, which are represented through the ''Segmentation Datastructure'': a time axis with segments holding object placements and [[exit nodes|ExitNode]]. &nbsp;&rarr; see [[structure of the Fixture|Fixture]]
* for each Timeline we get a Segmentation
** which in turn is a list of non-overlapping segments
*** each holding
**** an ExplicitPlacement for each object touching that time interval
**** an ExitNode for each ModelPort of the corresponding timeline
**** a JobTicket as blueprint for [[frame Job|RenderJob]] generation corresponding to each ModelPort

!Usage pattern
;(1) build process
:&rarr; a tree walk yields the placements per timeline, which then get //resolved//
:&rarr; after //sorting,// the segmentation can be established, thereby copying placements spanning multiple segments
:&rarr; only //after running the complete build process for each segment// can the list of model ports and exit nodes be established
;(2) commit stage
: -- after the build process(es) are completed, the new fixture gets ''committed'', thus becoming the officially valid state to be rendered. As render processes might be going on in parallel, some kind of locking or barrier is required. It seems advisable to make the change a single atomic hot-swap, meaning we get a single access point to be protected. But there is another twist: we need to find out which render processes to cancel and restart in order to pick up the changes introduced by this build process -- which might include adding and deleting timelines as a whole, and any conceivable change to the segmentation grid. Because of the highly dynamic nature of the placements, on the other hand, it isn't viable to expect the high-level model to provide this information. Thus we need to determine a ''change coverage'' at this point. We might expand on that idea to //prune any new segments which aren't changed.// This way, only a write barrier would be necessary when switching the actually changed segments, and any render processes touching these would be //tainted.// Old allocations could be released once all tainted processes are known to be terminated. &rarr; SegmentationChange
;(3) rendering use
:Each play/render process employs a ''frame dispatch step'' to get the right exit node for pulling a given frame (&rarr; [[Dispatcher|FrameDispatcher]]). Planning appropriate [[render jobs|RenderJob]] involves support by the JobTicket for each Segment and port, which provides //a blueprint for rendering and connectivity.// From there on, the calculation process -- transmitted through [[Scheduler activity|RenderActivity]]  -- proceeds into the [[processing nodes|ProcNode]]. The storage of these processing nodes and accompanying wiring descriptors is hooked up behind the individual segments, by sharing a common {{{AllocationCluster}}}. Yet the calculation of individual frames also depends on ''parameters'' and especially ''automation'' linked with objects in the high-level model. It is likely that there might be some sharing or some kind of additional communication interface, as the intention was to allow ''live changes'' to automated values. <br/>{{red{WIP 4/2023}}} details about to be elaborated &rarr; PlaybackVerticalSlice
!!!observations
* Storage and initialisation of explicit placements is an issue. We should strive to make that inline as much as possible.
* the overall segmentation emerges from a sorting of time points, which are start points of explicit placements
* after the segmentation has been built, the usage pattern changes entirely into a lookup of segments by time (see the sketch after this list)
* the individual segments act as umbrella for a lot of further objects hooked up behind.
* we need the ability to exchange or swap-in whole segments
* each segment controls an AllocationCluster
* we need to track processes for tainting
* access happens per ModelPort
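To illustrate the lookup by time mentioned in this list, here is a minimal sketch of a segmentation keeping its segments ordered by start time; the member names and the use of integer time points are assumptions made for this example.
{{{
#include <algorithm>
#include <vector>

// Sketch (assumption): segments are non-overlapping, ordered ascending and
// cover the complete time axis, so a binary search by time always succeeds.
struct Segment { long start, after; /* ExplicitPlacements, ExitNodes, JobTicket ... */ };

struct Segmentation
{
    std::vector<Segment> segments_;

    Segment const&
    operator[] (long time)  const
    {   // first segment starting after the given time, then step back one
        auto pos = std::upper_bound (segments_.begin(), segments_.end(), time
                                    ,[](long t, Segment const& seg){ return t < seg.start; });
        return *(pos - 1);   // valid due to the seamless-coverage invariant
    }
};
}}}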

!!!conclusions
The Fixture mostly consists of the Segmentation data structure, but some other facilities are involved too
# at top level, access is structured by groups of model ports, actually grouped by timeline. This first access level is handled by the Fixture
# during the build process, there is a collecting and ordering of placements; these intermediaries as well as the initial collection can be discarded afterwards
# the backbone of the segmentation is closely linked to an ordering by time. Initially it should support sorting; later on, access by time-interval search.
# discarding a segment (or failing to do so) has a high impact on the whole application. We should employ a reliable mechanism for that.
# the frame dispatch and the tracking of processes can be combined; data duplication is a virtue when it comes to parallel processes
# the process of comparing and tainting is broken out into a separate data structure to be used just once

Largely the storage of the render nodes network is hooked up behind the Fixture &rarr; [[storage considerations|FixtureStorage]]
At the end of the build process, the existing [[Segmentation]] possibly needs to be changed, extended or adapted.
This change must be performed as a //transactional switch,// since render or playback processes might be running concurrently. All Fixture and low-level-Model data structures are //immutable// -- thus, for any change, suitably adapted structures will be built as a replacement.

!Adapting the Segmentation by »split splice«
This is an implementation level operation, which analyses the existing Segmentation and determines the changes necessary to introduce a new or altered Segment. This operation has to retain the Segmentation ''Invariant'': it is a seamless sequence of Time intervals covering the complete time axis.

!!!Structure of the split-splice operation
;Invariant
:complete and valid [[Segmentation]] always retained
:* seamless coverage of the complete time axis [-∞ .. +∞]
:* all entries ordered ascending (in time)
;Stage-1
:determine Predecessor and Successor
:* //Predecessor//<br>   {{{sep}}} ≔ {{{start}}} or {{{after}}} (if missing {{{start}}})<br>   P~~start~~ >= {{{sep}}} ⟹ ↯ Predecessor
:** &rarr; find largest Predecessor with P~~start~~ < {{{sep}}}
:* //Successor// is the first one to violate this condition
:* //otherwise// Successor == Predecessor (split)
;Stage-2
:establish start and end point of new segment
:* explicitly given {{{start}}}/{{{after}}}-points are binding
:** missing {{{start}}}<br>   {{{sep}}} ≡ {{{after}}}
:*** P~~after~~ < {{{sep}}} ⟹ {{{start}}} ≔ P~~after~~
:*** //else// ⟹ {{{start}}} ≔ P~~start~~ (replace or trunc)
:** missing {{{after}}}<br>   {{{sep}}} ≡ {{{start}}} ∧ S~~start~~ >= {{{sep}}} 
:*** S~~start~~ > {{{sep}}} ⟹ {{{after}}} ≔ S~~start~~
:*** //else// ⟹ {{{after}}} ≔ S~~after~~ (replace or trunc)
:''POST''
:* {{{start}}} < {{{after}}}
:** //else// ⟹ REJECT
:* P~~start~~ <= {{{start}}}
:* P~~start~~ == S~~start~~ ∨ {{{start}}} <= S~~start~~
;Stage-3
:determine relation to //Predecessor// and //Successor//
:* //case// P~~start~~ < {{{start}}}
:** P~~after~~ < {{{start}}} ⟹ ins ~NOP-Predecessor
:** P~~after~~ == {{{start}}} ⟹ seamless
:** P~~after~~ <= {{{after}}} ⟹ trunc(Predecessor)
:** P~~after~~ > {{{after}}} ⟹ split(Predecessor)
:* //case// P~~start~~ == {{{start}}}
:** P~~after~~ <= {{{after}}} ⟹ drop(Predecessor)
:** P~~after~~ > {{{after}}} ⟹ swap_trunc(Predecessor)
:* //case// Predecessor == Successor<br>   //this case has already been completely covered by the preceding cases//
:** P~~after~~ == {{{after}}} == Time::NEVER ⟹ trunc(Predecessor)
:** P~~after~~ > {{{after}}} ⟹ split(Predecessor)
:* //case// S~~start~~ < {{{after}}}
:** S~~after~~ < {{{after}}} ⟹ drop(Successor) and ++Successor and recurse (same base case)
:** S~~after~~ == {{{after}}} ⟹ drop(Successor)
:** S~~after~~ > {{{after}}} ⟹ trunc(Successor)
:** S~~start~~ == {{{after}}} ⟹ seamless
:** {{{after}}} < S~~start~~ ⟹ ins ~NOP-Successor
;Stage-4
:perform element insertion and replacement
:* for the //Predecessor//
:** trunc ⟹ del
:** split ⟹ del
:** drop ⟹ del
:** swap_trunc ⟹ del
:** ins_nop | seamless ⟹ retain
:* for the //Successor//
:** drop++ ⟹ del
:** trunc ⟹ del
:** ins_nop | seamless ⟹ retain
:* for //insertion of new elements//
:** &rarr;''before''<br>   //cases//
:*** ins ~NOP-Predecessor
:*** trunc(Predecessor) ⟹ ins copy Predecessor-shortened-end
:*** split(Predecessor) ⟹ ins copy Predecessor shortened-end
:** &rarr;''new Segment''
:** &rarr;''after''<br>   //cases//
:*** split(Predecessor) ⟹ ins copy Predecessor shortened-{{{start}}}
:*** swap_trunc(Predecessor) ⟹ ins copy Predecessor shortened-{{{start}}}
:*** trunc(Successor) ⟹ ins copy Successor shortened-{{{start}}}
:*** ins ~NOP-Successor

!!!Implementation technique
The split-splice operation is always performed for a single segment in the context of an existing segmentation; the covered range can be defined explicitly or by partial specification. For each application of this algorithm, an instance of a //working context// is created (on the stack) and initialised by scanning the existing segmentation to establish the insertion point. The four stages are then performed on this working data, thereby determining the handling cases -- and in the last stage, new elements are inserted and existing elements are deleted (assuming immutable segment data, any changes and adaptations are done by inserting a modified clone copy).

!!!Generalisation
The first implementation drafts furthered the observation that this decision scheme does not actually depend on the specifics of the segmentation data structure; it can be generalised rather easily to work on //any ordered discrete axis// and to handle generic intervals covering this axis partially or completely. The implementation can thus be turned into a ''template'', which is then instantiated with a ''binding'' to a concrete data structure, providing a few generic operations as λ-functions. This allows using a dedicated test binding for intervals over natural numbers -- used for exhaustive test coverage by systematically probing all logically possible invocation situations. This helped to spot some obscure bugs and to fill in some details of the implementation, notably the behaviour on incomplete specification of the new interval to splice in.
A sequence is a collection of media objects, arranged onto a fork ("track tree"). Sequences are the building blocks within the session. To be visible and editable, a sequence needs to be bound into a top-level [[Timeline]]. Alternatively, it may be used as a VirtualClip nested within another sequence.

The sequences within the session establish a //logical grouping//, allowing for lots of flexibility. Actually, we can have several sequences within one session, and these sequences can be linked together or not; they may be arranged in temporal order, or may constitute a logical grouping of clips used simultaneously in compositional work, etc. The data structure comprising a sequence is always a sub-tree of tracks, always attached directly below root (sequences at sub-nodes are deliberately disallowed). Through the sequence as frontend, this track tree might be used at various places in the model simultaneously. Tracks in turn are only an organisational (grouping) device, like folders &mdash; so this structure of sequences, and the track trees referred to through them, allows using the contents of such a track or folder at various places within the model. But at any time, we have exactly one [[Fixture]], derived automatically from all sequences and containing the content actually to be rendered.
&rarr; see considerations about [[the role of Tracks and Pipes in conjunction with the sequences|TrackPipeSequence]]

!!Implementation and lifecycle
Actually, sequences are façade objects to quite some extent, delegating the implementation of the exposed functionality to the relevant placements and ~MObjects within the model. But they're not completely shallow; each sequence has a distinguishable identity and may hold additional meta-data. Thus, stressing this static aspect, sequences are implemented as StructAsset, attached to the [[model root|ModelRootMO]] through the AssetManager, simultaneously registered with the session, then accessed and owned by ref-counting smart-ptr.

A sequence is always tied to a root-placed track; it can't exist without one. When moving this track by changing its [[Placement]], thus disconnecting it from the root scope, the corresponding sequence will be automatically removed from the session and discarded. On the other hand, sequences aren't required to be //bound// &mdash; a sequence might just exist in the model (attached by its track placed into root scope) and thereby remain passive and invisible. Such an unbound sequence can't be edited, displayed in the GUI or rendered; it is only accessible as an asset. Of course it can be re-activated by linking it to a new or existing timeline or VirtualClip.
&rarr; see detailed [[discussion of dependent objects' behaviour|ModelDependencies]]
A helper to implement a specific memory management scheme for playback and rendering control data structures.
In this context, model and management data is structured into [[Segments|Segmentation]] of similar configuration within the project timeline. Beyond logical reasoning, these segments also serve as ''extents'' for memory allocation, which leads to the necessity of [[segment related memory management|FixtureStorage]]. The handling of actual media data buffers is outside the realm of this topic; these are managed by the frame cache within the Vault.

When addressing this task, we're facing several closely related concerns.
;throughput
:playback operations are ongoing and incur a continuous memory consumption. Thus, we need to keep up a minimal clean-up rate
;availability
:the playback system operates time-bound. "Stop the World" for clean-up isn't an option
;contention
:playback and rendering operations are essentially concurrent. We need reliable yet decentralised bookkeeping

!sequence points
A ''sequence point'' is a conceptual entity for the purpose of organisation of dependent operations. Any sequence point can be ''declared'', ''referred'', ''fulfilled'' and ''triggered''. The referral to sequence points creates an ordering -- another sequence point can be defined as being prerequisite or dependent. But the specific twist is: any of these operations can happen //in any thread.// Triggering a sequence point means to invoke an action (functor) tied to that point. This is allowed only when we're able to prove that this sequence point has been fulfilled, which means that all of its prerequisites have been fulfilled and that optionally an additional fulfilment signal was detected. After the triggering, a sequence point ceases to exist.

!solution idea
The solution idea is inspired by the pattern of operation within a garbage collector: the key point of this pattern is the ability to detect the usage status by external reasoning, without explicit help from //within// the actual context of usage. In a garbage collector, we reason about reachability, and we do so at our own discretion, at some arbitrary later point in time, when the opportunity for collecting garbage is exploited.

For the specific problem of handling sequence points as outlined above, a similar structure can be established by introducing the concept of a ''water level''. When we're able to prove a certain water level, any sequence point below that level must have been fulfilled. For modern computing architectures the important point is that we're able to do this reasoning for each thread separately, based solely on local information. Once a given thread has proven a certain water level, this conclusion is published in a lock-free manner -- meaning that this information will become available in any other thread //eventually, after some time.// After that, any triggers below the water level can be performed in correct dependency order, at any time and in any thread, just as we see fit.
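A minimal sketch of such a lock-free publication is given below; the per-thread slots and the consumer taking the minimum over them are assumptions chosen to illustrate the idea, not the planned implementation.
{{{
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <vector>

// Sketch (assumption): each producer thread publishes the level below which all of
// its sequence points are proven fulfilled; a consumer takes the minimum over all
// threads and may then trigger any sequence point below that global water mark.
class WaterLevel
{
    std::vector<std::atomic<uint64_t>> perThread_;

public:
    explicit WaterLevel (size_t threads) : perThread_(threads) { }

    void
    publish (size_t threadIdx, uint64_t provenLevel)
    {   // lock-free; becomes visible to the consumer eventually
        perThread_[threadIdx].store (provenLevel, std::memory_order_release);
    }

    uint64_t
    safeLevel()  const
    {   // everything below this level can be triggered in dependency order
        uint64_t low = UINT64_MAX;
        for (auto const& level : perThread_)
            low = std::min (low, level.load (std::memory_order_acquire));
        return low;
    }
};
}}}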

!!!complications
The concurrent nature of the problem is what makes this simple task somewhat more involved. Also note that the "water level" cannot be a global property, since the graph of dependencies is not necessarily globally connected. In the general case, it's not a tree, but a forest.
* information about fulfilling a sequence point may appear in various threads
* referrals to already known sequence points might be added later, and also from different threads ({{red{WIP 3/14}}} not sure if we need to expand on this observation -- see CalcStream &hArr; Segment)
This structure looks like it requires a message-passing approach: we can arrange for determining the actual dependencies and fulfilment state late, within a single consumer, which is responsible for invoking the triggers. The other clients (threads) just pass messages into a possibly lock-free messaging channel.
The Session contains all information, state and objects to be edited by the User. From a user's view, the Session is synonymous with the //current Project//. It can be [[saved and loaded|SessionLifecycle]]. The individual Objects within the Session, i.e. Clips, Media, Effects, are contained in one (or several) collections within the Session, which we call [[Sequence]].
&rarr; [[Session design overview|SessionOverview]]
&rarr; Structure of the SessionInterface

!Session structure
The Session object is a singleton &mdash; actually it is a »~PImpl«-Facade object (because the actual implementation object can be swapped for (re)loading Sessions).<br/>The Session is the access point to the HighLevelModel; it is comprised of
* a number of (independent) top-level [[time-lines|Timeline]]
* some [[sequences|Sequence]] to be used within these timelines
* a [[scope structure|PlacementScope]] backed by an index, and a current QueryFocus
* a set of ConfigRules to guide default behaviour {{red{planned as of 10/09}}}
* the ''Fixture'' with a possibility to [[(re)build it|PlanningBuildFixture]] {{red{just partially designed as of 01/09}}}
* the [[Asset subsystem|AssetManager]] is tightly integrated; besides, there are some SessionServices for internal use

&rarr; see [[relation of timeline, sequences and objects|TimelineSequences]]
&rarr; see //clarification of some fine points// regarding [[relation of Assets and MObjects|AssetModelConnection]]


!Session lifecycle
The session lifecycle needs to be distinguished from the state of the [[session subsystem|SessionSubsystem]]. The latter is one of the major components of Lumiera; when it is brought up, the {{{SessionCommandFacade}}} is opened and the SteamDispatcher started. The session as such, on the other hand, is a data structure, pulled up on demand by the {{{SessionManager}}}. Whenever the session is fully populated and configured, the SteamDispatcher is instructed to //actually allow dispatching of commands towards the session.// This command dispatching mechanism is the actual access point to the session for clients outside the ~Steam-Layer; when dispatching is halted, commands can be enqueued nonetheless, which allows for a reactive UI.
LayerSeparationInterface, provided by the Steam-Layer.
The {{{SessionCommand}}} façade and the corresponding {{{steam::control::SessionCommandService}}} can be considered //the public interface to the session://
They allow sending [[commands|CommandHandling]] to work on the session data structure. All these commands, as well as the [[Builder]], are performed in a dedicated thread, the »session loop thread«, which is operated by the SteamDispatcher. As a direct consequence, all mutations of the session data, as well as all logical consequences determined by the builder, are performed single-threaded, without the need to care about synchronisation issues. Another consequence of this design is that running the builder disables session command processing, causing further commands to be queued up in the SteamDispatcher. Any structural changes resulting from builder runs will finally be pushed back up into the UI, asynchronously.
While the core of the persistent session state corresponds just to the HighLevelModel, there is additionally attached state, annotations and specific bindings, which connect the session model to the local application configuration on each system. A typical example would be the actual output channels, connections and drivers to use on a specific system. In a studio setup, this setup and wiring might be quite complex; it may be specific to just a single project, and the user might want to work on the same project on different systems. This explains why we can't just embody this configuration information right into the actual model.
Querying and retrieving objects within the session model is always bound to a [[scope|PlacementScope]]. When using the //dedicated API,// this scope is immediately defined by the object used to issue the query, like e.g. when searching the contents of a fork ("track" or "media bin"). But when using the //generic API,// this scope is rather implicit, because in this case a (stateful) QueryFocus object is used to invoke the queries. Somewhat in-between, the top-level session API itself exposes dedicated query functions working on the whole-session scope (model root).
Based on the PlacementIndex, the treelike scope structure can be explored efficiently; each Placement attached to the session knows its parent scope. But any additional filtering currently is implemented on top of this basic scope exploration, which obviously may degenerate when searching large scopes and models. Filtering may happen implicitly; all scope queries are parametrised to a specific kind of MObject, while the PlacementIndex deals with all kinds of {{{Placement<MObject>}}} uniformly. Thus, more specifically typed queries automatically have to apply a type filter based on the RTTI of the discovered placements. The plan is later to add specialised sub-indices and corresponding specific query functions to speed up the most frequently used kinds of queries.

!Ways to query
;contents query
:running a ~ContentsQuery<TY> means traversing the given start scope depth-first; the order of the retrieved elements is implementation-defined (hashtable).
;exploration
:this special parametrisation of the ~ContentsQuery is confined to retrieving the immediate children of the start scope, and intended for exploring and processing the model contents in specific ways. The implementation is based on the bucket iterators of the reverse index (aka scope table).
;parent path
:this query proceeds in the other direction, ascending from the given start scope up to the model root
;special contents
:a ~ContentsQuery<TY> may be combined with a predicate supplied by the client code; this predicate is applied after the general type filter, allowing use of the specific interface of the subtype. In a large model this may degenerate considerably.
;picking
:to pick by query means to retrieve the first object returned by the underlying query. As the order of retrieval is implementation defined, the results may be unpredictable. Typically, this is used under the assumption that the given query can only yield a single result. Beware, the {{{pick(predicate)}}}-functions exposed on the QueryFocus and top-level session API moreover assume there will be a result, and throw when this assumption is broken. The type filter in this case is derived automatically (at compile time) from the provided predicate. A condensed sketch of these query flavours is given below.
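The following toy sketch condenses the query flavours listed above &mdash; depth-first traversal of a scope tree, an ~RTTI-based type filter, an optional client predicate, and »pick« as first-match retrieval which throws when the result assumption is broken. This is //not// Lumiera's actual API; all names and signatures are illustrative assumptions.
{{{
// Toy sketch of the scope-query semantics (illustrative only, not the real API)
#include <vector>
#include <memory>
#include <functional>
#include <stdexcept>

struct MObject { virtual ~MObject() = default; };
struct Clip : MObject { int id; explicit Clip(int i) : id(i) {} };
struct Fork : MObject {};                                  // a "track"

struct Scope                                               // one node of the scope tree
{
    std::shared_ptr<MObject> object;
    std::vector<Scope> children;
};

// ContentsQuery<TY>: depth-first traversal, keeping only objects of type TY
// which additionally satisfy the client-provided predicate.
template<class TY>
std::vector<TY*> queryContents (Scope const& start,
                                std::function<bool(TY const&)> pred = [](TY const&){ return true; })
{
    std::vector<TY*> results;
    for (Scope const& child : start.children)
    {
        if (auto* hit = dynamic_cast<TY*>(child.object.get()))   // RTTI based type filter
            if (pred (*hit))
                results.push_back (hit);
        auto deeper = queryContents<TY> (child, pred);           // descend depth-first
        results.insert (results.end(), deeper.begin(), deeper.end());
    }
    return results;
}

// pick<TY>: return the first match; throws if the assumption "there is a result" is broken
template<class TY>
TY& pick (Scope const& start, std::function<bool(TY const&)> pred)
{
    auto found = queryContents<TY> (start, pred);
    if (found.empty()) throw std::runtime_error ("pick(): no result");
    return *found.front();
}
}}}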
The [[Session]] is interconnected with the GUI, the SessionStorage, [[Builder]] and the CommandHandling. The HighLevelModel is a conceptual view of the session. All these dependencies are isolated from the actual data layout in memory, but the latter is shaped by the intended interactions.

{{red{WIP...}}}Currently as of 3/10, this is an ongoing implementation and planning effort

!Objects, Placements, References
Media objects are attached to the session by [[placements|Placement]]. A Placement within the session gets a distinguishable identity (&rarr; ModelObjectIdentity) and behaves like an instance of the attached object. Client code usually interacts with the compound of placement + ~MObject. In order to decouple this interaction from the actual implementation within the session, client code rather deals with //references.// These are implemented like a smart-ptr, but based on an opaque hash value, which is equivalent to the //object instance identity.//
&rarr; MObjectRef
&rarr; PlacementRef

!Index of placements attached to the session
For implementation, the HighLevelModel can be reduced to a compound of interconnected placements. These placement instances are owned and managed by the session; attaching a placement means copying it into the session, thereby creating a new placement-ID. A lookup mechanism for placements and placement relations (PlacementIndex) thus is the actual core of the session data structure; the actual object instances are maintained by a pooling custom allocator ({{red{planned as of 1/10}}}).

!Lifecycle
MObject lifetime is managed by reference counting; all placements and client side references to an MObject share ownership. The placement instances attached to the session are maintained by the index; thus, as long as a placement exists, the corresponding object automatically stays alive. Similarly, assets, as managed by shared-ptrs, stay alive when directly referenced, even after being dropped from the AssetManager. A bare PlacementRef on the other hand doesn't guarantee anything about the referred placement; when dereferencing this reference token, the index is accessed to re-establish a connection to the object, if possible. The full-fledged MObjectRef is built on top of such a reference token and additionally incorporates a smart-ptr. For the client code this means that holding a ref ensures existence of the object, while the //placement// of this object still can get removed from the session.
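A minimal sketch of the ownership rules just described, with purely illustrative types: the placement shares ownership of its MObject, a PlacementRef is only an ID which must be resolved through the index (and may fail), while an MObjectRef additionally holds a smart-ptr and thus keeps the object alive even after its placement was dropped.
{{{
// Illustrative sketch (not the real implementation) of the ownership rules
#include <memory>
#include <unordered_map>

struct MObject { virtual ~MObject() = default; };

struct Placement                     // simplified: placement shares ownership of its MObject
{
    std::shared_ptr<MObject> object;
};

struct PlacementIndex               // maps placement-IDs to placement instances owned by the session
{
    std::unordered_map<size_t, Placement> table;

    Placement* resolve (size_t id)  // may fail: the placement can have been removed
    {
        auto pos = table.find (id);
        return pos == table.end() ? nullptr : &pos->second;
    }
};

struct PlacementRef                 // reference token: no ownership, needs the index to resolve
{
    size_t id;
};

struct MObjectRef                   // ref token + smart-ptr: keeps the MObject alive,
{                                   // even if the placement is meanwhile dropped from the session
    PlacementRef ref;
    std::shared_ptr<MObject> object;
};
}}}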

!Updates and dependent objects
The session and the models rely on dependent objects being kept updated and consistent. This problem can't be solved in a generic fashion &mdash; at least not without using a far more elaborate scheme (e.g. a transaction manager), which is beyond the scope here. Thus, care has to be taken on the implementation level, especially in conjunction with error handling and threading considerations.
&rarr; see [[details here...|ModelDependencies]]
"Session Interface" has several meanings, depending on the context.
;application global
:the session is a data structure, which can be saved and loaded, and manipulated by [[sending commands|CommandHandling]]
;within ~Steam-Layer
:here »the session« can be seen as a compound of several interfaces and facilities,
:together forming the primary access point to the user visible contents and state of the editing project.
:* the API of the session class
:* the accompanying management interface (SessionManager API)
:* the primary public ~APIs exposed on the objects to be [[queried and retrieved|SessionStructureQuery]] via the session class API
:** [[Timeline]]
:** [[Sequence]]
:** [[Placement]]
:** [[Clip]]
:** [[Fork]]
:** Effect
:** Automation
:* the [[command handling framework|CommandHandling]], including the [[UNDO|UndoManager]] facility

__Note__: the SessionInterface as such is //not an [[external public interface|LayerSeparationInterfaces]].// Clients from outside Steam-Layer can talk to the session by issuing commands through the {{{SessionCommandFacade}}}. Processing of commands is coordinated by the SteamDispatcher, which also is responsible for starting the [[Builder]].


!generic and explicit API
The HighLevelModel exposes two kinds of interfaces (which are interconnected and rely on each other): a generic, but somewhat low-level API, which is good for processing &mdash; e.g. for the builder or de-serialiser &mdash; and a more explicit API providing access to some meaningful entities within the model. Indeed, the latter (explicit top level entities) can be seen as a ''façade interface'' to the generic structures:
* the [[Session]] object itself corresponds to the ModelRootMO
* the one (or multiple) [[Timeline]] objects correspond to the BindingMO instances attached immediately below the model root
* the [[sequences|Sequence]] bound into these timelines (by the ~BindingMOs) correspond to the top level [[Fork]]-~MObjects within each of these sequences.
[<img[Object relations on the session façade|draw/sessionFacade1.png]]

Thus, there is a convenient and meaningful access path through these façade objects, which of course is actually implemented by forwarding to the actual model elements (root, bindings, forks).

Following this access path down from the session means using the ''dedicated'' API on the objects retrieved.
To the contrary, the ''generic'' API is related to a //current location (state),// the QueryFocus.
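As an illustration of the façade forwarding described above, here is a hypothetical sketch (names and structure are assumptions, not the real classes): Timeline and Sequence are thin façades delegating to the underlying model elements (BindingMO and root Fork).
{{{
// Hypothetical sketch of the façade forwarding (illustrative names only)
#include <memory>
#include <vector>

struct ForkMO   { std::vector<int> clipIDs; };          // stand-in for the real fork ("track")
struct BindingMO{ std::shared_ptr<ForkMO> rootFork; };  // binds a sequence below the model root

class Sequence                                          // façade: forwards to the top-level fork
{
    std::shared_ptr<ForkMO> rootFork_;
public:
    explicit Sequence (std::shared_ptr<ForkMO> fork) : rootFork_(std::move(fork)) {}
    ForkMO& rootTrack() { return *rootFork_; }          // dedicated API: strongly typed access
};

class Timeline                                          // façade: corresponds to a BindingMO
{
    std::shared_ptr<BindingMO> binding_;
public:
    explicit Timeline (std::shared_ptr<BindingMO> b) : binding_(std::move(b)) {}
    Sequence sequence() { return Sequence(binding_->rootFork); }
};
}}}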



!purpose of these ~APIs
* to discover
** by ID
** by type (filter)
** all contained
* to add
* to destroy

!!relation of [[Assets|Asset]] and MObjects
{{red{WARNING -- still under construction 10/2018}}}. Basically, Assets and ~MObjects represent two sides of the same coin. //Assets relate to the bookkeeping view.// However, we build a data model, and thus use representations for the involved entities. This creates some redundancy at times; we made an effort to reduce this redundancy and minimise the necessary data model representation. This means, some things are rather handled and represented as Assets, while other stuff is primarily dealt with as ~MObject.
&rarr; see //discussion of some fine points// regarding [[relation of Assets and MObjects|AssetModelConnection]]

!!exploring session contents
Typically, the list of timelines serves as starting point for exploring the model. Basically, any kind of object could be attached everywhere, but both the GUI and the Builder rely on assumptions regarding the [[overall model structure|HighLevelModel]] &mdash; silently ignoring content not in line. This corresponds to the //dedicated API functions// on specific kinds of objects, which allow retrieving content according to this assumed structure conveniently and with strong typing. From the timeline and the associated sequence you'll get the root track, and from there the sub-tracks and the clips located on them, which in turn may have attachments (effects, transitions, labels).
On the other hand, arbitrary structures can be retrieved using the //generic API:// Contents can be discovered on the QueryFocus, which automatically follows the //point of mutation,// but can be moved to any point using the {{{QueryFocus::attach(obj)}}} function.

!!queries and defaults
Queries can retrieve the immediate children, or the complete contents of a scope (depth-first). The results can be filtered by type. Additionally, the results of such a scoped query can be filtered by a client-provided predicate, which allows picking objects with specific properties. The intention is to extend this later to arbitrary logical [[queries|Query]], using some kind of resolution engine. Besides these queries, [[default configured objects|DefaultsManagement]] can be retrieved or defined through the defaults manager, which is accessible as a self-contained component on the public session API. Defaults can be used to establish a standard way of doing things on a per-project basis.
&rarr; SessionContentsQuery

{{red{WIP ... just emerging}}}
!!discovery and mutations
The discovery functions available on these ~APIs are wired to always return suitably typed MObjectRef instances. These are small value objects and can be used to invoke operations (both query and mutating) on the underlying object within the session. Raw placement (language) references aren't exposed on these outward interfaces.

While this protects against accessing dangling references, it can't prevent clients from invoking any mutating operation directly on these references. It would be conceivable, by using proxies, to create and invoke commands automatically. But we rather don't want to go this route, because
* Lumiera is an application focussed on very specific usage, not a general purpose library or framework
* regarding CommandHandling, the //design decision was to require a dedicated (and hand written) undo functor.//

!!!!protection against accidental mutation
{{red{WIP}}}As of 2/10, I am considering adding a protection against accidentally invoking a raw mutation operation, especially one bypassing the command frontend and the SteamDispatcher. This would not only be annoying (no UNDO), but potentially dangerous, because all of the session internals are not threadsafe by design.
The considered solution would be to treat this situation as if an authorisation is required; this authorisation for mutation could be checked by a &raquo;wormhole&laquo;-like context access (&rarr; aspect oriented programming). Of course, in our case we're not dealing with real access restrictions, just a safeguard: while command execution creates such an authorisation token automatically, a client actually wanting to invoke a mutation operation bypassing the command frontend would need to set up such a token explicitly and manually.

!!adding and destroying
Objects can be added and destroyed directly on the top level session API. The actual memory management of the object instances works automatically, based on reference counts. (Even after purging an object from the session, it may still be indirectly in use by an ongoing render process).
When adding an object, a [[scope|PlacementScope]] needs to be specified. Thus it makes sense to provide {{{add()}}}-operations on the dedicated API of individual session parts, while the generic {{{attach()}}}-call on the top-level session API relies on the current QueryFocus to determine the location where to add an object. Besides, as we're always adding the [[Placement]] of an object into the session, this placement may specify an additional constraint or tie to a specific scope; resolving the placement thus may cause the object to move to another location.


!!!{{red{Questions}}}
* what exactly is the relation of discovery and [[mutations|SessionMutation]]?
* what's the point of locating them on the same conceptual level?
* how to observe the requirement of ''dispatching'' mutations ([[Command]])?
* how to integrate the two possible search depths (children and all)?
* how is all of this related to the LayerSeparationInterfaces, here SessionFacade and EditFacade?

<<<
__preliminary notes__: {{red{3/2010}}} Discovery functions accessible from the session API are always written to return ~MObjectRefs. These expose generic functions for modifying the structure: {{{attach(MObjectRef)}}} and {{{purge()}}}. The session API exposes variations of these functions. Actually, all these functions do dispatch the respective commands automatically. {{red{Note 1/2015 not implemented, not sure if that's a good idea}}} To the contrary, the raw functions for adding and removing placements are located on the PlacementIndex; they are accessible as SessionServices &mdash; which are intended for ~Steam-Layer's internal use solely. This separation isn't meant to be airtight, just a reminder for proper use.

Currently, I'm planning to modify MObjectRef to return only a const ref to the underlying facilities by default. Then, there would be a subclass which is //mutation enabled.// But this subclass will check for the presence of a mutation-permission token &mdash; which is exposed via thread local storage, but //only within a command dispatch.// Again, no attempt is made to make this barrier airtight. Indeed, for tests, the mutation-permission token can just be created in the local scope. After all, this is not conceived as an authorisation scheme, rather as an automatic sanity check. It's the liability of the client code to ensure any mutation is dispatched.
<<<
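To make the idea sketched in these notes more tangible, here is a hypothetical sketch of such a mutation-permission token: a thread-local flag set by a scoped (RAII) grant, which a command dispatch (or a test) establishes, and which mutation-enabled references check before performing a raw mutation. This illustrates the concept only; it is not actual Lumiera code, and the sketch is deliberately simplistic (e.g. not reentrant).
{{{
// Sketch of a thread-local mutation-permission token (illustrative names)
#include <stdexcept>

class MutationPermission
{
    static thread_local bool granted_;
public:
    // RAII grant: created by the command dispatch (or explicitly in a test scope)
    struct Grant
    {
        Grant()  { granted_ = true;  }
        ~Grant() { granted_ = false; }          // note: non-reentrant sketch
    };

    static void require()                       // sanity check, not an airtight barrier
    {
        if (!granted_)
            throw std::logic_error ("mutation outside of command dispatch");
    }
};
thread_local bool MutationPermission::granted_ = false;

// a mutation-enabled object reference would invoke the check before any mutating call
struct MutableObjectRef
{
    void rename (const char* /*newName*/)
    {
        MutationPermission::require();
        // ... perform the actual mutation on the session object
    }
};
}}}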
The current [[Session]] is the root of any state found within Steam-Layer. Thus, events defining the session's lifecycle influence and synchronise the cooperative behaviour of the entities within the model, the SteamDispatcher, [[Fixture]] and any facility below.
* when ''starting'', on first access an empty session is created, which puts any related facility into a defined initial state.
* when ''closing'' the session, any dependent facilities are disabled, disconnected, halted or closed
* ''loading'' an existing session &mdash; after closing the previous session &mdash; sets up an empty (default) session and populates it with de-serialised content.
* when encountering a ''mutation point'', [[command processing|SteamDispatcher]] is temporarily halted to trigger off a BuildProcess.

!Role of the session manager
The SessionManager is responsible for conducting the session lifecycle. Accessible through the static interface {{{Session::current}}}, it exposes the actual session as a ~PImpl. Both session manager and session are indeed interfaces, backed by implementation classes belonging to ~Steam-Layer's internals. Loading, saving, resetting and closing are the primary public operations of the session manager, each causing the respective lifecycle event.

Beyond that, client code usually doesn't interact much with the lifecycle, which mostly is a pattern of events to happen in a well-defined sequence. So the //implementation// of the session management operations has to comply with this lifecycle, and does so by relying on a self-contained implementation service, the LifecycleAdvisor. But (contrary to an application framework) the lifecycle of the Lumiera session is rather fixed, the only possibility for configuration or extension being the [[lifecycle hooks|LifecycleEvent]], where other parts of the system (and even plug-ins) may install some callback methods.

!Synchronising access to session's implementation facilities
Some other parts and subsystems within the ~Steam-Layer need specialised access to implementation facilities within the session. Information about some conditions and configurations might be retrieved through [[querying the session|Query]], and especially default configurations for many objects are [[bound to the session|DefaultsImplementation]]. The [[discovery of session contents|SessionStructureQuery]] relies on an [[index facility|PlacementIndex]] embedded within the session implementation. Moreover, some "properties" of the [[media objects|MObject]] are actually due to the respective object being [[placed|Placement]] in some way into the session; consequently, there might be a dependency on the actual [[location as visible to the placement|PlacementScope]], which in turn is constituted by [[querying the index|QueryFocus]].

Each of these facilities relies on a separate access point to session services, corresponding to distinct service interfaces. But &mdash; on the implementation side &mdash; all these services are provided by a (compound) SessionServices implementation object. This approach allows switching the actual implementation of all these services simply by swapping the ~PImpl maintained by the session manager. A new implementation level service can thus be added to the ~SessionImpl just by hooking it into the ~SessionServices compound object. But note, this mechanism as such is ''not thread safe'', unless the //implementation// of the invoked functions is synchronised in some way to prevent switching to a new session implementation while another thread is still executing session implementation code.

Currently, the understanding is for some global mechanism to hold any command execution, script running and further object access by the GUI //prior//&nbsp; to invoking any of the session management operations (loading, closing, resetting). An alternative would be to change the top-level access to the session ~PImpl to go through an accessor value object, which acquires some lock automatically before any access can happen. C++ ensures the lifespan of any temporaries to surpass evaluation of the enclosing expression, which would be sufficient to prevent another thread from pulling away the session during that timespan. Of course, any value returned from such a session API call isn't covered by this protection. Usually, objects are handed out as MObjectRef, which in turn means to resolve them (automatically) on dereferencing by another session API access. But while it seems we could get locking to work automatically this way, such a technique still seems risky and involved; a plain flat lock at top level seems to be more appropriate.
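For illustration, the considered (but ultimately disfavoured) accessor technique could look roughly like the following sketch: the access point returns a temporary value object which acquires a lock on construction, and since the temporary lives until the end of the enclosing full expression, the lock is held for the duration of the drilled-down call. All names here are assumptions.
{{{
// Sketch of a lock-holding accessor temporary (illustrative only)
#include <mutex>
#include <memory>

struct SessionImpl { void doSomething() {} };

class SessionAccess                       // temporary accessor: locks on construction
{
    std::unique_lock<std::mutex> guard_;
    std::shared_ptr<SessionImpl> pImpl_;
public:
    SessionAccess (std::mutex& m, std::shared_ptr<SessionImpl> impl)
      : guard_(m), pImpl_(std::move(impl)) {}

    SessionImpl* operator->() { return pImpl_.get(); }
};

struct SessionHandle                      // stand-in for the static top-level access point
{
    std::mutex mtx;
    std::shared_ptr<SessionImpl> impl = std::make_shared<SessionImpl>();

    SessionAccess operator->() { return SessionAccess(mtx, impl); }
};

// usage: the temporary SessionAccess lives until the end of the full expression,
// so the lock is held while doSomething() executes:
//   sessionHandle->doSomething();
}}}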

!Interface and lifecycle hooks
As detailed above, {{{Session::current}}} exposes the management / lifecycle API, and at the same time can be dereferenced into the primary [[session API|SessionInterface]]. A default configured ~SessionImpl instance will be built automatically, in case no session implementation instance exists when this dereferencing happens. At various stages within the lifecycle, specific LifecycleEvent hooks are activated, which serve as an extension point for other parts of the system to install callback functions to execute additional behaviour at the right moment.
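A hypothetical sketch of such a hook mechanism (illustrative only, not Lumiera's actual lifecycle hook implementation): callbacks are registered against named lifecycle events and fired by the lifecycle advisor at the corresponding step.
{{{
// Illustrative registry for named lifecycle events (assumed names)
#include <functional>
#include <map>
#include <string>
#include <vector>

class LifecycleRegistry
{
    std::map<std::string, std::vector<std::function<void()>>> hooks_;
public:
    void enrol (std::string const& event, std::function<void()> callback)
    {
        hooks_[event].push_back (std::move(callback));
    }
    void fire (std::string const& event)            // invoked by the lifecycle advisor
    {
        for (auto& fun : hooks_[event]) fun();
    }
};

// usage: a subsystem installs a callback to run when the session becomes ready
//   registry.enrol ("ON_SESSION_READY", []{ /* wire up UI observers, etc. */ });
//   ...
//   registry.fire ("ON_SESSION_READY");            // emitted at the end of start-up
}}}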

!!!building (or loading) a session
# as a preparation step, a new implementation instance is created, alongside with any supporting facilities (especially the PlacementIndex)
# the basic default configuration is loaded into this new session instance
# when the new session is (technically) complete and usable, the switch on the ~PImpl happens
# the {{{ON_SESSION_START}}} LifecycleEvent is emitted
# content is loaded into the session, including hard wired content and any de-serialised data from persistent storage
# the {{{ON_SESSION_INIT}}} event is emitted
# additional initialisation, wiring and registration takes place; basically anything to make the session fully functional
# the session LayerSeparationInterface is opened and any further top-level blocking is released
# the {{{ON_SESSION_READY}}} event is emitted

!!!closing the session
# top-level facilities accessing the session (GUI, command processing, scripts) are blocked and the LayerSeparationInterface is closed
# any render processes are ensured to be terminated (or //disconnected// &mdash; so they can't do any harm)
# the {{{ON_SESSION_END}}} event is emitted
# the command processing log is tagged
# the command queue(s) are emptied, discarding any commands not yet executed
# the PlacementIndex is cleared, effectively releasing any object "instances"
# the [[asset registry|AssetManager]] is cleared, thereby releasing any remaining external resource references
# destruction of session implementation instances
{{red{none of the above is implemented as of 11/09}}}
The Session contains all information, state and objects to be edited by the User (&rarr;[[def|Session]]).
As such, the SessionInterface is the main entrance point to Steam-Layer functionality, both for the primary EditingOperations and for playback/rendering processes. ~Steam-Layer state is rooted within the session and guided by the [[session's lifecycle events|SessionLifecycle]].
Implementation facilities within the ~Steam-Layer may access a somewhat richer [[session service API|SessionServices]].

Currently (as of 3/10), Ichthyo is working on getting a preliminary implementation of the [[Session in Memory|SessionDataMem]] settled.

!Session, Model and Engine
The session is a [[Subsystem]] and acts as a frontend to most of the Steam-Layer. But it doesn't contain much operational logic; its primary contents are the [[model|Model]], which is closely [[interconnected to the assets|AssetModelConnection]].

!Design and handling of Objects within the Session
Objects are attached and manipulated by [[placements|Placement]]; thus the organisation of these placements is part of the session data layout. Effectively, such a placement within the session behaves like an //instance// of a given object, and at the same time it defines the "non-substantial" properties of the object, e.g. its position and relations. [[References|MObjectRef]] to these placement entries are handed out as parameters, both down to the [[Builder]] and from there to the render processes within the engine, but also to external parts within the GUI and in plugins. The actual implementation of these object references is built on top of the PlacementRef tags, thus relying on the PlacementIndex the session maintains to keep track of all placements and their relations. While &mdash; using these references &mdash; an external client can access the objects and structures within the session, any actual ''mutations'' should be done based on the CommandHandling: a single operation or a sequence of operations is defined as [[Command]], to be [[dispatched|SteamDispatcher]] as [[mutation operation|SessionMutation]]. Following this policy ensures integration with the&nbsp;SessionStorage and provides (unlimited) [[UNDO|UndoManager]].

On the implementation level, there are some interdependencies to consider between the [[data layout|SessionDataMem]], keeping ModelDependencies updated and integrating with the BuildProcess. While the internals of the session are deliberately kept single-threaded, we can't make many assumptions regarding the ongoing render processes.
The session manager is responsible for maintaining session state as a whole and for conducting the session [[lifecycle|SessionLifecycle]]. The session manager API allows for saving, loading, closing and resetting the session. Accessible through the static interface {{{Session::current}}}, it exposes the actual session as a ~PImpl. Actually, both session and session manager are interfaces.
//Any modification of the session will pass through the [[command system|CommandHandling]].//
Thus any possible mutation comes in two flavours: a raw operation invoked directly on an object instance attached to the model, and a command taking an MObjectRef as parameter. The latter approach &mdash; invoking any mutation through a command &mdash; will pass the mutations through the SteamDispatcher to ensure they're logged for [[UNDO|UndoManager]] and executed sequentially, which is important, because the session's internals are //not threadsafe by design.// Thus we're kind of enforcing the use of Commands: mutating operations include a check for a &raquo;permission to mutate&laquo;, which is automatically available within a command execution {{red{TODO as of 2/10}}}. Moreover, the session API and the corresponding LayerSeparationInterfaces expose MObjectRef instances, not raw (language) refs.
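The following sketch illustrates the »two flavours« in the simplest possible terms (illustrative names, not the real command framework): the raw mutation lives on the model object, while clients wrap it &mdash; together with a hand-written undo functor &mdash; into a command and hand that to a dispatcher queue, which is drained single-threaded in the session loop thread.
{{{
// Sketch: raw mutation wrapped as a command with hand-written undo (illustrative)
#include <functional>
#include <deque>
#include <string>

struct Clip { std::string name; };

struct Command
{
    std::function<void()> operation;    // the actual (raw) mutation
    std::function<void()> undo;         // hand-written undo functor (design decision)
};

class Dispatcher                        // stand-in for the SteamDispatcher queue
{
    std::deque<Command> queue_;
public:
    void enqueue (Command cmd) { queue_.push_back (std::move(cmd)); }
    void runAll()                       // executed single-threaded in the session loop thread
    {
        while (!queue_.empty())
        {
            queue_.front().operation();
            queue_.pop_front();
        }
    }
};

// usage: instead of mutating clip.name directly, the client enqueues a command
//   Clip clip{"untitled"};
//   std::string previous = clip.name;
//   dispatcher.enqueue (Command{ [&]{ clip.name = "intro"; }
//                              , [&, previous]{ clip.name = previous; } });
}}}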

!!Questions to solve
* how to get from the raw mutation to the command?
* how to organise the naming and parametrisation of commands?
* should we provide the additional flexibility of a binding here?
* how to keep [[dependencies within the model|ModelDependencies]] up-to date?

!!who defines commands?
The answer is simple: the one who needs to know about their existence. Because basically commands are invoked //by-name// -- someone needs to be aware of that name and its meaning. Thus, for any given mutation, there is a place where it's meaningful to specify the details //and//&nbsp; to subsume them under a meaningful term. An entity responsible for this place could then act as the provider of the command in question.

Interestingly, there seems to be an alternative answer to this question. We could locate the setup and definition of all commands into a central administrative facility. Everyone in need of a command then ought to know the name and retrieve this command. Sounds like bureaucracy.
<<<
{{red{WARNING: Naming was discussed (11/08) and decided to be changed....}}}
* the term [[EDL]] was phased out in favour of ''Sequence''
* [[Session]] is largely synonymous to ''Project''
* there seems to be a new entity called [[Timeline]] which holds the global Pipes
<<<
The [[Session]] (sometimes also called //Project// ) contains all information and objects to be edited by the User. Any state within the Steam-Layer is directly or indirectly rooted in the session. It can be saved and loaded. The individual Objects within the Session, i.e. Clips, Media, Effects, are contained in one or multiple collections within the Session, which we call [[sequence(s)|Sequence]]. Moreover, the session contains references to all the Media files used, and it contains various default or user defined configuration, all being represented as [[Asset]]. At any given time, there is //only one current session// opened within the application. The [[lifecycle events|SessionLifecycle]] of the session define the lifecycle of ~Steam-Layer as a whole.

The Session is close to what is visible in the GUI. From a user's perspective, you'll find a [[Timeline]]-like structure, containing a [[Sequence]], where various Media Objects are arranged and placed. The available building blocks and the rules how they can be combined together form Lumiera's [[high-level data model|HighLevelModel]]. Basically, besides the [[media objects|MObjects]] there are data connections, and all processing is organized around processing chains or [[pipes|Pipe]], which can be either global (in the Session) or local (in real or virtual clips).

!!!larger projects
For larger editing projects the simple structure of a session containing "the" timeline is not sufficient. Rather
* we may have several [[sequences|Sequence]], e.g. one for each scene. These sequences can be even layered or nested (compositional work).
* within one project, there may be multiple, //independent Timelines// &mdash; each of which may have an associated Viewer or Monitor
Usually, when working with this structure, you'll drill down starting from a timeline, through a (top-level) sequence, down into a fork ("track"), a clip, maybe even an embedded Sequence (VirtualClip), and from there even further down into a single attached effect. This constitutes a set of [[nested scopes|PlacementScope]]. Operations are to be [[dispatched|SteamDispatcher]] through a [[command system|CommandHandling]], including the target object [[by reference|MObjectRef]]. [[Timelines|Timeline]] on the other hand are always top-level objects and can't be combined further. You can render a single given timeline to output.
&rarr; see [[Relation of Project, Timelines and Sequences|TimelineSequences]]

!!!the definitive state
With all the structural complexities possible within such a session, we need an isolation layer to provide __one__ definitive state where all configuration has been made explicit. Thus the session manages a special consolidated view (object list), called [[the Fixture|Fixture]], which can be seen as all currently active objects placed onto a single timeline.

!!!organisational devices
The possibility of having multiple Sequences helps organizing larger projects. Each [[Sequence]] is just a logical grouping, because all effective properties of any MObject within this sequence are defined by the ~MObject itself and its [[Placement]], by which the object is anchored to some time point or some fork, can be connected to some pipe, or linked to another object. In a similar manner, [[Forks ("tracks")|Fork]] are just another organisational aid for grouping objects, disabling them and defining common output pipes.

!!!global pipes
[>img[draw/Proc.builder1.png]] Any session should contain a number of global [[(destination) pipes|Pipe]], typically video out and audio out. The goal is to get any content producing or transforming object in some way connected to one of these outputs, either //by [[placing|Placement]] it directly// to some pipe, or by //placing it to a track// and having the track refer to some pipe. Besides the global destination pipes, we can use internal pipes to form busses or subgroups, either on a global (session) level, or by using the processing pipe within a [[virtual clip|VirtualClip]], which can be placed freely within the sequence(s). Normally, pipes just gather and mix data, but of course any pipe can have an attached effect chain.
&rarr; [[more on Tracks and Pipes within the Sequence|TrackPipeSequence]]

!!!default configuration
While all these possibilities may seem daunting, there is a simple default configuration loaded into any pristine new session:
It will contain a global video and audio out pipe, just one timeline holding a single sequence with a single track; this track will be configured with a fading device, to send any video and audio data encountered on enclosed objects to the global (master) pipes. So, by adding a clip with a simple absolute placement to this track and to some time position, the clip gets connected and rendered, after [[(re)building|PlanningBuildFixture]] the [[Fixture]] and passing the result to the [[Builder]] &mdash; and using the resulting render nodes network (Render Engine).

&rarr; [[anatomy of the high-level model|HighLevelModel]]
&rarr; considerations regarding [[Tracks and Pipes within the session|TrackPipeSequence]]
&rarr; see [[Relation of Project, Timelines and Sequences|TimelineSequences]]
Within Lumiera's Steam-Layer, there are some implementation facilities and subsystems needing more specialised access to implementation services provided by the session. Thus, besides the public SessionInterface and the [[lifecycle and state management API|SessionManager]], there are some additional service interfaces exposed by the session through a special access mechanism. This mechanism needs to be special in order to assure clean transactional behaviour when the session is opened, closed, cleared or loaded. Of course, there is the additional requirement to avoid direct dependencies of the mentioned ~Steam-Layer internals on session implementation details.

!Accessing session services
For each of these services, there is an access interface, usually through a class with only static methods. Basically this means access //by name.//
On the //implementation side//&nbsp; of this access interface class (i.e. within a {{{*.cpp}}} file separate from the client code), there is a (down-casting) access through the top-level session-~PImpl pointer, allowing to invoke functions on the ~SessionServices instance. Actually, this ~SessionServices instance is configured (statically) to stack up implementations for all the exposed service interfaces on top of the basic ~SessionImpl class. Thus, each of the individual service implementations is able to use the basic ~SessionImpl (because it inherits from it), and the implementation of the access functions (to the session service we're discussing here) is able to use this forwarding mechanism to get at the actual implementation basically by one-liners. The upside of this (admittedly convoluted) technique is that at runtime there is only a single indirection, which moreover is through the top-level session-~PImpl. The downside is that, due to the separation into {{{*.h}}} and {{{*.cpp}}} files, we can't use any specifically typed generic operations, which forces us to use type erasure in case we need such (an example being the content discovery queries utilised by all high-level model objects).
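Condensed into a compilable sketch, the described access technique looks roughly as follows (all names illustrative; the real code additionally performs a down-cast from the session ~PImpl to the ~SessionServices type):
{{{
// Sketch of the static access-by-name forwarding through the session PImpl (illustrative)
#include <memory>

struct SessionImpl                        // basic session implementation
{
    int contentCount() const { return 42; }
};

struct ContentQueryService : SessionImpl  // one service impl, layered on the SessionImpl
{
    int countContents() const { return contentCount(); }
};

using SessionServices = ContentQueryService;   // in reality: a compound stacking several services

// --- header side: access "by name" through static functions only -------------
struct SessionServiceContentQuery
{
    static int countContents();
};

// --- implementation side (normally in a separate *.cpp) ----------------------
namespace {
    std::shared_ptr<SessionServices> theSession = std::make_shared<SessionServices>();
}
int SessionServiceContentQuery::countContents()
{
    return theSession->countContents();   // single indirection through the session PImpl
}
}}}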
The frontside interface of the session allows querying for contained objects; it is used to discover the structure and contents of the currently opened session/project. Access point is the public API of the Session class, which, besides exposing those queries, also provides functionality for adding and removing session contents.

!discovering structure
The session can be seen as an agglomeration of nested and typed containers.
Thus, at any point, we can explore the structure by asking for //contained objects of a specific type.// For example, at top level, it may be of interest to enumerate the [[timelines within this session|Timeline]] and to enumerate the [[sequences|Sequence]]. And in turn, on a given Sequence, it would be of interest to explore the forks or tracks, and also maybe to iterate over all clips within this sequence.
So, clearly, there are two flavours of such a contents exploration query: it could either be issued as a dedicated member function on the public API of the respective container object, e.g. {{{Track::getClips()}}} &mdash; or it could be exposed as a generic query function, relying rather on implicit knowledge of the //current location.//

!problem of context and access path
The (planned) session structure of Lumiera allows for quite some flexibility, which, of course, comes at a price. Especially, there can be multiple independent top level timelines, a given sequence can be used simultaneously within multiple timelines and even as virtual media within a [[meta-clip|VirtualClip]], and moreover, properties of any Placement are queried and discovered within the PlacementScope of this object &mdash; consequently the discovered values may depend on //how you look at this object.// More specifically, it depends on the ''access path'' used to discover this object, because this path constitutes the actual scope visible to this object.

To give an example, let's assume a clip within a sequence, where this sequence is both linked into the top-level timeline and also used within a meta-clip. (see the drawing &rarr; [[here|PlacementScope]])
In this case, the sequence has a 1:n [[binding|BindingMO]]. A binding is (by definition) also a PlacementScope, and, incidentally, in the case of binding the sequence into a timeline, the binding also translates //logical// output designations into global pipes found within the timeline, while otherwise they get mapped onto "channels" of the virtual media used by the virtual clip. Thus, the absolute time position as well as the output connection of a given clip within this sequence //depends on how we look at this clip.// Does this clip appear as part of the global timeline, or did we discover it as contained within the meta-clip? &rarr; see [[discussion of this scope-binding problem|BindingScopeProblem]]

!!solution requirements
The baseline of any solution to this problem is clear: at the point where the query is issued, context information is necessary; this context yields an access path from top level down to the object to be queried, and this access path constitutes the effective scope this object can utilise for resolving the query.

!!introducing a QueryFocus
A secondary goal of the design here is to ease the use of the session query interface. Thus the proposal is to treat this context and access path as part of the current state. To do so, we can introduce a QueryFocus following the queries and remembering the access path; this focus should be maintained mostly automatically. It allows for stack-like organisation, so that sub-queries can be issued without affecting the current focus; such a temporary sub-focus is handled automatically by a scoped local (RAII) object. The focus follows the issued queries and re-binds as necessary.

!!using QueryFocus as generic query interface
Solving the problem this way has the nice side effect that we get a quite natural location where to put an unspecific query interface: just let the current QueryFocus expose the basic set of query API functions. Such a generic query interface can be seen as a complement to the query functions exposed on specific objects ("get me the clips within this track") according to the structure. Because a generic interface especially allows for writing simple diagnostics and discovery code, with only a weak link to the actual session structure.

!Implementation strategy
The solution is being built starting from the generic part, as the actual session structure isn't hard coded into the session implementation, but rather created by convention.
The query API on specific objects (i.e. Session, Timeline, Sequence, Track and Clip) is then easily coded up on top, mostly with inlined one-liners of the kind
{{{
ITER getClips() { return QueryFocus::push(this).query<Clip>(); } 
}}}
To make this work, QueryFocus exposes a static API (and especially the focus stack is a singleton). The //current// QueryFocus object can easily be re-bound to another point of interest (and adjusts the contained path info automatically), and the {{{push()}}}-function creates a local scoped object (which pops automatically).
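For illustration, the stack-like focus handling could be reduced to the following sketch (not the real QueryFocus): {{{push()}}} yields a scoped (RAII) object, so a sub-query can use a temporary focus without disturbing the current one, and the previous focus is restored automatically.
{{{
// Sketch of the stack-like focus with RAII push/pop (illustrative only)
#include <stack>

class FocusStack
{
    static std::stack<const void*>& stack() { static std::stack<const void*> s; return s; }
public:
    class Scoped                              // RAII: pops automatically when leaving scope
    {
    public:
        explicit Scoped (const void* scope) { stack().push (scope); }
        ~Scoped()                            { stack().pop(); }
    };
    static const void* current() { return stack().empty() ? nullptr : stack().top(); }
};

// usage: a dedicated API function issues a sub-query against its own scope
//   FocusStack::Scoped tempFocus(this);     // re-binds the focus for this query only
//   ... run the query against FocusStack::current() ...
//                                           // previous focus restored here
}}}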

And last but not least: the difficult part of this whole concept is encapsulated and ''can be left out for now''. Because, according to Lumiera's [[roadmap|http://issues.lumiera.org/roadmap]], meta-clips are to be postponed until well into beta! Thus, we can start with a trivial (no-op) implementation, but immediately gain the benefit of making the relevant parts of the session implementation aware of the problem.

{{red{WIP ... draft}}}
//A subsystem within Steam-Layer, responsible for lifecycle and access to the editing [[Session]].//
[img[Structure of the Session Subsystem|uml/Session-subsystem.png]]

!Structure
The SteamDispatcher is at the heart of the //Session Subsystem.// Because the official interface for working on the session, the [[SessionCommand façade|SessionCommandFacade]], is expressed in terms of sending command messages to invoke predefined [[commands|CommandHandling]] to operate on the SessionInterface, the actual implementation of such a {{{SessionCommandService}}} needs a component actually to enqueue and dispatch those commands -- which is the {{{DispatcherLoop}}} within the SteamDispatcher. As usual, the ''lifecycle'' is controlled by a subsystem descriptor, which starts and stops the whole subsystem; and this starting and stopping in turn translates into starting/stopping of the dispatcher loop. On the other hand, //activation of the dispatcher,// which means actively to dispatch commands, is controlled by the lifecycle of the session proper. The latter is just a data structure, and can be loaded / saved and rebuilt through the ''session manager''.

!Lifecycle
As far as lifecycle is concerned, the »session subsystem« has to be distinguished from the //session proper,// which is just a data structure with its own, separate lifecycle considerations. Accessing the session data only makes sense when this data structure is fully loaded, while the //session subsystem// deals with performing commands on the session and with triggering the builder runs.

!!!start-up
The session subsystem lifecycle translates into method invocations on the {{{SteamDispatcher}}}, which in turn manages the parts actually implementing the session command processing and builder operations. This relation is expressed by holding onto the implementation as a //~PImpl.// As long as the {{{DispatcherLoop}}} object exists, the session subsystem can be considered in //running state.// This is equivalent to the following
* the ''session loop thread'' is spawned. This thread performs all of the session and builder operations (single-threaded).
* the {{{SessionCommandService}}} is started and connected as implementation of the {{{SessionCommand}}} façade.

!!!shutdown
Shutdown is initiated by sending a message to the dispatcher loop. This causes the internal loop control to wake up and leave the loop, possibly after finishing a command or builder run currently in progress. When leaving the loop, the {{{sigTerm}}} of the SessionSubsystem is invoked, which then in turn causes the {{{DispatcherLoop}}} object to be deleted and the SteamDispatcher thus returned into halted state. 
<<search>><<closeAll>><<permaview>><<newTiddler>><<saveChanges>><<slider chkSliderOptionsPanel OptionsPanel "options »" "Change TiddlyWiki advanced options">>
Building a Render Nodes Network from Media Objects in the Session
Engine
{{red{killme}}}
A small (in terms of storage) and specifically configured StateProxy object which is created on the stack {{red{Really on the stack? 9/11}}} for each individual {{{pull()}}} call. It is part of the invocation state of such a call and participates in the buffer management. Thus, in a calldown sequence of {{{pull()}}} calls we get a corresponding sequence of "parent" states. At each level, the &rarr; WiringDescriptor of the respective node defines a Strategy how the call is passed on.

{{red{WIP 4/2024: the interface will be called {{{StateClosure}}} -- not sure to what degree there will be different implementations ...
right now about to work towards a first integration of render node invocation}}}
An Object representing a //Render Process// and containing associated state information.
* it is created in the Player subsystem while initiating the RenderProcess
* it is passed on to the generated Render Engine, which in turn passes it down to the individual Processors
* moreover, it contains methods to communicate with other state relevant parts of the system, thereby shielding the rendering code from any complexities of state synchronisation and management if necessary. (thus the name Proxy)
* in a future version, it may also encapsulate the communication in a distributed render farm
The architecture of the Lumiera application separates functionality into three Layers: __Stage__, __Steam__ and __Vault__.

The ~Steam-Layer as the middle layer transforms the structures of the usage domain into structures of the technical implementation domain, which can be processed efficiently with contemporary media processing frameworks. While the Vault-Layer is responsible for data access and management and for carrying out the computation intensive media operations, the ~Steam-Layer contains [[assets|Asset]] and [[Session]], i.e. the user-visible data model, and provides configuration and behaviour for these entities. Besides, it is responsible for [[building and configuring|Builder]] the [[render engine|RenderEngine]] based on the current Session state. Furthermore, the [[Player]] subsystem, which coordinates render and playback operations, can be seen to reside at the lower boundary of ~Steam-Layer.
&rarr; [[Session]]
&rarr; [[Player]]
&rarr; UI-Layer
&rarr; Vault-Layer
//The guard and coordinator of any operation within the session subsystem.//
The session and related components work effectively single threaded. Any tangible operation on the session data structure has to be enqueued as [[command|CommandHandling]] into the dispatcher. Moreover, the [[Builder]] is triggered from the SteamDispatcher; and while the Builder is running, any command processing is halted. The Builder in turn creates or reshapes the processing nodes network, and the changed network is brought into operation with a //transactional switch// -- while render processes on this processing network operate unaffected and essentially multi-threaded.

Enqueueing commands through the SessionCommandFacade into the SteamDispatcher is the official way to cause changes to the session. And the running state of the SteamDispatcher is equivalent with the running state of the //session subsystem as a whole.//

!Requirements
To function properly as action coordinator of the session subsystem, the dispatcher has to fulfil multiple demands
;enqueue
:accept and enqueue command messages concurrently, any time, without blocking the caller
:*FIFO for //regular commands//
:*LIFO for //priority requests//  {{red{unimplemented 1/17}}}
;process
:dequeue and process entries sequentially
;sleep
:work continuously until queue is empty, then enter wait state
;check point
:arrive at a well defined check point reliably non blocking ("ensure to make progress")
:* necessary to know when internal state is consistent
:* when?
:** after each command
:** after builder run
:** after wake-up
;manage
:care for rectifying entries in the queue
:* ensure they //match// current session, discard obsoleted requests
:* //aggregate// similar requests
:* //supersede// by newer commands of a certain kind

!Operational semantics
The SteamDispatcher is a component with //running state.// There is some kind of working loop, which possibly enters a sleep state when idle. In fact, this loop is executed ''exclusively in the session thread''. This is the very essence of treating the session entirely single threaded, thus evading all the complexities of parallelism. Consequently, the session thread will either
* execute a command on the session
* perform the [[Builder]]
* evaluate loop control logic in the SteamDispatcher
* block waiting in the SteamDispatcher

Initially the command queue is empty and the SteamDispatcher can be considered idle. Whenever more commands are available in the queue, the dispatcher will handle them one after another, without delay, until the queue is emptied. Yet the Builder run needs to be kept in mind. Essentially, the builder models a //dirty state:// whenever a command has touched the session, the corresponding LowLevelModel must be considered out of sync, possibly not reflecting the intended semantics of the session anymore. From a strictly logical view angle, we'd need to trigger the builder after each and every session command -- but it was a very fundamental design decision in Lumiera to allow for a longer running build process, more akin to running a compiler. This decision opens all the possibilities of integrating a knowledge based system and resolution activities to find a solution to match the intended session semantics. For this reason, we decouple the UI actions from session and render engine consistency, and we enqueue session commands to throttle down the number of builder runs.

So the logic to trigger builder runs has to take some leeway into account. Due to the typical interactive working style of an editing application, session commands might be trickling in in bursts of similar commands, intermingled with tiny pauses. For this reason, the SteamDispatcher implements some //hysteresis,// as far as triggering the builder runs is concerned. The builder is fired in idle state, but only after passing some //latency period.// On the other hand, massive UI activities (especially during a builder run) may have flooded the queue, thus sending the session into an extended period of command processing. From the user's view angle, the application looks non responsive in such a case, albeit not frozen, since the UI can still enqueue further commands and thus retains the ability to react locally on user interaction. To mitigate this problem, the builder should be started anyway after some extended period of command processing, even if the queue is not yet emptied. Each builder run produces a structural diff message sent towards the UI and thus causes user visible changes within the session's UI representation. This somewhat stuttering response conveys to the user a tangible sensation of ongoing activity, while at the same time communicating, at least subconsciously, some degree of operational overload. {{red{note 12/2016 builder is not implemented, so consider this planning}}}
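The loop logic described above can be condensed into the following simplified sketch (not the actual {{{DispatcherLoop}}}; durations and structure are assumptions): commands are processed while available, and the builder is triggered either after an idle latency period or forcibly after an extended burst of command processing.
{{{
// Simplified sketch of command processing with builder hysteresis (illustrative)
#include <chrono>
#include <deque>
#include <functional>

using Clock   = std::chrono::steady_clock;
using Command = std::function<void()>;

struct LoopState
{
    std::deque<Command> queue;
    bool  dirty = false;                           // a command has touched the session
    Clock::time_point lastCommand = Clock::now();
    Clock::time_point lastBuild   = Clock::now();

    void step (std::function<void()> const& runBuilder)
    {
        auto const LATENCY   = std::chrono::milliseconds(200);  // idle hysteresis
        auto const MAX_BURST = std::chrono::seconds(2);         // force builder during floods

        bool overloaded     = dirty and (Clock::now() - lastBuild > MAX_BURST);
        bool idleLongEnough = dirty and queue.empty()
                              and (Clock::now() - lastCommand > LATENCY);
        if (overloaded or idleLongEnough)
        {                                          // trigger the builder run
            runBuilder();
            dirty = false;
            lastBuild = Clock::now();
        }
        else if (!queue.empty())
        {                                          // process the next command
            queue.front()();
            queue.pop_front();
            dirty = true;
            lastCommand = Clock::now();
        }
        // else: nothing to do -> the real loop would block on a condition variable here
    }
};
}}}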

Any change to the circumstances determining the SteamDispatcher's behaviour needs to be imparted actively through the public interface -- the dispatcher is not designed to be a state listener or observer. Any such state change notifications are synchronised and cause a wakeup notification to the session thread. For this purpose, enqueuing of further commands counts as state change and is lock protected. Beyond that, any other activities, like //processing// of commands or builder runs, are performed within the session thread without blocking other threads; the locking on the SteamDispatcher is only ever short term to ensure consistent internal state. Clients need to be prepared for the effect of actions to appear asynchronously and with some delay. Especially this means that session switch or shutdown has to await completion of any session command or builder run currently in progress.

When the session is closed or dismantled, further processing in the SteamDispatcher will be disabled, after completing the current command or builder run. This disabled state can be reversed when a new session instance becomes operative. And while the dispatcher will then continue to empty the command queue, most commands in queue will probably be obsoleted and dropped, because of referring to a deceased session instance. Moreover, the lifecycle of the session instances has to be distinguished from the lifecycle of the SessionSubsystem as such. When the latter is terminated, be it by a fatal error in some builder run, or be it due to general shutdown of the application, the SteamDispatcher will be asked to terminate the session thread after completing the current activity in progress. Such an event will also discard any further commands waiting in the dispatcher's queue.
Conversion of a media stream into a stream of another type is done by a processor module (plugin). The problem of finding such a module is closely related to the StreamType and especially [[problems of querying|StreamTypeQuery]] for such. (The builder uses a special Facade, the ConManager, to access this functionality). There can be different kinds of conversions, and the existence or non-existence of such a conversion can influence the stream type classification.
* different //kinds of media// can be ''transformed'' into each other
* stream types //subsumed// by a given prototype should be ''lossless convertible'' and thus can be considered //equivalent.//
* besides, between different stream //implementation types,// there can be a ''rendering'' (lossy conversion) &mdash; or no conversion at all.
The stream Prototype is part of the specification of a media stream's type. It is a semantic (or problem domain oriented) concept and should be distinguished from the actual implementation type of the media stream. The latter is provided by a [[library implementation|StreamTypeImplFacade]]. While there are some common predefined prototypes, mostly they are defined within the concrete [[Session]] according to the user's needs.

Prototypes form an open (extensible) collection, though each prototype belongs to a specific media kind ({{{VIDEO, IMAGE, AUDIO, MIDI,...}}}).
The ''distinguishing property'' of a stream prototype is that any [[Pipe]] can process //streams of a specific prototype only.// Thus, two streams with different prototype can be considered "something quite different" from the users point of view, while two streams belonging to the same prototype can be considered equivalent (and will be converted automatically when their implementation types differ). Note this definition is //deliberately fuzzy,// because it depends on the actual situation of the project in question.

Consequently, as we can't get away with a fixed Enum of all stream prototypes, the implementation must rely on a query interface. The intention is to provide a basic set of rules for deciding queries about the most common stream prototypes; besides, a specific session may inject additional rules or utilize a completely different knowledge base. Thus, for a given StreamTypeDescriptor specifying a prototype (an interface sketch follows the list below)
* we can get a [[default|DefaultsManagement]] implementation type
* we can get a default prototype to a given implementation type by a similar query
* we can query if an implementation type in question can be //subsumed// by this prototype
* we can determine if another prototype is //convertible//
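A hypothetical interface sketch for the prototype queries listed above; since the real implementation is intended to be rule based, the exact shape of these calls is an assumption for illustration only.
{{{
// Illustrative interface for prototype queries (assumed names and signatures)
struct ImplType;                       // stream implementation type (library provided)

struct StreamPrototype
{
    virtual ~StreamPrototype() = default;

    virtual ImplType const& defaultImplType()                      const = 0;
    virtual bool            subsumes      (ImplType const& impl)   const = 0;
    virtual bool            convertibleTo (StreamPrototype const&) const = 0;
};

// the reverse direction: derive a default prototype for a given implementation type
StreamPrototype const& defaultPrototypeFor (ImplType const& impl);
}}}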

!!Examples
In practice, several things might be considered "quite different" and thus be distinguished by prototype: NTSC and PAL video, video versus digitized film, HD video versus SD video, 3D versus flat video, cinemascope versus 4:3, stereophonic versus monaural, periphonic versus panoramic sound, Ambisonics versus 5.1, data reduced ~MP3 versus full quality linear PCM...
//how to classify and describe media streams//
Media data is supposed to appear structured as stream(s) over time. While there may be an inherent internal structuring, at a given perspective ''any stream is a unit and homogeneous''. In the context of digital media data processing, streams are always ''quantized'', which means they appear as a temporal sequence of data chunks called ''frames''.

! Terminology
* __Media__ is comprised of a set of streams or channels
* __Stream__ denotes a homogeneous flow of media data of a single kind
* __Channel__ denotes an elementary stream, which can't be further separated in the given context
* all of these are delivered and processed in a smallest unit called __Frame__. Each frame corresponds to a //time interval.//
* a __Buffer__ is a data structure capable of holding a Frame of media data.
* the __~Stream-Type__ describes the kind of media data contained in the stream

! Problem of Stream Type Description
Media types vary largely and exhibit a large number of different properties, which can't be subsumed under a single classification scheme. On the other hand we want to deal with media objects in a uniform and generic manner, because generally all kinds of media behave somewhat similar. But the twist is, these similarities disappear when describing media with logical precision. Thus we are forced into specialized handling and operations for each kind of media, while we want to implement a generic handling concept.

! Lumiera Stream Type handling
!! Identification
A stream type is denoted by a StreamTypeID, which is an identifier acting as a unique key for accessing information related to the stream type. It corresponds to a StreamTypeDescriptor record, containing a &mdash; //not necessarily complete// &mdash; specification of the stream type, according to the classification detailed below.

!! Classification
Within the Steam-Layer, media streams are treated largely in a similar manner. But, looking closer, not everything can be connected together, while on the other hand there may be some classes of media streams which can be considered //equivalent// in most respects. Thus separating the distinction between various media streams into several levels seems reasonable...
* Each media belongs to a fundamental ''kind'' of media, examples being __Video__, __Image__, __Audio__, __MIDI__, __Text__,... <br/>Media streams of different kind can be considered somewhat "completely separate" &mdash; just the handling of each of those media kinds follows a common //generic pattern// augmented with specialisations. Basically, it is //impossible to connect// media streams of different kind. Under some circumstances there may be the possibility of a //transformation// though. For example, a still image can be incorporated into video, sound may be visualized, MIDI may control a sound synthesizer.
* Below the level of distinct kinds of media streams, within every kind we have an open ended collection of ''prototypes'', which, when compared directly, may each be quite distinct and different, but which may be //rendered//&nbsp; into each other. For example, we have stereoscopic (3D) video and we have the common flat video lacking depth information, we have several spatial audio systems (Ambisonics, Wave Field Synthesis), we have panorama simulating sound systems (5.1, 7.1,...), we have common stereophonic and monaural audio. It is considered important to retain some openness and configurability within this level of distinction, which means this classification should better be done by rules rather than by setting up a fixed property table. For example, it may be desirable for some production to distinguish between digitized film and video NTSC and PAL, while in another production everything is just "video" and can be converted automatically. The most noticeable consequence of such a distinction is that any Bus or [[Pipe]] is always limited to a media stream of a single prototype. (&rarr; [[more|StreamPrototype]])
* Besides the distinction by prototypes, there are the various media ''implementation types''. This classification is not necessarily hierarchically related to the prototype classification, while in practice commonly there will be some sort of dependency. For example, both stereophonic and monaural audio may be implemented as 96kHz 24bit PCM with just a different number of channel streams, but we may as well get a dedicated stereo audio stream with two channels multiplexed into a single stream. For dealing with media streams of various implementation type, we need //library// routines, which also yield a //type classification system.// Most notably, for raw sound and video data we use the [[GAVL]] library, which defines a classification system for buffers and streams.
* Besides the type classification detailed thus far, we introduce an ''intention tag''. This is a synthetic classification owned by Lumiera and used for internal wiring decisions. Currently (8/08), we recognize the following intention tags: __Source__, __Raw__, __Intermediary__ and __Target__. Only media streams tagged as __Raw__ can be processed.
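To make these classification levels more tangible, here is a minimal C++ sketch of how such a classification record could be laid out. All names are hypothetical illustrations, not the actual Lumiera data structures; prototype and implementation type are only referenced here, since their real content is defined by rules and by the implementation libraries.
{{{
#include <memory>
#include <string>

struct Prototype  { std::string name; };      // e.g. "stereophonic", "5.1", "PAL"
struct ImplFacade { std::string libTag; };    // e.g. a GAVL format description

enum class MediaKind   { VIDEO, IMAGE, AUDIO, MIDI, TEXT };
enum class UsageIntent { SOURCE, RAW, INTERMEDIARY, TARGET };

// hypothetical layout of a stream type record combining all four levels;
// prototype and implementation type may be left empty (partial specification)
struct StreamTypeRecord
  {
    MediaKind                   kind;
    std::shared_ptr<Prototype>  prototype;
    std::shared_ptr<ImplFacade> implType;
    UsageIntent                 intent;
  };
}}}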

!! Media handling requirements involving stream type classification
* set up a buffer and be able to create/retrieve frames of media data.
* determine if a given media data source and sink can be connected, and how.
* determine and enumerate the internal structure of a stream.
* discover processing facilities
&rarr; see StreamTypeUse
&rarr; [[querying types|StreamTypeQuery]]
A description and classification record usable to find out about the properties of a media stream. The stream type descriptor can be accessed using a unique StreamTypeID. The information contained in this descriptor record may intentionally be //incomplete,// in which case the descriptor captures a whole class of matching media stream types. The following information is maintained:
* fundamental ''kind'' of media: {{{VIDEO, IMAGE, AUDIO, MIDI,...}}}
* stream ''prototype'': this is the abstract high level media type, like NTSC, PAL, Film, 3D, Ambisonics, 5.1, monaural,...
* stream ''implementation type'', accessible by virtue of a StreamTypeImplFacade
* the ''intended usage category'' of this stream: {{{SOURCE, RAW, INTERMEDIARY, TARGET}}}.
&rarr; see &raquo;[[Stream Type|StreamType]]&laquo; detailed specification
&rarr; notes about [[using stream types|StreamTypeUse]]
&rarr; more [[about prototypes|StreamPrototype]]
This ID is a symbolic key linked to a StreamTypeDescriptor. The predicate {{{stream(ID)}}} specifies a media stream with the StreamType as detailed by the corresponding descriptor (which may contain complete or partial data defining the type).
A special kind of media stream [[implementation type|StreamTypeImplFacade]] which is not fully specified. It is supposed that there //actually is// a concrete implementation type, while the constraint only cares about some part or detail of this implementation exhibiting a specific property. For example, using a type constraint we can express the requirement that the actual implementation of a video stream be based on ~RGB-float, or enforce a fixed frame size in pixels.

An implementation constraint can //stand in// for a completely specified implementation type (meaning it's a sub-interface of the latter). But actually using it in this way may cause a call to the [[defaults manager|DefaultsImplementation]] to fill in any missing information. An example would be to call {{{createFrame()}}} on the type constraint object, which means being able to allocate memory to hold a data frame with properties in compliance with the given type constraint. Of course, then we need to know all the properties of this stream type, which is where the defaults manager is queried. This allows session customisation to kick in, but may fail under certain circumstances.
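A minimal sketch of this //stand-in// behaviour, with hypothetical names (the actual Lumiera interfaces differ): the constraint is a sub-interface of the fully specified implementation type and must first resolve itself to a concrete type, which is where the defaults manager would be consulted.
{{{
#include <memory>
#include <stdexcept>

struct Frame { };                                        // placeholder for a frame of media data

struct ImplType                                          // fully specified implementation type
  {
    virtual ~ImplType() = default;
    virtual std::unique_ptr<Frame> createFrame()  const = 0;
  };

struct ImplConstraint : ImplType                         // partially specified: a sub-interface
  {
    // fill in the missing properties; hypothetically backed by the defaults manager
    virtual std::shared_ptr<ImplType> resolve()  const = 0;
    
    std::unique_ptr<Frame>
    createFrame()  const override
      {
        std::shared_ptr<ImplType> concrete = resolve();
        if (!concrete)
          throw std::runtime_error("no default stream type matches this constraint");
        return concrete->createFrame();
      }
  };
}}}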
Common interface for dealing with the implementation of media stream data. From a high-level perspective, the various kinds of media ({{{VIDEO, IMAGE, AUDIO, MIDI,...}}}) exhibit similar behaviour, while on the implementation level not even the common classification can be settled into a completely general and useful scheme. Thus, we need separate library implementations for dealing with the various sorts of media data, all providing at least a set of basic operations (sketched below):
* set up a buffer
* create or accept a frame
* get a tag describing the precise implementation type
* ...?

&rarr; see also &raquo;[[Stream Type|StreamType]]&laquo;
//Note:// there is a sort-of "degraded" variant just requiring some &rarr; [[implementation constraint|StreamTypeImplConstraint]] to hold
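As an illustration (not the actual interface definition), such a library facade could look roughly as follows; the names and signatures are assumptions.
{{{
#include <cstddef>
#include <memory>
#include <string>

struct Buffer { };                                   // placeholder for frame buffer storage
struct Frame  { };                                   // placeholder for one frame of media data

class StreamTypeImplFacade                           // hypothetical shape of the facade
  {
  public:
    virtual ~StreamTypeImplFacade() = default;
    
    virtual std::size_t frameSize()                const = 0;   // bytes needed per frame
    virtual std::unique_ptr<Buffer> setupBuffer()  const = 0;   // allocate suitable buffer storage
    virtual std::unique_ptr<Frame>  createFrame()  const = 0;   // create or accept a frame
    virtual std::string implTypeTag()              const = 0;   // library-specific type tag
  };
}}}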
Querying for media stream type information comes in various flavours
* you may want to find a structural object (pipe, output, processing pattern) associated with / able to deal with a certain stream type
* you may need a StreamTypeDescriptor for an existing stream given as implementation data
* you may want to build or complete type information from partial specification.
Mostly, those queries involve the ConfigRules system in one way or another. The [[prototype-|StreamPrototype]] and [[implementation type|StreamTypeImplFacade]]-interfaces themselves are mostly a facade for issuing appropriate queries. Some objects (especially [[pipes|Pipe]]) are tied to a certain stream type and thus store a direct link to type information. Others are just associated with a type by virtue of the DefaultsManagement.

The //problem// with this pivotal role of the config rules is that &mdash; from a design perspective &mdash; not much can be said specifically, besides //"you may be able to find out...", "...depends on the defaults and the session configuration".// This way, a good deal of crucial behaviour is pushed out of the core implementation (and this is quite intentional). What can be done regarding the design of the core is mostly to set up a framework for the rules and determine the possible ''query situations''.

!the kind of media
The information about the fundamental media kind (video, audio, text, MIDI,...) is associated with the prototype, for technical reasons. Prototype information is mandatory for each StreamType, and the impl facade provides a query function (because some implementation libraries, e.g. [[GAVL]], support multiple kinds of media).

!query for a prototype
__Situation__: given an implementation type, find a prototype to subsume it.
This is required only for building a complete ~StreamType which isn't known at this point.
The general case of this query is //quite hairy,// because the solution is not necessarily clear and unique. And, worse still, it is related to the semantics, requiring semantic information and tagging to be maintained somewhere. For example, while the computer can't "know" what stereophonic audio is (only a human can, by listening to a stereophonic playback and deciding whether it actually conveys a spatial sound image), in most cases we can overcome this problem by using the //heuristic rule// of assuming the prototype "stereophonic" when given two identically typed audio channels. This example also shows the necessity of ordering heuristic rules to be able to pick a best fit.

We can inject two different kinds of fallback solutions for this kind of query:
* we can always build a "catch-all" prototype just based on the kind of media (e.g. {{{prototype(video)}}}). This should match with lowest priority
* we can search for existing ~StreamTypes with the same impl type, or an impl type which is //equivalent or convertible// (see &rarr; StreamConversion). 
The latter case can yield multiple solutions, which isn't a problem, because the match is limited to classes of equivalent stream implementations, which would be subsumed under the same prototype anyway. Even if the registry holds different prototypes linked to the same implementation type, they would be convertible and thus could //stand in// for one another. Together this results in the following implementation (sketched in code below):
# try to get a direct match to an existing impl type which has an associated (complete) ~StreamType, thus bypassing the ConfigRules system altogether
# run a {{{Query<Prototype>}}} for the given implementation type
# do the search within the equivalence class as described above
# fall back to the media kind.
{{red{TODO: how to deal with the problem of hijacking a prototype?}}} &rarr; see [[here|StreamTypeUse]]
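The four steps of this cascade can be summarised in a small sketch; the helper functions are hypothetical placeholders with trivial stub bodies, standing in for the registry lookup, the ConfigRules query and the equivalence-class search.
{{{
#include <optional>
#include <string>

struct ImplType  { std::string libTag; };
struct Prototype { std::string name; };

// stubs standing in for registry lookups and ConfigRules queries (illustration only)
std::optional<Prototype> lookupRegisteredType   (ImplType const&) { return std::nullopt; }
std::optional<Prototype> queryConfigRules       (ImplType const&) { return std::nullopt; }
std::optional<Prototype> searchEquivalenceClass (ImplType const&) { return std::nullopt; }
Prototype                catchAllByMediaKind    (ImplType const&) { return Prototype{"video"}; }

// the fallback cascade, tried in order of decreasing specificity
Prototype
findPrototype (ImplType const& impl)
{
  if (auto p = lookupRegisteredType(impl))   return *p;   // 1: direct match, bypassing the rules
  if (auto p = queryConfigRules(impl))       return *p;   // 2: Query<Prototype> for this impl type
  if (auto p = searchEquivalenceClass(impl)) return *p;   // 3: equivalent / convertible impl types
  return catchAllByMediaKind(impl);                       // 4: fall back to the media kind
}
}}}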

!query for an implementation
__Situation 1__: given a partially specified ~StreamType (just a [[constraint|StreamTypeImplConstraint]])
__Situation 2__: find an implementation for a given prototype (without any further impl type guidelines)
Both cases have to go through the [[defaults manager|DefaultsManagement]] in some way, in order to give any default configuration a chance to kick in. This is //one of the most important use cases// of the defaults system: the ability to configure a default format for all streams with a certain semantic classification; for example, {{{prototype(video)}}} is by default RGBA 24bit non-interlaced.
But after having queried the defaults system, there remains the problem of building a new solution (which will then automatically become the default for this case). To be more precise: invoking the defaults system (as implemented in Lumiera) means first searching through existing objects encountered as defaults, and then issuing a general query with the capabilities in question. This general query in turn is conducted by the query type handler and usually consists of first searching existing objects and then creating a new object to match the capabilities. But, as said, the details depend on the type (and are defined by the query handler installed for this type). Translated to the problem in question here, this means //we have to define the basic operations from which a type query handler can be built.// Thus, to start with, it's completely sufficient to wire a call to the DefaultsManagement and assume the current session configuration contains some rules to cover it. Plus being prepared for the query to fail (throw, that is).

Later on this could be augmented by providing some search mechanisms:
* search through existing stream type implementations (or a suitable pre-filtered selection) and narrow down the possible result(s) by using the constraint as a filter. Obviously this requires support by the MediaImplLib facade for the implementation in question. (This covers __Situation 1__)
* relate the prototype in question to the other existing prototypes and use convertibility / subsumption as a filter mechanism. Finally pick an existing impl type which is linked to one of the prototypes found thus far.
Essentially, we need a search mechanism for impl types and prototypes. This search mechanism is itself best defined by rules, but needs some primitive operations on types, like enumerating all registered types, filtering those selections and matching against a constraint.
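A sketch of the filtering primitive mentioned above (hypothetical names): a constraint is treated as a predicate over registered implementation types, and the search just enumerates and filters.
{{{
#include <functional>
#include <string>
#include <vector>

struct ImplType { std::string libTag; };             // a registered implementation type

// a constraint acts as a predicate over implementation types
using Constraint = std::function<bool(ImplType const&)>;

// primitive search operation: enumerate the registry and keep only the matches
std::vector<ImplType>
filterRegisteredTypes (std::vector<ImplType> const& registry, Constraint const& matches)
{
  std::vector<ImplType> result;
  for (ImplType const& type : registry)
    if (matches(type))
      result.push_back(type);
  return result;
}
}}}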

!query for a (complete) StreamType
All situations discussed thus far can also occur wrapped into and triggered by a query for a complete type. Depending on which part is known, the missing bits will be queried. 
Independent of these is __another Situation__, where we query for a type ''by ID''.
* a simple symbolic ID can be found by searching through all existing stream types (an operation supported by the type registry within the STypeManager)
* a special ''classifying'' ID can be parsed into its components (media kind, prototype, impl type), resulting in sub-searches for these.
{{red{not sure if we want to support queries by symbolic ID}}} ...the problem is the impl type, because then the library probably needs to support describing any implementation type by a string. Seemingly GAVL does, but can we require this of every library?
Questions regarding the use of StreamType within the Steam-Layer.
* what is the relation between Buffer and Frame?
* how to get the required size of a Buffer?
* who does buffer allocations and how?

Mostly, stream types are used for querying, either to decide if they can be connected, or to find usable processing modules.
Even building a stream type from partial information involves some sort of query.
&rarr; more on [[media stream type queries|StreamTypeQuery]]

!creating stream types
Seemingly, stream types are created based on an already existing media stream (or a Frame of media data?). {{red{really?}}}
The other use case seems to be that of an //incomplete// stream type based on a [[Prototype|StreamPrototype]].

!Prototype
According to my current understanding, a prototype is merely a classification entity. But then &mdash; how to bootstrap a Prototype?
And how do we classify an existing implementation type?

Besides, there is the problem of //hijacking a prototype:// when a specific implementation type gets tied to a rather generic prototype, like {{{prototype(video)}}}, how can we comply with the rule that prototypes subsume a class of equivalent implementations?

!Defaults and partial specification
A StreamType need not be defined completely. It is sufficient to specify the media kind and the Prototype. The implementation type may be just given as a constraint, thus defining some properties and leaving out others. When creating a frame buffer based upon such an //incomplete type,// [[defaults|DefaultsManagement]] are queried to fill in the missing parts.
Constraints are objects provided by the Lumiera core, but specialized to the internals of the actual implementation library.
For example there might be a constraint implementation to force a specific {{{gavl_pixelformat_t}}}.
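For illustration, such a GAVL-specific constraint might look roughly like the following sketch. The class itself is hypothetical; {{{gavl_pixelformat_t}}}, {{{gavl_video_format_t}}} and its {{{pixelformat}}} field are assumed to be provided by the GAVL headers.
{{{
#include <gavl/gavl.h>

// hypothetical constraint object, specialised to the GAVL implementation library:
// it forces one specific property (the pixel format) and leaves all others open
class GavlPixelformatConstraint
  {
    gavl_pixelformat_t required_;
    
  public:
    explicit
    GavlPixelformatConstraint (gavl_pixelformat_t required)
      : required_(required)
      { }
    
    // check a concrete GAVL video format against this constraint
    bool
    matches (gavl_video_format_t const& format)  const
      {
        return format.pixelformat == required_;
      }
  };
}}}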

!the ID problem
Basically I'd prefer the ~IDs to be real identifiers, so they can be used directly within rules. At least the Prototypes //can// have such a textual identifier. But the implementation type is problematic, and consequently the ID of the StreamType as well, because the actual implementation should not be nailed down to a fixed set of possibilities. And, generally, we can't expect an implementation library to yield textual identifiers for each implementation type. //Is this really a problem? {{red{what are the use cases?}}}//
As far as I can see, in most cases this is no problem, as the type can be retrieved or derived from an existing media object. Thus, the only problematic case is when we need to persist the type information without being able to guarantee the persistence of the media object this type was derived from. For example, this might be a problem when working with proxy media. But at least we should be able to create a constraint (partial type specification) to cover the important part of the type information, i.e. the part needed to re-create the model even when the original media isn't there any longer.
Thus, //constraints may be viewed as type constructing functors.//

--------------
!use cases
* pulling data from a media file
* connecting pipes and similar wiring problems
* describing the properties of a processor plugin

!! pulling data from a media file
To open the file, we need //type discovery code,// resulting in a handle to some library module for accessing the contents in a way compliant with the Lumiera application. Thus, we can determine the possible return values of this type discovery code and provide code which wires up a corresponding StreamTypeImplFacade. Further, the {{{control::STypeManager}}} has the ability to build a complete or partial StreamType from
* an ~ImplFacade
* a Prototype
* maybe even from some generic textual ~IDs?
Together this allows us to associate a StreamType with each media source, and thus to derive the Prototype governing the immediately connected [[Pipe]] (see the sketch below).
By design, a pipe can handle data of a single Prototype only.
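A rough usage sketch of these build operations; the class shown here is a stand-in with stubbed bodies, and the actual {{{control::STypeManager}}} interface may well look different.
{{{
#include <memory>
#include <string>

struct ImplFacade { std::string libTag; };    // wired up from the type discovery result
struct Prototype  { std::string name;   };
struct StreamType { };

// stand-in for the type manager's build operations (stub bodies, illustration only)
class TypeManager
  {
  public:
    std::shared_ptr<StreamType>
    getType (ImplFacade const&)               // complete type, derived from an impl facade
      {
        return std::make_shared<StreamType>();
      }
    
    std::shared_ptr<StreamType>
    getType (Prototype const&)                // partial type, based on a prototype only
      {
        return std::make_shared<StreamType>();
      }
  };
}}}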

!! wiring problems
When deciding whether a connection can be made, we can build up the type information starting out from the source (this requires some work, but generally speaking it is //possible//). Thus, we can always get an ~ImplType for the "lower end" of the connection, and at least a Prototype for the "output side" &mdash; which should be enough to use the query functions provided by the stream type interfaces.
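A sketch of such a connection check, with hypothetical helpers (stub bodies) standing in for the query functions of the stream type interfaces:
{{{
#include <string>

struct ImplType  { std::string libTag; };
struct Prototype { std::string name;   };

// stubs standing in for the query functions of the stream type interfaces
bool subsumes    (Prototype const&, ImplType const&) { return false; }   // prototype covers this impl type?
bool convertible (Prototype const&, ImplType const&) { return false; }   // or a conversion can be built?

// decide whether the "lower end" (known impl type) may feed the "output side" (known prototype)
bool
canConnect (ImplType const& source, Prototype const& outputSide)
{
  return subsumes(outputSide, source)
      || convertible(outputSide, source);
}
}}}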

!! describing properties
{{red{currently difficult to define}}} as of 9/2008, because the property description of plugins has not been planned out yet.
My idea was to use [[type implementation constraints|StreamTypeImplConstraint]] for this, which are a special kind of ~ImplType.
This design places great emphasis on separating all those components and subsystems whose underlying concepts are considered not to have a //natural link.// This often means putting some additional constraints on the implementation, so basically we need to rely on the actual implementation to live up to this goal. In many cases it may seem more natural to "just access the necessary information". But in the long run this coupling of not directly related components makes the whole codebase monolithic and introduces lots of //accidental complexity.//

Instead, we should try to connect the various subsystems only via Interfaces and, rather than just grabbing some piece of information, use a service located on an Interface to query other components for it. The best approach, of course, is always to avoid the dependency altogether.

!Examples
* There is a separation between the __high level [[Session]] view__ and the [[Fixture]]: the latter only accesses the MObjects and the Placement Interfaces.
* the same holds true for the Builder: it just uses the same Interfaces. The actual coupling is done rather //by type//, i.e. the Builder relies on an arrangement of MObjects to exist and picks up their properties through a small number of generic overloaded methods -- the session is interpreted and translated.
* the Builder itself is a separation layer. Neither do the Objects in the session access [[Render Nodes|ProcNode]] directly, nor do the latter call back into the session. Both connections seem to be necessary at first sight, but both can be avoided by using the Builder Pattern.
* another separation exists between the Render Engine and the individual Nodes: the Render Engine doesn't need to know the details of the data types processed by the Nodes. It relies on the Builder having made the correct connections and just pulls out the calculated results. If additional control information needs to be passed, then I would prefer a direct wiring of separate control connections to specialised components, which in turn could instruct the controller to change the rendering process.
* to shield the rendering code of all complexities of thread communication and synchronization, we use the StateProxy
Structural Assets are intended mainly for internal use, but the user should be able to see and query them. They are not "loaded" or "created" directly; rather they //leap into existence// by creating or extending some other structures in the session, hence the name. Some of the structural Asset parametrisation can be modified to exert control over some aspects of the Steam-Layer's (default) behaviour.
* [[Processing Patterns|ProcPatt]] encode information on how to set up some parts of the render network to be created automatically: for example, when building a clip, we use the processing pattern describing how to decode and pre-process the actual media data.
* [[Forks ("tracks")|Fork]] are one of the dimensions used for organizing the session data. They serve as an anchor to attach parametrisation for output pipes, overlay mode etc. By being [[placed|Placement]] onto a track, a media object inherits placement properties from this track.
* [[Pipes|Pipe]] form &mdash; at least as visible to the user &mdash; the basic building block of the render network, because the latter appears to be a collection of interconnected processing pipelines. This is the //outward view;// in fact the render network consists of [[nodes|ProcNode]] and is [[built|Builder]] from the Pipes, clips, effects...[>img[Asset Classes|uml/fig131205.png]]<br/>Yet these //inner workings// of the render process are implementation details we tend to conceal.
* [[Sequence]] assets act as a façade to the fundamental compound building blocks within the model, a sequence being a collection of clips placed onto a tree of tracks. Sequences, as well as the top-level tracks they enclose, will be created automatically on demand. Of course you may also create them deliberately. Without being bound to a timeline or meta-clip, a sequence remains invisible.
* [[Timeline]] assets are the top-level structures for accessing the model; similar to the sequences, they act as a façade to relevant parts of the model (BindingMO) and will be created on demand, alongside a new session if necessary, bound to the new timeline. Likewise, they can be referred to by their name-ID.
* [[Viewer|ViewerAsset]] assets correspond to the available viewer elements in the GUI. By [[connecting to a viewer|ViewConnection]], session elements derive a concrete (physical) output and gain the ability to be [[played|PlayService]].





!querying
Structural assets can be queried by specifying the specific type (Pipe, Track, ProcPatt) and a query goal. For example, you can {{{Query<Pipe> ("stream(mpeg)")}}}, yielding the first pipe found which declares to have stream type "mpeg". The access point for this querying facility is the ~StructFactory, which (as usual within Lumiera) can be invoked as static member {{{Struct::retrieve(Query<TY> ...}}}. Given such a query, first an attempt is made to satisfy it by retrieving an existing object of type TY (which might bind variables as a side effect). On failure, a new structural asset of the requested type will be created to satisfy the given goal. In case you want to bypass the resolution step and create a new asset right away, use {{{Struct::retrieve.newInstance<TY>}}}.
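A usage sketch, shown as a fragment (it presupposes the relevant Lumiera asset headers; the exact return type is left out on purpose):
{{{
// retrieve a pipe declaring stream type "mpeg", creating it on demand if necessary
auto pipe = Struct::retrieve (Query<Pipe> ("stream(mpeg)"));

// to bypass the resolution step and create a fresh asset right away,
// use Struct::retrieve.newInstance<Pipe> instead (arguments omitted here)
}}}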

{{red{Note:}}} in the current implementation no real resolution engine is used (as of 2/2010); rather, we're just able to retrieve a hard-wired answer or create a new asset, simply pattern matching on parts of the query.

&rarr; [[Assets in general|Asset]]
/*{{{*/
/* a contrasting background so I can see where one tiddler ends and the other begins */
body {
	background: [[ColorPalette::TertiaryLight]];
}

/* sexy colours and font for the header */
.headerForeground {
	color: [[ColorPalette::PrimaryPale]];
}
.headerShadow, .headerShadow a {
	color: [[ColorPalette::PrimaryMid]];
}
.headerForeground, .headerShadow {
	padding: 1em 1em 0;
	font-family: 'Trebuchet MS', sans-serif;
	font-weight:bold;
}
.headerForeground .siteSubtitle {
	color: [[ColorPalette::PrimaryLight]];
}
.headerShadow .siteSubtitle {
	color: [[ColorPalette::PrimaryMid]];
}

/* make shadow go and down right instead of up and left */
.headerShadow {
	left: 2px;
	top: 3px;
}

/* prefer monospace for editing */
.editor textarea {
	font-family: 'Consolas', monospace;
}

/* sexy tiddler titles */
.title {
	font-size: 250%;
	color: [[ColorPalette::PrimaryLight]];
	font-family: 'Trebuchet MS', sans-serif;
}

/* more subtle tiddler subtitle */
.subtitle {
	padding:0px;
	margin:0px;
	padding-left:0.5em;
	font-size: 90%;
	color: [[ColorPalette::TertiaryMid]];
}
.subtitle .tiddlyLink {
	color: [[ColorPalette::TertiaryMid]];
}

/* a little bit of extra whitespace */
.viewer {
	padding-bottom:3px;
}

/* don't want any background color for headings */
h1,h2,h3,h4,h5,h6 {
	background: [[ColorPalette::Background]];
	color: [[ColorPalette::Foreground]];
}

/* give tiddlers 3d style border and explicit background */
.tiddler {
	background: [[ColorPalette::Background]];
	border-right: 2px [[ColorPalette::TertiaryMid]] solid;
	border-bottom: 2px [[ColorPalette::TertiaryMid]] solid;
	margin-bottom: 1em;
	padding-bottom: 2em;
}

/* make options slider look nicer */
#sidebarOptions .sliderPanel {
	border:solid 1px [[ColorPalette::PrimaryLight]];
}


/* the borders look wrong with the body background */
#sidebar .button {
	border-style: none;
}

/* displays the list of a tiddler's tags horizontally. used in ViewTemplate */
.tagglyTagged li.listTitle {
	display:none
}
.tagglyTagged li {
	display: inline; font-size:90%;
}
.tagglyTagged ul {
	margin:0px; padding:0px;
}

/* this means you can put line breaks in SidebarOptions for readability */
#sidebarOptions br {
	display:none;
}
/* undo the above in OptionsPanel */
#sidebarOptions .sliderPanel br {
	display:inline;
}

/* horizontal main menu stuff */
#displayArea {
	margin: 1em 15.7em 0em 1em; /* use the freed up space */
}
#topMenu br {
	display: none;
}
#topMenu {
	background: [[ColorPalette::PrimaryMid]];
	color:[[ColorPalette::PrimaryPale]];
}
#topMenu {
	padding:2px;
}
#topMenu .button, #topMenu .tiddlyLink, #topMenu a {
	margin-left: 0.5em;
	margin-right: 0.5em;
	padding-left: 3px;
	padding-right: 3px;
	color: [[ColorPalette::PrimaryPale]];
	font-size: 115%;
}
#topMenu .button:hover, #topMenu .tiddlyLink:hover {
	background: [[ColorPalette::PrimaryDark]];
}

/* make it print a little cleaner */
@media print {
	#topMenu {
		display: none ! important;
	}
	/* not sure if we need all the importants */
	.tiddler {
		border-style: none ! important;
		margin:0px ! important;
		padding:0px ! important;
		padding-bottom:2em ! important;
	}
	.tagglyTagging .button, .tagglyTagging .hidebutton {
		display: none ! important;
	}
	.headerShadow {
		visibility: hidden ! important;
	}
	.tagglyTagged .quickopentag, .tagged .quickopentag {
		border-style: none ! important;
	}
	.quickopentag a.button, .miniTag {
		display: none ! important;
	}
}

/* *** Additions by Ichthyostega *** */
.red {
	background: #ffcc99;
	color: #ff2210;
	padding: 0px 0.8ex;
}

.viewer th {
        background: #91a6af;
}
/*}}}*/
The term ''Subsystem'' denotes a coherent, tightly coupled and integrated part of the application's functionality.
A Subsystem exists as an entity at runtime -- it has state and a lifecycle and might depend on other subsystems.
The arrangement of subsystems is part of the architecture; separation into subsystems is a more fine-grained structuring and somewhat orthogonal to the arrangement of functionality into ''Layers'', which are more of a conceptual nature.

!Layers and Subsystems
;GUI
:the actual user interface is loaded and started as a plug-in. It is typically monolithic and thus counts as //one// subsystem (but there might be several alternative interfaces)
;~Steam-Layer
:this is the metadata and organisational layer; it serves to accommodate the user-oriented view to the technical necessities of rendering and playback
:* SessionSubsystem, comprised of the [[Session(datastructure)|Session]] and the [[Builder]]
:* [[Player]]
:* RenderEngine
;Vault
:here the goal is to provide system-level services for the upper layers. The structuring is not so clear yet {{red{1/2014}}}
:* probably the [[Scheduler]] gets the ability to be started explicitly
:* the other services are started in conjunction and form a common subsystem
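The runtime nature of a subsystem can be summarised by a small lifecycle interface. The following is a hypothetical sketch, not the actual Lumiera {{{Subsys}}} API:
{{{
#include <functional>
#include <string>

// hypothetical lifecycle interface: an entity with state, a lifecycle
// and possible dependencies on other subsystems
class Subsystem
  {
  public:
    virtual ~Subsystem() = default;
    
    virtual std::string name()                             const = 0;
    virtual bool shouldStart()                             const = 0;   // evaluate options / configuration
    virtual bool start (std::function<void()> onShutdown)        = 0;   // bring the subsystem up
    virtual void triggerShutdown()                                = 0;  // request asynchronous shutdown
    virtual bool isRunning()                               const = 0;   // probe the lifecycle state
  };
}}}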
Viewers for displaying output are connected to the timelines or other elements to be displayed through a ViewConnection. This creates an additional mapping step (&rarr; BindingMO), similar to the attachment of a sequence or virtual clip to the [[global busses|GlobalBus]]. Yet the viewers don't provide such an elaborate mixing-desk-like view as the timelines do -- rather there is only a simplified version of the global busses, exposed to the user as a ''switch board''.

The corresponding GUI control will allow choosing among possible data sources (esp. ProbePoint) for any given StreamType (more precisely, for every //prototype,// e.g. image, sound). Moreover, there are some (limited) overlay capabilities (split image, mix sound). By default, this additional control will likely be hidden, as there is a default 1:1 association between the master busses of the timeline and the usable output destinations.

!Model and Interfaces
Regarding the internal wiring of components, a Viewer with a switch board behaves similarly to a timeline: at the output side, there are a small number of standard OutputDesignation elements for the selection of the primary kinds of media handled within this session (typically Video and Audio). But contrary to a timeline, a Viewer element also exposes an OutputManager interface, which is backed by a specific OutputMapping element, which internally connects to the main OutputManager for the whole application. This way, a Viewer has the capability to obtain a PlayController.
<<timeline better:true maxDays:55 maxEntries:45>>
/***
|Name|TaskMacroPlugin|
|Author|<<extension TaskMacroPlugin author>>|
|Location|<<extension TaskMacroPlugin source>>|
|License|<<extension TaskMacroPlugin license>>|
|Version|<<extension TaskMacroPlugin versionAndDate>>|
!Description
A set of macros to help you keep track of time estimates for tasks.

Macros defined:
* {{{task}}}: Displays a task description and makes it easy to estimate and track the time spent on the task.
* {{{taskadder}}}: Displays text entry field to simplify the adding of tasks.
* {{{tasksum}}}: Displays a summary of tasks sandwiched between two calls to this macro.
* {{{extension}}}: A simple little macro that displays information about a TiddlyWiki plugin, and that will hopefully someday migrate to the TW core in some form.
Core overrides:
* {{{wikify}}}: when wikifying a tiddler's complete text, adds refresh information so the tiddler will be refreshed when it changes
* {{{config.refreshers}}}: have the built-in refreshers return true; also, add a new refresher ("fullContent") that redisplays a full tiddler whenever it or any nested tiddlers it shows are changed
* {{{refreshElements}}}: now checks the return value from the refresher and only short-circuits the recursion if the refresher returns true
!Plugin Information
***/
//{{{
version.extensions.TaskMacroPlugin = {
	major: 1, minor: 1, revision: 0,
	date: new Date(2006,5-1,13),
	author: "LukeBlanshard",
	source: "http://labwiki.sourceforge.net/#TaskMacroPlugin",
	license: "http://labwiki.sourceforge.net/#CopyrightAndLicense"
}
//}}}
/***
A little macro for pulling out extension info.  Use like {{{<<extension PluginName datum>>}}}, where {{{PluginName}}} is the name you used for {{{version.extensions}}} and {{{datum}}} is either {{{versionAndDate}}} or a property of the extension description object, such as {{{source}}}.
***/
//{{{
config.macros.extension = {
	handler: function( place, macroName, params, wikifier, paramString, tiddler ) {
		var info  = version.extensions[params[0]]
		var datum = params[1]
		switch (params[1]) {
		case 'versionAndDate':
			createTiddlyElement( place, "span", null, null,
				info.major+'.'+info.minor+'.'+info.revision+', '+info.date.formatString('DD MMM YYYY') )
			break;
		default:
			wikify( info[datum], place )
			break;
		}
	}
}
//}}}
/***
!Core Overrides
***/
//{{{
window.wikify_orig_TaskMacroPlugin = window.wikify
window.wikify = function(source,output,highlightRegExp,tiddler)
{
	if ( tiddler && tiddler.text === source )
		addDisplayDependency( output, tiddler.title )
	wikify_orig_TaskMacroPlugin.apply( this, arguments )
}
config.refreshers_orig_TaskMacroPlugin = config.refreshers
config.refreshers = {
	link: function() {
		config.refreshers_orig_TaskMacroPlugin.link.apply( this, arguments )
		return true
	},
	content: function() {
		config.refreshers_orig_TaskMacroPlugin.content.apply( this, arguments )
		return true
	},
	fullContent: function( e, changeList ) {
		var tiddlers = e.refreshTiddlers
		if ( changeList == null || tiddlers == null )
			return false
		for ( var i=0; i < tiddlers.length; ++i )
			if ( changeList.find(tiddlers[i]) != null ) {
				var title = tiddlers[0]
				story.refreshTiddler( title, null, true )
				return true
			}
		return false
	}
}
function refreshElements(root,changeList)
{
	var nodes = root.childNodes;
	for(var c=0; c<nodes.length; c++)
		{
		var e = nodes[c],type;
		if(e.getAttribute)
			type = e.getAttribute("refresh");
		else
			type = null;
		var refresher = config.refreshers[type];
		if ( ! refresher || ! refresher(e, changeList) )
			{
			if(e.hasChildNodes())
				refreshElements(e,changeList);
			}
		}
}
//}}}
/***
!Global Functions
***/
//{{{
// Add the tiddler whose title is given to the list of tiddlers whose
// changing will cause a refresh of the tiddler containing the given element.
function addDisplayDependency( element, title ) {
	while ( element && element.getAttribute ) {
		var idAttr = element.getAttribute("id"), tiddlerAttr = element.getAttribute("tiddler")
		if ( idAttr && tiddlerAttr && idAttr == story.idPrefix+tiddlerAttr ) {
			var list = element.refreshTiddlers
			if ( list == null ) {
				list = [tiddlerAttr]
				element.refreshTiddlers = list
				element.setAttribute( "refresh", "fullContent" )
			}
			list.pushUnique( title )
			return
		}
		element = element.parentNode
	}
}

// Lifted from Story.prototype.focusTiddler: just return the field instead of focusing it.
Story.prototype.findEditField = function( title, field )
{
	var tiddler = document.getElementById(this.idPrefix + title);
	if(tiddler != null)
		{
		var children = tiddler.getElementsByTagName("*")
		var e = null;
		for (var t=0; t<children.length; t++)
			{
			var c = children[t];
			if(c.tagName.toLowerCase() == "input" || c.tagName.toLowerCase() == "textarea")
				{
				if(!e)
					e = c;
				if(c.getAttribute("edit") == field)
					e = c;
				}
			}
		return e
		}
}

// Wraps the given event function in another function that handles the
// event in a standard way.
function wrapEventHandler( otherHandler ) {
	return function(e) {
		if (!e) var e = window.event
		e.cancelBubble = true
		if (e.stopPropagation) e.stopPropagation()
		return otherHandler( e )
	}
}
//}}}
/***
!Task Macro
Usage:
> {{{<<task orig cur spent>>description}}}
All of orig, cur, and spent are optional numbers of days.  The description goes through the end of the line, and is wikified.
***/
//{{{
config.macros.task = {
	NASCENT:	0, // Task not yet estimated
	LIVE:		1, // Estimated but with time remaining
	DONE:		2, // Completed: no time remaining
	bullets:	["\u25cb", // nascent (open circle)
			 "\u25ba", // live (right arrow)
			 "\u25a0"],// done (black square)
	styles:		["nascent", "live", "done"],

	// Translatable text:
	lingo: {
		spentTooBig:	"Spent time %0 can't exceed current estimate %1",
		noNegative:	"Times may not be negative numbers",
		statusTips:	["Not yet estimated", "To do", "Done"], // Array indexed by state (NASCENT/LIVE/DONE)
		descClickTip:	" -- Double-click to edit task description",
		statusClickTip:	" -- Double-click to mark task complete",
		statusDoneTip:	" -- Double-click to adjust the time spent, to revive the task",
		origTip:	"Original estimate in days",
		curTip:		"Current estimate in days",
		curTip2:	"Estimate in days", // For when orig == cur
		clickTip:	" -- Click to adjust",
		spentTip:	"Days spent on this task",
		remTip:		"Days remaining",
		curPrompt:	"Estimate this task in days, or adjust the current estimate by starting with + or -.\n\nYou may optionally also set or adjust the time spent by putting a second number after the first.",
		spentPrompt:	"Enter the number of days you've spent on this task, or adjust the current number by starting with + or -.\n\nYou may optionally also set or adjust the time remaining by putting a second number after the first.",
		remPrompt:	"Enter the number of days it will take to finish this task, or adjust the current estimate by starting with + or -.\n\nYou may optionally also set or adjust the time spent by putting a second number after the first.",
		numbersOnly:	"Enter numbers only, please",
		notCurrent:	"The tiddler has been modified since it was displayed, please redisplay it before doing this."
	},

	// The macro handler
	handler: function( place, macroName, params, wikifier, paramString, tiddler )
	{
		var start = wikifier.matchStart, end = wikifier.nextMatch

		var origStr	= params.length > 0? params.shift() : "?"
		var orig	= +origStr // as a number
		var cur		= params.length > 1? +params.shift() : orig
		var spent	= params.length > 0? +params.shift() : 0
		if ( spent > cur )
			throw Error( this.lingo.spentTooBig.format([spent, cur]) )
		if ( orig < 0 || cur < 0 || spent < 0 )
			throw Error( this.lingo.noNegative )
		var rem		= cur - spent
		var state	= isNaN(orig+rem)? this.NASCENT : rem > 0? this.LIVE : this.DONE
		var table	= createTiddlyElement( place, "table", null, "task "+this.styles[state] )
		var tbody	= createTiddlyElement( table, "tbody" )
		var row		= createTiddlyElement( tbody, "tr" )
		var statusCell	= createTiddlyElement( row,   "td", null, "status", this.bullets[state] )
		var descCell	= createTiddlyElement( row,   "td", null, "description" )

		var origCell	= state==this.NASCENT || orig==cur? null
				: createTiddlyElement( row, "td", null, "numeric original" )
		var curCell	= createTiddlyElement( row, "td", null, "numeric current" )
		var spentCell	= createTiddlyElement( row, "td", null, "numeric spent" )
		var remCell	= createTiddlyElement( row, "td", null, "numeric remaining" )

		var sums = config.macros.tasksum.tasksums
		if ( sums && sums.length ) {
			var summary = [(state == this.NASCENT? NaN : orig), cur, spent]
			summary.owner = tiddler
			sums[0].push( summary )
		}

		// The description goes to the end of the line
		wikifier.subWikify( descCell, "$\\n?" )
		var descEnd = wikifier.nextMatch

		statusCell.setAttribute( "title", this.lingo.statusTips[state] )
		descCell.setAttribute(   "title", this.lingo.statusTips[state]+this.lingo.descClickTip )
		if (origCell) {
			createTiddlyElement( origCell, "div", null, null, orig )
			origCell.setAttribute( "title", this.lingo.origTip )
			curCell.setAttribute( "title", this.lingo.curTip )
		}
		else {
			curCell.setAttribute( "title", this.lingo.curTip2 )
		}
		var curDivContents = (state==this.NASCENT)? "?" : cur
		var curDiv = createTiddlyElement( curCell, "div", null, null, curDivContents )
		spentCell.setAttribute( "title", this.lingo.spentTip )
		var spentDiv = createTiddlyElement( spentCell, "div", null, null, spent )
		remCell.setAttribute( "title", this.lingo.remTip )
		var remDiv = createTiddlyElement( remCell, "div", null, null, rem )

		// Handle double-click on the description by going
		// into edit mode and selecting the description
		descCell.ondblclick = this.editDescription( tiddler, end, descEnd )

		function appTitle( el, suffix ) {
			el.setAttribute( "title", el.getAttribute("title")+suffix )
		}

		// For incomplete tasks, handle double-click on the bullet by marking the task complete
		if ( state != this.DONE ) {
			appTitle( statusCell, this.lingo.statusClickTip )
			statusCell.ondblclick = this.markTaskComplete( tiddler, start, end, macroName, orig, cur, state )
		}
		// For complete ones, handle double-click on the bullet by letting you adjust the time spent
		else {
			appTitle( statusCell, this.lingo.statusDoneTip )
			statusCell.ondblclick = this.adjustTimeSpent( tiddler, start, end, macroName, orig, cur, spent )
		}

		// Add click handlers for the numeric cells.
		if ( state != this.DONE ) {
			appTitle( curCell, this.lingo.clickTip )
			curDiv.className = "adjustable"
			curDiv.onclick = this.adjustCurrentEstimate( tiddler, start, end, macroName,
				orig, cur, spent, curDivContents )
		}
		appTitle( spentCell, this.lingo.clickTip )
		spentDiv.className = "adjustable"
		spentDiv.onclick = this.adjustTimeSpent( tiddler, start, end, macroName, orig, cur, spent )
		if ( state == this.LIVE ) {
			appTitle( remCell, this.lingo.clickTip )
			remDiv.className = "adjustable"
			remDiv.onclick = this.adjustTimeRemaining( tiddler, start, end, macroName, orig, cur, spent )
		}
	},

	// Puts the tiddler into edit mode, and selects the range of characters
	// defined by start and end.  Separated for leak prevention in IE.
	editDescription: function( tiddler, start, end ) {
		return wrapEventHandler( function(e) {
			story.displayTiddler( null, tiddler.title, DEFAULT_EDIT_TEMPLATE )
			var tiddlerElement = document.getElementById( story.idPrefix + tiddler.title )
			window.scrollTo( 0, ensureVisible(tiddlerElement) )
			var element = story.findEditField( tiddler.title, "text" )
			if ( element && element.tagName.toLowerCase() == "textarea" ) {
				// Back up one char if the last char's a newline
				if ( tiddler.text[end-1] == '\n' )
					--end
				element.focus()
				if ( element.setSelectionRange != undefined ) { // Mozilla
					element.setSelectionRange( start, end )
					// Damn mozilla doesn't scroll to visible.  Approximate.
					var max = 0.0 + element.scrollHeight
					var len = element.textLength
					var top = max*start/len, bot = max*end/len
					element.scrollTop = Math.min( top, (bot+top-element.clientHeight)/2 )
				}
				else if ( element.createTextRange != undefined ) { // IE
					var range = element.createTextRange()
					range.collapse()
					range.moveEnd("character", end)
					range.moveStart("character", start)
					range.select()
				}
				else // Other? Too bad, just select the whole thing.
					element.select()
				return false
			}
			else
				return true
		} )
	},

	// Modifies a task macro call such that the task appears complete.
	markTaskComplete: function( tiddler, start, end, macroName, orig, cur, state ) {
		var macro = this, text = tiddler.text
		return wrapEventHandler( function(e) {
			if ( text !== tiddler.text ) {
				alert( macro.lingo.notCurrent )
				return false
			}
			if ( state == macro.NASCENT )
				orig = cur = 0
			// The second "cur" in the call below bumps up the time spent
			// to match the current estimate.
			macro.replaceMacroCall( tiddler, start, end, macroName, orig, cur, cur )
			return false
		} )
	},

	// Asks the user for an adjustment to the current estimate, modifies the macro call accordingly.
	adjustCurrentEstimate: function( tiddler, start, end, macroName, orig, cur, spent, curDivContents ) {
		var macro = this, text = tiddler.text
		return wrapEventHandler( function(e) {
			if ( text !== tiddler.text ) {
				alert( macro.lingo.notCurrent )
				return false
			}
			var txt = prompt( macro.lingo.curPrompt, curDivContents )
			if ( txt != null ) {
				var a = macro.breakInput( txt )
				cur = macro.offset( cur, a[0] )
				if ( a.length > 1 )
					spent = macro.offset( spent, a[1] )
				macro.replaceMacroCall( tiddler, start, end, macroName, orig, cur, spent )
			}
			return false
		} )
	},

	// Asks the user for an adjustment to the time spent, modifies the macro call accordingly.
	adjustTimeSpent: function( tiddler, start, end, macroName, orig, cur, spent ) {
		var macro = this, text = tiddler.text
		return wrapEventHandler( function(e) {
			if ( text !== tiddler.text ) {
				alert( macro.lingo.notCurrent )
				return false
			}
			var txt = prompt( macro.lingo.spentPrompt, spent )
			if ( txt != null ) {
				var a = macro.breakInput( txt )
				spent = macro.offset( spent, a[0] )
				var rem = cur - spent
				if ( a.length > 1 ) {
					rem = macro.offset( rem, a[1] )
					cur = spent + rem
				}
				macro.replaceMacroCall( tiddler, start, end, macroName, orig, cur, spent )
			}
			return false
		} )
	},

	// Asks the user for an adjustment to the time remaining, modifies the macro call accordingly.
	adjustTimeRemaining: function( tiddler, start, end, macroName, orig, cur, spent ) {
		var macro = this
		var text  = tiddler.text
		var rem   = cur - spent
		return wrapEventHandler( function(e) {
			if ( text !== tiddler.text ) {
				alert( macro.lingo.notCurrent )
				return false
			}
			var txt = prompt( macro.lingo.remPrompt, rem )
			if ( txt != null ) {
				var a = macro.breakInput( txt )
				var newRem = macro.offset( rem, a[0] )
				if ( newRem > rem || a.length > 1 )
					cur += (newRem - rem)
				else
					spent += (rem - newRem)
				if ( a.length > 1 )
					spent = macro.offset( spent, a[1] )
				macro.replaceMacroCall( tiddler, start, end, macroName, orig, cur, spent )
			}
			return false
		} )
	},

	// Breaks input at spaces & commas, returns array
	breakInput: function( txt ) {
		var a = txt.trim().split( /[\s,]+/ )
		if ( a.length == 0 )
			a = [NaN]
		return a
	},

	// Adds to, subtracts from, or replaces a numeric value
	offset: function( num, txt ) {
		if ( txt == "" || typeof(txt) != "string" )
			return NaN
		if ( txt.match(/^[+-]/) )
			return num + (+txt)
		return +txt
	},

	// Does some error checking, then replaces the indicated macro
	// call within the text of the given tiddler.
	replaceMacroCall: function( tiddler, start, end, macroName, orig, cur, spent )
	{
		if ( isNaN(cur+spent) ) {
			alert( this.lingo.numbersOnly )
			return
		}
		if ( spent < 0 || cur < 0 ) {
			alert( this.lingo.noNegative )
			return
		}
		if ( isNaN(orig) )
			orig = cur
		if ( spent > cur )
			cur = spent
		var text = tiddler.text.substring(0,start) + "<<" + macroName + " " +
			orig + " " + cur + " " + spent + ">>" + tiddler.text.substring(end)
		var title = tiddler.title
		store.saveTiddler( title, title, text, config.options.txtUserName, new Date(), undefined )
		//story.refreshTiddler( title, null, true )
		if ( config.options.chkAutoSave )
			saveChanges()
	}
}
//}}}
/***
!Tasksum Macro
Usage:
> {{{<<tasksum "start" ["here" [intro]]>>}}}
or:
> {{{<<tasksum "end" [intro]>>}}}
Put one of the {{{<<tasksum start>>}}} lines before the tasks you want to summarize, and an {{{end}}} line after them.  By default, the summary goes at the end; if you include {{{here}}} in the start line, the summary will go at the top.  The intro argument, if supplied, replaces the default text introducing the summary.
***/
//{{{
config.macros.tasksum = {

	// Translatable text:
	lingo: {
		unrecVerb:	"<<%0>> requires 'start' or 'end' as its first argument",
		mustMatch:	"<<%0 end>> must match a preceding <<%0 start>>",
		defIntro:	"Task summary:",
		nascentSum:	"''%0 not estimated''",
		doneSum:	"%0 complete (in %1 days)",
		liveSum:	"%0 ongoing (%1 days so far, ''%2 days remaining'')",
		overSum:	"Total overestimate: %0%.",
		underSum:	"Total underestimate: %0%.",
		descPattern:	"%0 %1. %2",
                origTip:	"Total original estimates in days",
		curTip:		"Total current estimates in days",
		spentTip:	"Total days spent on tasks",
		remTip:		"Total days remaining"
	},

	// The macro handler
	handler: function( place, macroName, params, wikifier, paramString, tiddler )
	{
		var sums = this.tasksums
		if ( params[0] == "start" ) {
			sums.unshift([])
			if ( params[1] == "here" ) {
				sums[0].intro = params[2] || this.lingo.defIntro
				sums[0].place = place
				sums[0].placement = place.childNodes.length
			}
		}
		else if ( params[0] == "end" ) {
			if ( ! sums.length )
				throw Error( this.lingo.mustMatch.format([macroName]) )
			var list = sums.shift()
			var intro = list.intro || params[1] || this.lingo.defIntro
			var nNascent=0, nLive=0, nDone=0, nMine=0
			var totLiveSpent=0, totDoneSpent=0
			var totOrig=0, totCur=0, totSpent=0
			for ( var i=0; i < list.length; ++i ) {
				var a = list[i]
				if ( a.length > 3 ) {
					nNascent 	+= a[0]
					nLive 		+= a[1]
					nDone 		+= a[2]
					totLiveSpent 	+= a[3]
					totDoneSpent 	+= a[4]
					totOrig 	+= a[5]
					totCur 		+= a[6]
					totSpent 	+= a[7]
					if ( a.owner == tiddler )
						nMine	+= a[8]
				}
				else {
					if ( a.owner == tiddler )
						++nMine
					if ( isNaN(a[0]) ) {
						++nNascent
					}
					else {
						if ( a[1] > a[2] ) {
							++nLive
							totLiveSpent += a[2]
						}
						else {
							++nDone
							totDoneSpent += a[2]
						}
						totOrig  += a[0]
						totCur   += a[1]
						totSpent += a[2]
					}
				}
			}

			// If we're nested, push a summary outward
                        if ( sums.length ) {
				var summary = [nNascent, nLive, nDone, totLiveSpent, totDoneSpent,
						totOrig, totCur, totSpent, nMine]
				summary.owner = tiddler
				sums[0].push( summary )
			}

			var descs = [], styles = []
			if ( nNascent > 0 ) {
				descs.push( this.lingo.nascentSum.format([nNascent]) )
				styles.push( "nascent" )
			}
			if ( nDone > 0 )
				descs.push( this.lingo.doneSum.format([nDone, totDoneSpent]) )
			if ( nLive > 0 ) {
				descs.push( this.lingo.liveSum.format([nLive, totLiveSpent, totCur-totSpent]) )
				styles.push( "live" )
			}
			else
				styles.push( "done" )
			var off = ""
			if ( totOrig > totCur )
				off = this.lingo.overSum.format( [Math.round(100.0*(totOrig-totCur)/totCur)] )
			else if ( totCur > totOrig )
				off = this.lingo.underSum.format( [Math.round(100.0*(totCur-totOrig)/totOrig)] )

			var top		= (list.intro != undefined)
			var table	= createTiddlyElement( null, "table", null, "tasksum "+(top?"top":"bottom") )
			var tbody	= createTiddlyElement( table, "tbody" )
			var row		= createTiddlyElement( tbody, "tr", null, styles.join(" ") )
			var descCell	= createTiddlyElement( row,   "td", null, "description" )

			var description = this.lingo.descPattern.format( [intro, descs.join(", "), off] )
			wikify( description, descCell, null, tiddler )

			var origCell	= totOrig == totCur? null
					: createTiddlyElement( row, "td", null, "numeric original", totOrig )
			var curCell	= createTiddlyElement( row, "td", null, "numeric current", totCur )
			var spentCell	= createTiddlyElement( row, "td", null, "numeric spent", totSpent )
			var remCell	= createTiddlyElement( row, "td", null, "numeric remaining", totCur-totSpent )

			if ( origCell )
				origCell.setAttribute( "title", this.lingo.origTip )
			curCell  .setAttribute( "title", this.lingo.curTip )
			spentCell.setAttribute( "title", this.lingo.spentTip )
			remCell  .setAttribute( "title", this.lingo.remTip )

			// Discard the table if there are no tasks
			if ( list.length > 0 ) {
				var place = top? list.place : place
				var placement = top? list.placement : place.childNodes.length
				if ( placement >= place.childNodes.length )
					place.appendChild( table )
				else
					place.insertBefore( table, place.childNodes[placement] )
			}
		}
		else
			throw Error( this.lingo.unrecVerb.format([macroName]) )

		// If we're wikifying, and are followed by end-of-line, swallow the newline.
		if ( wikifier && wikifier.source.charAt(wikifier.nextMatch) == "\n" )
			++wikifier.nextMatch
	},

	// This is the stack of pending summaries
	tasksums: []
}
//}}}
/***
!Taskadder Macro
Usage:
> {{{<<taskadder ["above"|"below"|"focus"|"nofocus"]...>>}}}
Creates a line with text entry fields for a description and an estimate.  By default, puts focus in the description field and adds tasks above the entry fields.  Use {{{nofocus}}} to not put focus in the description field.  Use {{{below}}} to add tasks below the entry fields.
***/
//{{{
config.macros.taskadder = {

	// Translatable text:
	lingo: {
		unrecParam:	"<<%0>> doesn't recognize '%1' as a parameter",
		descTip:	"Describe a new task",
		curTip:		"Estimate how many days the task will take",
		buttonText:	"add task",
		buttonTip:	"Add a new task with the description and estimate as entered",
		notCurrent:	"The tiddler has been modified since it was displayed, please redisplay it before adding a task this way.",

		eol:		"eol"
	},

	// The macro handler
	handler: function( place, macroName, params, wikifier, paramString, tiddler )
	{
		var above = true
		var focus = false

		while ( params.length > 0 ) {
			var p = params.shift()
			switch (p) {
			case "above": 	above = true;  break
			case "below": 	above = false; break
			case "focus": 	focus = true;  break
			case "nofocus":	focus = false; break
			default:	throw Error( this.lingo.unrecParam.format([macroName, p]) )
			}
		}

		// If we're followed by end-of-line, swallow the newline.
		if ( wikifier.source.charAt(wikifier.nextMatch) == "\n" )
			++wikifier.nextMatch

		var where	= above? wikifier.matchStart : wikifier.nextMatch

		var table	= createTiddlyElement( place, "table", null, "task" )
		var tbody	= createTiddlyElement( table, "tbody" )
		var row		= createTiddlyElement( tbody, "tr" )
		var statusCell	= createTiddlyElement( row,   "td", null, "status" )
		var descCell	= createTiddlyElement( row,   "td", null, "description" )
		var curCell	= createTiddlyElement( row,   "td", null, "numeric" )
		var addCell	= createTiddlyElement( row,   "td", null, "addtask" )

		var descId	= this.generateId()
		var curId	= this.generateId()
		var descInput	= createTiddlyElement( descCell, "input", descId )
		var curInput	= createTiddlyElement( curCell,  "input", curId  )

		descInput.setAttribute( "type", "text" )
		curInput .setAttribute( "type", "text" )
		descInput.setAttribute( "size", "40")
		curInput .setAttribute( "size", "6" )
		descInput.setAttribute( "autocomplete", "off" );
		curInput .setAttribute( "autocomplete", "off" );
		descInput.setAttribute( "title", this.lingo.descTip );
		curInput .setAttribute( "title", this.lingo.curTip  );

		var addAction	= this.addTask( tiddler, where, descId, curId, above )
		var addButton	= createTiddlyButton( addCell, this.lingo.buttonText, this.lingo.buttonTip, addAction )

		descInput.onkeypress = this.handleEnter(addAction)
		curInput .onkeypress = descInput.onkeypress
		addButton.onkeypress = this.handleSpace(addAction)
		if ( focus || tiddler.taskadderLocation == where ) {
			descInput.focus()
			descInput.select()
		}
	},

	// Returns a function that inserts a new task macro into the tiddler.
	addTask: function( tiddler, where, descId, curId, above ) {
		var macro = this, oldText = tiddler.text
		return wrapEventHandler( function(e) {
			if ( oldText !== tiddler.text ) {
				alert( macro.lingo.notCurrent )
				return false
			}
			var desc	= document.getElementById(descId).value
			var cur		= document.getElementById(curId) .value
			var init	= tiddler.text.substring(0,where) + "<<task " + cur + ">> " + desc + "\n"
			var text	= init + tiddler.text.substring(where)
			var title	= tiddler.title
			tiddler.taskadderLocation = (above? init.length : where)
			try {
				store.saveTiddler( title, title, text, config.options.txtUserName, new Date(), undefined )
				//story.refreshTiddler( title, null, true )
			}
			finally {
				delete tiddler.taskadderLocation
			}
			if ( config.options.chkAutoSave )
				saveChanges()
		} )
	},

	// Returns an event handler that delegates to two other functions: "matches" to decide
	// whether to consume the event, and "addTask" to actually perform the work.
	handleGeneric: function( addTask, matches ) {
		return function(e) {
			if (!e) var e = window.event
			var consume = false
			if ( matches(e) ) {
				consume = true
				addTask( e )
			}
			e.cancelBubble = consume;
			if ( consume && e.stopPropagation ) e.stopPropagation();
			return !consume;
		}
	},

	// Returns an event handler that handles enter keys by calling another event handler
	handleEnter: function( addTask ) {
		return this.handleGeneric( addTask, function(e){return e.keyCode == 13 || e.keyCode == 10} ) // Different codes for Enter
	},

	// Returns an event handler that handles the space key by calling another event handler
	handleSpace: function( addTask ) {
		return this.handleGeneric( addTask, function(e){return (e.charCode||e.keyCode) == 32} )
	},

	counter: 0,
	generateId: function() {
		return "taskadder:" + String(this.counter++)
	}
}
//}}}
/***
!Stylesheet
***/
//{{{
var stylesheet = '\
.viewer table.task, table.tasksum {\
	width: 100%;\
	padding: 0;\
	border-collapse: collapse;\
}\
.viewer table.task {\
	border: none;\
	margin: 0;\
}\
table.tasksum, .viewer table.tasksum {\
	border: solid 2px #999;\
	margin: 3px 0;\
}\
table.tasksum td {\
	text-align: center;\
	border: 1px solid #ddd;\
	background-color: #ffc;\
	vertical-align: middle;\
	margin: 0;\
	padding: 0;\
}\
.viewer table.task tr {\
	border: none;\
}\
.viewer table.task td {\
	text-align: center;\
	vertical-align: baseline;\
	border: 1px solid #fff;\
	background-color: inherit;\
	margin: 0;\
	padding: 0;\
}\
td.numeric {\
	width: 3em;\
}\
table.task td.numeric div {\
	border: 1px solid #ddd;\
	background-color: #ffc;\
	margin: 1px 0;\
	padding: 0;\
}\
table.task td.original div {\
	background-color: #fdd;\
}\
table.tasksum td.original {\
	background-color: #fdd;\
}\
table.tasksum td.description {\
	background-color: #e8e8e8;\
}\
table.task td.status {\
	width: 1.5em;\
	cursor: default;\
}\
table.task td.description, table.tasksum td.description {\
	width: auto;\
	text-align: left;\
	padding: 0 3px;\
}\
table.task.done td.status,table.task.done td.description {\
	color: #ccc;\
}\
table.task.done td.current, table.task.done td.remaining {\
	visibility: hidden;\
}\
table.task.done td.spent div, table.tasksum tr.done td.current,\
table.tasksum tr.done td.spent, table.tasksum tr.done td.remaining {\
	background-color: #eee;\
	color: #aaa;\
}\
table.task.nascent td.description {\
	color: #844;\
}\
table.task.nascent td.current div, table.tasksum tr.nascent td.numeric.current {\
	font-weight: bold;\
	color: #c00;\
	background-color: #def;\
}\
table.task.nascent td.spent, table.task.nascent td.remaining {\
	visibility: hidden;\
}\
td.remaining {\
	font-weight: bold;\
}\
.adjustable {\
	cursor: pointer; \
}\
table.task input {\
	display: block;\
	width: 100%;\
	font: inherit;\
	margin: 2px 0;\
	padding: 0;\
	border: 1px inset #999;\
}\
table.task td.numeric input {\
	background-color: #ffc;\
	text-align: center;\
}\
table.task td.addtask {\
	width: 6em;\
	border-left: 2px solid white;\
	vertical-align: middle;\
}\
'
setStylesheet( stylesheet, "TaskMacroPluginStylesheet" )
//}}}
The Lumiera development is //test driven.//
This does not mean that we're using a formalised method, nor does it mean that we're aiming at a high coverage rate as a value //per se.// But it does mean that we always design the usage situation before we go into the details of the implementation. Every major design work in the application starts out as a usage sketch in the form of a ''test narrative''. But our tests are limited in a characteristic way: we do not write unit tests to cover obvious details of the implementation -- whenever we test, we test abstractions. Which means we have to bring abstractions even into the fine-grained detail level of the implementation. Another consequence is that we don't use any mocking framework, nor do we rely on a dependency injection framework to build our application. Rather, we tend to build testability, test rigging and diagnostic facilities into every major component of the application.

Over time, a selection of techniques and usage patterns for test support has emerged:

!diagnostic output
As a rule, any component of significance has an overloaded {{{operator string()}}}. The {{{<iostream>}}} subsystem from the standard library will use these operators automatically. Moreover, we use a wrapper on top of {{{boost::format}}}, which does so as well, while providing exception safety and reducing code bloat by confining the actual {{{boost::format}}} instantiation to a single compilation unit with preselected specialisations. Thus, tests changing the relevant, tangible state of some application component will change the rendered diagnostic representation. While building those tests, effects can be seen immediately or in the debugger. Later the expected output of running such a test can be easily verified.
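To illustrate the pattern, here is a minimal, self-contained sketch -- the component and its members are invented for this example; only the {{{operator string()}}} convention itself is taken from the description above:
{{{
#include <string>
#include <cassert>

using std::string;

// hypothetical example component -- not an actual Lumiera class
class FadeControl
  {
    double level_ = 1.0;

  public:
    void dim (double factor) { level_ *= factor; }

    // diagnostic representation: relevant state changes become visible as text
    operator string()  const
      {
        return "FadeControl(level=" + std::to_string (level_) + ")";
      }
  };

int
main ()
  {
    FadeControl fade;
    fade.dim (0.5);
    // the test verifies behaviour on the level of the rendered representation
    assert (string(fade) == "FadeControl(level=0.500000)");
  }
}}}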

!equality and ordering
We go to great lengths to define identity, equality and ordering operators where appropriate. This can be seen as an investment into the future. While this certainly doesn't pay off all the time, together with the diagnostic output it allows us to write tests at the level of a symbolic representation.

!test adapters
Several core services offer a docking point to attach a test adapter. This special {{{friend class}}} is deeply integrated into the implementation level of said facility and, once attached, may activate an extensive and costly additional layer of diagnostic reporting. For example, it may store an audit trail of every invoked state change. This approach allows us to write tests almost like formal specifications. We expect these test adapters to evolve into a diagnostic framework eventually -- Lumiera can be expected to become a challenge for diagnostics and user support, given the extensive use of rules based programming.
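The following sketch condenses the docking-point idea; the service and adapter shown here ({{{Scheduler}}}, {{{SchedulerDiagnostics}}}) are hypothetical, only the {{{friend class}}} technique and the audit trail are taken from the description above:
{{{
#include <string>
#include <vector>

// hypothetical core service with a docking point for a test adapter
class Scheduler
  {
    std::vector<std::string> auditTrail_;   // only filled while diagnostics are attached
    bool diagnostics_ = false;

    friend class SchedulerDiagnostics;      // the docking point

  public:
    void trigger (std::string jobID)
      {
        if (diagnostics_)
          auditTrail_.push_back ("trigger: " + jobID);
        // ...actual scheduling work elided...
      }
  };

// test adapter: deeply integrated, activates the costly extra reporting
class SchedulerDiagnostics
  {
    Scheduler& sched_;

  public:
    SchedulerDiagnostics (Scheduler& s) : sched_(s) { sched_.diagnostics_ = true; }
   ~SchedulerDiagnostics()                          { sched_.diagnostics_ = false; }

    bool invoked (std::string jobID)  const
      {
        for (auto& entry : sched_.auditTrail_)
          if (entry == "trigger: " + jobID) return true;
        return false;
      }
  };

int
main ()
  {
    Scheduler scheduler;
    SchedulerDiagnostics audit(scheduler);      // attach the adapter
    scheduler.trigger ("frame#5");
    return audit.invoked ("frame#5")? 0 : 1;    // the test almost reads like a specification
  }
}}}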

!rigged diagnostic setup
We build a collection of hard-wired test configurations of the application. These can be activated by use of predefined name-~IDs. Depending on the level in use, a well defined set of processing rules, low-level wiring, mocked diagnostic output, faked input media and specific session contents will be injected. These can be used to write test scenarios against a known setup. Again we expect these provisions to evolve into a general diagnostic facility, likely to be accessible for more advanced users too.
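A rough sketch of how such a rigged setup registry might look -- the registry, the setup record and the name-~IDs are all invented for illustration, only the "activate a known configuration by predefined ID" idea is taken from the text:
{{{
#include <map>
#include <string>
#include <functional>
#include <stdexcept>

// sketch of the "rigged setup" idea: hard-wired test configurations, activated by name-ID
struct TestSetup
  {
    std::function<void()> installRules;     // processing rules, low-level wiring
    std::function<void()> installSession;   // faked input media, specific session contents
  };

std::map<std::string, TestSetup>&
riggedSetups()                              // registry of predefined configurations
  {
    static std::map<std::string, TestSetup> registry;
    return registry;
  }

void
activate (std::string nameID)
  {
    auto it = riggedSetups().find (nameID);
    if (it == riggedSetups().end())
      throw std::invalid_argument ("unknown test setup: " + nameID);
    it->second.installRules();
    it->second.installSession();
  }

// usage (hypothetical name-ID):
//   activate ("test_session_50fps_dummy_media");
}}}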
/***
''TextAreaPlugin for TiddlyWiki version 2.0''
^^author: Eric Shulman - ELS Design Studios
source: http://www.elsdesign.com/tiddlywiki/#TextAreaPlugin
license: [[Creative Commons Attribution-ShareAlike 2.5 License|http://creativecommons.org/licenses/by-sa/2.5/]]^^

This plugin 'hijacks' the TW core function, ''Story.prototype.focusTiddler()'', so it can add special 'keyDown' handlers to adjust several behaviors associated with the textarea control used in the tiddler editor.  Specifically, it:
* Adds text search INSIDE of edit fields.^^
Use ~CTRL-F for "Find" (prompts for search text), and ~CTRL-G for "Find Next" (uses previous search text)^^
* Enables TAB characters to be entered into field content^^
(instead of moving to next field)^^
* Option to set cursor at top of edit field instead of auto-selecting contents^^
(see configuration section for checkbox)^^
!!!!!Configuration
<<<
<<option chkDisableAutoSelect>> place cursor at start of textarea instead of pre-selecting content
<<option chkTextAreaExtensions>> add control-f (find), control-g (find again) and allow TABs as input in textarea
<<<
!!!!!Installation
<<<
Import (or copy/paste) the following tiddlers into your document:
''TextAreaPlugin'' (tagged with <<tag systemConfig>>)
<<<
!!!!!Revision History
<<<
''2006.01.22 [1.0.1]''
only add extra key processing for TEXTAREA elements (not other edit fields).
added option to enable/disable textarea keydown extensions (default is "standard keys" only)
''2006.01.22 [1.0.0]''
Moved from temporary "System Tweaks" tiddler into 'real' TextAreaPlugin tiddler.
<<<
!!!!!Code
***/
//{{{
version.extensions.textAreaPlugin= {major: 1, minor: 0, revision: 1, date: new Date(2006,1,23)};
//}}}

//{{{
if (!config.options.chkDisableAutoSelect) config.options.chkDisableAutoSelect=false; // default to standard action
if (!config.options.chkTextAreaExtensions) config.options.chkTextAreaExtensions=false; // default to standard action

// Focus a specified tiddler. Attempts to focus the specified field, otherwise the first edit field it finds
Story.prototype.focusTiddler = function(title,field)
{
	var tiddler = document.getElementById(this.idPrefix + title);
	if(tiddler != null)
		{
		var children = tiddler.getElementsByTagName("*")
		var e = null;
		for (var t=0; t<children.length; t++)
			{
			var c = children[t];
			if(c.tagName.toLowerCase() == "input" || c.tagName.toLowerCase() == "textarea")
				{
				if(!e)
					e = c;
				if(c.getAttribute("edit") == field)
					e = c;
				}
			}
		if(e)
			{
			e.focus();
			e.select(); // select entire contents

			// TWEAK: add TAB and "find" key handlers
			if (config.options.chkTextAreaExtensions) // add extra key handlers
				addKeyDownHandlers(e);

			// TWEAK: option to NOT autoselect contents
			if (config.options.chkDisableAutoSelect) // set cursor to start of field content
				if (e.setSelectionRange) e.setSelectionRange(0,0); // for FF
				else if (e.createTextRange) { var r=e.createTextRange(); r.collapse(true); r.select(); } // for IE

			}
		}
}
//}}}

//{{{
function addKeyDownHandlers(e)
{
	// exit if not textarea or element doesn't allow selections
	if (e.tagName.toLowerCase()!="textarea" || !e.setSelectionRange) return;

	// utility function: exits keydown handler and prevents browser from processing the keystroke
	var processed=function(ev) { ev.cancelBubble=true; if (ev.stopPropagation) ev.stopPropagation(); return false; }

	// capture keypress in edit field
	e.onkeydown = function(ev) { if (!ev) var ev=window.event;

		// process TAB
		if (!ev.shiftKey && ev.keyCode==9) { 
			// replace current selection with a TAB character
			var start=e.selectionStart; var end=e.selectionEnd;
			e.value=e.value.substr(0,start)+String.fromCharCode(9)+e.value.substr(end);
			// update insertion point, scroll it into view
			e.setSelectionRange(start+1,start+1);
			var linecount=e.value.split('\n').length;
			var thisline=e.value.substr(0,e.selectionStart).split('\n').length-1;
			e.scrollTop=Math.floor((thisline-e.rows/2)*e.scrollHeight/linecount);
			return processed(ev);
		}

		// process CTRL-F (find matching text) or CTRL-G (find next match)
		if (ev.ctrlKey && (ev.keyCode==70||ev.keyCode==71)) {
			// if ctrl-f or no previous search, prompt for search text (default to previous text or current selection)... if no search text, exit
			if (ev.keyCode==70||!e.find||!e.find.length)
				{ var f=prompt("find:",e.find?e.find:e.value.substring(e.selectionStart,e.selectionEnd)); e.focus(); e.find=f?f:e.find; }
			if (!e.find||!e.find.length) return processed(ev);
			// do case-insensitive match with 'wraparound'...  if not found, alert and exit 
			var newstart=e.value.toLowerCase().indexOf(e.find.toLowerCase(),e.selectionStart+1);
			if (newstart==-1) newstart=e.value.toLowerCase().indexOf(e.find.toLowerCase());
			if (newstart==-1) { alert("'"+e.find+"' not found"); e.focus(); return processed(ev); }
			// set new selection, scroll it into view, and report line position in status bar
			e.setSelectionRange(newstart,newstart+e.find.length);
			var linecount=e.value.split('\n').length;
			var thisline=e.value.substr(0,e.selectionStart).split('\n').length;
			e.scrollTop=Math.floor((thisline-1-e.rows/2)*e.scrollHeight/linecount);
			window.status="line: "+thisline+"/"+linecount;
			return processed(ev);
		}
	}
}
//}}}
A service generating //periodic ticks// &mdash; repeatedly invoking a callback with a given frequency.
* defined 1/2009 as part of the PlayerDummy (design sketch)
* probably to be implemented later on based on Posix timers
* will probably be later on integrated into a synchronisation framework (display sync, audio sync, MTC master/slave...)

The name of the software driving this wiki. It is written completely in ~JavaScript and contained in one single HTML page.
Thus no server and no network connection is needed. Simply open the file in your browser and save changes locally. As the [[Engine/Development TiddlyWiki|CoreDevelopment]] HTML is located in the Lumiera source tree, all changes will be managed and distributed via Git. While doing so, you sometimes will have to merge conflicting changes manually in the HTML source.
 * see GettingStarted
 * see [[Wiki-Markup|https://classic.tiddlywiki.com/#HelloThere%20%5B%5BHeadings%20Formatting%5D%5D%20%5B%5BBasic%20Formatting%5D%5D%20%5B%5BCode%20Formatting%5D%5D%20%5B%5BCSS%20Formatting%5D%5D%20%5B%5BHorizontal%20Rule%20Formatting%5D%5D%20%5B%5BHTML%20Entities%20Formatting%5D%5D%20%5B%5BHTML%20Formatting%5D%5D%20HtmlEntities%20%5B%5BImage%20Formatting%5D%5D%20%5B%5BLine%20Break%20Formatting%5D%5D%20%5B%5BLink%20Formatting%5D%5D%20%5B%5BList%20Formatting%5D%5D%20PeriodicTable%20PlainText%20PluginFormatting%20%5B%5BQuotations%20Formatting%5D%5D%20%5B%5BSuppressing%20Formatting%5D%5D%20%5B%5BTables%20Formatting%5D%5D%20TiddlerComments]], [[CSS-formatting|https://classic.tiddlywiki.com/#%5B%5BCSS%20Formatting%5D%5D]]
__note__: This is a //Tiddly Wiki »classic«// -- the version prior to the complete rewrite known as [[TiddlyWiki-5|http://five.tiddlywiki.com/]]
Simple time points are just like values and thus easy to change. The difficulties arise when time values are to be //quantised to an existing time grid.// The first relevant point to note is that for quantised time values, the effect of a change can be disproportional to the cause. Small changes below the threshold might be accumulated, and a tiny change might trigger a jump to the next grid point. While this might be annoying, the yet more complex questions arise when we acknowledge that the amount of the change itself might be related to a time grid.

The problem with modification of quantised values highlights an inner contradiction or conflicting goals within the design
* the whole system should fit in naturally and just feel like using raw time values
* quantisation should be added dynamically and //late// -- like a view
* there should be a guidance towards the intended proper use

!!!general assumptions in Lumiera
At this point, we should recall some general assumptions within Lumiera
* there are no hard-coded defaults regarding the specific nature of the manipulated objects (format, number of channels)
* there is not "the" timeline, but we have multiple timelines
* sequences might be nested and thus be brought into a different context
* framerate is a property of the stream type of a channel; it depends on the output. There is no global format or framerate.

!Usage situations
The complexity arises from the mixture of several concerns, which often blend into a single usage situation.
;Time span is an object prototype
:Within the context of an NLE, a time interval located at a given time point can be considered as the least common denominator of all entities to be manipulated by the user. //Manipulating// such objects is the whole point of using such an application.
:* an object or boundary point can be //dragged//
:* we want to //nudge// by defined amounts
:* we want to put it "somewhere"
;dragging
:as far as the GUI is concerned, dragging some element creates an //offset in display coordinates.//
:Now the task is to transform this into a change on the object, which typically is grid-aligned
;nudging
:the primary interaction in this case just sends a numeric offset of ±N steps
:but the further processing might involve a global or maybe even local //nudge amount.//
:the receiving object is typically grid-aligned, thus nudge-grid and object-grid need to cascade, i.e. be applied in sequence
;placing
:an object or boundary point gets relocated to some predefined time position, which in turn can be quantised (grid aligned).
:now either we treat this target position as a value, to be applied, possibly with re-quantisation
:or we might treat it as a //source for time values// -- in which case the object just gets attached and consumes time coordinates produced elsewhere.

Considering the analysis this far, there seems to be a common denominator in all these operations: chaining of multiple grids.
A mutation or an increment gets applied to a grid-aligned variable, which then (maybe after transformation into a different scale) in turn is used again as a mutation to be applied to another grid-aligned variable. Now, because at this point we don't need to solve the whole bunch of wide scale design problems, it is sufficient to extract the following operations
::value &rArr; value
::value &rArr; quantised
::increment &rArr; quantised

This results in the following combinations to be implemented:
|  !mutation|!receiver ||!result |
| ~TimeValue|Time || -- |
|~|Duration || ↯ |
|~|~TimeSpan ||set start |
|~|quantised ||set raw value |
| Duration|~TimeSpan ||set length |
|~|Duration  ||set length |
|~|quantised || ↯ |
| Offset|Time || -- |
|~|Duration ||adjust length |
|~|~TimeSpan ||adjust start |
|~|quantised ||adjust raw value |
| ~QuTime|any raw ||materialise, then set raw |
|~|~QuTime ||same &rarr; chained |
| increment|~QuTime ||snap to grid point |
As rationale, consider the following
* immutable values are bliss. 
* by materialising a quantised change prior to applying it, we get the "chaining effect"

----------
this leads to the following solution approach
* time values are immutable (as far as possible)
* for simple time calculations the ~TimeVar is sufficient
* but we allow feeding in an abstract //modification object.//
* ''nudge by unit(s)'' is subsumed here as a special case

As the actual change logic to apply depends both on the target value and the kind of the change, we end up with //double dispatch.//

 -- can we avoid accepting mutations for Time (and thus only mutate ~TimeSpan)?
Rationale: allowing mutations for Time bears the danger of making ~TimeVar obsolete
* //4/11 yes, implementing it now this way....//
* we have ~TimeVar, so the only thing that //needs// to be mutable is a whole object &rArr; ~TimeSpan
* err, because MObject will include a Duration separate from the start time in the Placement, Duration needs to be mutable too

!!!usage considerations
{{red{Question 5/11}}} when moving a clip taken from 50fps media, the new position might be quantised to the 50fps grid established by the media, even when the target timeline runs with 25fps, allowing for finer adjustments based on the intermediate frames present in the source material.

!!!design draft
The special focus of this problem seems to lend itself to a __visitor pattern__ based implementation, because the basic hierarchy of applicable types is fixed, while the behaviour is open ended (and not yet fully determined). A conventional implementation would scatter this behaviour over all the time entities, thus making it hard to understand and reason about. The classical ~GoF cyclic visitor solution, to the contrary, allows us to arrange closely related behaviour into thematically grouped visitor classes. As a plus, the concrete action can be bound dynamically, allowing for more flexibility when it comes to dealing with the intricate situation when a quantised time span (= a clip) receives a quantised mutation (= is re-aligned to a possibly different frame grid)

Thus the possibly mutable time entities get an {{{accept(time::Mutation&)}}} function, which is defined to reflect back into the (virtual) visitation functions of the Mutation interface. Additionally, we provide some static convenience shortcuts right in the Mutation interface, performing the trivial mutations.
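A structural sketch of this double dispatch, reduced to two target entities and simplified signatures -- the actual Lumiera interfaces differ in detail, and the {{{OffsetBy}}} mutation is invented for illustration:
{{{
#include <cstdint>

using TimeValue = int64_t;            // stand-in for the micro-tick based time values

class Duration;
class TimeSpan;

namespace time {
  class Mutation
    {
    public:
      virtual ~Mutation() = default;
      // visitation functions -- one per mutable time entity
      virtual void change (Duration&)  const  =0;
      virtual void change (TimeSpan&)  const  =0;
    };
}

class Duration
  {
    TimeValue length_;
  public:
    Duration (TimeValue len =0) : length_(len) { }
    TimeValue length()  const               { return length_; }
    void setLength (TimeValue len)          { length_ = len; }
    void accept (time::Mutation const& mu)  { mu.change (*this); }  // reflect back into the visitor
  };

class TimeSpan
  {
    TimeValue start_;
    Duration  length_;
  public:
    TimeSpan (TimeValue s =0, Duration d =Duration()) : start_(s), length_(d) { }
    TimeValue start()  const                { return start_; }
    void setStart (TimeValue s)             { start_ = s; }
    void accept (time::Mutation const& mu)  { mu.change (*this); }
  };

// one concrete mutation: behaviour for all targets grouped in a single visitor class
class OffsetBy
  : public time::Mutation
  {
    TimeValue offset_;
  public:
    OffsetBy (TimeValue off) : offset_(off) { }
    void change (Duration& dur)  const override { dur.setLength (dur.length() + offset_); }  // adjust length
    void change (TimeSpan& span) const override { span.setStart (span.start() + offset_); }  // adjust start
  };
}}}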

!!!live value changes and monitoring
Based on this time::Mutation design, we provide a specialised element for dealing with //running time values:// When attached to a target time entity, a "live" connection is established. From then on, continuous changes and mutations can be fed to the target by invoking a functor interface. Besides, a change notification signal (callback) can be installed, which will be invoked on each change. This {{{time::Control}}} element is the foundation for implementing all kinds of running time display widgets, spin buttons, timeline selections, playheads, loop playback and similar.
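As a usage sketch -- the method names on this simplified {{{Control}}} stand-in are assumptions for illustration, not the actual {{{time::Control}}} API:
{{{
#include <functional>
#include <iostream>

template<class TI>
class Control               // simplified stand-in for time::Control<TI>
  {
    TI* target_ = nullptr;
    std::function<void(TI const&)> signal_;

  public:
    void connect (TI& target)                          { target_ = &target; }
    void onChange (std::function<void(TI const&)> sig) { signal_ = sig; }

    // functor interface: feed a new value to the attached target entity
    void operator() (TI const& newValue)
      {
        if (!target_) return;
        *target_ = newValue;
        if (signal_) signal_ (*target_);   // notify e.g. a time display widget
      }
  };

int
main ()
  {
    long playhead = 0;                     // stand-in for a time entity
    Control<long> control;
    control.connect (playhead);
    control.onChange ([](long const& t){ std::cout << "display: " << t << '\n'; });
    control (25);                          // a running playback feeds new positions...
    control (50);                          // ...and each change updates the attached display
  }
}}}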
The term &raquo;Time&laquo; spans a variety of vastly different entities. Within an NLE we get to deal with various //flavours of time values.//
;continuous time
:without any additional assumptions, ''points in time'' can be specified with arbitrary precision.
:the time values are just numbers; the point of reference and the meaning is implicit.
:within Lumiera, time is encoded as integral number of //micro ticks,// practically continuous
;time distance
:a range of time, a ''distance'' on the time axis, measured with the same arbitrary precision as time points.
:distances can be determined by //subtracting// two time points, consequently they are //signed numbers.//
;offset
:a distance can be used to adjust (offset) a time or a duration: this means applying a relative change.
:the //target// of an offset operation is a time or duration, while its //argument// is a distance (synonymous to offset).
;duration
:the length of a time range yields a ''time metric''.
:the duration can be defined as the //absolute value//&nbsp; of the offset between start and endpoint of the time range.
:a duration always abstracts from the time //when// this duration or distance happens, the relation to any time scale remains implicit
;time span
:contrary to a mere duration, a ''time interval'' or time span is actually //anchored// at a specific point in time.
:it can be seen as a //special kind of duration,// which explicitly states the information //when// this time span takes place.
;changing time
:Time values are //immutable,// like numbers. Only a ''time variable'' can be changed.
:yet some of the special time entities can receive [[mutation messages|TimeMutation]], allowing e.g. for adjustments to a time interval selection from the GUI

;internal time
:While the basic continuous time values don't imply any commitment regarding the time scale and origin to be used, actually, within the implementation of the application, the meaning of time values is uniform and free of contradictions. Thus effectively there is an ''implementation time scale'' -- but its scope of validity is //strictly limited to the implementation level of a single application instance.// It is never exposed and never persisted. It might not be reproducible over multiple instantiations of the application. The implementation reserves the right to recalibrate this internal scale. Later, when Lumiera gains the capability to run within a network of render nodes, these instance connections will include a negotiation about the internal time scale, which remains completely opaque to the outer world. This explains why {{{lumiera::Time}}} instances lack the ability to show their time value beyond debugging purposes. This is to avoid confusion and to stress their opaque nature.
;wall clock and system time
:The core property of any external real world time is that it is //running// -- we have to synchronise to an external time source.
:This implies the presence of a //running synchronisation process,// with the authority to adjust the time base;
:contrast this to the internal time, which is static and unconnected -- 
;quantised time
:The ''act of quantisation'' transforms a continuous property into a ''discrete'' structure. Prominent examples can be found in the domain of micro physics and with digital information processing. In a broader sense, any measurement or //quantification// also encompasses a quantisation. Regarding time and time measurement, quantisation means alignment to a predefined ''time grid''. Quantisation necessarily is an //irreversible process// -- possible additional information is discarded.
:Note that quantisation introduces a ''time origin'' and a ''reference scale''
;frame count
:within the context of film and media editing, the specification of a ''frame number'' is an especially important instance of quantisation.
:all the properties of quantisation indeed apply to this special case: it is a time measurement or specification, where the values are aligned to a grid, and there is a reference time point where the counting starts (origin) and a reference scale (frames per second) -- see also the sketch below this list. Handling of quantised time values in Lumiera is defined such as to ensure the presence of all those bits of information. Without such precautions, operating with bare frame numbers lends itself to all kinds of confusions, mismatches, quantisation errors and unnecessary limitations of functionality.
;timecode
:Quantisation also is the foundation of all kinds of formalised time specifications
:actually even a frame count is some kind of (informal) timecode -- other timecodes employ a standardised format.
://every// presentation of time values and every persistent storage and exchange of such values is based on time codes.
:yet quantisation and time code aren't identical: a given quantised time value typically can be cast into multiple timecode formats.
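As announced under //frame count// above: against a simple regular grid, quantisation boils down to a small formula mapping a continuous time point onto a frame number. The following is a sketch only, assuming a regular grid defined by origin and frame rate -- real grids in Lumiera may be compound and non-linear:
{{{
#include <cmath>
#include <cstdint>
#include <cassert>

// a simple regular time grid: origin plus constant frame rate
struct Grid
  {
    double origin;    // seconds, position of frame #0
    double fps;       // frames per second

    // quantisation: discard sub-frame information, keep an explicit frame number
    int64_t frameNr (double time)  const
      {
        return int64_t (std::floor ((time - origin) * fps));
      }

    // the inverse only recovers the grid point, not the original value
    double timeOf (int64_t frame)  const
      {
        return origin + frame / fps;
      }
  };

int
main ()
  {
    Grid pal {0.0, 25.0};                  // 25fps grid anchored at 0
    assert (pal.frameNr (1.02) == 25);     // 1.02s falls into frame 25
    assert (pal.timeOf (25) == 1.0);       // quantisation discarded the extra 0.02s
  }
}}}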

!Patterns for handling quantised time
When it comes to actually handling quantised time values, several patterns are conceivable for dealing with the quantisation operation and representing quantised data. As guideline for judging these patterns, the general properties of time quantisation, as detailed above, should be taken into account. Quantising a time value means both //discarding information,// while at the same time //adding explicit information// pertaining the assumptions of the context.

__casual handling__: this is rather a frequently encountered ''anti pattern''. When reading such code, most striking is the sense of general unawareness of the problem, which is then "discovered" on a per-case basis, which leads to numerous repetitions of the same basic undertakings, but done with individual treatment of each instance (not so much copy-n-paste). Typical code smells:
* the rounding, modulo and subtract-base operations pertinent with scale handling are seemingly inserted as bugfix
* local code path forks to circumvent or compensate for otherwise hard wired calculations based on specific ways to invoke a function
* playing strikingly clever tricks or employing heuristics to "figure out" the missing scale information from accessible context after the fact
* advertising support for some of the conceivable cases as special feature, or adding it as plugin or extension module with limited scope
* linking parts of the necessary additional information to completely unrelated other structures, thus causing code tangling and scattering
* result or behaviour of calculations depends on the way things are set up in a seemingly contingent way, forcing users to stick to very specific procedures and ordered steps.
[>img[Time and Time Quantisation in Lumiera|uml/fig142725.png]]
__static typing__: an analysis of the cases to be expected establishes common patterns and some base cases, which are then represented by distinct types with well established conversions. This can be combined with generic programming for the common parts. Close to the data input, a factory establishes these statically typed values.

__tagged values__: quantised values are explicitly created out of continuous values by a quantiser entity. These quantised data values contain a copy of the original data, adjusted to be exactly aligned with respect to the underlying time grid. In addition, they carry a tag or ID to denote the respective scale, grid or timecode system. This tag can be used later on to assess compatibility or to recast values into another timecode system.

__delayed quantisation__: with this approach, the information loss is delayed as long as possible. Quantised time values are rather treated as promise for quantisation, while the actual time data remains unaltered. Additionally, they carry a tag, or even a direct link to the responsible quantiser instance. Effectively, these are specialised time values, instances of a sub-concept, able to stand-in for general time values, but exposing additional accessors to get a quantised value.
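A compressed sketch contrasting the //tagged values// and the //delayed quantisation// pattern -- the types and members here are invented for illustration and merely mirror the shape of the actual {{{QuTime}}}/quantiser collaboration:
{{{
#include <memory>
#include <string>
#include <cstdint>

using TimeValue = int64_t;     // micro ticks, stand-in for the internal time value

struct Quantiser               // knows one concrete time grid
  {
    std::string gridID;
    TimeValue   frameDur;      // grid spacing in micro ticks (regular grid assumed)

    // align downwards onto the grid (negative values ignored for brevity)
    TimeValue align (TimeValue raw)  const { return raw - raw % frameDur; }
  };

// tagged value: data is aligned immediately, only the grid tag is retained
struct TaggedTime
  {
    TimeValue   value;         // already aligned to the grid
    std::string gridTag;       // allows later compatibility checks or re-casting
  };

// delayed quantisation: raw value retained, quantisation remains a promise
struct QuTime
  {
    TimeValue raw;                           // unaltered original value
    std::shared_ptr<Quantiser> quantiser;    // link to the grid to apply later

    TimeValue materialise()  const { return quantiser->align (raw); }  // the loss happens here
  };
}}}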

!!!discussion
For Lumiera, the static typing approach is of limited value -- it excels when values belonging to different scales are actually treated differently. There are such cases, but rather on the data handling level, e.g. sound samples are always handled block wise. But regarding time values, the unifying aspect is more important, which leads to preferring a dynamic (run time typed) approach, while //erasing// the special differences most of the time. Yet the dynamic and open nature of the Lumiera high-level model favours the delayed quantisation pattern; the same values may require different quantisation depending on the larger model context an object is encountered in. This solution might be too general and heavyweight at times though. Thus, for important special cases, the accessors should return tagged values, preferably even with differing static type. Time codes can be integrated this way, but most notably the ''frame numbers'' used for addressing throughout the Vault can be implemented as such specifically typed tagged values; the tag here denotes the quantiser and thus the underlying grid -- it should be implemented as a hash-ID for smooth integration with code written in plain C.

At the level of individual timecode formats, we're lacking a common denominator; thus it is preferable to work with different concrete timecode classes through //generic programming.// This way, each timecode format can expose operations specific only to the given format. Especially, different timecode formats expose different //component fields,// modelled by the generic ''Digxel'' concept. There is a common baseclass ~TCode though, which can be used as marker or for //type erasure.//
&rarr; more on [[usage situations|TimeUsage]]
&rarr; Timecode [[format and quantisation|TimecodeFormat]]
&rarr; Quantiser [[implementation details|QuantiserImpl]]
the following collection of usage situations helps to shape the details of the time values and time quantisation design. &rarr; see also  [[time quantisation|TimeQuant]]

;time position of an object
:indeed the term "time position" encompasses two quite different questions
:* a time or timing specification within the object
:* determining the time point in reference to an existing scale
;time and length of an object
:basically the same situation, but length could be treated in two ways for quantisation
:* having a precise specification and then quantise the start and endpoint
:* quantise the start position and then establish an (independently quantised) length
;moving and resizing an object
:this can in itself be done in two different ways, and each of them can be applied in a quantised flavour
:which sums up to 8 possible combinations, considering that position and length are 2 degrees of freedom.
:* a variable can be //changed// by an offset
:* a variable can be //defined// to a new value
:another (hidden) degree of freedom lies in how to apply a quantised offset to an unquantised value (and reversed)
:because this operation might be done both in the quantised or non-quantised domain, and also the result might be (un)quantised
;updating the playback position
:this can be seen as a practical application of the above; basically we can choose to show the wall clock time or we can advance the playback position in frame increments, thus denoting the frame currently in display. For video, these distinctions may look moot, but they are indeed relevant for precise audio editing, especially when combined with loop playback (recall that audio is processed block wise, but the individual sample frames and thus the possible loop positions are way finer than the processing block size)
;dispatching individual frames for calculation
:when a [[render or playback process|PlayProcess]] is created, at some point we need to translate this logical unit ("calculation stream") into individual frame job entries. This requires breaking continuous time into individual frames, and then enumerating these frames.
;displaying time intervals
:for display, time intervals get //re-quantised// into display array coordinates.
:While evidently the display coordinates are themselves quantised and we obviously don't want to cancel out the effect of a quantisation of the values or intervals to be displayed (which means, we get two quantisations chained up after each other), there remains the question if the display array coordinates should be aligned to the grid of the //elements to be displayed,// and especially if the allowed zoom factors should be limited. This decision isn't an easy one, as it has an immediate and tangible effect on what can be shown, how reversible and reproducible a view is and (especially note this!) on the actual values which can be set and changed through the GUI.
;time value arithmetic
:Client code as well as the internal implementation of time handling needs to do arithmetic operations with time values. Time values are additive and totally ordered. Distance, as calculated by subtraction, can be made into a metric. Another and quite different question is to what extent a quantised variant of this arithmetics is required.
;relative placement
:because of the divergence between quantised and unquantised values, the question arises, if placement relative to another object refers to the raw position or the already quantised position. Basically all the variations discussed for //time and length of an object// also do apply here.

!notable issues
''Direct quantisation of length is not possible''. This is due to the non-linear nature of all but the most trivial time grids: Already such a simple addition like a start offset destroys linearity, and this is all the more true within a compound grid where the grid spacing changes at some point. Thus, the length has to be re-established at the target position of a time interval after each change involving quantisation. Regarding the //strategy// to apply when re-establishing the length, it seems more appropriate to treat the object as an entity which is moved, which means to do quantisation in two steps, first the position, then the endpoint (the second option in the description above). But it seems advisable not to hard wire that strategy -- better put it into the quantiser.

We should note that the problems regarding quantised durations also carry over to //offsets:// it is difficult to ''define the semantics of a quantised offset''. Seemingly the only viable approach is to have an //intended offset,// and then to apply a re-quantisation to the target after applying the (raw) offset.
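In code, the //intended offset// approach boils down to: move in the raw (continuous) domain first, and re-quantise only the result at the target position. A sketch with invented helper names, assuming a simple regular grid:
{{{
#include <cmath>
#include <cstdint>

// sketch: semantics of applying an "intended offset" to a grid-aligned position
int64_t
applyIntendedOffset (double start, double rawOffset, double fps)
  {
    double moved = start + rawOffset;             // apply the raw offset, no quantisation yet
    return int64_t (std::floor (moved * fps));    // re-quantise only the result: frame number at target
  }
}}}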

''When to materialise a quantisation''. Because of the basic intention to retain information, we delay actually applying the quantisation to the stored values as much as possible. But not materialising immediately at quantisation has the downside of possibly accumulating off-grid values without that being evident. Most notably, if we apply the raw offsets produced by GUI interactions, the objects' positions and lengths are bound to accumulate spurious information never intended by the user.

Thus, an especially important instance of that problem is ''how to deal with updates in a quantised environment''. If we handle quantisation strictly as a view employed on output, we run into the problems with accumulating spurious information. On the other hand, allowing for quantised changes inevitably pulls in all the complexity of mixing quantised and non-quantised values. It would be desirable somehow to move these distinctions out of the scope of this design and offload them onto the client (code using these time classes).

Another closely related problem is ''when to allow [[mutations|TimeMutation]]'', if at all. We can't completely do away with mutations, simply because we don't have a pure functional language at our disposal. The whole concept of //reference semantics// doesn't play so well with immutable objects. The Lumiera high-level (session) model certainly relies on objects intended to be //manipulated.// Thus we need a re-settable length field in {{{MObject}}} and we need a time variable for position calculations. Yet we could make any //derived objects// into immutable descriptor records, which certainly helps with parallelism.

The ''problem with playback position'' is -- that indeed it's an attempt to conceptualise a non-existing entity. There is no such thing as "the" playback position. Yet most applications I'm aware of //do// employ this concept. Likely they got trapped by the metaphor of the tape head, again. We should do away with that. On playback, we should show a //projection of wall-clock time onto the expected playback range// -- not more, not less. It should be acknowledged that there is //no direct link to the ongoing playback processes,// besides the fact that they're assumed to sync to wall-clock time as well. Recall, typically there are multiple playback processes going on in compound, and each might run on a different update rate. If we really want a //visual out-of-sync indicator,// we should treat that as a separate reporting facility and display it apart from the playback cursor.

An interesting point to note for the ''frame dispatch step'' is the fact that in this case quantised values and quantisation are approached in the reverse direction, compared with the other uses. Here, after establishing a start point on the time scale, we proceed with enumerating distinct frames and later on need to access the corresponding raw time, especially to find out about the [[timeline segment|Segmentation]] to address, or for retrieving parameter automation. &rarr; FrameDispatcher.

Note that the ''display window might be treated as just an independent instance of quantisation''. This is similar to the approach taken above for modifying quantised time span values. We should provide a special kind of time grid, the display coordinates. The origin of these is always defined to the left (lower) side of the interval to be displayed, and they are gauged in screen units (pixels or similar, as used by the GUI toolkit set). The rest is handled by the general quantisation mechanisms. The problem of aligning the display should be transformed into a general facility to align grids, and solved for the general case. Doing so solves the remaining problems with quantised value changes and with ''specifying relative placements'' as well: If we choose to represent them as quantised values, we might (or might not) also choose to apply this //grid-alignment function.//

!the complete time value usage cycle
The way time value and quantisation handling is designed in Lumiera creates a typical usage path, which actually is a one-way route. We might start out with a textual representation according to a specific ''timecode'' format. Assuming we know the implicit underlying ''time grid'' (coordinate system, framerate), this timecode string may be parsed. This brings us (back) to the very origin, which is a raw ~TimeValue (''internal time'' value). Now, this time value might be manipulated, compared to other values, combined into a ''time span'' (time point and duration -- the most basic notion of an //object// to be manipulated in the Session). Anyway, at some point these time values need to be related to some ''time scale'' again, leading to ''quantised'' time values, which -- finally -- can be cast into a timecode format for external representation again, thus closing the circle.
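The same cycle, condensed into code with placeholder helpers -- this is //not// the actual Lumiera API, just the shape of the one-way route, assuming a frame-count timecode and a 25fps grid:
{{{
#include <cstdint>
#include <string>
#include <cassert>

// placeholder helpers -- names and signatures invented for illustration
using TimeValue = int64_t;                    // micro ticks (1 000 000 per second assumed)
const TimeValue TICKS_PER_SEC = 1000000;

TimeValue
parseFrameTC (std::string tc, double fps)     // timecode string -> raw time (needs the grid!)
  {
    long frameNr = std::stol (tc);
    return TimeValue (frameNr / fps * TICKS_PER_SEC);
  }

std::string
formatFrameTC (TimeValue raw, double fps)     // quantise + render as frame-count timecode
  {
    long frameNr = long (raw * fps / TICKS_PER_SEC);
    return std::to_string (frameNr);
  }

int
main ()
  {
    TimeValue t = parseFrameTC ("50", 25);    // 50 frames @25fps -> 2 seconds internal time
    t += TICKS_PER_SEC / 2;                   // manipulate the raw value (+0.5s)
    assert (formatFrameTC (t, 25) == "62");   // back onto the grid: 2.5s @25fps = frame 62
  }
}}}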

!substantial problems to be solved
* how to [[align multiple grids|TimeGridAlignment]]
* how to integrate [[modifications of quantised values|TimeMutation]].
* how to isolate the Time/Quantisation part from the grid MetaAsset in the session &rarr; we use the [[Advice]] system
* how to design the relation of Timecode, Timecode formatting and Quantisation &rarr; [[more here|TimecodeFormat]]
The handling of [[Timecode]] is closely related to [[time representation and quantisation|TimeQuant]]. In fact, these two topics blend into one another. Time will be quantised into a //grid,// but this grid only makes sense when linked to an externally relevant meaning and representation, which is the Timecode. But a timecode value also denotes a specific point in time -- performing operations on a timecode is equivalent to manipulating a quantised time value.

!Problem of dependencies
The general design of time and time quantisation in Lumiera requires us to do the actual quantisation as late as possible. The relation of the time-like entities creates a //one way route:// starting from the ''internal time'', which is contiguous and definite, but completely opaque for the client, the usage proceeds to a ''quantised time value'', requiring the specification of a concrete ''time grid''. Now, the possible ''timecode formats'' can be determined, and committing to one specific timecode format finally allows us to get an explicit, number-like value, e.g. the frame count, or the seconds part of a SMPTE Timecode.

In this handling sequence, the ~QuTime with the embedded link to a ''quantiser'' plays the role of a coordinating hub.
* a quantised time element depends on a concrete quantiser, which might be fetched by symbolic ID.
* the quantiser draws upon a configured collection of concrete timecode ''formats''.
* only the concrete format knows how to build the components of a value representation in that format, but in doing so, has to rely on the primitives exposed by the quantiser.
* the concrete individual ''timecode value'' is comprised of several value-like components called ''Digxel'' (generalised digits)
* the timecode value can be //(re)-built,// assuming there is a concrete quantised time and a format and a quantiser

!Usage and operations
Timecode //is a value.// It has an explicit and distinct structure, and the component parts can be accessed as plain numeric values, or in a formatted string representation. To this end, these values need to be (re)-built. This operation -- building the value(s) -- is what effectively //materialises// the quantisation: At this point, the actual rounding and truncating operation is performed.

But timecode values can also be //mutated.// In this respect they differ from a plain ~TimeValue, which is immutable like a number.
* the individual components (digxels) can be assigned, causing the rebuilding of the timecode
* the value as a whole can be incremented, decremented or be offset.

And last but not least, it is possible to get a new ~TimeValue, reflecting the current (quantised) time of this timecode. This closes the circle of time handling and quantisation; actually this is the usual way to enter a new time value into the system, by starting out from an explicitly specified grid aligned timecode value.

!!{{red{WIP 1/11}}}design tasks
* @@color:green;✔@@ find out about the connection to the MetaAsset &rarr; [[Advice]]
* @@color:green;✔@@ determine the primitives which need to be on the //effective quantiser API.//
* @@color:green;✔@@ find out how a format can address the individual components it's comprised of &rarr; direct access by concrete type
* @@color:green;✔@@ decide how a concrete TC value can refer to his quantiser &rarr; always using smart-ptr
* @@color:green;✔@@ decide on how to handle wrap-around and negative values &rarr; by strategy

!negative values and wrap-around
Many timecode formats were defined in the era of analogue media -- typically the key point of a timecode format was the way it can be encoded into some nits and bits within the available channel bandwidth. Often this causes the use of fixed field length representations, imposing hard limits on the number range. Adhering strictly to such ancient specifications in the context of a general purpose computer program usually results in lots of additional complexity without a fundamental reason.

When in doubt, for the design of Lumiera we tend to err on the side of the basic mathematical definition. For time values this means to use a practically unlimited time axis with an arbitrary time origin. Under these assumptions, negative time values are just natural and will be handled without case distinctions. This way, at least the core is kept simple and straightforward. Which leaves us with some of the aforementioned additional complexities showing up when it comes to interfacing with an existing timecode format. Especially, the following points need to be clarified:
* wrap-around: when a timecode component (e.g. the seconds) exceeds the defined range, this creates a propagation (e.g. to the minutes) and a remainder (wrapped seconds value).
* range limitation: what happens when an internal time value exceeds the range of possible timecode values? (e.g. what is 23:59:59 + 70 frames?)
* how to extend the definition to negative values, if applicable.

Especially ''SMPTE Timecode'' is limited to the range from 0:0:0:0 to 23:59:59:##.
But because for Lumiera the SMPTE format is just a presentation rule applied to a more orthogonal internal representation, we could think of extending the allowed values...
# by just letting the hours increase arbitrarily
# by letting the timecode wrap from 23:59:59:## to 0:0:0:0 and treat this junction as continuous (effectively a "modulo 1 day").
# by making the hours-field signed, but otherwise retaining the same wrapping scheme (0:0:0:0 - 1sec = -1:59:59:0)
# by a representation symmetrical to the zero point (0:0:0:0 - 1sec = -0:0:1:0) -- but beware: //the underlying frame quantisation won't flip//
Because this decision is considered arbitrary and any reasoning will be context dependent, the decision is to provide a hook for a //strategy// --
Currently (1/11), the strategy is implemented according to (1) and (4) above, leaving the actual configuration of a strategy for later.
!!{{red{WIP 1/11}}}BUG
Implementation of this strategy is still broken: it doesn't work properly when actually the change passing over the zero point happens by propagation from lower digits. Because then -- given the way the mutators are implemented -- the //new value of the wrapping digit hasn't been stored.// It seems the only sensible solution is to change the definition of the functors, so that any value will be changed by side-effect {{red{Question 4/11 -- isn't this done and fixed by now??}}}
Timeline is the top level element within the [[Session (Project)|Session]]. It is visible within a [[timeline view in the GUI|GuiTimelineView]] and represents the effective (resulting) arrangement of media objects, to be rendered for output or viewed in a Monitor (viewer window). A timeline is comprised of:
* a time axis in absolute time ({{red{WIP 1/10}}}: not clear if this is an entity or just a conceptual definition) 
* a list of [[global Pipes|GlobalPipe]] representing the possible outputs (master busses)
* //exactly one// top-level [[Sequence]], which in turn may contain further nested Sequences.
* when used for Playback, a ViewConnection is necessary, allowing to get or connect to a PlayController

Please note especially that following this design //a timeline doesn't define tracks.// [[Tracks form a Tree|Fork]] and are part of the individual sequences, together with the media objects placed to this //track fork.// Thus sequences are independent entities which may exist stand-alone within the model, while a timeline is //always bound to hold a sequence.// &rarr; see ModelDependencies
[>img[Fundamental object relations used in the session|uml/fig136453.png]]

Within the Project, there may be ''multiple timelines'', to be viewed and rendered independently. But, being the top-level entities, multiple timelines may not be combined further. You can always just render (or view) one specific timeline. A given sequence may be referred directly or indirectly from multiple timelines though. A given timeline is represented within the GUI according to [[distinct principles and conventions|GuiTimelineView]]

''Note'': in early drafts of the design (2007) there was an entity called "Timeline" within the [[Fixture]]. This entity seems superfluous and has been dropped. It never got any relevance in existing code and at most was used in some code comments.

!Façade and implementation
Actually, Timeline is both an interface and acts as façade. It's an interface, because we'll need "timeline views" ({{red{really? is that a reason to create a hierarchy right here, or shouldn't that be rather conceptual?}}}). It is a façade to the raw structures in the model, in this case a {{{Placement<BindingMO>}}} attached immediately below the [[root scope|ModelRootMO]]. The implementation of the timeline(s) is maintained as StructAsset within the AssetManager, managed by shared-ptr. It always depends on a [[Sequence]], which might be created automatically as empty container when referring to a timeline.

Besides building on the asset management, implementing Timeline (and Sequence) as StructAsset yields another benefit: ~StructAssets can be retrieved by query, allowing to specify more details of the configuration immediately on creation. //But on the short term, this approach causes problems:// there is no real inference engine integrated into Lumiera yet (as of 2/2010 the plan is to get an early alpha working end to end first). For now we're bound to use the {{{fake-configrules}}} and to rely on a hard wired simulation of the intended behaviour of a real query resolution. Just some special magic queries will work for now, but that's enough to get ahead.
As [[detailed here|GuiTimelineSlave]], there are various good reasons to provide several UI views onto the same timeline. Yet taking an architectural viewpoint, we prefer representing such a slave display attachment as first-class citizen, right within the session model. Some further problems remain to be settled
* the primary Timeline entity needs to be aware of the clone's presence, since any effects of a Builder run must be propagated through the latter
* we need a way to duplicate the diff feed, so the UI element representing the clone gets notified appropriately
* and even when following this design approach with a //materialised duplication,// we still need some awareness and collaboration among the UI elements involved

This topic is {{red{postponed as of 10/2018}}} &rarr; [[#1083|http://issues.lumiera.org/ticket/1083]]
//guide and control the concrete display properties of the various sub components (tracks, clips) comprising a timeline display.//

The »Display Manager« of the timeline actually is an abstraction, a control interface, revolving around the guidance individual components need in order to settle on a proper display. Those components themselves are represented as mediating entities, the TrackPresenter and the ClipPresenter, each of which controls and manages a mostly passive GTK widget. To this end, the presenters need to know at which virtual coordinates their display widgets would show up, and they should be aware if these corresponding widgets need actually to be visible at the moment. Moreover, especially the ClipPresenter has to choose the appropriate  &rarr;[[»appearance style«|GuiClipWidget]] for the corresponding slave widget.

!display evaluation pass
[>img[Clip presentation control|uml/Timeline-display-evaluation.png]]Since the timeline display is formed by several nested collections of elements, a proper display layout must incorporate information from all those parts. A naive approach likely would just iterate over them and reach into their innards in order to extract the necessary data -- or even worse, build a separate display data store, which then must be //kept in sync// with the component hierarchy proper. Any of these naive coding approaches would lead to high coupling and turn later adjustments and necessary changes into a liability, due to evolution of requirements. In the long run, such a dangerous situation can only be mitigated by a collaboration of self contained local structures. And, to get there, we need to distil the common ground shared between the global and the local concerns, and reshape it into some kind of invariant -- allowing a common pattern to re-emerge at several levels of locality.

The purpose of the display evaluation pass is to make the timeline display globally coherent. Parts of our display are implemented as GTK compound widgets -- and GTK ensures a proper layout within these parts. However, our layout creates interconnections and dependencies beyond what can be expressed with GTK building blocks. Most notably, the track space must be aligned horizontally, both in header and track body display, while each individual track has to accommodate the extension of its various contents. And since clips might in turn contain nested structures, we cannot hope to achieve a finished layout just in one pass, since some of our local adjustments in turn need to be re-evaluated by the GTK space allocation mechanism. So we end up with several sweeping passes over our timeline control structure, each collecting data and //gradually increasing the allocated space.// This kind of //monotonic adjustment// is key to avoiding oscillations and guaranteeing a quick convergence of this iterative space allocation process. Each pass visits the timeline UI components in //strict layout order// (from top down and from left to right), triggering a wave of size adjustments to embedded widgets, which, after being accommodated by GTK into appropriate local layout, will in turn trigger our custom drawing code eventually -- at which point the next display evaluation pass is hooked in, continuing this cycle until convergence is achieved.
!!!Trigger
The //Display Evaluation Pass// is triggered by various events from different sources
* after any //structural change// by MutationMessage
* when any Widget within the [[nested timeline display|GuiTimelineView]] detects the necessity of resizing
* after notification from the ZoomWindow control to indicate a change for any of the //display metric parameters//

!mapping service for the widgets
Another, closely related topic, handled within this context, is the mapping from model structures and units into display layout coordinates. Within the model, we can distinguish several dimensions (degrees of freedom). For one, there is the topology, the ''location or part'' of the model to expose through the UI. Moreover, there is the ''degree of detail'' for this UI representation. And then, there is the ''temporal position and extension'' to represent.

By principle, //widgets always work exclusively in pixel coordinates// -- thus we need the help of a coordinating entity to find out where some entity shall be located on the GTK drawing canvas. This becomes especially relevant for the timeline body, since there we're relying on a GuiCustomWidget to perform and control some of the UI drawing by specific custom rules and procedures. And to add to this complexity, several structures are //nested,// thus anchoring their coordinates relative to the parent entity. These challenges are resolved by the introduction of yet another abstraction, the [[»canvas interface«|GuiCanvasInterface]], allowing to attach widgets relative to a //reference canvas// and to maintain uniform coordinates and a [[zoom metric|ZoomWindow]] used for a properly calibrated timeline display.
//The TimelineNavigator is a small widget at the left top side of the timeline display and allows to jump and control the current location.//

{{red{Unspecified and implementation postponed for later, as of 2/2023}}}
//The »''Patchbay''« at the left of the timeline provides controls to influence the scopes within the fork of tracks, most notably levels, mix-mode and routing.//
Structurally, it reflects the nesting of scopes in the ''Fork'' of tracks; each track is represented as a TrackHead widget, with sub-Tracks displayed recursively.
There is a three-level hierarchy: [[Project|Session]], [[Timeline]], [[Sequence]]. Each project can contain ''multiple timelines'', to be viewed and rendered independently. But, being the top-level entities, these timelines may not be combined further. You can always just render (or view) one specific timeline. Each of those timelines refers to a Sequence, which is a bunch of [[media objects|MObject]] placed to a [[fork ("tree of tracks")|Fork]]. Of course it is possible to use ~sub-sequences within the top-level sequence within a timeline to organize a movie into several scenes or chapters. 

[>img[Relation of Timelines, Sequences and MObjects within the Project|uml/fig132741.png]]
As stated in the [[definition|Timeline]], a timeline refers to exactly one sequence, and the latter defines a [[tree of tracks|Fork]] and a bunch of media objects placed to these tracks. A Sequence may optionally also contain nested sequences as [[meta-clips|VirtualClip]]. Moreover, obviously several timelines (top-level entities) may refer to the same Sequence without problems.
This is because the top-level entities (Timelines) are not permitted to be combined further. You may play or render a given timeline, you may even play several timelines simultaneously in different monitor windows, and these different timelines may incorporate the same sequence in a different way. The Sequence just defines the relations between some objects and may be placed relatively to another object (clip, label,...) or similar reference point, or even anchored at an absolute time if desired. In a similar open fashion, within the track-tree of a sequence, we may define a specific signal routing, or we may just fall back to automatic output wiring.

!Attaching output
The Timeline owns a list of global [[pipes (busses)|Pipe]] which are used to collect output. If the track tree of a sequence doesn't contain specific routing advice, then connections will be done directly to these global pipes in order and by matching StreamType (i.e. typically video to video master, audio to stereo audio master). When a monitor (viewer window) is attached to this timeline, similar output connections are made from those global pipes, i.e. the video display will take the contents of the first video (master) bus, and the first stereo audio pipe will be pulled and sent to system audio out. The timeline owns a ''play control'' shared by all attached viewers and coordinating the rendering-for-viewing. Similarly, a render task may be attached to the timeline to pull the pipes needed for a given kind of generated output. The actual implementation of the play controller and the coordination of render tasks is located in the Vault, which uses the service of the Steam-Layer to pull the respective exit nodes of the render engine network.

!Timeline versus Timeline View
Actually, what the [[GUI creates and uses|GuiTimelineView]] is the //view// of a given timeline. This makes no difference to start with, as the view is modelled to be a sub-concept of "timeline" and thus can stand-in. All different views of the //same// timeline also share one single play control instance, i.e. they all have one single playhead position. Doing it this way should be the default, because it's the least confusing. Anyway, it's also possible to create multiple //independent timelines// &mdash; in an extreme case even so when referring to the same top-level sequence. This configuration gives the ability to play the same arrangement in parallel with multiple independent play controllers (and thus independent playhead positions)

To complement these possibilities, I'd propose to give the //timeline view// the possibility to be re-linked to a sub-sequence. This way, it would stay connected to the main play control, but at the same time show a sub-sequence //in the way it will be treated as embedded// within the top-level sequence. This would be the default operation mode when a meta-clip is opened (and shown in a separate tab with such a linked timeline view). The reason for this proposed handling is again to give the user the least surprising behaviour. Because, when &mdash; on the contrary &mdash; the sub-sequence would be opened as //separate timeline,// a different absolute time position and a different signal routing may result; doing such should be reserved for advanced use, e.g. when multiple editors cooperate on a single project and a sequence has to be prepared in isolation prior to being integrated in the global sequence (featuring the whole movie).
//Timing constraints of a render or playback process.//

!constituents
The Timings record is used at various places at the engine interface level, since it links together several pieces of information crucial for playback control.
* a reference to the underlying frame grid.
** the frame grid defines time origin and framerate
** additionally the grid translates nominal time into frame numbers
* the Timings record provides information about latency -- both output latency and engine latency are exposed here
* indication of the service class or playback urgency is included, e.g. {{{ASAP, NICE, TIMEBOUND}}}
* from the usage view the Timings record represents a //constraint// -- with the ability to specify additional constraining conditions
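As an illustration, the following is a minimal structural sketch of how these constituents could be grouped into one record; all type and member names are hypothetical stand-ins and do not reproduce the actual engine interface.
{{{
// Hypothetical sketch of a Timings-like constraint record (names are illustrative only)

#include <cstdint>
#include <memory>

enum class ServiceClass { ASAP, NICE, TIMEBOUND };

struct FrameGrid               // defines time origin and framerate
{
    int64_t origin_us;         // nominal time origin (microseconds)
    double  fps;               // frames per second

    int64_t frameNr (int64_t nominalTime_us)  const
    {   // translate nominal time into a frame number
        return int64_t ((nominalTime_us - origin_us) * fps / 1000000);
    }
};

struct Timings                 // constraint record used at the engine interface
{
    std::shared_ptr<const FrameGrid> grid;   // underlying frame grid
    int64_t outputLatency_us;                // latency of the output sink
    int64_t engineLatency_us;                // latency added by the engine
    ServiceClass urgency;                    // playback urgency / service class
};
}}}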

!!!!role of the ~TimeAnchor
While the timings define the nature of a specific playback feed, they omit the actual link to wall clock time. The latter is established and maintained during the planning process, which proceeds in chunks of evaluation: each planning chunk is pinned to a specific real time deadline through the ''time anchor'' used as closure to define this planning chunk. Whenever a chunk of frames has been planned, a new time anchor is established and scheduled as a follow-up job to pick up the frame planning work.

note: still {{red{WIP as of 1/2013}}}

!where are the timings defined?
Timing constraint records are used at various places within the engine. Basically we need such a timings record for starting any kind of playback or render.
The effective timings are established when //allocating an OutputSlot// -- based on the timings defined for the ModelPort to be //performed,// i.e. the global bus to render.
What //exactly&nbsp;// is denoted by &raquo;Track&laquo; &mdash; //basically&nbsp;// a working area to group media objects placed onto this track at various time positions &mdash; varies depending on context:
* viewed as [[structural asset|StructAsset]], tracks are nothing but global identifiers (possibly with attached tags and description)
* regarding the structure //within each [[Sequence]],// tracks form a tree-like grid, the individual track being attached to this tree by a [[Placement]], thus setting up properties of placement (time reference origin, output connection, layer, pan) which will be inherited down to any objects located on this track and on child tracks, if not overridden more locally.
* with respect to //object identity,// a given track-ID can have an incarnation or manifestation as real track-object within several [[sequences|Sequence]] (meaning you could select, disable or choose for rendering all objects in any sequence placed onto this track). Moreover, the track-//object// and the //placement&nbsp;// of this track within the tree of tracks of a given sequence are two distinguishable entities (meaning a given track &mdash; with a couple of objects located on it &mdash; could be placed differently several times within the same sequence {{red{really??}}}, for example with different start offset or with different layering, output mode or pan position)
{{red{3/2010: no!}}} &mdash; it doesn't work this way. Relative placements attach to other placements, not ~MObjects. Thus the clips in this example are within the scope of //one// placement. We could create //another// placement of the same track and attach it to a different sequence, but this other placement wouldn't "inherit" the clips attached to the first one!

!Identification
Tracks thus represent a blend of several concepts, but depending on the context it is always clear which aspect is meant. Seen as [[assets|Asset]], tracks are known by a unique track-ID, which can be either [[queried|ConfigQuery]], or directly referred to by use of the asset-ID (which is a globally known hash). Usually, all referrals are via track-ID, including when you [[place|Placement]] an object onto a track.
Under some circumstances though, especially from within the [[Builder]], we refer to a {{{Placement<Track>}}} rather, denoting a specific instantiation located at a distinct node within the tree of tracks of a given sequence. These latter referrals are always done by direct object reference, e.g. while traversing the track tree (generally there is no way to refer to a placement by name {{red{Note 3/2010 meanwhile we have placementIDs and we have ~MObjectRef. TODO better wording}}}).

!creating tracks
Similar to [[pipes|Pipe]] and [[processing patterns|ProcPatt]], track-assets need not be created, but rather leap into existence on first referral. On the contrary, you need to explicitly create the {{{Placement<Fork>}}} for attaching it to some node within the tree of tracks of a sequence. The public access point for creating such a placement is {{{MObject::create(forkID)}}} (i.e. the ~MObjectFactory). Here, the {{{forkID}}} denotes the track-asset. This placement, as returned from the ~MObjectFactory, isn't attached to the session yet; {{{Session::current->attach(trackPlacement)}}} performs this step by creating a copy managed by the PlacementIndex and attaching it at the current QueryFocus, which is assumed to point to a track. Usually, client code would use one of the provided convenience shortcuts for this rather involved call sequence:
* the interface of the fork-~MObjects exposes a function for adding new child forks.
* the session API contains a function to attach child tracks. In both cases, existing forks can be referred to by plain textual ID.
* any MObjectRef to an object within the session allows attaching another placement or ~MObjectRef
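To illustrate the call sequence wrapped by these shortcuts, here is a minimal self-contained sketch; the types are simplified stand-ins for the actual session classes, not the real API.
{{{
// Minimal self-contained sketch of the creation sequence described above;
// all types are simplified stand-ins, not the actual Lumiera classes.

#include <string>
#include <utility>
#include <vector>

struct Fork { std::string id; };              // stand-in for the fork (track) asset

struct Placement { Fork subject; };           // stand-in for Placement<Fork>

struct Session                                // stand-in for the session facade
{
    std::vector<Placement> placementIndex;    // stand-in for the PlacementIndex

    void attach (Placement const& p)          // copies the placement into the index,
    {                                         // attaching it at the current QueryFocus
        placementIndex.push_back (p);
    }
};

// stand-in for MObject::create(forkID): the fork asset springs into
// existence on first referral; what we create explicitly is the placement
Placement createForkPlacement (std::string forkID)
{
    return Placement{ Fork{std::move (forkID)} };
}

void example (Session& session)
{
    session.attach (createForkPlacement ("video-1"));   // ≙ Session::current->attach(...)
}
}}}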

!removal
Deleting a Fork is an operation with drastic consequences, as it will cause the removal of all child forks and the deletion of //all object placements to this fork,// which could cause the respective objects to go out of scope (being deleted automatically by the placements or other smart pointer classes in charge of them). If the removed fork was the fork root ("root track") of a sequence, this sequence and any timeline or VirtualClip binding to it will be killed as well. Deletion of objects can be achieved by {{{MObjectRef::purge()}}} or {{{Session::purge(MObjectRef)}}}

!using Tracks
The ''Track Asset'' is a rather static object with limited capabilities. Its main purpose is to be a point of referral. Track assets have a description field and you may assign a list of [[tags|Tag]] to them (which could be used for binding ConfigRules). Note that track assets are globally known within the session, they can't be limited to just one [[Sequence]] (but you are always free not to refer to some track from a given sequence). By virtue of this global nature, you can utilize the track assets to enable/disable a bunch of objects irrespective of what sequence they are located in, and probably it's a good idea to allow the selection of specific tracks for rendering.
Matters are quite different for the placement of a Track within the tree of tracks of a given sequence, and for placing some media object onto a given track. The track placement defines properties which will be inherited by all objects on this track and on all child tracks and thus plays a key role for wiring the objects up to some output pipe. Typically, the top level track of each sequence has a placement to "the" video and "the" audio master pipe.

!!!!details to note
* Tracks are global, but the placement of a track is local within one sequence
* when objects are placed onto a track, this is done by referral to the global fork asset ID. But because this placement of some media object is always inherently contained within one sequence, the //meaning&nbsp;// of such a placement is to connect to the properties of any track-placement of this given track //within this sequence.//
* thus tracks-as-ID appear as something global, but tracks-as-property-carrier appear to the user as something local and object-like.
* in an extreme case, you'll add two different placements of a track at different points within the track tree of a sequence. And because the objects placed onto a track refer to the global track-ID, every object "on" this track //within this sequence&nbsp;// will show up two times independently and possibly with different inherited properties (output pipe, layering mode, pan, temporal position)
* an interesting configuration results from the fact that you can use a sequence as a [["meta clip" or "virtual clip"|VirtualClip]] nested within another sequence. In this case, you'll probably configure the tracks of the "inner" sequence such as to send their output not to a global pipe but rather to the [[source ports|ClipSourcePort]] of the virtual clip (which are effectively local pipes). Thus, within the "outer" sequence, you could attach effects to the virtual clip, combine it with transitions and place it onto another track, and any missing properties of this latter placement are to be resolved within the "outer" sequence <br/>(it would be perfectly legal to construct a contrived example using the same track-ID within the "inner" and the "outer" sequence. Because the Placement of this track will probably be different in both sequences, the behaviour of this placement could be quite different in the "inner" and the "outer" sequence. All of this may seem weird when discussed here in a textual and logical manner, but when viewed within the context and meaning of the various entities of the application, it's rather the way you'd expect it to be: you work locally and things behave as defined locally)
* note further, the root of the tree of tracks within each sequence //is itself again a// {{{Placement<Fork>}}}. There is no necessity for doing it this way, but it seemed more straightforward and logical to Ichthyo, as it allows for an easy way of configuring some things (like output connections) as a default within one sequence. As every track can have a list of child tracks, you'll get the "list of tracks" you'd expect.
* a nice consequence of the latter is: if you create a new sequence, it automatically gets one top-level track to start with, and this track will get a default configured placement (according to what is defined as [[default|DefaultsManagement]] within the current ConfigRules) &mdash; typically starting at t=0 and being plugged into the master video and master audio pipe
* nothing prevents us from putting several objects at the same temporal location within one track. If the builder can't derive any additional layering information (which could be provided by some other configuration rules), then //there is no layering precedence// &mdash; simply the object encountered first (or last) wins.
* obviously, one wants the __edit function__ used to create such an overlapping placement also to create a [[transition|TransitionsHandling]] between the overlapping objects. Meaning this edit function will automatically create a transition processor object and provide it with a placement such as to attach it to the region of overlap.
//The TrackHead display is always visible to the left of the timeline and provides controls for a Track's scope, including all nested sub tracks//
In Lumiera, »Tracks« are arranged as a system of nested scopes, starting top-down from the root-track of the Timeline. For each Track, there is a {{{timeline::TrackBody}}} element in the //content area// of the Timeline display, and a corresponding {{{timeline::TrackHeadWidget}}}, arranged at the appropriate level within the [[»Patchbay«|TimelinePatchbay]] area to the left. The //model entity »Track«// corresponds to a TrackPresenter in the timeline display, which in turn manages and synchronises both related display zones in the body and header space.

!Structure of the Track Head display
While the //content area// is rendered by [[custom drawing onto a canvas widget|GuiTimelineDraw]], the //header display// is built up „conventionally“ -- <br/>based on nested {{{Gtk::Grid}}} widgets (i.e. the {{{TrackHeadWidget}}} inherits from {{{Gtk::Grid}}}).
* For any track, this grid is comprised of two columns and will initially be populated with three rows:
**a row holding the Track Header label and menu (actually an [[ElementBoxWidget|GuiElementBoxWidget]])
**a row corresponding to the //content// of the track itself, to hold the controls  to govern this track's scope,<br/>i.e. the track //together with all nested sub-tracks.//
** a padding row to help synchronising track head and track body display.
*Additional sub-Tracks are added as additional lines to the grid, while nested sub-Tracks will be handled by nested TrackHead widgets
*The column to the left side will be increased accordingly to display the nested fork structure in the form of [[nested brackets|TrackStaveBracket]]
''towards a definition of »Track«''. We don't want to tie ourselves to some naive and overly simplistic definition, just because it is convenient. For classical (analogue) media, tracks are physical entities dictated by the nature of the process by which the media works. Especially, tape machines have read/write heads, which create fixed tracks to which to route the signals. This is a practical geometric necessity. For digital media, there is no such necessity. We are bound primarily by the editor's habits of working.

!!!Assessment of Properties
Media are used as Clips (contiguous chunks), they are a compound of several elementary streams, and they have a temporal extension (length). Indeed, the temporal dimension is the only fundamental property that can't be changed. Orthogonal to this dimension, we find one or more organisational dimensions forming a grid:
* a media stream may be sent to one of several possible output destinations (image or sound, subgroup busses, MIDI instruments)
* for any given output destination there may be variations in the //way of connecting// (overlay mode and layer, pan position, MIDI channel)
* besides, we often just want to stash away some clip without using it, e.g. as an alternative or for later referral
This is to say we have //several degrees of freedom// within this organisational grid. Just because some sound is located on this track doesn't mean it will be sent to a given output; rather, the clip is located on this track //and// is connected to that output //and// &mdash; supposing we have full-periphonic surround sound &mdash; it is located 60° to the right and with 30° elevation. Combined with the (always contiguous) temporal dimension, this discrete grid is thus extruded to form something like discrete Tracks.

!!!do we really need Tracks?
Starting with the assumption "everything is just connected processing nodes", Tracks may seem superfluous. The problem with this approach is: it doesn't scale well. While it is fine to be able to connect clips and effects as we see fit (indeed, we want to build such a system), it is clearly not feasible to wire every clip manually to the output ports or add a panner effect to each and every audio sample, because while editing, most of the time things are done in a fairly regular and systematic manner. Preferably we use the tracks as //preconfigured group setup// and just //place media onto them;// any such [[Placement]] can do the necessary wiring semi-automatically (rule-based).

!!!the constant part
There seems to be some non-time-varying part in each sequence that doesn't fit well with the simple model "objects on a timeline". Tracks seen as an organisational grid fall into this category: they are a global property of the given sequence. They could be associated with the Session as a whole, but effectively this would subvert the concept of having [[several sequences|SessionOverview]]. On the other hand,
[[pipes|Pipe]] for Video and Sound output are obviously a global property of the Session. There can be several global pipes forming a matrix of subgroup busses. We could add ports or pipes to tracks by default as well, but we don't do this, because, again, this would run counter to our attempt of treating tracks as merely organisational entities. We have special [[source ports|ClipSourcePort]] on individual clips though, and we will have ports on [[virtual clips|VirtualClip]] too.

!Design
[[Forks ("tracks")|Fork]] are just a structure used to organize the Media Objects within the session. They form a grid, and besides that, they have no special meaning. It seems convenient to make the tracks not just a list, but to allow grouping (tree structure) right from the start. __~MObjects__ are ''placed'' rather than wired. The wiring is derived from the __Placement__. Placing can happen in several dimensions:
* placing in time will define when to activate and show the object.
* placing onto a track associates the ~MObject with this track; the GUI will show it on this track and the track may be used to resolve other properties of the object.
* placing to a __Pipe__ brings the object in conjunction with this pipe for the build process. It will be considered when building the render network for this pipe. Source-like objects (clips and exit nodes of effect chains) will be connected to the pipe, while transforming objects (effects) are inserted at the pipe. (you may read "placed to pipe X" as "plug into pipe X")
* depending on the nature of the pipe and the source, placing to some pipe may create additional degrees of freedom, demanding the object to be placed in these new, additional dimensions: connecting to video out e.g. creates an overlay mode and a layer position which need to be specified, while connecting to a spatial sound system creates the necessity of a pan position. On the other hand, placing a mono clip onto a mono Pipe creates no additional degrees of freedom.
Placements are __resolved__ resulting in an ExplicitPlacement. In most cases this is just a gathering of properties, but as Placements can be incomplete and relative, there is room for real solving. The resolving mechanism tries to __derive missing properties__ from the __context__: when a clip isn't placed to some pipe but to a Track, then the Track and its parents will be inspected. If one of them has been placed to a pipe, the object will be connected to this pipe. Similarly for layers and pan position. This is done by [[Placement]] and LocatingPin; as the [[Builder]] uses ~ExplicitPlacements, it isn't concerned with this resolving and just uses the data they deliver to drive the [[basic building operations|BasicBuildingOperations]]
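As an illustration of this resolution step, the following sketch derives a missing output pipe by walking the chain of enclosing track placements; all types are simplified stand-ins, not the actual Placement or LocatingPin implementation.
{{{
// Simplified sketch of the resolution step: derive a missing output pipe
// by walking the chain of enclosing track placements. Stand-in types only.

#include <optional>
#include <string>

struct TrackPlacement
{
    std::optional<std::string> pipeID;   // output pipe set on this track, if any
    const TrackPlacement* parent;        // enclosing track in the fork tree
};

struct ClipPlacement
{
    std::optional<std::string> pipeID;   // explicit "place to pipe", if given
    const TrackPlacement* track;         // track this clip is placed onto
};

// resolve towards an ExplicitPlacement: if the clip itself carries no pipe,
// inspect the track and its parents and inherit the first pipe found
std::optional<std::string> resolveOutput (ClipPlacement const& clip)
{
    if (clip.pipeID) return clip.pipeID;
    for (auto* t = clip.track; t; t = t->parent)
        if (t->pipeID) return t->pipeID;
    return std::nullopt;                 // unresolved: may surface as a building error later
}
}}}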
&rarr; [[Definition|Fork]] and [[handling of Tracks|TrackHandling]]
&rarr; [[Definition|Pipe]] and [[handling of Pipes|PipeHandling]]
//mediating entity used to guide and control the track-like nested working space in the timeline display of the UI.//

Similar to the ClipPresenter, judged from a global angle, this element fulfils a model-like role, while at the same time guiding and controlling a mostly passive view component, implemented as a GTK widget. Here, the authority of the presenter over the widget must be total, since display management //needs to work automatically,// due to model updates and mutations arriving as [[diff messages|MutationMessage]]. In addition, this structure is a prerequisite for (possibly) implementing UI rendering optimisations, since it allows us to leave out widgets entirely when it is clear they won't become visible: A ''display evaluation pass'', which is effectively a //tree walk,// consecutively visits each part of the timeline structure, to negotiate its concrete display properties in collaboration with a global TimelineDisplayManager. As a result, the presenter knows where to show its corresponding view, and it knows whether to show it at all, allowing it to either adjust, create or destroy actual GTK widgets within its local reference frame.

A special twist arises from the fact that track display has to happen aligned and in sync within the two display panes of the timeline at the same time. This means that each TrackPresenter has to hold and manage //two slave display elements,// each of which is inserted within a disjoint hierarchy of display elements. For one, there is the {{{timeline::HeaderPaneWidget}}}, which in turn renders a [[Navigation Control|TimelineNavigator]] and below it the [[»Patchbay«|TimelinePatchbay]], holding a [[timeline::TrackHeadWidget|TrackHead]] for each track -- while on the other side there is a {{{TrackBody}}} element, which is not strictly a widget itself, but inserted into our custom drawing {{{timeline::BodyCanvasWidget}}} to manage the custom drawing of the respective track working area.

To deal with this typical problem of recursive programming, we introduce a binding element, the {{{DisplayFrame}}}
* the TrackPresenter holds a {{{DisplayFrame}}} member, which in turn houses the two head and body widgets. So basically the widgets are allocated within the presenter.
* however, after constructing the {{{DisplayFrame}}}, it invokes an anchor functor, which is passed in from the recursive outer scope, and which does the actual work of installing those widgets into the appropriate parent widgets
* but the {{{DisplayFrame}}} itself also exposes such an anchor functor, which can be passed down when (recursively) creating a TrackPresenter for a sub-Track
* obviously, at top level, we need to digress from that general pattern -- here the {{{timeline::TimelineLayout}}}, responsible for the display structure of the whole timeline, exposes the necessary anchor function to attach the root track for a timeline.
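The following self-contained sketch illustrates this recursive anchoring pattern with {{{std::function}}} closures; the types are simplified stand-ins, not the actual timeline classes.
{{{
// Sketch of the recursive anchoring pattern: each DisplayFrame is installed
// through a functor passed down from the enclosing scope, and in turn exposes
// such a functor for its own sub-tracks. Simplified stand-in types.

#include <functional>
#include <memory>
#include <vector>

struct HeadWidget { };                       // stand-in for timeline::TrackHeadWidget
struct BodyElement { };                      // stand-in for timeline::TrackBody

using Anchor = std::function<void(HeadWidget&, BodyElement&)>;

struct DisplayFrame
{
    HeadWidget  head;
    BodyElement body;

    explicit DisplayFrame (Anchor const& anchorIntoParent)
    {
        anchorIntoParent (head, body);       // install widgets into the parent display
    }

    Anchor nestedAnchor()                    // passed down when building sub-tracks
    {
        return [this](HeadWidget&, BodyElement&)
               {
                   /* attach the given head below this->head,
                      and the body element within this->body */
               };
    }
};

struct TrackPresenter
{
    DisplayFrame frame;
    std::vector<std::unique_ptr<TrackPresenter>> subTracks;

    explicit TrackPresenter (Anchor const& anchor) : frame{anchor} { }

    void addSubTrack()                       // recursion: sub-track anchored into this frame
    {
        subTracks.push_back (
            std::make_unique<TrackPresenter> (frame.nestedAnchor()));
    }
};
}}}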

!Organisation of track profile and rulers
Related to the problem of building the nested track structure is the problem of proper graphical representation of this nested structure -- especially within the right pane, //the body part,// where the actual track content will be shown. Due to various limitations of typical UI frameworks, we prefer to handle this body part by [[custom drawing|GuiCustomWidget]] onto a canvas widget. This places us in the situation of having to generate suitable graphical elements based on the effective nested structure of the tracks. We solve this challenge by performing an //evaluation pass// over the actual {{{TrackBody}}} structure, which is built starting from the {{{DisplayFrame}}} for each track. The {{{TrackBody}}} elements are helper objects, and serve the sole purpose of getting these drawing and display control requirements settled; basically they mirror the track structure, and the corresponding structure of GTK widgets in the header pane, and they likewise form a tree structure, which can be traversed to collect information. This is what we do to build up a "track profile", which can then later be &rarr; [[rendered consistently onto the body canvas|GuiTimelineDraw]].

Another related concern are the [[track rulers|TrackRuler]]. These are overview panes running alongside the track time axis. As UI model entities, they are attached to the corresponding {{{TrackBody}}} element, while collaborating with the {{{TrackPresenter}}} to retrieve the actual content to be shown in the overview. The purpose of track rulers is to show effects, aggregation marks or selection states; and especially in the top-level track the ''time ruler'' indicates the current time and frame position.
//A small overview pane running alongside a track in the timeline display.//
The presence and specific configuration of a ruler is governed by the [[Session model|HighLevelModel]], while the actual content is derived from the context appropriately by the presentation code in the UI. Rulers can be used for various purposes
* the ''time ruler'' at the top of the timeline display indicates the time axis position in various formats.
* an ''overview ruler'' shows the effective content of a track in condensed form
* location- and range markers can be shown on top of a ruler
* selection- and processing state may likewise be visualised.

!Organisation of track ruler presentation
The actual presentation of ruler elements is a fusion of parametrisation, track content and (persistent) UI state.
* The fact that a ruler is present, and also its presentation style, is controlled by the Session. {{red{7/19 not clear yet in which form they are represented in the model}}}
* Rulers are sensitive to the expanded/collapsed state of their corresponding track; there is a distinct parametrisation for three distinct cases
** the ruler of a //collapsed// track can either be hidden, or shown in condensed form
** the ruler of an //expanded fork// (a track with nested child tracks) will typically give a summary of effective content, but can also be hidden, or feature other content
** the ruler of an //expanded leaf track// (no child tracks) will typically be hidden, but can likewise be configured to show specific data
//Graphical representation of the nested Track scopes as part of the TrackHead display.//
The design (and the name) of this graphical display is inspired by musical notation, where it is customary to indicate with a bracket the grouping of stems (or staves) forming an ensemble, while a brace typically groups the staves to be played on a single instrument, e.g. a grand piano or organ.

In the Lumiera Timeline UI, »Tracks« are arranged into a ''Fork'' of nested scopes; the Timeline has a single root Track, which in turn might hold several nested sub-Tracks, and the nesting can be continued recursively. In the [[»Patchbay«|TimelinePatchbay]], settings and properties can be controlled to influence a specific part of the scope hierarchy, and this part is always represented by the TrackHead of the specific track forming this scope. [>img[Nested Track Heads grouped by stave brackets|draw/TrackHeadNesting.png]] In this {{{TrackHeadWidget}}}, there is a cell holding the actual controls, while further grid cells below possibly hold further nested sub-Track heads -- while the left side of this Track head arrangement is used to indicate the grouping of scopes with the aforementioned brackets.

!Design
The design of the grouping Bracket in the graphical display is based on proportions governed by the ''Golden Ratio'' Φ = ½·(1+√5)
Relying on those proportions has much tradition, and often leads to a design perceived as natural and unobtrusive.

The Golden Ratio creates a division into a larger part and a smaller part, denoted as Φ-major and Φ-minor -- and the defining property of Φ is that these divisions can be extended or subdivided endlessly, always reproducing the same ratio Φ: A distance divided by the Golden Ratio can be //extended// by adding the Φ-minor, with the surprising result that the now extended distance and the original distance are again governed by the Golden Ratio, leading to geometric structures intricately interwoven.
[<img[Stave Bracket construction|draw/StaveBracket.png]]

This way, the size and shape of the Stave Bracket is determined by choosing a single ''base width'', which in this case is the horizontal extension of the double vertical line. This base width is divided by Φ, and the smaller part, the Φ-minor is used as the filled bold vertical bar, while the larger distance, the Φ-major is used to define the spacing to the smaller vertical line. The right side of the bold vertical bar is used as anchor line, with the origin of the coordinate system at the top end, just below the top cap.

The cap is enclosed into a bounding box, which is again defined through the Golden Ratio, by using the //base width// as its Φ-minor. The size of this bounding square is thus Φ². The curved contour of the cap is inscribed into that bounding square, along the main diagonal, and with a tangent at the tip pointing down to the end of the smaller line (and thus at Φ-minor of the bounding square's lower base). To find the centre of this curved arc, a perpendicular is erected in the middle of the main diagonal (since the arc can be assumed to be symmetrical to this line); furthermore the radius of this arc must touch the tangent at the tip of the figure, and thus a perpendicular attached at the tip to the tangent will subtend the symmetry line and yield the centre point. By a similar construction, the inner arc -- which forms the outer side of the cap -- can be constructed along the „upper diagonal“, running down from the tip to the Φ-minor at the left side.

__''Implementation''__: while rather simple in geometrical terms, obtaining the coordinates by trigonometric calculations can be challenging. To simplify this tedious task, the ''Constraint'' system of ''~FreeCAD'' was used to directly define the geometric relations outlined above; the resulting coordinates were then picked from the ~FreeCAD XML document, and embedded as hard wired constants into the {{{timeline::StaveBracketWidget}}} code. For the purpose of this construction, the base width was defined as 1 unit -- yet for the actual drawing code, a //uniform scaling factor// is applied, based on the font size available in the CSS style context of the widget. As GTK uses the vector graphics library ''Cairo'' for drawing, the outline of the shape can be directly translated into path instructions, and the resulting contour will then be filled with a solid colour, as defined by the foreground {{{color}}} attribute of CSS. For the curved arcs, a slight adaptation is necessary, since ~FreeCAD defines circle segments in mathematical orientation ↶, while SVG and libCairo use a clockwise ↻ orientation starting at the +X axis.
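A rough sketch of this drawing approach with cairomm is given below: apply a uniform scale derived from the font size, then trace the outline as a path and fill it. The geometry used here is a crude placeholder, not the actual ~FreeCAD-derived constants.
{{{
// Sketch of the drawing approach with Cairo (via cairomm): apply a uniform
// scale derived from the font size, then trace the outline as a path and fill.
// The coordinates below are placeholders, NOT the actual FreeCAD-derived constants.

#include <cairomm/context.h>
#include <cmath>

void drawStaveBracket (Cairo::RefPtr<Cairo::Context> const& cr,
                       double fontSize, double height)
{
    const double PI  = 3.14159265358979323846;
    const double PHI = (1.0 + std::sqrt (5.0)) / 2.0;   // Golden Ratio
    double scale = fontSize;                            // 1 construction unit ≙ font size

    cr->save();
    cr->scale (scale, scale);

    // bold vertical bar: width = Φ-minor of the base width (placeholder geometry)
    double barWidth = 1.0 - 1.0/PHI;
    cr->rectangle (-barWidth, 0, barWidth, height / scale);
    cr->fill();

    // curved cap: a circular arc sketched into a bounding square (placeholder geometry)
    cr->arc (-barWidth, 0, PHI, -PI/2, 0);
    cr->line_to (-barWidth, 0);
    cr->close_path();
    cr->fill();

    cr->restore();
}
}}}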
Transitions combine the data from at least two processing chains and do this combining in a time varying fashion. So, any transition has
* N input connections
* either one or N output connections
* temporal coordinates (time, length)
* some control data connection to a ParamProvider, because in the most general case the controlling curves are treated similarly to [[automation data|AutomationData]]

!!!how many output ports?
The standard case of a transition is sort of mixing together two input streams, like e.g. a simple dissolve. For this to be of any use, these input streams need to be connected to the same output destination before and after the transition (with regard to the timeline), i.e. the inputs and the transition share placement to the same output pipe. In this case, when the transition starts, the direct connections can be suspended and the transition will switch in seamlessly.

Using transitions is a very basic task and thus needs viable support by the GUI. Handling of transitions needs to be very convenient, because it is so important. Because of this, it is compelling to subsume a more complicated situation and treat this more complicated case similarly. This is the case when two (or N) elements have to be combined in the way of a transition, but their further processing in the processing chain //after// the transition needs to be different; maybe they belong to different subgroups, have to appear on different layers or with different pan positions. Of course, the workaround is simple, at least "simple" from the programmer's point of view. It is //not// simple from the editor's point of view the moment the editor has to do this simple thing (changing the wiring and adding manually synced fade curves to the individual parts) a dozen or even a hundred times in some larger project.

Because of this experience, Ichthyo wants to support a more general case of transitions, which have N output connections, behave similarly to their "simple" counterpart, but leave out the mixing step. As a plus, such transitions can be inserted at the source ports of N clips or between any intermediary or final output pipes as well. Any transition processor capable of handling this situation should provide some flag, in order to decide if it can be placed in such a manner. (within the builder, encountering an inconsistently placed transition is just a [[building error|BuildingError]])
At various points within the implementation we encounter problems of structural difference computation, like finding the effective changes in a tree data structure, or using a diff representation as notification message format. It may be worthwhile to point out some observations, especially relating our approach(es) to the problem as generally known.

!classical diff problem
There is very widespread use of a specific flavour of diff calculations: the classical diff of textual data. Here, the treated data has a completely uniform linear structure -- it is either a sequence of lines or a sequence of bytes ("binary diff"). The individual data cells are built from a finite alphabet of comparable values (characters). Since a given character or word may be present several times in the same document, we need to establish a //matching// as the foundation of any difference calculation. We need to establish the identical parts to find the differences. This matching is not unique, and largely determines the differences found, so it is hard to determine if a given diff is //correct.// Thus, the core idea to attack and solve this classical diff problem was to look for an ''optimal diff''. Some variations include
* longest common substrings
* longest common subsequence
* minimal edit script to perform the diff
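For illustration, the »longest common subsequence« criterion can be computed with the classic dynamic-programming recurrence; elements outside the common subsequence then show up as deletions or insertions.
{{{
// Classic dynamic-programming computation of the longest common subsequence
// length -- the matching criterion underlying many classical diff tools.

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

std::size_t lcsLength (std::string const& a, std::string const& b)
{
    std::vector<std::vector<std::size_t>> len (a.size()+1,
                                               std::vector<std::size_t> (b.size()+1, 0));
    for (std::size_t i = 1; i <= a.size(); ++i)
        for (std::size_t j = 1; j <= b.size(); ++j)
            len[i][j] = (a[i-1] == b[j-1])
                          ? len[i-1][j-1] + 1
                          : std::max (len[i-1][j], len[i][j-1]);
    return len[a.size()][b.size()];
}
// elements not part of the common subsequence are reported as "deleted" or "inserted"
}}}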

!challenge of the diff problem
While, from a mathematical point of view, the above optimisation problem can be considered solved, in practice the available solutions are far from perfect. The fundamental assumption of a linear sequence as the basis of the diff turns out to be an oversimplification -- real world data carries meaning and can be judged by imposing additional structure. There can be structure-violating diffs and there can be nonsensical and misguiding diffs. In the light of this notion, every diff is in fact a structural diff.
Unfortunately, attempts to amend the beautiful mathematical solution by incorporating this additional structure turn out to be incredibly hard. Either the general case problem can be proven to be ~NP-hard, or, at least, exploiting some special structural properties successfully renders the meaning of an "optimal" diff solution more or less arbitrary. The quest for a generic diff problem and a universal solution turns out to be a dead end.

!fundamentals of diff handling
The most fundamental distinction is the difference between //finding// a diff and //representing// a diff. The former is concerned with uncovering structural relations, while the latter deals with knowledge about structural relations and thus is more general. It is possible to capture structural relations while they emerge -- this way describing a process of transformation. A likewise fundamental distinction is between //reordering// elements and //mutating// them. This is related to the notion of //identity,// which in turn implies an underlying model of the elements and entities to be considered for diffing. If, as an example, we model the tokens, functions and classes of program code as mere characters, we will be happily matching curly braces and cannot expect meaningful diffs. So first we need a theory about what can possibly happen, and we use this as a foundation to establish the representation of such possible processes. The choices taken here can make all the difference towards efficient and usable methods. The //matching problem// should be viewed from here: a matching is actually a hypothesis about possible processes of transformation -- this observation explains why the matching strategy is at the core of any diff algorithm. If we use some arbitrary notion of optimality, we have to consider endless combinations and end up with a lot of accidental complexity. Yet if we manage to base the calculation on a representation well aligned with the nature of the entities in question, the matching can be as simple as retrieving a given entry by ID. The actual work is reduced to extracting the raw changes in data.
Before we can consider a diffing technique, we need to clarify the primitive operations used as foundation. These primitives form //a language// -- which raises a problem of choice and of context. We might base the representation language on the situation where the diff is applied, we may care for the conciseness of the representation, or the effort and space required to apply it. We should consider if the focus is on reading and understanding the diff, if we intend to derive something directly from the diff representation -- say, if and what and when something is changed -- and we should consider how the context where the diff is established relates to the context where it is applied.

&rarr; [[Implementation considerations|TreeDiffImplementation]]

!diff as data representation
We haven't said anything regarding the purpose of diff representation yet. There might be several. A human reader, for instance, may use a diff description to spot and judge the temporal evolution of data and structures. Another approach is to use diff representation as an ''architecture for data exchange''. Connection through //diff messages// allows collaboration //without the need for shared data.// Going this route helps with focus and parallelisation. A self contained subsystem based on a private data representation requires fewer resources for maintenance and inception. It incurs less overhead for coupling and dependencies with other subsystems, both in terms of human understanding and machine execution power.
Used this way, diff representation helps to separate structure and raw data in exchange. Any changes are broken down into mutations of the structure and chunks of atomic data updates. The data still embedded into the diff description is reduced to //elementary data.// The intricacies of typing and type based special treatment of data are stripped out by this usage pattern. Types aren't going away, yet they are subsumed to the structure and pushed to the inside. //Elementary data// is data from a small selection of fundamental elementary types, all of which can be represented textually: strings, integers, floats, date, time and binary byte sequences. This data is complemented by a small number of //structuring primitives:// the ordered sequence, the unordered set and the associative array of (key, value) pairs. A complete diff representation language needs provisions for any of these fundamentals.
;internal diff
:this kind of exchange language is usable on the level of implementation. It confines raw data to the small set of basic types outlined above.
:These are still passed on in binary form, either as value copies, by an ID based lookup mechanism, or by reference, supported by a synchronisation protocol
;external diff
:going full circle, this representation abstracts completely and removes the dependency on any binary format.
:Chunks of raw data are attached inline to the structural diff, assuming that each element implicitly knows the kind of data to expect
//This page details decisions taken for implementation of Lumiera's diff handling framework//
This topic is rather abstract, since diff handling serves multiple purposes within Lumiera: Diff representation is seen as a meta language and abstraction mechanism; it enables tight collaboration without the need to tie and tangle the involved implementation data structures. Used this way, diff representation reduces coupling and helps to cut down overall complexity -- so as to justify the considerable amount of complexity seen within the diff framework implementation.

!outline of the diff handling framework
Provided as a loosely coupled collection of tools in the namespace {{{lib::diff}}}, the diff framework revolves around generic representation and handling of structural differences. Beyond some rather general assumptions, to avoid stipulating the usage of specific data elements or containers, the framework is cast in terms of ''elements'', ''sequences'' and ''strategies'' for access, indexing and traversal.
;elements
:the atomic units treated in diff detection, representation and application are considered to be
:*lightweight copyable values
:*equality comparable
:*bearing distinct identity
:*unique //as far as relevant//
;data sequence
:any collection of data is delivered in the form of a sequence, which is //stable// (retains a given sequence order), and allows for defined traversal in that order.
;diff language
:a diff represents the changes necessary to transform an input sequence ("old sequence") into a target sequence ("new sequence")
:*differences are spelled out in ''linearised form'': as a sequence of constant-size diff actions (called &raquo;diff verbs&laquo;)
:*these are conceived as operations, which, when applied consuming the input sequence, will produce the target sequence of the diff.
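For illustration, such a linearised diff could be modelled as a flat sequence of verb tokens, roughly as sketched below; the verb names are made up for this example and need not match the actual diff language.
{{{
// Illustrative sketch of a linearised diff: a flat sequence of constant-size
// "diff verbs". The verb names here are made up for illustration and need not
// match the actual Lumiera diff language.

#include <string>
#include <vector>

enum class Verb { PICK,    // accept the next element of the old sequence unchanged
                  INS,     // insert a new element at the current position
                  DEL,     // drop the next element of the old sequence
                  FIND };  // fetch a later element of the old sequence (re-ordering)

struct DiffStep
{
    Verb        verb;
    std::string elementID;   // identity of the element this step refers to
};

using Diff = std::vector<DiffStep>;   // applied while consuming the old sequence,
                                      // it produces the new sequence in one pass
}}}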

!building a list diff detector
transforming the algorithm sketch (&rarr; see [[technical documentation|http://lumiera.org/documentation/technical/library/DiffFramework.html]]) into working code incurs some challenges at the level of technical details.

!!!interface of the index component
Obviously we want the helper indices to be an internal component abstraction, so the outline of the algorithm remains legible. Additionally this allows us to introduce a strategy for obtaining the index. This is important, since in case of performance problems we might consider integrating the indexing efforts into the external data structure, because this might enable us to exploit external structural knowledge.
So the challenge is to come up with an API that is neither too high-level nor too low-level.

!!!how to implement re-ordering
A first attempt was made to come up with a rather clever swapping and rotating scheme. But this turned out to be rather complex, due to the fact that we //do not want index numbers in the diff representation.// It turns out that a presumably optimal solution could be built on top of ''cycle sort'' -- but we're lacking any point of reference to determine if such an elaborate solution is worth the effort. Thus the decision to KISS and stick to plain flat insertion sort (which is known to be the best choice for small data sets). With clever usage of the indices, this approach allows us to emit the diff description right away, without any need to build a meta table or even to touch the given input sequences.
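A heavily simplified sketch of such a single-pass emission is shown below; unlike the real implementation it ignores re-ordering entirely (no insertion-sort step) and merely illustrates how verbs can be emitted while consulting an index of the old sequence.
{{{
// Heavily simplified sketch of diff generation: emit verbs while walking the
// "new" sequence and consulting an index of the "old" sequence. Unlike the real
// implementation, this sketch ignores re-ordering (no insertion-sort step).

#include <iostream>
#include <set>
#include <string>
#include <vector>

using Seq = std::vector<std::string>;

void emitDiff (Seq const& oldSeq, Seq const& newSeq)
{
    std::set<std::string> oldIndex (oldSeq.begin(), oldSeq.end());
    std::set<std::string> newIndex (newSeq.begin(), newSeq.end());

    for (auto const& e : oldSeq)                 // elements vanished from the new sequence
        if (!newIndex.count (e))
            std::cout << "del(" << e << ")\n";

    for (auto const& e : newSeq)                 // walk the target order...
        if (oldIndex.count (e))
            std::cout << "pick(" << e << ")\n";  // ...retain known elements
        else
            std::cout << "ins(" << e << ")\n";   // ...or insert new ones
}
}}}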

!!!goal of a generic implementation
The diff handling framework we intend to build here is meant to be //generic// -- the actual element data type, as well as the underlying data structure and the index access shall be supplied by strategy and specialisation. This has the downside (maybe this is even a benefit?) that most efficiency considerations are moot at this level; we need to look at actual use cases and investigate the composite performance in practice later.

!building the tree diff handling
Our tree diff handling framework is conceived as a direct specialisation of and extension to list diffing. We rely on a very specific element type for the tree nodes: the GenNode. This specific node element type is inherently generic, and the tree diff handling explicitly relies on some part of this generic nature to represent object features. Following the gist of the linearised list diff representation, also tree handling relies on an implicit ''tree representation protocol'':
* every GenNode is a distinct element and considered unique within context.
* nodes are inherently typed, but this typing is outside of the diff representation scope
* nodes and subtrees are spelled out in prefix order; a node optionally is followed by a list of children.
* nodes representing //objects// may have attribute children, which are always mentioned before the sub node children.
* when a node is //opened for diffing,// we first spell out any structural alterations (added, deleted or re-ordered children)
* the recursive descent into children happens postfix depth-first, each enclosed into a bracketing construct.
Consequently, when an element appears in the diff description sequence, first of all, its type is assumed to be known implicitly, so the receiver can use an appropriate handler for this kind of element. Moreover, if the element is of atomic type (an attribute), its value is part of the element itself and thus is known just by spelling out the element. Any structural changes can be dealt with on a completely generic level, without further knowledge about the object's nature. And finally, any internal alterations of the object and its children happen after the generic part and clearly delineated in the sequence of diff tokens -- a sub handler can be invoked recursively
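To make this protocol concrete, the following is an informal example of the token sequence for a small object node with one attribute and two children; the notation is invented for illustration and does not reproduce the actual diff verbs.
{{{
// Informal illustration of the prefix-order protocol: the token sequence for a
// small object node "clip-1" with one attribute and two children. The notation
// is made up for illustration and does not reproduce the actual diff verbs.

#include <string>
#include <vector>

std::vector<std::string> exampleTokens =
  { "node(clip-1)"            // the object itself, type known implicitly
  , "attrib(length = 25s)"    // attribute children come first, value given inline
  , "child(effect-a)"         // structural listing of the sub node children...
  , "child(effect-b)"
  , "open(effect-b)"          //   ...then recursive descent, bracketed,
  , "attrib(gain = 0.8)"      //   postfix depth-first into altered children
  , "close(effect-b)"
  };
}}}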

!!!problem of the typed context
This is a problem every introspective framework has to face. When individual elements in the data structure require specific type information for proper handling, this information must be readily available, once we start doing anything of significance with the generic data representation. A naive implementation would attach the type identification to a descriptor record exposed alongside with the data representation, thereby forcing the client back into the worst conceivable programming style, suffocating any beneficial effect the use of types and contracts might have on understandability and maintainability of the code: Either, you'll just assume everything goes well, or you end up being prepared for anything conceivable, or -- even worse -- a combination of both evils.
There are two known remedies for this dilemma
* a visitor (double dispatch scheme), which has its own maintenance problems
* a //pull// approach, where the consuming client implicitly knows the full context, since it works from within this context
Both approaches incur some complexity on behalf of the involved parties, and a runtime indirection overhead, which might be considered the residual cost for every //reflective data handling.//
This design prefers the //pull// approach, with a special twist: we provide a completely generic, fixed implementation of the pull handling, yet offer the client to install a closure to receive any actual changes. Obviously, such a closure embodies the full typed context, and the validity of this typing can be checked at least dynamically, on //installation of the closure.// This is not as much type safety as is achievable through a visitor, but also less involved and ceremonial -- at least we can ensure the validity of the connection the moment we actually hook it up and remove the risk of running into fundamental mismatch problems in the middle of processing. The latter is the recurring plague haunting most "dynamic" systems.
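The following self-contained sketch shows the gist of this »install a typed closure« idea: the generic side stores a type-erased callback, and the validity of the binding is checked at installation time. The types are stand-ins, not the actual GenNode API.
{{{
// Sketch of the "pull" approach with an installed closure: the generic side
// stores a type-erased callback, the receiving client supplies a lambda which
// embodies the full typed context. Stand-in types, not the actual GenNode API.

#include <functional>
#include <stdexcept>
#include <string>

struct GenericElement
{
    std::string typeTag;                                 // dynamic type marker
    std::function<void(std::string const&)> onChange;    // type-erased mutation closure

    // validity of the binding is checked when the closure is installed...
    void installClosure (std::string expectedType,
                         std::function<void(std::string const&)> fun)
    {
        if (typeTag != expectedType)
            throw std::logic_error ("closure does not match element type");
        onChange = std::move (fun);
    }

    // ...so applying a change later can proceed without further type checks
    void applyChange (std::string const& newValue)
    {
        if (onChange) onChange (newValue);
    }
};
}}}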

!!!representation of objects
It should be noted, that the purpose of this whole architecture is to deal with »remote« stuff -- things we somehow need to refer and deal with, but nothing we can influence immediately, right here: every actual manipulation has to be turned into a message and sent //elsewhere.// This is the only context, where some, maybe even partial, generic and introspective object representation makes sense.

__Questions for Design (6/15)__
* do we need to //alter// object contents -- or do we just replace? &larr; provide a ''Mutator''
* to what degree is the distinction between attributes and children even relevant -- beyond the ability to address attributes by-name?
* how do we describe an object from scratch? &larr;''object builder''
* how do we represent the break between attributes and children in this linearised description?
** using a separator element?
** by convention through the element names? &larr; ''This''
** as additional metadata information sent beforehand?
* we need an object-reference element, since we do not want to copy whole subtrees while processing a diff

!!!Mapping a Diff Language to Object structures
"Objects" can be spelled out literally in code. We care to make the respective ctor syntax expressive enough. For nested objects, i.e. values of type {{{diff::Record}}}, a dedicated object builder notation is provided, because this is the point, where the syntax gets convoluted. Yet the interesting questions arise when it comes to spelling out a diff language description against an existing object tree. While a conventional list diff implicitly relies on the structural properties of a list, in our case, the //actual, concrete object// tree serves as structural backdrop and interpretation context of the description in diff language. Effectively this makes the language self contained: it is possible to unfold a new structure from scratch, and use this new structure as implicit context for further manipulations henceforth.

Within this framework, we represent //object-like// entities through a special flavour of the GenNode: Basically, an object is a flat collection of children, yet given in accordance to a distinct ''object protocol''. The relevant ''meta'' information is spelled out first, followed by the ''attributes'' and finally the ''children''. The distinction between these lies in the mode of handling. Meta information is something we need to know before we're able to deal with the actual stuff. Prominent example is the type of the object. Attributes are considered unordered, and will typically be addressed by-name. Children are an ordered collection of recursive instances of the same data structure. (Incidentally, we do not rule out the possibility that also an attribute holds a recursive subtree; only the mode of access is what makes the distinction).

Here the question arises as to what extent the //language// needs to know about these object semantics. While a commitment for precision might lead us towards a strict language definition, in fact, languages usable in practice largely need not be defined at all, since they are applied against a context. And since the use of the language itself might guide us from one context to another, the possibility of multiple levels of language arises. We use this observation as a guideline and hint to keep our diff language open. Basically, it is just a sequence of verbs, which needs an actual interpreter implementation, which in turn naturally lends itself to attachment to some working context. At this point, we get a //binding// between sequences of language terms and the operational semantics, which in turn defines the limits of legal language constructs. As long as both sides agree upon the same structural conventions, the exchange works without strict codification. We should nevertheless strive to define our object semantics precisely. Any leeway can be allowed, as long as it conforms to the general layout and as long as it doesn't open a path to confusion later.

Based on these considerations we establish the "two lists" schematics:
* we make our objects look like lists of attributes and children
* we define our protocol rules as
** attributes first
** metadata given by //magic attributes// (just a {{{"type"}}} attribute for now)
** occurrence of the first child switches from attribute zone to child scope
** children are recognisable by the form of their ID
Relying on these rules, we're able to systematically build and derive a sensible binding, while most of the implementation is just a specialisation of list diffing.
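A minimal structural sketch of this »two lists« layout is given below, as a stand-in for {{{diff::Record}}}; the member names are illustrative only.
{{{
// Sketch of the "two lists" object layout: metadata as magic attributes first,
// then named attributes, then the ordered children. Stand-in for diff::Record.

#include <map>
#include <string>
#include <vector>

struct Node
{
    std::map<std::string, std::string> attributes;  // unordered, addressed by-name
    std::vector<Node> children;                     // ordered, recursive instances

    std::string type() const                        // metadata via a magic attribute
    {
        auto it = attributes.find ("type");
        return it != attributes.end() ? it->second : "";
    }
};
}}}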

!!!handling of actual mutation
This question is closely linked to the semantics of equality. In a simple list diff, this matter doesn't pose any problems; when an element is different, it is a different element, and this change can be encoded as a deletion and insertion of a new element. Not so in tree diff handling. We do not want to delete and re-build whole subtrees because some tiny bit is altered somewhere down. Thus, a recursive sub-structure can be considered //the same entity,// yet still //mutated.// Our diff handling framework deals with the identity first, followed by a recourse into investigating //inner changes.// This recursive investigation is spelled out as a bracketed construct, which can be processed by recursive invocation. In the end, at the level of the tree leaves, handling those inner mutations boils down to invoking the //mutation closure,// as mentioned above. The knowledge of type context is thus confined to the receiving client, as long as every GenNode implementation offers support to detect an inner mutation and allows to install and invoke such a specifically typed closure to deal with the mutation. The twist to note is the point //where// this closure is installed: it certainly doesn't make sense to install it on the generating side.

!!!conceptual mismatch
In the attempt to represent changes to a data structure in the form of //abstracted diff(erences),// we're facing a conceptual mismatch on two levels
* even a generic, [[DOM-like structure|ExternalTreeDescription]] has more inherent rigidity than is compatible with the notion of describing differences. //We'll have to reject some diffs//
* any real language-object implementation bears rich shades of semantics, which are impossible to represent symbolically. //We'll have to paraphrase and omit some aspects//

!!!use cases of tree diff application
Within the context of GuiModelUpdate, we discern two distinct situations necessitating an update driven by diff:
* a GenNode representing an object pulls diff information provided by ~Steam-Layer. This information contains mutations of attributes, and //some// of these attributes are relevant for the GUI and thus represented within the GuiModel
* a widget was notified of pending changes in the GuiModel and calls back to pull a diff. While interpreting the attributes mentioned in the diff, it has to determine which widget state corresponds to the mentioned attributes, if applicable. Detected changes need to be interpreted and pushed into the corresponding widget and GTK elements.
The second case is what poses the real challenge in terms of writing well organised code, since here the receiver side has to translate generic diff verbs into operations on hard wired language level data structures -- structures we cannot control, predict or limit beforehand. We deal with this situation by introducing a specific intermediary, the &rarr; TreeMutator.

!!!apply a diff onto implementation data
Thus we need to carry the idea one step further. The goal is to allow a loose cooperation of components, without the need of a common data structure set in stone. Up to now, with our symbolic diff language and tree representation, we're able to render objects in some kind of DOM notation, and we're able to apply changes onto them. This is fine for maintaining an intermediary GuiModel, but to really achieve the desired loose coupling of cooperating components, we need //the ability to apply changes to implementation data structures.// To do so, we need a ''binding'' from the diff language to »''implementation data''« -- without imposing restrictions onto the latter: We do not intend to invent yet another introspection framework, working on yet another meta object type. Thus, at some point, we finally need the help of the implementation context to //make sense of the diff verbs.//

As said, this turns out to be a tricky challenge: implementing the application of diff messages is inevitably an involved and technical task; we should avoid imposing these technicalities onto client code. But, at minimum, we need to know some basic traits regarding the target data structures, in order to handle receiving change messages properly. We solve this problem through implicit conventions plus an indirection layer
* we impose some very vague requirements onto the structure of target data: it is required to comply with our conceptual notion of an "object"; target data //needs to be congruent to [[ETD|ExternalTreeDescription]].//
* we introduce an adapter interface, the TreeMutator. Diff application can then be implemented in terms of this new abstraction
* we offer pre configured building blocks and a DSL, for client code to assemble a suitable {{{TreeMutator}}} implementation, privately tied into the opaque implementation data structure
* client code has to fill in some necessary information in the form of ''closures'' (lambdas)
** how to construct a new data element
** how to match a //diff verb// against a given data element
** how to determine applicability of a verb, allowing for multi layered binding
** how to assign a new value, given the corresponding diff verb
** how to open a nested scope for recursive mutation
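The following sketch hints at how such a closure-based binding could look from the client's side; the builder-style names are invented for illustration and do not reproduce the actual TreeMutator DSL.
{{{
// Illustrative sketch of a closure-based binding: client code fills in lambdas
// telling a generic mutator how to handle its private data. The builder-style
// names used here are made up and do not reproduce the actual TreeMutator DSL.

#include <functional>
#include <string>
#include <vector>

struct DiffVerb { std::string id; std::string value; };    // simplified diff token

struct MutatorBinding
{
    std::function<void(DiffVerb const&)> construct;         // build a new data element
    std::function<bool(DiffVerb const&)> matches;           // match verb against element
    std::function<void(DiffVerb const&)> assign;            // set a new value
};

// example: binding a generic mutator onto a private vector<string>
MutatorBinding bindOnto (std::vector<std::string>& data)
{
    return MutatorBinding{
        [&data](DiffVerb const& v){ data.push_back (v.value); },
        [&data](DiffVerb const& v){ return !data.empty() && data.back() == v.id; },
        [&data](DiffVerb const& v){ if (!data.empty()) data.back() = v.value; }
    };
}
}}}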
 
For the purpose of handling updates in the GUI timeline display efficiently, we need to determine and represent //structural differences.//
This leads to what could be considered the very opposite of data-centric programming. Instead of embodying »the truth« in a central data model with predefined layout, we base our architecture on a set of actors and their collaboration. In the mentioned example this would be the high-level view in the Session, the Builder, the UI-Bus and the presentation elements within the timeline view. Underlying each such collaboration is a shared conception of data. There is no need to //actually represent that data// -- it can be conceived to exist in a more descriptive, declarative [[external tree description (ETD)|ExternalTreeDescription]]. In fact, what we //do represent// is a ''diff'' against such an external rendering.

We build a slightly abstracted representation of tree changes and use this to propagate //change notifications// to the actual widgets. To keep the whole process space efficient, a demand-driven, stateless implementation approach is chosen. This reduces the problem into several layered stages.
* our model is a heterogeneous tree &rArr; use demand-driven recursion
* the nodes are heterogeneous collections &rArr; use filtering by type tag
* find changes in ordered collections of children &rArr; symbolic list diffing algorithm
* problems with identity and state &rArr; encapsulate state and have an airtight object identity scheme

Doubtless we're dealing with a highly specific application here.
&rarr; see [[discussion of diffing solutions|TreeDiffFundamentals]]
&rarr; see [[diff handling implementation technique|TreeDiffImplementation]]

!list diffing algorithm
| !source data|!|!desired result |
|(a~~1~~, a~~2~~, a~~3~~, a~~4~~, a~~5~~) |&hArr;| {{{delete}}}(a~~1~~, a~~2~~)<br/>{{{permutate}}}(a~~3~~, a~~5~~, a~~4~~)<br/>{{{insert}}}(//before a~~3~~//, b~~1~~)<br/>{{{insert}}}(//before a~~4~~//, b~~2~~, b~~3~~)<br/>{{{append}}}(b~~4~~)|
|(b~~1~~, a~~3~~, a~~5~~, b~~2~~, b~~3~~, a~~4~~, b~~4~~)|~|~|
To cover reordering, we need to determine the deletes and (possible) updates in one set operation.
After reordering the remaining updates to the target order, the inserts are determined in a final merging pass.

Here all the fuss about our {{{LUID}}} and identity management in the PlacementIndex definitively pays off: a standard multiset implementation should do.
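
A minimal sketch of these set operations, assuming plain string ~IDs and using {{{std::unordered_set}}} in place of the full {{{LUID}}} / multiset machinery; the actual Lumiera implementation differs in detail:
{{{
#include <vector>
#include <string>
#include <unordered_set>
#include <iostream>

int main()
{
    using ID = std::string;
    std::vector<ID> oldSeq {"a1","a2","a3","a4","a5"};
    std::vector<ID> newSeq {"b1","a3","a5","b2","b3","a4","b4"};

    std::unordered_set<ID> oldIDs (oldSeq.begin(), oldSeq.end());
    std::unordered_set<ID> newIDs (newSeq.begin(), newSeq.end());

    std::cout << "delete:";                    // old elements missing from the new sequence
    for (ID const& id : oldSeq)
        if (not newIDs.count (id)) std::cout << " " << id;

    std::cout << "\npermutate:";               // retained elements, listed in their new order
    for (ID const& id : newSeq)
        if (oldIDs.count (id)) std::cout << " " << id;

    std::cout << "\ninsert:";                  // genuinely new elements
    for (ID const& id : newSeq)
        if (not oldIDs.count (id)) std::cout << " " << id;
    std::cout << "\n";
}
}}}
Run on the example above, this yields {{{delete: a1 a2}}}, {{{permutate: a3 a5 a4}}} and {{{insert: b1 b2 b3 b4}}}, matching the predicate notation in the table.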

!typed ordered views
The consumer -- in our case the GUI widgets -- imposes a preconfigured order of things: elements not expected in a given part of the session will not be rendered and exposed. Thus the expectations at the consumer side constitute a typed context. So all we need to do is to intersperse a filter and then let the diffing algorithm work on these views filtered by type. All of this sounds horribly expensive, but it isn't -- functional programming to the rescue! We are dealing with lightweight symbolic value representations; everything can be implemented as a filtering and transforming pipeline. Thus we do not need any memory management; rather we (ab)use the storage of the client pulling the representation.
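
As an illustration of such a filtered view, a small sketch -- the {{{Element}}} type and its type tag are made-up stand-ins for the symbolic representation, not part of the actual framework:
{{{
#include <vector>
#include <string>
#include <functional>

struct Element { std::string typeTag, id; };      // simplified stand-in for a symbolic node

// expose only the subsequence carrying the expected type tag;
// nothing is copied or owned -- the consumer pulls directly from the scope
void
forEachOfType (std::vector<Element> const& scope,
               std::string const& expectedType,
               std::function<void(Element const&)> consumer)
{
    for (Element const& e : scope)
        if (e.typeTag == expectedType)
            consumer (e);
}
}}}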

!structural differences
The tricky part with changes in a tree-like structure is that they might involve rearrangements of whole sub-trees. So the question we need to pose is: to what extent do we need, and want, to capture and represent those non-local changes? In this respect, our situation here is significantly different from what is relevant for version management systems; we are not interested in //constructing a history of changes.// A widget moved into a completely different part of the model likely needs to be rebuilt from scratch anyway, so it doesn't hurt if we represent this change as deletion and insertion of a new sub-tree. But it would be beneficial if we're able to move a sequence of clips in a fork ("track"), or even a whole fork at the current level. As a corner case, we might even consider representing a &raquo;fold-down/up&laquo; operation, where a sequence of elements is wrapped into a new sub-node, or extracted up from there -- but this is likely the most far-reaching structural change still worth representing first class.

!diff representation
Thus, for our specific usage scenario, the foremost relevant question is //how to represent the differences,// since our aim is to propagate complex structural changes through a narrow data mutation API as communication channel. The desired representation -- call it ''linearised diff representation'' -- can be constructed systematically from the predicate-like notation used above to show the list differences. The principle is to break the representation down into atomic terms, and then to //push back// any term repeatedly, until we come across a term which can be //consumed right away// at the current top of our "old state" list. This way we consume the incoming change messages and our existing data simultaneously, while dropping off the mutated structure in a single pass. Applying this technique, the above example becomes
|!Message|!| !Result List|!remaining old |
|            |!| ()|(a~~1~~, a~~2~~, a~~3~~, a~~4~~, a~~5~~) |
|{{{del}}}(a~~1~~) |!| ()|(a~~2~~, a~~3~~, a~~4~~, a~~5~~) |
|{{{del}}}(a~~2~~) |!| ()|(a~~3~~, a~~4~~, a~~5~~) |
|{{{ins}}}(b~~1~~) |!| (b~~1~~)|(a~~3~~, a~~4~~, a~~5~~) |
|{{{pick}}}(a~~3~~) |!| (b~~1~~, a~~3~~)|(a~~4~~, a~~5~~) |
|{{{find}}}(a~~5~~) |!| (b~~1~~, a~~3~~)|(a~~5~~, a~~4~~) |
|{{{pick}}}(a~~5~~) |!| (b~~1~~, a~~3~~, a~~5~~)|(a~~4~~) |
|{{{ins}}}(b~~2~~) |!| (b~~1~~, a~~3~~, a~~5~~, b~~2~~)|(a~~4~~) |
|{{{ins}}}(b~~3~~) |!| (b~~1~~, a~~3~~, a~~5~~, b~~2~~, b~~3~~)|(a~~4~~) |
|{{{pick}}}(a~~4~~) |!| (b~~1~~, a~~3~~, a~~5~~, b~~2~~, b~~3~~, a~~4~~)|() |
|{{{ins}}}(b~~4~~) |!| (b~~1~~, a~~3~~, a~~5~~, b~~2~~, b~~3~~, a~~4~~, b~~4~~)|() |

__Implementation note__: The representation chosen here uses terms of constant size for the individual diff steps; in most cases, the argument is redundant and can be used for verification when applying the diff -- with the exception of the {{{ins}}} term, where it actually encodes additional information. Especially the {{{find}}}-representation is a compromise, since we encode it as "search for the term a~~5~~ and insert it at the current position". The more obvious rendering -- "push term a~~4~~ back by +1 steps" -- requires an additional integer argument not necessary for any of the other diff verbs, defeating a fixed-size value implementation.
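
The following sketch shows such a single-pass application of the linearised diff onto a plain vector of ~IDs; the verb names follow the table above, while the {{{DiffStep}}} representation and everything else is an assumption made for illustration:
{{{
#include <vector>
#include <string>
#include <deque>
#include <cassert>

using ID = std::string;
enum Verb { INS, DEL, PICK, FIND };
struct DiffStep { Verb verb; ID arg; };

void
applyListDiff (std::vector<ID>& data, std::vector<DiffStep> const& diff)
{
    std::deque<ID> oldState (data.begin(), data.end());     // the "remaining old" column
    data.clear();                                            // becomes the "result list" column
    for (DiffStep const& step : diff)
        switch (step.verb)
          {
          case INS:                                          // splice in new content
            data.push_back (step.arg);
            break;
          case DEL:                                          // consume and drop from old state
            assert (oldState.front() == step.arg);
            oldState.pop_front();
            break;
          case PICK:                                         // consume old state as-is
            assert (oldState.front() == step.arg);
            data.push_back (oldState.front());
            oldState.pop_front();
            break;
          case FIND:                                         // fetch the element from further down...
            for (auto p = oldState.begin(); p != oldState.end(); ++p)
              if (*p == step.arg)
                {
                  oldState.erase (p);
                  oldState.push_front (step.arg);            // ...and expose it at current position
                  break;
                }
            break;
          }
    assert (oldState.empty());                               // the diff must consume the old state completely
}
}}}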

!!!extension to tree changes
Diff description and diff handling can be applied to tree-like data structures as well. Some usages of textual comparison (e.g. diffing of programming language texts) are effectively working on tree structures -- yet they do not build on the structure of the diffed data explicitly. But if we represent the data structures symbolically, the change from text diffing to data structure diffing is marginal. The only relevant change is to handle embedded recursive diff descriptions of the child nodes. As it stands, each node or "object" can be represented as a list of properties plus the attachment of child nodes. This list can be treated with the methods developed for a stream of text tokens.

Basically the transition from text diffing to changes on data structures is achieved by exchanging the //type of the tokens.// Instead of words, or lines of text, we now use //data elements.// To do so, we introduce a symbolic ExternalTreeDescription of tree-like core data structures. The elementary token element used in this tree diff, the GenNode, embodies either simple plain data elements (numbers, strings, booleans, id-hashes, time values) -- or it describes a //recursive data element,// given as {{{Record<GenNode>}}}. Such a recursive data element describes object-like entities as a sequence of metadata, named attributes and ordered child-nodes -- it is handled in two phases: the first step is to treat the presence and ordering of child data elements, insertions and deletes. The second phase opens for each structurally changed child data element a recursive bracketing construct, as indicated  by explicit (and slightly redundant) //bracketing tokens://
*{{{mut}}}(node-ID)  : recurse into the designated node, which must be present already as result of the preceding changes. The following diff tokens describe //mutations// of the child
*{{{emu}}}(node-ID)  : close the current node context and return one step up; the node-ID is given for verification, but can be used to restore the working position at parent level
*{{{set}}}(node)  : shortcut notation for simple value elements; assign the new payload value to the designated element, retaining identity
In addition, in a future extension, we might consider introducing up/down folding primitives
*{{{fold}}}(node-ID) : pick the following elements and fold them down into a new child with given node-ID. The down folding continues until the next {{{emu}}} token
*{{{lift}}}(node-ID) : remove the next child node, which must be node-ID, and insert its children at current position

Since the bracketing construct for mutation of child structures bears the ID of the parent, a certain degree of leeway is introduced. In theory, we could always open such a bracketing construct right after the {{{pick}}} token accepting the parent -- yet, while minimal, such a strictly depth-first representation would be hard to read -- so we allow grouping the recursive treatment of children //post-fix,// after the messages for the current node. In a similar vein, we introduce another token to describe a //short-cut://
*{{{after}}}(node-ID) : fast-forward through the sequence of elements at current level until the position after the designated element.
To complement this language construct, we define some special, magical (meta) element-~IDs
*{{{_CHILD_}}} : marks an //unnamed// ID. The implementation exploits this specific marker to distinguish between nodes which are (named) attributes of an object, and real children.
*{{{_ATTRIBS_}}} : can be used to jump {{{after(_ATTRIBS_)}}} when mutating the contents of an object. So the following diff verbs will immediately start working on the children
*{{{_END_}}} : likewise can be used to jump {{{after(_END_)}}} to start appending new elements without caring for the existing current content.
All these additional language constructs aren't strictly necessary, but widen the usability of the language, also to cover the description of incomplete or fuzzy diffs.
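
To give an impression of how these verbs combine, a hypothetical diff sequence; the node names are made up, and the concrete spelling of the verbs in code may differ:
{{{
after(_ATTRIBS_)      // fast-forward over the attributes, leaving them unchanged
pick(partA)           // keep child partA at its position
find(partC)           // fetch partC from further down the child sequence
pick(partC)           //  ...and accept it here
ins(partX)            // insert the new child partX at current position
pick(partB)           // keep partB
mut(partC)            // open a nested scope: the following verbs mutate partC
  after(_END_)        //   keep the existing content of partC unchanged...
  set(label)          //   ...then assign a new payload value to its element "label"
emu(partC)            // close the nested scope again
}}}
Note how the recursive treatment of {{{partC}}} is grouped post-fix, after all messages for the current level, as allowed by the bracketing convention described above.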

!!!representation of objects
While we are talking about //structured data,// in fact what we are about to handle are objects, understood in the standard flavour of object orientation, where an object is the instance of a type and offers a contract. Incidentally, this is not the original, "pure" meaning of object orientation, but the one that became prolific in bringing our daily practice closer to the inherent structuring of modern human organisation. And in dealing with this kind of object, we sometimes get into conflict with the essentially open and permissive nature of structured data. So we need to establish a mapping rule, which translates into additional conventions about how to spell out matters in the diff language.

We choose to leave this largely on the level of stylistic rules, thus stressing the language nature of the diff -- even though this bears the danger of producing an exception very late, when it comes to applying the diff to a target data structure. The intention behind this whole diff approach is to transform tight coupling with strict rules into a loose collaboration based on a common understanding. So generally we'll assume the diff is just right, and if not, we'll get what we deserve.

The ''protocol to describe an object'' is as follows
* the ID is the object's identity and, once given, is never changed
* we spell out any metadata (esp. a type information) first, followed by all attributes, and then followed by contents of the object's scope (children)
* attributes are to be given in a way not in contradiction to the more stringent semantics of an object field or property
** never attempt to re-order or delete such attributes, since their presence is fixed in the class definition
** when a field is mandatory //by its nature,// it shall be required in construction, and the corresponding data is to be given with the {{{ins}}} verb causing the constructor call
** on the other hand, the data for an optional field, when present, shall be spelled out by {{{ins}}} verb after construction, with the first //population diff.//
** we do not support attribute map semantics (or extended "object properties" of any kind). If necessary, treat them as nested entity with map semantics

!!!deriving conventional representations
On receiving a stream of tokens of this "diff language", it is possible to generate the well known and more conventional diff representations,
i.e. a ''unified diff'' or the ''predicate notation'' used above to describe the list diffing algorithm, just by accumulating changes.
The TreeMutator is an intermediary to translate a generic structure pattern into heterogeneous local invocation sequences.
Within the [[diff framework|TreeDiffModel]], this is a crucial joint, since here the abstract, generic, ~DOM-like ExternalTreeDescription meets opaque, local and undisclosed data structures.

&rarr; see TreeMutatorEvolution for an extensive discussion about the principles and architecture, which gradually leads to the final design of the ~TreeMutator, as realised now in code.

!Motivation
On a very large scale within the Lumiera application, disjoint subsystems collaborate based on the notion of a //shared conceptual structure.//
This means, we do not share a data model -- rather we exchange ''diff messages'' with respect to this assumed, common, conceptual structure. In practice, this kind of very loose coupling is realised with the help of a ~DOM-like notation based on a small selection of primitive base types, plus an object-like //record// made from such base elements. Based on these elements represented as [[generic nodes|GenNode]], we can build up a symbolic representation of shared structures, which we term the ExternalTreeDescription. In fact, this is not meant and not used as actual implementation data structure -- rather it is used to derive and validate the diff messages handled within our [[diff framework|TreeDiffModel]].

//But --// if this symbolic representation is not meant for implementation -- how are we then able to relate it in any useful way to actual implementation data structures? The answer is twofold. For one, and most importantly, we do not collaborate on a shared data model, which means that the implementation data structures are always an implementation detail confined within some subsystem or even part of a subsystem. And this implementation layer can then -- secondly -- expose an adapter or intermediary, which offers a "generic side" for the diff framework to attach to. This adapter is the TreeMutator.

!Definition
;interface
:the tree mutator is an interface providing a set of virtual functions, the ''mutation primitives''
;adapter
:the implementation of this interface acts as adapter, to translate ''verbs of the diff language'' into invocations on opaque, private data structures
;binding
:this adapter is //created// by binding those opaque, private data structures into some generic implementation patterns for the TreeMutator
;builder interface
:the TreeMutator offers a builder interface to build up those bindings, with the help of closures provided by the client, which is the private, opaque data structure

So, from the usage side, some kind of implementation data structure will build a TreeMutator and tie itself into its binding. Then, by exposing this {{{TreeMutator}}} implementation, the client data structure gains the ability to receive diff messages, which means this local and undisclosed data structure can be attached to and collaborate with some other subsystem; it can be altered, extended and reshaped remotely, without the need to share implementation details.

Incidentally, the ''diff language'' itself is meant to be readable and expressive. On the //sending side,// typically diff messages will be constructed directly in code. To this end, the GenNode offers a constructor and mutator syntax specifically crafted to allow writing clear and compact diff messages. Since the tree diff language embeds a simple list diff language as a subset, it is possible to generate structural changes by observing changes on data collections. Such //detected diffs// may be embedded within explicitly written diff message sequences.

The canonical example are changes to the editing session. After maybe running some resolution steps, an actual effective change is determined within the session. At this point, a minimal diff message to describe this effective change will be created. This diff message can be logged, to make the change persistent. And then this message will be "cast towards the UI". There is a communication structure, the UI-Bus, which allows to "direct" such a message towards an element only known and designated by its ID. In fact, this receiver element will be a widget within the UI, and this widget exposes a custom configured TreeMutator to consume this message and effect the corresponding changes within the widget structure of the UI.

!Semantics
The TreeMutator is an //adapter,// allowing the diff language to perform some abstracted generic operations, and at the same time enabling access to a (not further disclosed) implementation data structure in a standardised way. The underlying idea is to deal with //structured data//, i.e. anything which can be described in a DOM-like tree. More specifically, our ExternalTreeDescription is the blueprint and exemplifies the generic notion of //structural data to receive a change by diff.// Anything which can be brought into that form -- after sufficient abstraction -- can be handled through a TreeMutator with appropriate binding, and thus is able to receive diff messages. To be yet more specific
* data is arranged in an order retaining collection
* the type of data within that collection is possibly mixed
* each element within that collection has a unique ID (identity)
* but this uniqueness is only required locally, within a scope at hand
* data can be classified in two ways
** elements might be named ("attributes") or unnamed ("children")
** elements might be primitive values or nested records ("objects")
* and  such an "object"
** is recursively again "data" as defined here
** can, optionally, carry a type field (metadata)
** is expected to be spelled out as //attributes first, then children.//

When it comes to applying a diff, we assume that this data abstractly behaves in a specific way, which is outlined by the ''mutation primitives'' -- these are the operations on the TreeMutator interface. Again, to be more specific...
* data elements are //value-like,// but have an attached identity.
* data elements can be moved.
* when the diff application starts, all existing data elements will be moved into a temporary buffer
* the sequence order of the data elements is retained in that move
* after that, the data in the temporary buffer represents the //old state//, while the new state arrives in the form of diff instructions.
* "applying" those diff instructions will transform the exiting data into new shape, possibly insert and delete some elements.
* in the end, the data will be back into the original location, but reshaped and transformed.

!!!mutation primitives
The building blocks from which such a diff application can be assembled are embodied in the TreeMutator interface -- see the sketch after this list.
;lifecycle
:the TreeMutator is a one-way, "throwaway" object. It starts (with moving content into the old state buffer), is operated and then disposed.
;ownership
:the TreeMutator //only refers// to the data, but during the application process, it owns the data in the old state buffer
;failure guarantee
:''none'' given. If diff application fails, data is possibly partially transformed, partially in the old state buffer. Thus data is //corrupted.//
;old sequence position
:during diff application, the old state sequence is traversed once. There is an implicit position.
;operation {{{hasSrc}}}
:determine if there are further elements waiting from the old state sequence (to be either picked in the new sequence or discarded)
;operation {{{skipSrc}}}
:skip the next element in the old state sequence, effectively dropping it into obsolescence
;operation {{{injectNew}}}
:create and inject a new element into the transformed data, based on the given //spec// (from the verb in the diff message)
;operation {{{matchSrc}}}
:decide if the next element in the old state sequence //matches// the given //spec// from the diff verb. This spec is given as GenNode
;operation {{{acceptSrc}}}
:move the next element from the old state as-is into the new transformed data, after verifying match with the provided //spec//.
;operation {{{accept_until}}}
:repeatedly //accept// elements from the old state buffer, until hitting a condition. The condition is given within the //spec//
:* when the spec is {{{Ref::END}}}, then accept all (remaining) elements, i.e. the remainder of the old sequence as-is.
:* when the spec is {{{Ref::ATTRIBS}}}, and the implementation supports the notion of "an attribute", then accept and stop behind attributes
:* otherwise, accept //until after// finding an element that matches the //spec//. That is, the matching element will be the last one accepted
;operation {{{findSrc}}}
:without changing the current position, delve further into the old state sequence, to find an element //matching the given spec.//
:this element shall then be //accepted// (moved) into the new transformed data, leaving a //waste position// in the old state buffer.
:later, when encountering this //waste position// during regular traversal, it //must be skipped//.
:The regular diff language messages will ensure that this happens in proper order
;operation {{{assignElm}}}
:locate an element //already accepted into// the new transformed data. Then assign a new value corresponding to the given //spec//.
:The search order is to look at the last added element first, then to start search from the beginning of the new transformed data sequence;
:a match is identified by matching the ID part of the given //spec// against the //identity// of the data element in question. 
;operation {{{mutateChild}}}
:likewise locate an element //already accepted//. This element is assumed to be a //nested scope// (object)
:for this element, a new TreeMutator shall be //fabricated recursively// and placed into a given buffer. This new TreeMutator is assumed
:to be attached to the child element in question, and can be used to delve into mutation of sub scopes
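
A condensed sketch of how these primitives might look as an interface; the exact signatures and the default (NOP) return values are assumptions derived from the description above, and {{{GenNode}}} / {{{MutatorBuffer}}} stand for the framework's spec element and placement buffer:
{{{
class GenNode;                // the spec element taken from a diff verb (framework type)
struct MutatorBuffer;         // opaque buffer handle to place a nested mutator (assumption)

class TreeMutator
  {
  public:
    virtual ~TreeMutator()  { }

    virtual void init()                              { }               // move data into the "old state" buffer
    virtual bool hasSrc ()                           { return false; } // further pending old elements?
    virtual void skipSrc     (GenNode const&)        { }               // drop the next old element
    virtual bool injectNew   (GenNode const&)        { return false; } // create and insert an element from the spec
    virtual bool matchSrc    (GenNode const&)        { return false; } // does the next old element match the spec?
    virtual bool acceptSrc   (GenNode const&)        { return false; } // move the next old element over as-is
    virtual bool accept_until(GenNode const&)        { return false; } // bulk accept up to the given condition
    virtual bool findSrc     (GenNode const&)        { return false; } // fetch a matching element from further ahead
    virtual bool assignElm   (GenNode const&)        { return false; } // assign a new value to an accepted element
    virtual bool mutateChild (GenNode const&, MutatorBuffer&)          // open a nested scope: build a sub-mutator
                                                     { return false; } //   into the provided buffer
  };
}}}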

!!!ways of binding
With ''binding'' we denote the way to attach a TreeMutator to concrete, implementation local data structures. It is accomplished using the ''builder interface''.
Besides being a base class (with default {{{NOP}}} implementation of all mutation primitives), through this builder interface the TreeMutator offers several
pre-configured ways of binding to actual data, thereby easing the process of providing a suitably customised TreeMutator instance. Moreover, by using this
builder interface, several ways of binding can be //layered on top of each other,// so the actual TreeMutator is structured into multiple //onion layers,// where
each layer is responsible for a specific aspect of the binding. This is crucial to deal with "objects" comprised of a mixture of children, like e.g. the [[Pipe]]
attached to a clip, or the mixture of clips, effects and labels found within a [[Fork]] ("track").
;the selector
:this is a lambda {{{isApplicableIf : bool(GenNode const&)}}}, provided separately by the concrete //onion layer//
:* the selector is //optional// and only necessary when there is more than one binding layer
:* this predicate decides if a given //spec,// as taken from a //diff verb// to be applied, is of relevance for the given //onion layer//
:* if this onion layer is not concerned or affected by a given message, the message will be passed down to further layers and may be ignored in the end
:* thereby, the meaning of a diff message is //segregated// into various domains or type scopes, each of which corresponds to an //onion layer// at the implementation side
;test mutation target
:this is an "onion layer" for unit tests and diagnostic purposes. It is assumed to be layered on top of other, real bindings.
:it acts like a //wire tap// and logs any encountered primitives into a {{{TestEventLog}}}.
;STL collection binding
:this is the most important case, allowing binding to objects managed within an STL collection.
:all non-trivial and non-default aspects of this binding need to be given by installing some ''closures'' (lambdas) -- see the sketch after this list
:* lambda {{{matchElement : bool(GenNode const& spec, Elm const& elm)}}}
:** implements the concrete meaning when a local data element can be considered to match the given spec
:* lambda {{{constructFrom : Elm(GenNode const& spec)}}}
:** create a new element to conform to the given spec
:** return this element by value
:* lambda {{{isApplicableIf : bool(GenNode const&)}}}
:** decide applicability of the given //spec// to //this collection binding layer.//
:* lambda {{{assignElement : bool(Elm&, GenNode const&)}}}
:** implement assignment of a new value to the (implementation) data element passed in as reference
:** might return false to indicate that assignment can not be handled
:* lambda {{{buildChildMutator : bool(Elm&, GenNode::ID const&, TreeMutator::MutatorBuffer)}}}
:** fabricate a recursive sub-TreeMutator to deal with the (implementation data) element passed as reference
:** the ID, as taken from the (diff) spec, is also passed, for orientation
:** the lambda is expected to //place// the generated mutator into the given buffer smart handle, which takes ownership of this object
;STL map binding
:...can be supported, as an extension to the ''collection binding'' --
:* elements are (key, value) pairs
:* //use is discouraged,// since it contradicts and breaks diff semantics (due to the inherent ordering of maps)
;object field binding
:this maps the concept of "Attribute" onto concrete, mutable data fields
:any re-ordering  and deletion of fields is ''prohibited'', while defaultable optional fields may be tolerated
:there is only one common combined binding for object fields (i.e. we don't offer a customisable selector),
:where each individual binding is implemented on its own, as a self contained, isolated binding layer
:binding is created ''alternatively'' through the following closures
:* ''change'': "key", setter lambda {{{void(T val)}}}
:** binding for a regular setter to assign a new value
:** the value type is picked up from the provided lambda
:** possible value types are restricted to the selection of GenNode payload types
:** any further type conversions are to be handled by the closure and thus the client code
:* ''mutateAttrib'': "key", mutator-builder lambda {{{void(TreeMutator::MutatorBuffer)}}}
:** fabricate a recursive sub-TreeMutator to handle an //object valued// attribute
:** implicit typing -- the fabricated sub-TreeMutator just needs to be able to deal with the bound attribute
;GenNode binding
:in the end, it turns out that the //implementation// underlying our ExternalTreeDescription itself fits nicely into the above mentioned standard bindings --
:all we have to do is to layer two collection bindings on top of each other, with a suitable //selector// to separate //attributes// from //children.//
:obviously, this unified implementation comes at the price of one additional indirection (namely through the ~TreeMutator ~VTable), while the requirements on temporary storage size are almost identical (two iterators and two back references instead of one). Moreover, all the explicitly coded complexities regarding the attribute / children distinction are gone, relying solely on the logic of layered decorators. So the effort to shape the TreeMutator API paid off in the end, improving and rectifying the whole design...
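
For illustration, a hypothetical assembly of such a collection binding layer, spelling out the closures listed above; the builder entry points, the {{{Clip}}} type, {{{clips_}}}, {{{isClipSpec}}} and the member functions used here are assumptions, only the closure roles follow the list:
{{{
auto binding =
  TreeMutator::build()
    .attach (collection (clips_)                            // clips_ : std::vector<Clip>, owned by the widget
      .isApplicableIf ([&](GenNode const& spec) -> bool
         {
           return isClipSpec (spec);                        // hypothetical: this layer handles clip specs only
         })
      .matchElement ([&](GenNode const& spec, Clip const& elm) -> bool
         {
           return elm.getID() == spec.idi;                  // hypothetical ID comparison
         })
      .constructFrom ([&](GenNode const& spec) -> Clip
         {
           return Clip{spec};                               // build a new element to conform to the spec
         })
      .assignElement ([&](Clip& elm, GenNode const& spec) -> bool
         {
           return elm.update (spec);                        // hypothetical value assignment
         })
      .buildChildMutator ([&](Clip& elm, GenNode::ID const&, TreeMutator::MutatorBuffer buff) -> bool
         {
           elm.buildMutator (buff);                         // recursively expose the nested scope
           return true;
         }));
}}}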


You might have noticed already that this entire design is recursive: there is some part, where -- for handling a sub-structure -- a nested mutator is fabricated and placed into a given buffer. And only after understanding that part, it becomes clear to what end we are creating a customised mutator: we always need some (relative) parent scope, which happens to know more about the actual data to be treated with a TreeMutator. This scope assists with creating a suitable binding. Obviously, from within that binding, it is known what the sub-structures and children of that local data are all about and what semantics to give to the fundamental operations.
The TreeMutator is an intermediary to translate a generic structure pattern into heterogeneous local invocation sequences.
This page details some of the reasoning and documents decisions taken while shaping the chosen design.

!Motivation
In the context of Lumiera's metadata model and data exchange framework, we're facing a quite characteristic problem of generic programming: While in a classical ~OO-solution, a problem is defined by imposing distinct structural and typing constraints, the very idea of generic programming is to treat heterogeneous structures. More specifically, the whole approach of collaboration between loosely coupled components through the exchange of structural diff messages was chosen to subsume a wide array of individual structures under a model largely conceptual in nature. Without the need of a »master model« to be defined up-front. Without the danger of obsoleting this master plan in the course of discovering the actual specifics of implementation.

While such an approach is certainly highly desirable, we may expect some kind of serious //impedance mismatch// at some level down in the implementation. On one side we'll get messages referring to an implicitly typed, hierarchically organised (meta) data structure. On the other hand we'll get bare-bones implementation facilities, which represent those structures rather by following a logical and conceptual pattern of implementation. To describe this in a more tangible way: on one side we get a GuiModel, represented as a tree of symbolic descriptor nodes, on the other side we get a widget, reacting on the notification message due to pending changes pushed up from the lower layers. The widget implementation code calls back to pull a diff. While interpreting the attributes mentioned in the diff, it has to determine which widget state corresponds to the mentioned attributes, if applicable. Detected changes need to be interpreted and pushed into the corresponding widget and GTK elements. None of these details can be described in any way beforehand or designed as a library component, since only down in the gory details of the GUI implementation will we get a real chance to give the abstract, symbolic meta description a tangible meaning.

At the end of the day, this turns into a pragmatic challenge in terms of writing well organised code, since here the receiver side has to translate generic diff verbs into operations on hard-wired, language-level data structures. Through the classical ~OO-style approach -- which is always a good choice for backbone structures -- the //diff interpreter//-interface provides a backbone to be implemented, mapping the verbs of the diff language onto the invocation of virtual functions. Basically this is a double dispatch situation, i.e. some sort of visitor interface. While the implementation is still fairly straight-forward for re-ordering list elements, and can maybe even be stretched to cover the management of children (adding, reordering, removing) -- the really tricky design quest comes with the elementary mutations: here we end up with a single call from the interface
{{{
  mutate(string attributeID, Attribute newValue)
}}}
where {{{Attribute}}} is a generic union type -- which unfortunately gets us right into {{{if}}}-cascades or forces us into switch-on-type cascades, combined with hard casts. This results in writing some kind of boilerplate code repeatedly into each and every widget implementation:
{{{
     if (attributeID == "bla") {
         float val = newValue.get<float>();
         this.setXYZ(val);
     } else
     if (attributeID == "foo") {
         string val = newValue.get<string>();
         int bar = interpretZYX(val);
         this.collectionRR.add(bar);
     } else...
}}}

Not only is this kind of code repetitive, noisy, redundant and error prone -- implementing the thinly disguised cast is either completely unsafe, or forces us to embed a type tag into the implementation of the {{{Attribute}}} type and perform an additional match operation on each invocation.


!customisable intermediary
There is one fundamental problem we'll never be able to overcome: the //openness.// If we want to ensure -- structurally -- that all operations are wired up completely and correctly, we have to impose a structure plan onto the clients. And this always means the clients have to comply with a type or type class. This kind of build-time safety is gone the moment we loosen this requirement (and there are very good reasons indeed to create some kind of loophole in every architecture you create, since only loopholes at the right places allow for growth and adaptation). Yet beyond that fundamental limitation, we may be able to remedy most of the practical problems by turning around the direction of the interaction. Yes, this is again the »Inversion of Control« principle at work. At the usage site, the transformed solution even looks almost identical to the naive solution:
{{{
     TreeMutator(...)
       .change("bla", [&](float val) {
           this.setXYZ(val);
         })
       .change("foo"), [&](string val) {
           int bar = interpretZYX(val);
           this.collectionRR.add(bar);
         })
     ...
}}}
So this looks similar, but indeed now we create //operation bindings// up front, at creation time of the client (widget). Thus, the initiative originates from the implementing code, which //offers// a method as building block to implement the mutation of a conceptual tree-like structure. Actually, these "methods" are given as //closures//, implicitly tied into the implementation context. And since this configuration invocation carries implicit type information, we may generate the necessary adapters and visitor implementation at compile time.

This construction pattern can be extended to offer several optional extension hooks for the clients to supply. Extended even by offering more generic hooks to deal with recursive changes to whole sub-trees or collections of children of a specific type. Facing towards the diff application, the generated TreeMutator will expose a standardised interface to support implementing all the diff verbs necessary to describe structural and data changes. Operations, attributes and properties not outfitted by the client with actual bindings will be supplemented as ~NOP-implementation, silently ignoring any changes targeted towards the corresponding structures. This is the remaining loophole: if a client "forgets" to install a binding for some part of the structure, any changes directed towards this part will pass by unnoticed.

!!!practical problems
//posed as of 5/2015...//
* still not sure how to design the generic data elements representing "Objects" / "records"
* how to translate the {{{add}}} operation of tree diff into a native invocation
* how to integrate the recursive invocation properly
* how to deal with collections of children
* how to integrate typed children

&rarr; largely, these problems are settled {{red{as of 3/2016}}}
;objects
:...are represented within the fixed structures of the [[ETD|ExternalTreeDescription]], that is as combination of [[generic node elements|GenNode]]
;binding
:...between generic diff operations and native invocations happens with the help of pattern like building blocks, which are layered on top of the {{{TreeMutator}}} interface
;recursive tree mutation
:...means to build a //sub-//mutator for some part of the opaque, private data structure -- which in turn means to build a {{{TreeMutator}}} subclass into a provided buffer, as polymorphic value
;collections of children
:...are just one (actually the most important) building pattern, and we provide a template to handle ~STL-collections of children
;typed children
:...are integrated into this onion like layering of building blocks with the help of a //selector// to decide which building block is responsible for some concrete sub structure

!!!working with children
Handling tree structured object data imposes some additional constraints, in comparison to generic changes done to a flat list. One notable difference is that there are pre-existing //attributes,// which cannot be added or deleted and are known by name, not by positional order. Another point worth noting is the fact that child objects may be segregated into several collections by type. Since our goal is to provide an intermediary with the ability to map to arbitrary structures, we need to define the primitive operations necessary for implementing the concrete structural operations represented in the form of a diff
* add a child
* remove a child
* step to the next child
* find a specific child and place it at current position
* enter recursive mutation of a child
All these basic operations are implicitly stateful, i.e. they work against an assumed implicit state ("the current child"): we assume the children to be organised into ordered collection(s), possibly multiple collections segregated by type, and we assume that there is a current position, possibly a separate current position for each typed sub-collection. Moreover, we assume that the diff description comes in accordance with the order of actual children, so that the implementation can be organised into a "pass" over the children. This also implies that children are recognisable (have an identity), and that there is a way to enter a recursive mutation for a given child.

!!!how to provide the actual operations
To ease the repetitive part of the wiring, which is necessary for each individual application case, we can allow for some degree of //duck typing,// as far as building the TreeMutator is concerned. If there is a type which provides the above-mentioned functions for child management, these can be hooked up automatically into a suitable adapter. Otherwise, the client may supply closures, using the same definition pattern as shown for the attributes above. Here, the ID argument is optional and denotes a //type filter,// whereas the closure itself must accept a name-ID argument. The purpose of this construction is the ability to manage collections of similar children. For example
{{{
       .addChild("Fork", [&](string type, string id) {
           ForkType kind = determineForkType(type);
           this->forks_.push_back(MyForkImpl(kind, id));
         })
       .mutateChild("Fork", [&](string id) {
           MyForkImpl& fork = findFork(id);
           return fork.getMutator();
         })
     ...
}}}
The contract is as follows:
* a handler typed as {{{"Fork"}}} will only be invoked if the type of the child to be handled starts with such a type component, e.g. {{{"Fork.ruler"}}}
* the mutation handler has to return a reference to a suitable TreeMutator object, which is implicitly bound to the denoted fork. It can be expected to be used for feeding mutations to this child right away.

!!!remarks about the chosen syntax
The setup of these bindings is achieved through some kind of //internal DSL// -- so the actual syntax is limited by the abilities of the host language (C++).
Indeed, my first choice would have been a yet more evocative syntax
{{{
       .addChild("Fork") = { ...closure...}
       .mutateChild("Fork") = { ... another closure }
}}}
Unfortunately, the {{{operator=}}} is right-associative in C++, with no option to change that parsing behaviour. Together with the likewise fixed high precedence of the dot (member call), which also cannot be overloaded, we're out of options, even if willing to create a term builder construct. There is simply no way to prevent the parser from invoking the dot operator on the preceding closure. The workarounds would have been to use something other than '{{{=}}}' to create the bindings, to use a comma instead of a dot, or to disallow chaining altogether. All these choices seem rather counter-intuitive -- and the most important rule for defining a custom syntax is to stay within the realm of the predictable.


!Architecture
The distinguishing trait of Lumiera's diff handling framework is to treat a //diff language// as a scope, where some verbs or predications gain a specific meaning. While a concrete {{{DiffApplicator}}} binds to a specific agent or target environment, the meaning of the binding remains implicit. This approach draws upon the observation that dealing with structural differences actually addresses //several layers of language.// An ordered collection is one of them, some kind of object notion another one, and the exact meaning of structure yet another one.

In accordance with this understanding, the TreeMutator is shaped to form the attachment point of such a language construct onto a given structural context, which, through this very construct, remains opaque and thus decoupled. In the simple exemplary case of list diffing, the {{{DiffApplicator}}} requires the specialisation of some {{{DiffApplicationStrategy}}} to the concrete data container in use, most likely a STL {{{vector}}}. In a similar vein, a {{{DiffApplicator}}} for structural transformations requires the {{{DiffApplicationStrategy}}}'s specialisation to a TreeMutator. Which -- consequently -- needs to expose an API well suited to implement what a structural diff language might want to express. Yet, in addition, the TreeMutator also needs a binding API, allowing it to be linked into the actual structures subject to manipulation.

At this point, practical performance considerations settle the remaining decisions: Language application is a ''double dispatch'' at least, so we won't be able to get below two indirections ever. We could strive at getting close to this theoretical minimum though. Unfortunately, one of the indirections is taken by the verbs of the language, while the other one is consumed by decoupling the language from the Applicator or Interpreter, i.e. abstracting the diff representation from the meaning of the described changes when considered within a specific target context -- which is the very core of using a diff representation.
So we get to choose between
* binding the Interpreter directly into a target context, which imposes an interface onto the constituents of this target -- or --
* set up a flexible mapping through a third layer of indirection functors -- which, in a nutshell, is the design choice taken by the TreeMutator

!!!the problem with object fields
Since our goal is to represent changes to structured objects in the form of a diff message sequence, the ability to capture the essence of objects is crucial. We are not concerned with behaviour and state, yet we have to deal with the //object's visible properties.// This gets us into a design dilemma; the notion underlying our diff language is that of //structured data,// which indeed offers the concept of an //attribute// -- but visible properties are based on //object fields,// and thus have more limited capabilities, as being rooted in the class definition underlying each object. Consequently our diff language may express changes beyond the abilities of any ordinary object.
* attributes may be reordered
* attributes may be added and deleted
-- none of which has any tangible meaning for a regular (language) object. Contrast this with ExternalTreeDescription, which we crafted specifically to support diff language and diff messages, resulting in the ability to represent those changes explicitly. So, in theory, a given diff, when applied both to a GenNode and via TreeMutator to a native data structure, might lead to states not semantically equivalent. We cannot reliably protect ourselves against that possibility, but, on the other hand, it is not clear whether this poses an actual threat. Thus we'll tolerate extensions to the diff binding, to acknowledge usefulness in specific situations.

''A possible path to reconcile'' these inner contradictions is to support the mutation primitives ''as far as is sensible''.
Through analysis of the semantics, we could distinguish several flavours of "attributes", especially...
;object fields
:mandatory elements enforced by class definition, where values need to be supplied at construction time
;defaultable object fields
:regular object fields proper, where some construction logic is able to fill in defaults. After construction, they are indistinguishable from mandatory object fields
;optional properties
:elements rooted in the class definition, yet not necessarily given. The object is able to //detect their absence//, and may fill in a default or ignore missing properties
;attribute map
:an ordered collection of key-value associations, which can be enumerated, searched and extended
Thus we //could define a sensible handling for each of those cases, and deal with combinations// -- but --
it seems more adequate to limit ourselves just to object fields and to include the defaultable object fields through implementation leeway. Because, in the end, we really do need object fields, and anything beyond is just a materialisation of special behaviour and can be kept out of the diff system altogether. Through diff messages, we want to express structured changes, not metadata, nor closures, nor strategies, nor prototypes. And the attribute map is really something different from an object, and should be implemented separately, based on binding to an //attribute collection,// when it comes to applying diff messages to GenNode elements; the latter are able to //emulate// or //represent// objects, but by their nature are rather key-value associations arranged in nested scopes.

!!!mapping rules to handle object fields
So we basically ''disallow'' anything related to ''order'', ''re-ordering'' and ''deletion'' of object fields.
* when an object requires field values at construction time, this requirement has to be satisfied when inserting the object. The {{{ins}}} message must hold a complete value description in this case
* beyond that, fields can //only// be either assigned, or opened for nested mutation.
* for sake of consistency with the handling of a GenNode based [[ETD|ExternalTreeDescription]], we translate an {{{ins}}} message into an auto-accept followed by an assignment. In fact, any //defaultable object field,// when specified for the first time, //should// be given as {{{ins}}} message, because otherwise we would not be able to apply the same diff sequence to an equivalent GenNode representation of the same logical structure.
Consequently, we're left with only a very limited subset of diff expression applicable to an »object field onion layer«
* either a {{{pick}}} for every known field, possibly (optionally) followed by a {{{set}}} message with a new value
* or an {{{after END}}}, followed (optionally) by an arbitrary sequence of {{{set}}} messages.
To create a ''binding'' for any object field, we need
* a ''name'' for matching keys
* either a ''setter'' (lambda) or a ''mutator builder'' (lambda)
We have a distinct typing for each individual attribute, but this typing remains implicit: the lambda has to know what value to extract from the GenNode payload within the accepted diff message and how to apply this value. Especially for ''time entities'', which are modelled as immutable values, the lambda might want to build a [[time mutator|TimeMutation]]. And especially in the UI, timing specifications are expected to be translated into a //presentation grid.//

!!!policy to deal with non optional object fields
It turns out we're frequently forced to make a problematic decision here: in practice we often have the choice to treat a field as optional or mandatory. The diff protocol demands that all mandatory fields be sent immediately, with the {{{INS}}} verb causing object creation, but, generally speaking, both cases can be supported. Now, what makes this decision tough is the fact that it seems far cheaper to cheat our way around it: we may implement the fields as being optional, but actually treat them as being mandatory. And we just hope the diff will always provide the necessary values with further {{{INS}}} verbs right within the first population diff. Judging by widely accepted principles of pragmatism, this way to deal with the problem is even the most sensible way. Incidentally, this also relates to the pervasive use of property setters and getters in typical object based programming.
Unfortunately, there is a price to pay in the long run. Taking this approach creates fudge, and this fudge tends to spread into neighbouring code; even worse, over time, what is deemed //practically irrelevant// tends to happen nonetheless, sooner or later, causing fixes and complexities to creep into the code. A good guideline thus is not to base this decision on practicality, but to base it on the nature of those matters to be represented in objects: when something //by its own nature is mandatory,// then we should take the necessary steps to //represent it as mandatory.// If something is optional, maybe we can even try to decompose to get rid of it altogether.

So we arrive at the following guidelines:
* what is essential to the concept represented with an object should be initialised in the ctor and thus needs to be sent with a single {{{INS}}} verb.
* optional and configurable parts need a setter or similar mechanism, and can then be populated or manipulated through an //attribute binding.//


!!!types, attributes and layering
During the analysis it turned out that support for a mixed set of //typed child elements// is an important goal. However, we may impose the restriction that these typed sub collections will be kept segregated and not be mixed up within the child scope. Effectively, this was already the fundamental trait for the //symbolic object representation// chosen here. Thus, "attributes" become just one further kind of typed children; even "metadata" might be treated along these lines in future (currently, as of 2016 we have only one single piece of metadata, the type field in {{{diff::Record}}}, which is treated explicitly in the code).
This observation leads directly to an implementation based on layered decorators. Each kind of target element will be treated by another decorator exclusively, and control is passed through a chain of such decorators. Consequently, each decorator (or "onion layer") takes charge based on a //selector predicate,// which works on the given diff verb solely (no introspection and no "peeking" into the implementation data). We should note, though, that this implies some fine points of the diff language semantics are subject to this architectural decision -- especially the precise behaviour of the {{{AFTER}}}-verb depends on the //concrete layer structure// used at the target of the diff. Yet this seems acceptable, since it is the very nature of diff application to be dependent on internals of the target location -- just by definition we assume this inner structure of the target to be semantically congruent with the diff's source structure.
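
A sketch of this layering principle for a single primitive: each binding layer derives from the layer beneath (ultimately the TreeMutator base) and only takes over when its selector accepts the spec; the class and member names, and the drastically simplified {{{GenNode}}} stub, are assumptions:
{{{
#include <functional>
#include <string>
#include <utility>

struct GenNode { std::string name; };                 // drastically simplified stand-in

template<class PAR>                                   // PAR: TreeMutator or another binding layer
class BindingLayer
  : public PAR
  {
    std::function<bool(GenNode const&)> selector_;    // "is this spec relevant for this layer?"
    std::function<bool(GenNode const&)> inject_;      // this layer's own handling of injectNew

  public:
    BindingLayer (PAR base,
                  std::function<bool(GenNode const&)> selector,
                  std::function<bool(GenNode const&)> inject)
      : PAR{std::move (base)}
      , selector_{std::move (selector)}
      , inject_{std::move (inject)}
      { }

    bool
    injectNew (GenNode const& spec)  override
      {
        if (selector_(spec))
          return inject_(spec);                       // this layer takes charge
        else
          return PAR::injectNew (spec);               // otherwise delegate to the layer beneath
      }
  };
}}}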

!!!an alternative approach to »attributes«
In the end, the implementation resulting from this layered decorator solution leads to another, alternative solution to deal with //object properties:// using an ''Attribute Map''. Obviously, this solution is in direct contrast to the model of an object field as defined through the //object's class.// However, sometimes, some entities in practice will support an //open ended collection of optional custom properties// -- and this is where the attribute map model shines. And it should be noted that the implementation of the attribute map binding comes basically for free, once we have a standard binding in place to handle STL collections. There is only one technical discrepancy to bridge: a mapping container might be ordered (and in fact {{{std::map}}} is, implemented as a red-black tree); an intrinsic ordering of the container's contents is in contradiction to the basic concept of the diff framework, which assumes there is an explicit order, which can be changed by fetching and inserting some elements. And thus, this alternative representation of »attributes« should not be pushed too far.


!Implementation
__June 2016__: after defining and implementing several concrete bindings successfully, the TreeMutator interface can be considered a given now. It seems conceivable even to try re-implementing the (already existing, hard wired) binding for GenNode in terms of a TreeMutator with specially outfitted binding. Anyway, matters are investigated and clarified enough in order to build the actual diff application based on this new TreeMutator interface. Some tricky structural and technical problems remain to be solved for that.

!!!size of the concrete TreeMutator
Since our design avoids subclassing and does not demand the target data to comply with any interface, we get a //late binding// where the actual configuration and thus the behaviour of the concrete mutator is configured with lambdas at runtime, just before the diff is actually applied. This gets us into a situation somewhere between static and dynamic typing -- the functionality of the binding, as embodied into the lambdas, is of course known at compilation time, but we can't access this knowledge until right before invocation, when the actual instance is created at runtime. The problem with this is that we need to know the size of the mutator beforehand, since our diff language is defined in a way preventing recursive evaluation on the stack. We're bound to create the stack for dealing with nested scopes explicitly, as a data structure -- which means we need to know the maximum size of nested mutators to expect, or we're forced into implementing a flexible stack structure with explicitly pointer-linked global allocations. There are several possibilities for dealing with that problem
* the //simplest approach that could possibly work// is to define a global constant for a buffer size known to be sufficient for all relevant cases. An attempt to exceed this pre-set limit will terminate the program
* a slight extension to this approach is to differentiate this constant based on the type of the top-level data structure to be exposed
* we could catch exceptions caused by exceeding a heuristically chosen buffer limit and //learn// the actually required size for successive attempts. Depending on the actual implementation situation, this requires some way of aborting and re-starting a failed attempt of diff application, which is a complicated and expensive undertaking. This also prompts us to make this size information part of the application parametrisation, with the ability to adjust those values persistently as a result of this self-adapting mechanism.
Right now it looks sensible to start with the simplistic approach, while keeping the more elaborate possibilities in mind. Which especially means to turn the size management into an opaque implementation detail not linked to the visible diff-interpreter interface.
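
A sketch of that simplistic approach, placement-constructing the concrete mutator into a fixed-size, suitably aligned buffer; the class names, the compile-time size check and the minimal {{{TreeMutator}}} stub are assumptions for illustration:
{{{
#include <cstddef>
#include <new>
#include <utility>

class TreeMutator                                      // minimal stub; the real interface is sketched above
  {
  public:
    virtual ~TreeMutator()  { }
  };

template<std::size_t siz>
class MutatorFrame                                     // one stack frame of the explicit scope stack
  {
    alignas (alignof (std::max_align_t)) char buffer_[siz];
    TreeMutator* mutator_ = nullptr;

  public:
    template<class MUT, typename...ARGS>
    TreeMutator&
    emplace (ARGS&& ...args)
      {
        static_assert (sizeof(MUT) <= siz, "concrete TreeMutator exceeds the pre-set buffer size");
        mutator_ = new(&buffer_) MUT (std::forward<ARGS> (args)...);
        return *mutator_;
      }

   ~MutatorFrame()
      {
        if (mutator_)
            mutator_->~TreeMutator();                  // relies on the virtual dtor
      }
  };
}}}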

!!!implementation structure and invocation
To receive and apply a diff, client code has to build a {{{DiffApplicator<TreeMutator>}}} and somehow attach this to the target data structure. Then, after feeding the diff sequence, the target data structure will be transformed. This leads to kind of a chicken-and-egg problem, since only the target data structure has the ability to know how to fabricate a suitable concrete TreeMutator. Assuming the //size problem// (as detailed above) is already solved one way or the other, we're bound to use an interface where the target structure is prompted to build a suitable mutator into a provided buffer or storage frame. Typically, we'd either use metaprogramming or some ADL extension point to derive a suitable //mutator building closure// from the given target structure. In fact, the same situation repeats itself recursively when it comes to mutating nested scopes, but in that case, the necessary //mutator building closure// is already part of the mutator binding created for the respective parent scope, and thus can just be invoked right away.

The {{{DiffApplicator}}} itself can be made non-copyable; preferably it should be small and cheap to construct. At least we should strive at keeping the number of indirections and heap allocations low. Together with the requirement to keep it separate from the actual TreeMutator implementation, which is built //late, just before invocation,// we arrive at a ~PImpl structure, with the TreeMutator as ~PImpl and a scope manager hidden within the implementation part; the latter will be responsible for creating a suitable stack, and to use the //mutator building closure// in order to incorporate the concrete TreeMutator into the stack frames.

Pulling all those building blocks together, this architecture finally allows us to offer a (tree) diff-applicator for //»basically anything«:// The ctor, respectively the init method, of {{{DiffApplicator<Something>}}} relies on the metaprogramming or ADL technique embedded within the {{{TreeDiffTraits<Something>}}} to //somehow// get at a {{{DiffMutable&}}} -- the latter offers the one central and crucial method to build a TreeMutator implementation, internally wired in a proper way to {{{Something}}}'s internals. Which in turn enables us to build a suitable {{{ScopeManager}}} implementation with a stack size sufficient to hold this TreeMutator implementation. After this type specific setup, the rest of the diff application works entirely generically and can thus be delegated to the non-templated baseclass {{{TreeDiffMutatorBinding}}}...
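
To summarise the wiring, a hypothetical sketch of how a target type might hook into this machinery; beyond the type names mentioned in the text, all signatures, the {{{TimelineWidget}}} example and the usage lines are assumptions:
{{{
class TreeMutator;
struct MutatorBuffer;                                  // opaque placement handle (assumption)

struct DiffMutable                                     // "can receive a diff by building a mutator"
  {
    virtual ~DiffMutable()  { }
    virtual void buildMutator (MutatorBuffer&)  =0;    // place a concrete TreeMutator into the buffer
  };

class TimelineWidget
  : public DiffMutable
  {
    // ...private widget state and child collections...

    void
    buildMutator (MutatorBuffer& buff)  override
      {
        // assemble the onion-layered binding (collection layers, attribute
        // bindings) and placement-construct it into the given buffer
      }
  };

// hypothetical usage: TreeDiffTraits<TimelineWidget> yields the DiffMutable&,
// the ScopeManager sizes its stack accordingly, and the generic part consumes
// the diff message sequence
//
//   DiffApplicator<TimelineWidget> applicator{widget};
//   applicator.consume (diffMessage);
}}}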
//drafted service as of 4/10 &mdash; &rarr;[[implementation plans|TypedLookup]]//
A registration service to associate object identities, symbolic identifiers and types.

!Motivation
For maintaining persistent objects, generally a unique object ID is desirable. Within Lumiera, we employ 128-bit hash-~IDs (&raquo;{{{LUID}}}&laquo;). But hash-~IDs are difficult to handle for testing and configuration, as they aren't self-explanatory for human readers. They're best used in a way avoiding textual representation altogether.

Formal symmetry might be another motivation: we separate into objects and assets, where the latter represent the //bookkeeping view.// But there remain some cases where the asset side, the bookkeeping, is void of any substantial functionality. Just for the sake of orthogonality it //would be preferable//&nbsp; to maintain a registration (for scripted use, for overview and organisational activities by the user, for diagnostics and repair work in the saved session state). A mere data record suffices to fulfil these requirements -- yet  there are indeed other kinds of asset exposing a substantial amount of functionality.

Both these motivations highlight a tension between having a single global namespace of unique ~IDs, and having type-bound sub namespaces of limited scope, as the latter allows for using human readable symbolic ~IDs. Usually, when it comes to writing rules and configuration or creating instances explicitly from code, the specific context of the situation allows or even requires focusing immediately on a single, distinct kind of object &mdash; global uniqueness is not much of a concern here.

!desired functionality
A registration service backed by an index table can be used to //translate//&nbsp; between these two seemingly contradictory usage perspectives.
* re-accessing the specific type to deal with data known just by unique object ID.
* retrieving the unique implementation instance, given just a symbolic ID and the type to use.
* ensuring uniqueness of symbolic ~IDs //within one kind of entities.//
* enumerating all instances of a specific kind
* ability to handle instance registration and de-registration automatically

!!!usage scenarios
* automatically maintained lists of all clips, labels, forks and sequences in the &raquo;Asset&laquo; section of the application
* addressing objects in script driven operations, just based on symbolic names, allowing for additional conventions
* implementing a predicate {{{isTypeXXX(Element)}}} (a »type guard«), which is crucial for [[rules based configuration|ConfigQuery]].

!!!{{red{WIP}}}Analysis and discussion
There are still some contradictions &mdash; and this facility rather seems helpful, not so much necessary right now.
We //already have a registration service,// both for Assets (AssetManager) and for Placements (PlacementIndex). These facilities maintain not only a raw ID &harr; object association, but also structuring information, albeit bound to more specific circumstances (the system of placement scopes, and the asset category). The lookup uniqueID &rArr; object could be implemented by sequentially querying this small number of central registration facilities. Thus, what is still lacking is a ''system of sub index tables''.

As mentioned above, an ID &harr; type association plays a crucial role when it comes to implementing any kind of rules based configuration. It would allow us to bridge from our session objects to rules and resolution working entirely symbolically. (&rarr; [[more|ConfigQueryIntegration]]). But, as of 3/2010 this is a //planned feature and not required to get the initial pipeline working.// Thus, according to the YAGNI principle, we shouldn't engage in any implementation details right now and should just create the extension points.
{{red{1/2015 Remark}}}: meanwhile we've come across several situations calling for //some element// present here in this design draft. So: yes, we //will need that lookup system,// but not right now.

The immediate need prompting us to start development on this facility is how to get sub-selections from the objects in the session and for certain kinds of asset &mdash; especially how to retrieve the referred fork for the &rarr; [[sequence and timeline handling|ModelDependencies]].
<<<
So the plan is to use ''per type'' mapping tables for an association: ''symbolic-ID &rarr; unique-ID''
There should be a configurable slot to ''attach an object reference'' &mdash; extensions to be defined later
Just a ''registration scheme'' should be implemented right now, working completely automatically
<<<
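A minimal sketch of this plan follows -- with hypothetical, simplified types only ({{{TypedTable}}}, {{{Entry}}} and the 64-bit {{{LUID}}} stand-in are illustrative, not the actual implementation):
{{{
#include <string>
#include <unordered_map>
#include <cstdint>
#include <any>

// Minimal sketch of the plan quoted above (hypothetical, simplified types):
// one mapping table per registered kind, associating symbolic-ID -> unique-ID,
// plus a configurable slot to attach an (opaque) object reference later.

using LUID = uint64_t;                  // stand-in; the real LUID is a 128-bit hash-ID

struct Entry
  {
    LUID     uniqueID;
    std::any objectRef;                 // "attach an object reference" -- extension slot
  };

template<class TY>                      // one table instance per type TY
class TypedTable
  {
    std::unordered_map<std::string, Entry> entries_;
  public:
    bool
    enrol (std::string const& symbolicID, LUID id)
      {                                 // enforces uniqueness of symbolic-IDs per type
        return entries_.emplace (symbolicID, Entry{id, {}}).second;
      }
    Entry*
    lookup (std::string const& symbolicID)
      {
        auto pos = entries_.find (symbolicID);
        return pos == entries_.end()? nullptr : &pos->second;
      }
  };
}}}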

!!!!notes
* //code bloat// &mdash; need a frontend and an untyped backend for the raw storage
* one table or many tables?
** obviously, one table is more space efficient
** but it would be redundant (both PlacementIndex and AssetManager //are// implemented with a large hashtable)
** moreover, enumeration of all elements of a specific kind was one of the reasons to build this facility
* supporting 1:n? (e.g. one track-ID for many track objects) &mdash; rather not directly, because this defeats lookup by ID
* a possible drawback is lock contention -- instance creation and disposal tends to become single-threaded.
see [[implementation planning|TypedLookup]]
TypedID is a registration service to associate object identities, symbolic identifiers and types. It acts as frontend to the TypedLookup system within ~Steam-Layer, at the implementation level. While TypedID works within a strictly typed context, this type information is translated into an internal index on passing over to the implementation, which manages a set of tables holding base entries with a combined symbolic+hash ID, plus an opaque buffer. Thus, the strictly typed context is required to re-access the stored data. But the type information wasn't erased entirely, so this typed context can be re-gained with the help of an internal type index. All of this is considered implementation detail and may be subject to change without further notice; any access is assumed to happen through the TypedID frontend. Besides, there are two more specialised frontends.

!Front-ends
* TypedID uses static but templated access functions, plus a singleton instance to manage a ~PImpl pointing to the ~TypedLookup table
* the individual //registration groups// (see below) are automatically exposed as MetaAsset.
* TypeHandler is heavily used by the ConfigRules {{red{planned}}}; each participating primary type provides an implementation

!Tables and index
The Table consists of several registration groups, each of which contains a hashtable and deals with one specific type. Groups are created on demand, but there is initially one special group to hold the internal type index (translation table). There may be arbitrary further groups with special meaning for parts of the application; there may even be sub-groups, usable to create clusterings within one group.

__Individual entries__ are comprised of an EntryID as key (actually a ~BareEntryID, without the typing) and a payload, which //doesn't store data,// but information necessary to //lookup and access// the registered object. Obviously, this information is type specific, and thus the ~TypedLookup implementation can't know how to deal with it directly. Rather, we store a ''functor'' in the payload of the type index group. This functor is directly linked to the TypeHandler, and the notion of a ''primary type'' within Lumiera. The system is not based on a hierarchy of data, but a network of related concepts. Any type involved in the metadata and self-adaptation encoded into this setup -- more specifically, any type to act as first class citizen within the ConfigRules -- any such type needs to provide a suitable functor implementation through its TypeHandler. The concrete "incantation" of these functors, as necessary to access and use the type, is stored in the mentioned special ''type index group''. From there it can be invoked by the ~TypedID frontend, to interpret the access data in the individual entries, to be able to retrieve or re-access a registered entity by ID, by name or by type.
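The following sketch (hypothetical names, drastically simplified, with {{{std::any}}} standing in for the opaque payload) illustrates the idea of a per-type access functor held in such a type index:
{{{
#include <functional>
#include <unordered_map>
#include <typeindex>
#include <any>

// Sketch (hypothetical names) of the "type index group": for each primary
// type, a functor is registered which knows how to interpret the opaque
// access data stored in the individual entries of that type's group.

using AccessFunctor = std::function<std::any (std::any const& payload)>;

class TypeIndex
  {
    std::unordered_map<std::type_index, AccessFunctor> accessors_;
  public:
    template<class TY>
    void
    install (AccessFunctor fun)          // provided by the type's TypeHandler
      {
        accessors_[std::type_index(typeid(TY))] = std::move (fun);
      }
    template<class TY>
    std::any
    access (std::any const& payload)  const
      {                                  // invoked by the frontend to re-gain
        return accessors_.at (std::type_index(typeid(TY))) (payload);   // the typed context
      }
  };
}}}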

!link for automatic registration
An entity can be linked with the TypedLookup system to be registered and deregistered automatically. This is achieved by mixing in the {{{TypedID::Link}}}. On creation, this will set up an EntryID for this individual instance and cause creation of an empty entry within the suitable registration group. As a side-effect, uniqueness of any symbolic-ID within one group (type) is enforced. Obviously, the dtor of this registration Link takes care of de-registration automatically. Be forewarned though: by creating a unique identity, this mechanism will interfere with copying and cloning of the registered entity.
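A condensed sketch of such an automatic registration link follows -- {{{Registry}}} and {{{RegistrationLink}}} are hypothetical placeholders, not the actual {{{TypedID::Link}}}:
{{{
#include <string>
#include <set>
#include <stdexcept>

// Condensed sketch (hypothetical registry) of the registration link:
// the ctor enrols a unique symbolic-ID, the dtor de-registers it, and
// copying is disabled, since each link embodies a distinct identity.

class Registry
  {
    std::set<std::string> ids_;
  public:
    void enrol (std::string const& id)
      {
        if (not ids_.insert(id).second)
          throw std::invalid_argument ("symbolic-ID already registered: "+id);
      }
    void drop (std::string const& id) noexcept { ids_.erase (id); }

    static Registry& instance() { static Registry reg; return reg; }
  };

class RegistrationLink               // mix into the entity to be registered
  {
    std::string id_;
  public:
    explicit RegistrationLink (std::string id)
      : id_{std::move (id)}
      { Registry::instance().enrol (id_); }

   ~RegistrationLink()  { Registry::instance().drop (id_); }

    RegistrationLink (RegistrationLink const&)            = delete;
    RegistrationLink& operator= (RegistrationLink const&) = delete;
  };
}}}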

In most cases, the //actually usable instance// of an entity isn't identical to an implementation class (language) instance. Typically, there is a frontend, a smart-ptr, reference or similar, which in turn might link to another registration mechanism. This ''actual instance tag'' needs to be attached deliberately through the {{{TypedID::Link}}} to be stored into the payload of the registration group table entry. This involves invocation of the functor provided by the TypeHandler.

!basic usage patterns
* ''Assets'' are maintained by the AssetManager, which always holds a smart-ptr to the managed asset. Assets include explicit links to dependent assets. Thus, there is no point in interfering with lifecycle management, so we store just a ''weak reference'' here, which the access functor turns back into a smart-ptr, sharing ownership.
* Plain ''~MObjects'' are somewhat similar, but there is no active lifecycle management &mdash; they are always tied either to a placement, or linked from within the assets or render processes. When done, they just go out of scope. Thus we too use a ''weak reference'' here, thereby expecting the respective entity to mix in {{{TypedID::Link}}}.
* Active ''Placements'' of an MObject behave like //object instances// within the model/session. They live within the PlacementIndex and carry a unique {{{LUID}}}. Thus, it's sufficient to store this ''~Placement-ID'', which can be used by the access functor to fetch the corresponding Placement from the session.
Obviously, the ~TypedLookup system is open for addition of completely separate and different types.
[>img[TypedLookup implementation sketch|uml/fig140293.png]]
|>| !Entity |!pattern |
|1:1| Fork|Placement|
|~| Label|Placement|
|~| Sequence| Asset|
|~| StreamType| Asset|
|1:n| Tag| Asset|
|~| Clip| ~MObject|
|~| Effect| ~MObject|
|~| Transition| Asset(?)|
//the problem of processing specifically typed queries without tying query and QueryResolver.//<br/>This problem can be seen as an instance of the problematic situation encountered with most visitation schemes: we want entities and tools (visitors) to work together, without being forced to commit to a pre-defined table of operations. In the situation at hand here, we want //some entities// &mdash; which we don't know &mdash; to issue //some queries// &mdash; which we don't know either. Obviously, such a problem can be solved only by controlling the scope of this incomplete knowledge, and the sequence in time of building it.
Thus, to re-state the problem more specifically, we want the //definition//&nbsp; of the entities to be completely separate from those definitions concerning the details of the query resolution mechanism, so as to be able to design, reason, verify and extend each one without being forced into concern about the details of the respective other side. So, the buildup of the combined structure has to be postponed &mdash; assuring it //has//&nbsp; happened at the point its operations are required.

!Solution Stages
* in ''static scope'' (while compiling), just a {{{Query<TY>}}} gets mentioned.<br/>As this is the only chance for driving a specifically typed context, we need to prepare a registration mechanism, to allow for //remembering//&nbsp; this context later.
* at the same time, but otherwise independently, compilation needs to drive the generation of possible query resolution mechanisms. As C++ only allows very limited influence on the compilation process, which is always based on a common visibility scope, and on the other hand doesn't allow re-accessing the static level after the fact (no language-rooted introspection), we're forced to use an external tool, or to rely on manual specification. As we're building a rather specialised facility, not a general purpose library, the latter seems adequate, if done close to where the actual query resolution mechanism is implemented.
* during ''initialisation'', a complete list of all specifically typed scopes needs to be collected. This might be a flat list, but may well become structured in multiple dimensions later, when using multiple //kinds of query.// It is important to notice that this buildup of a complete list won't happen, unless triggered explicitly, so we need registration into a system lifecycle hook.
* on ''first use'' (which we must ensure to be //after//&nbsp; initialisation) the list of typed queries will be hooked into the query resolving facility to build a dispatcher table there.
* the ''actual query'' just picks the respective type-ID for accessing the dispatcher table, making for a total of //two//&nbsp; function-pointer indirections.
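A heavily simplified sketch of such a dispatcher follows; all names ({{{queryTypeID}}}, {{{Dispatcher}}}, the untyped function signature) are hypothetical and merely illustrate the two-indirection lookup:
{{{
#include <cstddef>
#include <vector>

// Sketch (hypothetical, heavily simplified) of the dispatcher mentioned above:
// each query result type draws a small numeric type-ID once; the resolver keeps
// a table of resolution functions indexed by that ID, so an actual query costs
// just the table lookup plus the call through the stored function pointer.

inline size_t
nextTypeID()
  {
    static size_t counter = 0;
    return counter++;
  }

template<class TY>
size_t
queryTypeID()                        // one distinct slot per query result type
  {
    static const size_t id = nextTypeID();
    return id;
  }

using ResolutionFun = void* (*) (void* queryData);

class Dispatcher
  {
    std::vector<ResolutionFun> table_;
  public:
    template<class TY>
    void
    install (ResolutionFun fun)      // invoked on "first use", after initialisation
      {
        size_t id = queryTypeID<TY>();
        if (id >= table_.size()) table_.resize (id+1, nullptr);
        table_[id] = fun;
      }
    template<class TY>
    void*
    resolve (void* queryData)        // the "actual query": two indirections
      {
        return table_[queryTypeID<TY>()] (queryData);
      }
  };
}}}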

!{{red{Preliminary solution}}}
The above outlines the planned full version of this service.
For now, as of 6/10, we use specialised QueryResolver instances explicitly and don't need the lifecycle hook registration yet. But a dispatcher table is in place already.
&rarr; QueryRegistration
Abstraction used in the Backbone of Lumiera's GTK User Interface
The UI-Bus is a ''Mediator'' -- impersonating the role of the //Model// and the //Controller// in the [[MVC-Pattern|http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller]] in common UI architecture.

The ~MVC-Pattern as such is fine, and probably the best we know for construction of user interfaces. But it doesn't scale well towards the integration into a larger and more structured system. There is a tension between the Controller in the UI and other parts of an application, which as well need to be //in control.// And, even more important, there is a tension between the demands of UI elements for support by a model, and the demands to be placed on a core domain model of a large scale application. This tension is resolved by enacting these roles while transforming the requests and demands into //Messages.//

!!!rationale
This way, we separate between immediate local control of UI state and the more global, generic [[interaction control|InteractionControl]] and [[command binding|GuiCommandBinding]].
And we arrive at a backbone like structure, which can be wired, programmed and reasoned about while abstracting from the local concerns and the state management related to the UI toolkit set in use. Any element //of more than local relevance// -- be it a controller or be it a widget, representing some entity from the model -- will be attached to the UI-Bus by virtue of a ''bus terminal'', establishing a bi-directional connection. This enables any UI element to invoke commands and persist presentation state (by sending //"state mark" messages//), and in turn this allows the session and engine core to "cast" state updates towards the UI, without knowing in detail where and how to reach the relevant presentation entities.

!!!topology
The UI-Bus has a simple star shaped topology without hierarchical nesting and forwarding. It need not necessarily be connected this way, since protocol and contracts do not assume any topology explicitly -- and the topology may be subject to change without further notice. We'll only assume there is a central hub somewhere in the bus, which maintains a routing table. Actually this hub represents the centre of the star and is known as ''Nexus''. Whenever an UI element connects to the bus, a routing entry is added, which allows the element to be directly invoked to receive messages. For that reason, every element connected this way must stay put, at a fixed memory location.

!!!degree of connectedness
A connection point to the UI-Bus is called a ''Bus Terminal'' and implemented by class {{{BusTerm}}}. In fact, this class is the foundation of the ~UI-Bus network, with the Nexus being a specialisation. The //contract// involved with this connection requires the connected entity to respond to several messages, which effectively also means that this connected entity is a tangible UI-Element ({{{gui::model::Tangible}}} implements the necessary functions). But this pertains only to fully bidirectional bus connections, where the connected entity can receive messages addressed by its ID. Beyond that, it is very much possible to operate a "free standing" bus terminal, which just allows sending "uplink" messages into the bus, without the need to add this terminal itself to the central routing table.

!!!core services
As far as the UI is concerned, »the application core« appears just as a node somehow connected to the bus. In fact there is a special backbone entity known as CoreService to fulfil that role. Even more, this service manages the lifecycle of the UI-Bus as a whole, insofar as it incorporates the Nexus as a member, and is responsible for exposing the external interfaces of the GUI. And the CoreService is responsible for actually handling the {{{act}}} and {{{note}}} messages at one central location.


!Bus interactions
The UI-Bus has a star shaped topology, with a central "bus master" hub, which maintains a routing table. Attachment and detachment of elements can be managed automatically, since all of the UI-Bus operations perform within the UI event thread. We distinguish between up-link messages, directed towards some central service (presentation state management or command invocation) and down-link messages, directed towards individual elements. The interactions at the bus are closely interrelated with the elementary UI-Element operations.
;act
:send a GenNode representing the action
:* a command prototype corresponding to the message's ID is cloned and outfitted with actual parameter values
:* the resulting command instance is handed over to the SteamDispatcher for execution
;note
:send a GenNode representing the //state mark//
:some (abstracted) [[presentation state|PresentationState]] manager is expected to listen to these messages, possibly recording state to be restored later
;mark
:down-link communication to //feed back// state updates or to replay previously recorded  //state marks//
;change
:direct a MutationMessage towards the designated UI-Element, causing the latter to build a TreeMutator to receive the embedded [[diff-sequence|TreeDiffModel]]
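Merely to illustrate these four interaction verbs, here is a hypothetical, simplified interface sketch -- not the actual {{{BusTerm}}} API; GenNode and MutationMessage are reduced to bare placeholders:
{{{
#include <string>

// Illustrative sketch only (hypothetical, simplified signatures): it restates
// the four interaction verbs of the UI-Bus as an abstract interface, with
// GenNode / MutationMessage reduced to placeholder structs.

struct GenNode         { std::string id; std::string payload; };
struct MutationMessage { std::string diffSequence; };

class BusConnection
  {
  public:
    virtual ~BusConnection() = default;

    // up-link: trigger a command corresponding to the message's ID
    virtual void act  (GenNode const& command)            =0;

    // up-link: emit a "state mark" to be recorded by the state manager
    virtual void note (GenNode const& stateMark)          =0;

    // down-link: feed back or replay a previously recorded state mark
    virtual void mark (GenNode const& stateMark)          =0;

    // down-link: apply a diff sequence to the designated UI-Element
    virtual void change (MutationMessage const& diffMsg)  =0;
  };
}}}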
While our UI widgets are implemented the standard way as proposed by [[GTKmm|http://www.gtkmm.org/en/documentation.html]], some key elements -- which are especially relevant for the anatomy and mechanics of the interface as a whole -- are made to conform to a common interface and behaviour protocol. {{red{WIP 11/15 work out gradually what this protocol is all about}}}. #975
As a starting point, we know
* there is a backbone structure known as the UI-Bus
* the latter is somehow related to the [[UI-model|GuiModel]] (one impersonates or represents the other)
* each {{{gui::model::Tangible}}} has a ''bus-terminal'', which is linked to the former's identity
* it is possible to wire ~SigC signals so as to send messages via this terminal into the UI-Bus
* these messages translate into command invocations towards the Steam-Layer
* ~Steam-Layer responds asynchronously with a diff message
* the GuiModel translates this into notifications of the top level changed elements
* these in turn request a diff and then update themselves into compliance.

!Behaviours
Any element in the UI can appear and go away for arbitrary reasons. This corresponds to attachment to and deregistration from the UI-Bus.

In regular, operative state, an interface element may initiate //actions.// These correspond to invocation of [[Steam-Layer commands|CommandHandling]], after having supplied the necessary arguments. Commands are referred to and called //by name// (command-ID) and they are thus not bound to a specific UI-Element. Rather, it is the job of the UI to provide sensible command arguments, based on the context of the operation. There might be higher-level, cooperative [[gestures|Gesture]] within the interface, and actions might be formed like sentences, with the help of a FocusConcept -- however, at the end of the day, there is a ''subject'' and a ''predicate''. And the interface element takes on the role of the underlying, the subject, the ''tangible''.

Some actions are very common and can be represented by a shorthand. An example would be to tweak some property, which means to mutate the attribute of a model element known beforehand. Such tweaks are often caused by direct interaction, and thus have the tendency to appear in bursts, which might be batched to remove some load from the lower layers.

And then there are manipulations that //alter presentation state:// Scrolling, canvas dragging, expanding and collapsing, moving by focus or manipulation of a similar presentation control.
These manipulations in themselves do not constitute an action. But there typically is some widget or controller, which is responsible for the touched presentation state. If this entity judges the state change to be relevant and persistent, it may send a ''state mark'' into the UI-Bus -- expecting this marked state to be remembered. In turn this means the bus terminal might feed a state mark back into the tangible element, expecting this state to be restored.

A special case of state marking is the presentation of transient feedback. Such feedback is pushed from "somewhere" towards given elements, which react through an implementation dependent visual state change (flashing, colour change, marker icon). If such state marking is to be persistent, the interface element has in turn to send a specific state mark. An example would be a permanent error flag with an explanatory text shown on mouse-over. Which in turn means there can also be sweeping state reset messages, which are to be broadcast: a general "reset", an indication to collapse everything, or to clear pending error flags.

And finally, there are the //essential updates// -- any changes in the model //for real.//
These are sent as notifications just to some top level element, expecting this element to request a diff and to mutate contents into shape recursively.


!Interactions
;lifecycle
:connect to an existing term, supply the {{{EntryID}}} of the new element.
:automatically detach at end of life.
;act
:send a GenNode representing the action
;note
:send a GenNode representing the //state mark//
;mark
:receive a GenNode representing the //feedback// or replayed //state mark//
;signals
:the individual element exposes some generic signal slots
:* slotExpand prompts the element to transition into expanded / unfolded state. If this state is to be sticky, the element answers with a ''state mark''
:* slotReveal prompts the element to bring the indicated child into sight. Typically, this request will "bubble up" recursively.
:these slots are defined to be {{{sigc::trackable}}} for automated disconnection (&rarr; see [[#940|http://issues.lumiera.org/ticket/940#comment:3]] for an explanation)
;diff
:ask to retrieve a diff, which
:* either is an incremental status update
:* or is a from-scratch reconfiguration

We should note that these conventions of interchange lead to a recursive or ''self similar design'' of the UI-Bus: Each {{{BusTerm}}} is a personalised connection to yet another {{{BusTerm}}}. Even the ''bus master'' appears as just another {{{BusTerm}}} to all the communication partners. The overall topology of the bus might be reshaped without the participating elements being aware of such a change.

!Persistent presentation state
There is the distinction between //model state// and //presentation state,// the latter being confined to the //way you work with the model.// Every UI is stateful and reshapes itself through interaction -- but the common UI toolkits just give us this state as //transient state,// maybe with some means to restore state. The overall topic of PresentationState in an elaborate work environment like a NLE can be quite complex -- in Lumiera we try to separate the persistent part from the transient state, and we do so by exploiting the loose coupling achieved through the UI-Bus. For this to work we need the individual //tangible elements// to announce any presentation state transition deemed ''relevant persistent presentation state'' by them. To stress this last point: it is a decision //local// to the UI-Element, what to consider //relevant// and //what meaning// to give to this state.

Anyway, whenever such a relevant change happens -- consider e.g. the presentation of a clip has been switched into //expanded display,// detailing all attached effects -- then the UI-Element responsible for that state shall emit a ''state mark'', which essentially appears as message on the UI-Bus. We expect a PresentationStateManager to listen somewhere on the bus, to note and to categorise these messages and to provide the actual mechanism for persistent state. Any such //state mark// message is implicitly associated with the ID of the originating entity (just through the way the UI-Bus works), and its payload is a GenNode. The symbolic ID of this state mark element can be chosen freely, as it lives within a distinct namespace. Obviously, some kind of protocol emerges here, where commonly used state transitions acquire a generic meaning. Terms like "{{{expand}}}", "{{{reset}}}" or "{{{revealYourself}}}" are so common as to warrant a handler within the {{{model::Tangible}}} base class, delegating to virtual implementation functions, like {{{virtual void doExpand(bool)}}}. This is to say that the individual UI-Element is expected to be able to //receive// state mark messages, because those messages might be ''re-played'' by the state manager, in a subsequent session, attempting to restore previously recorded UI state. Consequently, each concrete element is expected
* to organise the relevant state transitions, so they can be represented with a (basically private) vocabulary
* to provide and implement a data representation of the state, using the data types available as payload within GenNode
* to create the ability to restore presentation state to the degree deemed adequate, on reception of such a message
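As a sketch of how such re-played state marks might be routed -- with a drastically simplified GenNode stand-in and hypothetical class names, not the actual {{{model::Tangible}}} code:
{{{
#include <string>

// Sketch, assuming a drastically simplified GenNode stand-in: the base class
// routes commonly used state marks to virtual hooks, which each concrete
// UI-Element overrides to restore its own presentation state.

struct GenNode { std::string idSym; bool flag = false; };   // placeholder only

class TangibleBase
  {
  public:
    virtual ~TangibleBase() = default;

    void
    mark (GenNode const& stateMark)          // re-played by the state manager
      {
        if (stateMark.idSym == "expand")
          doExpand (stateMark.flag);
        else if (stateMark.idSym == "reset")
          doReset();
        else
          doMark (stateMark);                // element-private vocabulary
      }
  protected:
    virtual void doExpand (bool yes)            =0;
    virtual void doReset()                      =0;
    virtual void doMark (GenNode const& state)  =0;
  };
}}}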

!Command activation
While the above definitions might seem more or less obvious and reasonable, there is one tiny detail, which -- on second thought -- unfolds into a fundamental decision to be taken. The point in question is //how we refer to a command.// More specifically: is referring to a command something generic, or is it rather something left to the actual implementing widget? In the first case, a generic foundation element has to provide some framework to deal with command definitions, whereas in the second case just a protected slot to pass on invocations from derived classes would be sufficient. This is a question of fundamental importance; subsidiarity has its merits, but once we forgo the opportunity to build from a generic pattern, local patterns will take over, while similarities and symmetries have to grow and wait to be discovered some time later, if at all. This might actually not be a problem -- yet if you know Lumiera, you know that we tend to look at existing practice and draw fundamental conclusions, prior to acting.
&rarr; InteractionControl
&rarr; GuiCommandBinding
&rarr; [[Command handling (Steam-Layer)|CommandHandling]]

!!!actual implementation of command invocation
In the simple standard case, an UI event (like pressing a button) leads directly to invocation of a known command with locally known arguments. In such cases, the command can be triggered right away, using the nearest UI-Bus connection available. But there are more complicated cases, where invoking the command happens as a result of user interaction, and some of the actual arguments need to be picked up from the current context by suitable match. To deal with such cases, the InteractionState helper is used to pick up this contextual data.
Anyway, at the end we send a message over the UI-Bus, indicating the command instance through the GenNode ID and providing the command arguments in the payload as {{{diff::Record<GenNode>}}}.

!Essential update {{red{WIP 9/2016}}}
It is clear by now (9/2016) how shape and content changes are to be represented as diff messages. Moreover, we have an implementation framework to build the concrete TreeMutator, which allows us to target diff messages towards an (otherwise undisclosed) opaque implementation data structure.
!!!questions
* how to integrate "diff calls" into UI-Bus and {{{Tangible}}}
* how to ensure that diff processing happens within the event thread
* what is the exact relation to the GuiModel -- this remained nebulous up to now ({{red{8/2018}}} &rArr; seems to be obsolete)

!!!Diff messages
The dispatch of //diff messages// is directly integrated into the UI-Bus -- which is not just some abstract messaging system, but deliberately made to act as backbone for the kind of operations and collaborations detailed here. This seamless integration helps to overcome several technical (detail) problems. For one, the diff framework itself is //generic,// which allows diff sequences to be applied to a wide variety of different target data structures. And, secondly, we chose the whole approach of diff messages delivered over a bus system for a very specific, architectural reason: we want to decouple from internal structures, while still allowing low-level collaboration over this abstracted interconnection channel. On both counts, we are thus bound to encapsulate the actual diff at interface level. So the MutationMessage marks the purpose to alter the designated {{{Tangible}}}, while offering an embedded, generic implementation to apply the (hidden) diff message, relying in turn on the interface {{{DiffMutable}}}, which -- surprise -- is offered by {{{Tangible}}}. Basically this means that each tangible interface element offers the ability to be reshaped, by creating a TreeMutator on demand, internally wired to effect the changes inscribed within the diff.
The architecture of the Lumiera application separates functionality into three Layers: __Stage__, __Steam__ and __Vault__.

The Graphical User interface, the upper layer in this hierarchy, embodies everything of tangible relevance to the user working with the application. The interplay with Steam-Layer, the middle layer below the UI, is organised along the distinction between two realms of equal importance: on one side, there is the immediate //mechanics of the interface,// which is implemented directly within the ~UI-Layer, based on the Graphical User Interface Toolkit. And, on the other side, there are those //core concerns of working with media,// which are cast into the HighLevelModel at the heart of the middle layer.
//A topological addressing scheme to designate structural locations within the UI.//
Contrary to conventional screen pixel coordinates, here we aim at a topological description of the UI structure. Such a framework of structural reference allows us
* to refer to some "place" or "space" within the interface
* to remember and return to such a location
* to move a [[work focus (»Spot«)|Spot]] structurally within the UI
* to describe and configure the pattern of view access and arrangement

As a starting point for the design of such a framework, we'll pick the notion of an access path within a hierarchical structure:
* the top-level window
* the perspective used within that window
* the panel within this window
* a view group within the panel
* plus a locally defined access path further down to the actual UI element

!Properties
UI coordinates are a symbolic specification, and as such their representation is an immutable value. Essentially, a coordinate specification designates a pathway, and thus is comprised of a sequence of symbols. The individual path component symbol can be addressed by its depth index, starting with the top level window as rooting point. The meaning of the first steps within each path is thus always fixed (top-level window, perspective, panel, view), followed by an essentially open sequence of local component names. Yet a given index position need not necessarily be actually defined. A coordinate spec can be indeterminate (NIL), but in all other cases a minimal consecutive sequence of symbols is guaranteed to exist.
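A minimal sketch of such an immutable coordinate value -- hypothetical names, deliberately simplified, using plain strings for the component symbols:
{{{
#include <string>
#include <vector>
#include <stdexcept>

// Minimal sketch (hypothetical, simplified) of an immutable coordinate value:
// a sequence of path component symbols, addressable by depth index, where the
// first positions carry the fixed meaning window / perspective / panel / view.

class Coord
  {
    std::vector<std::string> path_;
  public:
    explicit Coord (std::vector<std::string> components)
      : path_{std::move (components)}
      { }

    size_t size()  const { return path_.size(); }
    bool   empty() const { return path_.empty(); }          // the "NIL" spec

    std::string const&
    operator[] (size_t depth)  const
      {
        if (depth >= path_.size())
          throw std::out_of_range ("no component defined at this depth");
        return path_[depth];
      }

    std::string const& window()       const { return (*this)[0]; }
    std::string const& perspective()  const { return (*this)[1]; }
    std::string const& panel()        const { return (*this)[2]; }
    std::string const& view()         const { return (*this)[3]; }
  };
}}}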

!!!about component names
The individual components within such a coordinate spec are given as name-~IDs. In this context, we are more interested in the //topology// of UI components, and thus the names serve to distinguish sibling nodes. In fact, those names are created as type names and then decorated with a number suffix when the actual components are created. In this form, those names become real identifiers attached to those widgets, and can be used to access the respective implementation entity. Most of the elements designated and accessed this way are //mere UI entities// -- however, some elements can be representations of corresponding elements in the session (e.g. Tracks, Clips). If that is the case, obviously the name-ID has to match the "official" ID as used within the session.

!!!flavours of meaning
A given coordinate spec needs to be //interpreted// with the help of a resolver, which supplies additional knowledge regarding the actual window and UI configuration backing those interpretations. Such an interpretation allows querying for some typical predications, and it allows for //mutating operations,// which actually build a new coordinate spec from the given one. The general assumption is that all those coordinate interpretations happen outside any excessively performance critical zone, where a little bit of iteration and repeated recomputation is deemed less costly than extended caching of evaluation state.

The first distinction to draw is the ''anchor point'' of a given coordinate spec. After anchoring, the designated path is explicitly rooted within a top level window. The act of anchoring can thus be obvious (trivial), it can be //covered by wildcard,// or it can be a free interpolation or match against the existing environment. Starting from the anchor, the next predication to consider is the ''coverage'': the extent to which a given coordinate spec is actually covered by real UI structures. The test for coverage starts at the anchor point and descends following the coordinate spec in question. Backed by this evaluation process of interpretation and matching, we may judge a coordinate spec with respect to several predications
* the coordinate spec may be totally ''explicit'' -- or it might contain wildcards
* one specific path component of the coordinate spec may be known (present) or unknown
* regarding the coverage, we may distinguish
** a totally covered coordinate spec
** a partially covered coordinate spec
** a coordinate spec impossible to cover

!UI coordinate path evaluation
As indicated above, evaluation of UI coordinates requires a ''resolver'', to perform a search and matching operation and to hold the evaluation state. Evaluation is accomplished by first constituting an anchoring, followed by traversal of the coordinate spec and matching against a navigation path within the actual UI window configuration. This process might involve interpretation of some meta-symbols and interpolation of wildcards.

Internally the coordinate resolver in turn relies on a context query interface, to find out about existing windows, panels, views and tabs and to navigate the real UI structure. The actual implementation of this context query interface is backed by the [[Navigator]] component exposed through the InteractionDirector. In practice, the Navigator relies on another service for the [[low level access to UI components|UILowLevelAccess]]
!!!Query operations
In addition to the //locally decidable properties// of a coordinate spec, which are the explicitness and the presence of some component, several contextual predications may be queried 
;anchorage
:the way the given coordinate spec is or can be anchored
:* it is already //explicitly anchored// by referring either to a specific window or by generic specification
:* it //can be anchored// by interpolation of some wildcards
:* it is //incomplete// and needs to be extended to allow anchoring
:* it is //impossible to anchor// in the current UI configuration
;coverage
:the extent to which a given coordinate spec is backed by the actual UI configuration
:please note: to determine the coverage, the spec needs to be anchored (either explicitly, or by interpolation, or by extension of an incomplete spec)
:* it is //completely covered//
:* it is //partially covered// with a remaining, uncovered extension part
:* it is //possible to cover completely//
:* it is //impossible to cover//
__Some fine points to note__: Anchorage and coverage are not the same thing, but coverage implies anchorage. Only when a path is complete (starts with the window spec) and explicit (has no wildcards) does anchorage also imply partial coverage (namely at least to depth 1). To determine the possibility of coverage means to perform a resolution with backtracking to pick the maximal solution. Moreover, since "covered" means that the path specification //is at least partially supported by the real UI,// we establish an additional constraint to ensure this resolution did not just match some arbitrary wildcards. Rather we demand that behind resp. below the last wildcard there is at least one further explicit component in the path spec, which is supported by the real UI. As a consequence, the coverage resolution may fail altogether, while still providing at least a possible anchor point.
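To illustrate the basic idea of coverage determination under strong simplifications -- an already anchored, wildcard-free spec, and a hypothetical {{{LocationQuery}}} as stand-in for the real context query interface:
{{{
#include <string>
#include <vector>

// Illustrative sketch under simplifying assumptions: coverage is determined by
// walking the (already anchored, wildcard-free) spec top-down and asking a
// hypothetical query interface whether each prefix exists in the real UI.

enum class Coverage { NONE, PARTIAL, COMPLETE };

struct LocationQuery                 // stand-in for the real context query interface
  {
    virtual ~LocationQuery() = default;
    virtual bool exists (std::vector<std::string> const& pathPrefix)  const =0;
  };

inline Coverage
determineCoverage (std::vector<std::string> const& spec, LocationQuery const& ui)
  {
    std::vector<std::string> prefix;
    size_t covered = 0;
    for (auto const& component : spec)
      {
        prefix.push_back (component);
        if (not ui.exists (prefix)) break;
        ++covered;
      }
    if (covered == 0)            return Coverage::NONE;
    if (covered == spec.size())  return Coverage::COMPLETE;
    return Coverage::PARTIAL;
  }
}}}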
!!!Mutations
In addition to querying the interpretation of a given coordinate spec with respect to the current UI environment, it is also possible to rewrite or extend the spec based on this environment
;anchoring
:in correspondence to the possible states of anchorage, we may derive an explicitly anchored spec
:*by interpolating the given spec
:*by interpretation and extension of the given spec
;covering
:we may construct the covered part of a given spec, which includes automatic anchoring.
;extending
:a (partially) covered UI coordinate pattern is extended by a further suffix of path...
:* the given UI coordinate pattern is first anchored and covered
:* and //truncated// to the covered part
:* the given //extension suffix// is then attached behind

__Navigation mutations:__ //In theory,// it would even be possible to extend the path by creating suitable child components; but actually this would require all "elements" to implement a suitable mutation interface -- which in the case of //generic elements// might be far beyond the common ground. For this reason, we keep mutation of the backing environment outside of a path mutator's scope and rather keep mutation limited to the path itself.
//Cross cutting access to elementary UI structures.//
We have several orthogonal identification and access schemes within the UI. A naively written UI application just attaches the core logic below some widgets and controllers -- not only does this lead to a hard to maintain code base, this approach is even outright impossible in our case, due to the strict decoupling between core and GUI, which places us in the situation of connecting a self-contained core with a self-contained UI. This is a binding which, as a sideline, also generates a control structure of its own. We can indeed write code dealing with a generic UI element -- but there needs to be some place where this kind of generic designation is translated into internal structures of the UI toolkit (GTK in our case), to finally obtain a direct (language) reference to some implementation widget.

A service to translate some generic address scheme into actual entities builds the foundation of designating a "place within the UI" by abstracted topological [[UI coordinates|UICoord]]. It allows defining //rules// for how some generic [[kind of view|GuiComponentView]] shall be //placed and allocated// within the existing UI structure.

!Notes about design and implementation {{red{WIP 4/2018}}}
At the time of this writing, it is not really clear if we need such a facility and what form its implementation will take -- which in turn places several further planning steps and design considerations into a dangling state. The programmer's usual remedy in such a situation is to create yet another abstraction as a tie breaker. We do not know what it is, but at least we can write unit tests against it to find out what it could be.
* we need to confine ourselves to a small selection of base interfaces to cover all possible elements accessible this way: {{{gui::model::Tangible}}}, {{{Gtk::Widget}}}, ...?
* looks like this accessor service will be backed by a structure similar to {{{LocationQuery}}} -- maybe it will even be the same backing structure at the end.
* so effectively an access request will drill down the real UI topology, following the path of the given UI Coordinates.
* the result can then be packaged into a {{{lib::Variant}}}, using a //variant visitor// (double dispatch) to invoke the appropriate {{{dynamic_cast}}}
* together, this mechanism allows us to return direct language references to actual implementation widgets
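The following sketch illustrates this variant-visitor access idea in outline only -- {{{std::variant}}} stands in for {{{lib::Variant}}}, and all widget types are hypothetical placeholders:
{{{
#include <variant>
#include <string>

// Sketch only, with std::variant standing in for the actual variant type:
// the result of drilling down the UI topology is one of a small set of base
// interfaces, and a visitor extracts the concrete widget type via dynamic_cast.

struct Widget        { virtual ~Widget()   = default; };   // think Gtk::Widget
struct Tangible      { virtual ~Tangible() = default; };   // think gui::model::Tangible

struct Button : Widget { std::string label; };

using UiElement = std::variant<Widget*, Tangible*>;

template<class TAR>
TAR*
accessAs (UiElement element)
  {
    return std::visit ([](auto* elm) -> TAR*
                         {                               // double dispatch: variant alternative
                           return dynamic_cast<TAR*> (elm);      // + concrete target type
                         }
                      , element);
  }

// usage: Button* button = accessAs<Button> (someElement);
}}}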

!Extend UICoord to an opaque concrete {{{UILocation}}} handle?
This is a possible different turn in the design, considered as an option {{red{as of 6/2018}}}. This would complement a symbolic coordinate specification with an opaque handle pointing to an actually existing UI widget. Access to this widget requires knowledge about its actual type -- basically a variant record tacked onto the UICoord representation, packaged into a subclass of the latter. The obvious benefit would be to avoid drilling down into the UI widget tree repeatedly, since there is now a way to pass along hidden //insider information// regarding actual UI elements. However, such a design bears a "smell" of being implementation driven, and undercuts the whole idea of an entirely symbolic layer of location specifications. Building such an extension can be considered sensible only under the additional assumption that this kind of //location token// is to be passed over various interfaces and indeed becomes a generic token of exchange and interaction within the UI layer implementation -- which, right now, is not a given.
//Placeholder for now....//
For any kind of playback to happen, timeline elements (or similar model objects) need to be attached to a Viewer element through a special kind of [[binding|BindingMO]], called a ''view connection''. In the most general case, this creates an additional OutputMapping (and in the typical standard case, this boils down to a 1:1 association, sending the master bus of each media kind to the standard OutputDesignation for that kind).

Establishing a ~ViewConnection is prerequisite for creating or attaching a PlayController through the PlayService, thereby [[activating the viewer for playback|ViewerPlayActivation]]. Multiple "play control" GUI elements can be associated with such a play controller, causing them to work as being linked together: if you e.g. push "play" on one of them, the button states of all linked GUI controls will reflect the state change of the underlying play controller.

View connections are part of the model and thus persistent. They can be created explicitly, or just derived by //allocating a viewer.// And a new view connection can push aside (and thus "break") an existing one from another timeline or model element. When a view connection is //broken,// any associated PlayProcess needs to be terminated (this is a blocking operation). Thus, at any time, there can be only one active view connection to a given viewer or output sink; here "active" means that a PlayController has been hooked up, and the connection is ready for playback or rendering. But on the other hand, nothing prevents a timeline (or similar model object) from maintaining multiple view connections -- consequently the actual playback position behaves as if associated with the view connection; it has only meaning with respect to this specific connection. An obvious example is that you may play back, without interfering with an ongoing render.
//access and location of [[component views|GuiComponentView]] as abstracted from actual UI widgets.//
The ViewLocator is a close sibling of, and mutually interdependent with, the [[Navigator]] -- the primary difference being the uniform, completely symbolic representation by [[UI-tree topology|UICoord]], as exposed through the latter, which precludes //immediate manipulation.// The ViewLocator likewise presents an abstracted view, but oriented towards individual components, which it allows to locate, allocate and expose by direct reference as UI-Element.

!Rules based handling by kind-of-view
A small selection of //relevant UI elements// can in fact be located "by-type", relying on a configuration in the form of //locating rules...// &rarr; elaborated in more detail [[here|GuiComponentView]].
This access mechanism allows acting on "the primary one of that kind", possibly creating a new instance on demand -- such UI elements are not to be confused with the model entities they expose for user interaction; typically they can either be re-linked or re-bound to a specific entity, or they are able to handle multiple model connections in separate tabs within the view (e.g. several [[Timelines|GuiTimelineView]]).
A [[structural Asset|StructAsset]] corresponding to a Viewer element in the GUI.

These Viewer (or Monitor) elements play an enabling role for any output generation. In order to [[initiate playback|PlayService]], we need a fully resolved OutputDesignation, which typically can be achieved by creating a ViewConnection, i.e. connecting a timeline or similarly suitable model element to such a viewer. Actually, this connection is modelled by attaching to a BindingMO which is linked to a ViewerAsset. This way, the model tracks and persists the available viewer windows and the current connection situation.

When the GUI is outfitted, based on the current Session or HighLevelModel, it is expected to retrieve the viewer assets and, for each of them, after installing the necessary widgets, to register an OutputSlot with the global OutputManager.
For showing output, three entities are involved:
* the [[Timeline]] holds the relevant part of the model, which gets rendered for playback
* by connecting to a viewer component (&rarr; ViewConnection), an actual output target is established
* the playback process itself is coordinated by a PlayController, which in turn creates a PlayProcess
{{red{4/24 maybe better call this {{{ModelPlayActivation}}} ?}}}
!the viewer connection
A viewer element gets connected to a given timeline either by directly attaching it, or by //allocating an available free viewer.// Anyway, as a model element, the viewer is just like another set of global pipes chained up after the global pipes present in the timeline. Connecting a timeline to a viewer creates a ViewConnection, which is a special [[binding|BindingMO]]. The number and kind of pipes provided is a configurable property of the viewer element &mdash; more specifically: the viewer's SwitchBoard. Thus, connecting a viewer activates the same internal logic employed when connecting a sequence into a timeline or meta-clip: a default channel association is established, which can be overridden persistently (&rarr; OutputMapping). Each of the viewer's pipes in turn gets connected to a system output through an OutputSlot registered with the OutputManager &mdash; again an output mapping step.

!playback activity
The ViewConnection, once processed by the [[Builder]], leads to the setup of an additional [[output adaptation network|OutputNetwork]], possibly scaling and adapting generated frames to be displayed within a small widget in the UI. When actual playback is started, this connection is //locked,// the corresponding OutputSlot is //allocated// and appropriate [[»calculation streams«|CalcStream]] are established in the Engine to produce the frames for continuous playback; moreover, activated playback entails the possibility of [[spontaneous and non-linear adaptation of playback|NonLinearPlayback]] in response to user interaction.
A ''~Meta-Clip'' or ''Virtual Clip'' (both are synonymous) denotes a clip which doesn't just pull media streams out of a source media asset, but rather provides the results of rendering a complete sub-network. In all other respects it behaves exactly like a "real" clip, i.e. it has [[source ports|ClipSourcePort]], can have attached effects (thus forming a local render pipe) and can be placed and combined with other clips. Depending on what is wired to the source ports, we get two flavours:
* a __placeholder clip__ has no "embedded" content. Rather, by virtue of placements and wiring requests, the output of some other pipe somewhere in the session will be wired to the clip's source ports. Thus, pulling data from this clip will effectively pull from these source pipes wired to it.
* a __nested sequence__ is like the other sequences in the Session, just in this case any missing placement properties will be derived from the Virtual Clip, which is thought of as "containing" the objects of the nested sequence. Typically, this also configures the fork ("tracks") of the "inner" sequence such as to [[connect any output|OutputMapping]] to the source ports of the Virtual Clip.

Like any "real" clip, Virtual Clips have a start offset and a length, which will simply translate into an offset of the frame number pulled from the Virtual Clip's source connection or embedded sequence, making it possible to cut, splice, trim and roll them as usual. This of course implies we can have several instances of the same virtual clip with different start offset and length placed differently. The only limitation is that we can't handle cyclic dependencies for pulling data (which has to be detected and flagged as an error by the builder)

!Implementation
The implementation is based on the observation that a virtual clip is like a real clip, just referring to a ''virtual media''.
Thus we'll employ a special kind of media asset, which actually links to a [[binding|BindingMO]], which in turn connects to a [[Sequence]]. It's the binding's job to help translate the sequence's contents into a media-like entity, with just //content// appearing in several channels.
{{red{WIP as of 11/2010 -- most details yet to be defined}}} 
The ''Visitor Pattern'' is a special form of //double dispatch// &mdash; the goal is to select the function actually to be executed at runtime based both on the concrete type of some tool object //and// the target this tool is applied to. The rationale is to separate some specific implementation details from the basic infrastructure and the global interfaces, which can be limited to describing the fundamental properties and operations, while all details relevant only for some specific sub-problem can be kept together, encapsulated in a tool implementation class. Typically, there is some iteration mechanism, allowing these tools to be applied to all objects in a given container, collection or object graph, without knowing the exact type of the target and tool objects. See the [[Visitor Pattern design discussion|VisitorUse]]

!Problems with Visitor
The visitor pattern is not very popular, because any implementation is tricky, difficult to understand and tends to place quite some burden on the user code. Even Erich Gamma says that on his list of bottom-ten patterns, Visitor is at the very bottom. This may be due to the fact that the original visitor implementation (often called the ''~GoF visitor'') causes a cyclic dependency between the target objects and the visiting tool objects, and includes some repetitive code (which results in silent failure if forgotten). In 1996, Robert Martin invented an interesting variation (commonly labelled ''acyclic visitor''). By using a marker base class for each concrete target type to be treated by the visitor, and by applying a dynamic cast, we can get rid of the cyclic dependencies. The top level "Visitor" is reduced to a mere empty marker interface in this design, while the actual visiting capability for a concrete target object is discovered at application time by the aforementioned dynamic cast. Besides the runtime cost of such a cast, the catch is that the user code now has still more responsibilities, because of the need to consistently maintain two parallel object hierarchies.
For a long time there seemed to be little room for improvement, at least before the advent of generic programming and the discovery of template metaprogramming. ''Loki'' (Alexandrescu, 2000) showed us how to write a library implementation which hides away the technicalities of the visitor pattern and automatically generates most of the repetitive code.

!Requirements
* cyclic dependencies should be avoided or at least restricted to some central, library related place.
* The responsibilities for user code should be as small as possible. Especially, we should minimize the necessity to do corresponding adjustments to separate code locations, e.g. the necessity to maintain parallel hierarchies.
* Visitor is about //double dispatch,// thus we can't avoid using some table lookup implementation &mdash; more specifically, we can't avoid using the cooperating classes' vtables. We can expect at least two table lookups for each call dispatch. Besides that, the implementation should not be too wasteful...
* individual "visiting tool" implementation classes should be able to opt in or opt out on implementing functions to treat some of the visitable subclasses.
* there should be a safe fallback mechanism backed by the visitable object's hierarchy relations. If some concrete visiting tool class decides not to implement a {{{treat(..)}}}-function for some concrete target type, the call should fall back to the next best match according to the target object's hierarchy, i.e. the next best {{{treat(..)}}}-function should be used.
The last requirement practically rules out the Loki acyclic visitor, because this implementation calls a general fallback function when an exact match based on the target object's type is not possible. Considering our use of the visitor pattern within the render engine builder, such a solution would not be of much use: Some specific builder tool may implement a {{{treat(CompoundClip&)}}}-function, while most of the other builder tools just implement a {{{treat(Clip&)}}}-function, thus handling any multichannel clip via the general clip interface. This is exactly the reason why we want to use visitor in the first place: reduce the scope of specific treatment, implement against rather generic interfaces.

!Implementation Notes
A good starting point to understand our library implementation of the visitor pattern is {{{tests/components/common/visitingtoolconcept.cpp}}}, which contains a complete, all-in-one-file proof-of-concept implementation, on which the real implementation ({{{"common/visitor.hpp"}}}) was based.
* similar to Loki, we use a {{{Visitable}}} base class and a {{{Tool}}} base class (we prefer the name "Tool" over "Visitor", because it makes the intended use more clear).
* the objects in the {{{Visitable}}} hierarchy implement an {{{apply(Tool&)}}}-function. This function needs to be implemented in a very specific manner, thus the {{{DEFINE_PROCESSABLE_BY}}} macro should be used when possible to insert the definition into a concrete {{{Visitable}}} class.
* similar to the acyclic visitor, the concrete visiting tool classes inherit from {{{Applicable<TARGET, ...>}}} marker base classes, where the template parameter {{{TARGET}}} is the concrete Visitable type this tool wants to treat, either directly by defining a {{{treat(ConcreteVisitable&)}}}, or by falling back to some more general {{{treat(...)}}} function.
* consequently our implementation is //rather not acyclic// &mdash; the concrete tool implementation depends on the full definition (header) of all concrete Visitables, but we avoid cyclic dependencies on the interface level. By using a typelist technique inspired by Loki, we can concentrate these dependencies in one single header file, which keeps things maintainable.
* we use the {{{Applicable<TARGET, ...>}}} marker base classes to drive the generation of Dispatcher classes, each of which holds a table of trampoline functions to carry out the actual double dispatch at call time. Each concrete Visitable, by using the {{{DEFINE_PROCESSABLE_BY}}}-macro, causes the generation of a dedicated, separate Dispatcher table, holding call slots for each concrete tool implementation class. To store and access the index position for these "call slots", we use a tag associated with the concrete visiting tool class, which can be retrieved by going through the tool's vtable
* __runtime cost__: the concrete tool's ctor stores the trampoline pointers (this could be optimized to be a "once per class" initialisation). Then, for each call, we have 2 virtual function calls and a lookup and call of the trampoline function, typically resulting in another virtual function call for resolving the {{{treat(..)}}} function on the concrete tool class
* __extension possibilities__: while this system might seem overengineered, you should note the specific focus on extension and implementing against interfaces:
** not every Visitable subclass requires a separate Dispatcher. As a rule of thumb, only when a class needs dedicated and specific treatment within some concrete visiting tool (i.e. when there is a need for a function {{{treat(MySpecialVisitable&)}}}) should this class use the {{{DEFINE_PROCESSABLE_BY}}}-macro, leading to the definition of a distinct {{{apply()}}}-function and Dispatcher. In all other cases, it is sufficient just to extend some existing Visitable, which thus acts as an interface as far as visiting tools are concerned.
** because of the possibility of utilising virtual {{{treat(...)}}} functions, not every concrete visiting tool class needs to define a set of {{{Applicable<...>}}} base classes (and thus get a separate dispatcher slot). We need such only for each //unique set// of Applicables. All other concrete tools can extend existing tool implementations, sharing and partially extending the same set of virtual {{{treat()}}}-functions.
** when adding a new "first class" Visitable, i.e. a concrete target class that needs to be treated separately in some visiting tool, the user needs to include the {{{DEFINE_PROCESSABLE_BY}}} macro and needs to make sure that all existing "first class" tool implementation classes include the Applicable base class for this new type. In this respect, our implementation is clearly "cyclic". (Generally speaking, the visitor pattern should not be used when the hierarchy of target objects is frequently extended and remoulded.) But, when using the typelist facility to define the Applicable base classes, we'll have one header file defining this collection of Applicables, and thus we just need to add our new concrete Visitable to this header and recompile all tool implementation classes.
** when creating a new "~Visitable-and-Tool" hierarchy, the user should derive (or typedef) and parametrize the {{{Visitable}}}, {{{Tool}}} and {{{Applicable}}} templates, typically into a new namespace. An example can be seen in {{{steam/mobject/builder/buildertool.hpp}}}; a simplified usage sketch follows below.
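To illustrate the intended //fallback semantics// (not the library's dispatcher mechanics), here is a minimal, self-contained sketch written with plain hand-coded double dispatch; the class names {{{Clip}}}, {{{CompoundClip}}} and {{{NameTool}}} are made up for this example and the actual declarations in {{{common/visitor.hpp}}} differ:
{{{
// Minimal illustration of the intended »visiting tool« semantics, written with
// plain virtual dispatch -- NOT the actual implementation in common/visitor.hpp.
// All class names here are hypothetical.
#include <iostream>

class Tool;                                    // forward declaration

class Clip {                                   // visitable base ("target" hierarchy)
public:
    virtual ~Clip() = default;
    virtual void apply (Tool&);                // entry point for the double dispatch
};

class CompoundClip : public Clip {             // a more specific visitable
public:
    void apply (Tool&) override;
};

class Tool {                                   // visiting tool base
public:
    virtual ~Tool() = default;
    virtual void treat (Clip&)          { std::cout << "generic clip treatment\n"; }
    virtual void treat (CompoundClip& c)       // default: fall back to the next best
    { treat (static_cast<Clip&> (c)); }        // match, i.e. the treat(Clip&) function
};

void Clip::apply (Tool& t)          { t.treat (*this); }
void CompoundClip::apply (Tool& t)  { t.treat (*this); }

class NameTool : public Tool {                 // concrete tool opting in for CompoundClip only
public:
    using Tool::treat;                         // keep the generic overload visible
    void treat (CompoundClip&) override { std::cout << "special multichannel treatment\n"; }
};

int main()
{
    Clip c;  CompoundClip cc;
    Tool plain;  NameTool naming;
    c.apply (plain);    // -> generic clip treatment
    cc.apply (plain);   // -> falls back to generic clip treatment
    cc.apply (naming);  // -> special multichannel treatment
}
}}}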
Using the ''Visitor Design Pattern'' is somewhat controversial, mainly because this pattern is rather complicated, requires certain circumstances to be useful, and &mdash; especially when used in the original form described by Gamma et al &mdash; puts quite some burden on the implementor. Contrary to some patterns commonly used today (like e.g. singleton or factory), visitor is by no means a popular pattern. Ichthyo thinks its potential strengths remain yet to be discovered, due to obvious and well-known weaknesses. These problems can be ameliorated a good deal by using template based library implementation techniques along the lines of the [[Loki library|http://loki-lib.sourceforge.net/]].
&rarr; [[implementation details|VisitingToolImpl]]

!why bother with visitor?
In the Lumiera Steam-Layer, the visitor pattern is used to overcome another notorious problem when dealing with more complex class hierarchies: either the //interface// (root class) is so unspecific as to be almost useless, or, in spite of having a useful contract, this contract will effectively be broken by some subclasses ("problem of elliptical circles"). Initially, when designing the classes, these problems aren't there (obviously, because they would count as design flaws). But then, under the pressure of real features, new types are added later on, which //need to be in this hierarchy// and at the same time //need to have this and that special behaviour//, and here we go ...
Visitor helps us to circumvent this trap: the basic operations can be written against the top level interface, such as to include visiting some object collection internally. Now, on a case-by-case basis, local operations can utilise a more specific sub-interface or the given concrete type's public interface. So visitor helps to encapsulate specific technical details of cooperating objects within the concrete visiting tool implementation, while still forcing them to be implemented against some interface or sub-interface of the target objects.

!!well suited for using visitors
generally speaking, visitors are preferable when the underlying element type hierarchy is rather stable, but new operations are to be added frequently.
* Implementation of the builder. All sorts of special treatments of some MObject kinds can be added later
* Operating on the Object collection within the session. Effectively, this decouples the invoking interface on the session completely from the special operations to be carried out on individual objects.

To see a simple example of our "visiting tool", have a look at {{{tests/components/common/visitingtooltest.cpp}}}
The intention of this text is to help you understand the design and to show some notable details.

!!!!Starting Point
Design is an experiment to find out how things are related. We can't //plan// a design top down; rather we have to start at some point with some hypothesis and see how it works out. The point of origin for Ichthyo's design is the observation that the Render Engine needs some Separation of Concerns to get the complexity down. In particular, this design ''uses three different Levels'' or Layers within the Render Engine and Session.
* the __high level__ within the session uses uniformly treated MObjects which are assembled/glued together by a network of [[Placements|Placement]].<br> It is supposed that the GUI will present this and //only this// view to the user, giving the ability to work with the objects
* the __builder level__ works on a stripped-down subset of this ~MObject network: it uses the //same Object instances// but only assembled by [[Explicit Placements|ExplicitPlacement]] which locate the objects //on a simple (track, time) grid.// It's the job of the builder to create, out of this simplified network, the configuration of [[Render Nodes|ProcNode]] needed to do the actual rendering
* the __engine level__ uses solely [[Render Pipeline Nodes (ProcNode)|ProcNode]], i.e. a Graph of interconnected processing nodes. The important constraint here is that //any decisions are ruled out//. The core Render Engine with all its nodes is lacking the ability to do any tests and checks and has no possibility to branch or reconfigure anything. (this is an especially important lesson I draw from studying the current Cinelerra source code)

!!!!Performance Considerations
* within the Engine, the Render Nodes contain the ''inner loop'', whose contents are to be executed hundreds of thousands to millions of times per frame. Every dispensable concern that is not strictly necessary to get the job done is worth the effort of factoring out here.
* performance pressure at the builder level is far lower, albeit still existent. Compared to the effort of calculating a single processing step, looping even over some hundred nodes and executing quite some logic is negligible. The danger lies rather in creating memory pressure or causing awkward execution patterns (in the Vault). So the main concern should be the ability to reconfigure different aspects separately without much effort. If, for example, a given render strategy turns out to create lots of deadlocks and wait states in the Vault, the design should account for the possibility to exchange it for another strategy without having to modify the inner workings of the build process.<br>On the other hand, I wouldn't be overly concerned about triggering the build process yet another time to get some specific problem solved. However, the possibility to share one render configuration for, say, 20 sec of video, instead of triggering the build process 500 times for every frame in this timespan, would surely be worth considering if it's not overly complicated to achieve.
* contrary to this, the session level is harmless with respect to performance. Getting acceptable responsiveness on user interactions is sufficient. We could consider using high level languages here, for it is much more important to be able to express and handle complicated object relationships with relative ease. The only (indirect) concern is to avoid generating memory pressure inadvertently. Edit actions generating memory peaks could interfere with an ongoing background render process. If we decide to use lots of relation objects or transient objects, we should use an object pool or, better still, a garbage collector.

!!!!Concepts and Interfaces
This design strives to build each level and subsystem around some central concepts, which are directly expressed as Interfaces. Commonly used Interfaces clamp the different layers.
* MObject gives a uniform view on all the various entities to be arranged in the session.
* all the arranging and relating of ~MObjects is abstracted as [[Placement]]. The contract of a Placement is that it always has a related Subject, that we can change the //way of placement&nbsp;// by adding and removing [["locating pins"|LocatingPin]], call some test methods on it (still to be defined), and, finally, that we can get an ExplicitPlacement from it.
* albeit being a special form of a Placement, the ExplicitPlacement is treated as a separate concept. With respect to edit operations within the session, it can stand for any sort of Placement. On the other hand the Builder takes a list of ~ExplicitPlacements as input for building up the Render Engine(s). This corresponds to the fact that the render process needs to organize the things to be done on a simple two dimensional grid of (output channel / time). The (extended) contract of an ~ExplicitPlacement provides us with this (output,time).
* on the lower end of the builder, everything is organized around the Concept of a ProcNode, which enables us to //pull// one (freely addressable) Frame of calculated data. Further, the ProcNode has the ability to be wired with other nodes and [[Parameter Providers|ParamProvider]]
* the various types of data to be processed are abstracted away under the notion of a [[Frame]]. Basically, a Frame is a Buffer containing an Array of raw data, and it can be located by some generic scheme, including (at least) the absolute starting time (and probably some type or channel id).
* All sorts of (target domain) [[parameters|Parameter]] are treated uniformly. There is a distinction between Parameters (which //could// be variable) and Configuration (which is considered to be fixed). In this context, [[Automation]] just appears as a special kind of ParamProvider.
* and finally, the calculation //process// together with its current state is represented by a StateProxy. I call this a "proxy", because it should encapsulate and hide all tedious details of communication, be it even asynchronous communication with some Controller or Dispatcher running in another Thread. In order to maintain a view on the current state of the render process, it could eventually be necessary to register as an observer somewhere or to send notifications to other parts of the system.
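The Placement contract described in this list can be condensed into the following interface sketch; the signatures are hypothetical and do not mirror the actual Lumiera declarations:
{{{
// Condensed, hypothetical sketch of the Placement contract described above --
// illustration only, not the actual class definitions in the code base.
#include <memory>

class MObject;                  // the "Subject" being placed
class LocatingPin;              // defines *how* the subject is placed
class ExplicitPlacement;        // fully resolved placement on the (output, time) grid

class Placement
{
public:
    virtual ~Placement() = default;

    virtual MObject& getSubject()                           = 0;  // a Placement always has a Subject
    virtual void     addPin (std::unique_ptr<LocatingPin>)  = 0;  // change the "way of placement"
    virtual void     removePin (LocatingPin const&)         = 0;
    virtual bool     isResolvable()  const                  = 0;  // placeholder for the test methods still to be defined
    virtual ExplicitPlacement resolve()  const              = 0;  // yield an ExplicitPlacement
};
}}}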

!!!!Handling Diversity
An important goal of this approach is to be able to push down the treatment of variations and special cases. We don't need to know what kind of Placement links one MObject to another, because it is sufficient for us to get an ExplicitPlacement. The Render Engine doesn't need to know if it is pulling audio Frames or video Frames or GOPs or OpenGL textures. It simply relies on the Builder wiring together the correct node types. And the Builder in turn does so by using some overloaded function of an iterator or visitor. In many instances, instead of making decisions in-code or using hard wired defaults, a system of [[configuration rules|ConfigRules]] is invoked to get a suitable default as a solution (and, as a plus, this provides points of customisation for advanced users). At engine level, there is no need for the video processing node to test for the colormodel on every screen line, because the Builder has already wired up the fitting implementation routine. All of this helps reduce complexity, and quite a few misconceptions can already be detected by the compiler.

!!!!Explicit structural differences
In case it's not already clear: we don't have "the" Render Engine, rather we construct a Render Engine for each structurally differing part of the timeline. (Please relate this to the current Cinelerra code base, which constructs and builds up the render pipeline for each frame separately.) No need to call back from within the pipeline to find out if a given plugin is enabled or to see if there are any automation keyframes. We don't need to pose any constraints on the structuring of the objects in the session, besides the requirement to get an ExplicitPlacement for each. We could even loosen the use of the common metaphor of placing media sequences on fixed tracks, if we want to move to a more advanced GUI at some point in the future.

!!!!Stateless Subsystems
The &raquo;current setup&laquo; of the objects in the session is sort of a global state. The same holds true for the Controller, as the Engine can be at playback, it can run a background render or scrub single frames. But the whole complicated subsystem of the Builder and one given Render Engine configuration can be made ''stateless''. As a benefit, we can run these subsystems multi-threaded without the need for any precautions (locking, synchronising), because all state information is just passed in as function parameters and lives in local variables on the stack, or is contained in the StateProxy, which represents the given render //process// and is passed down as a function parameter as well. (Note: I use the term "stateless" in the usual, slightly relaxed manner; of course there are some configuration values contained in instance variables of the objects carrying out the calculations, but these values are considered to be constant over the course of the object's usage.)
Within Lumiera's ~Steam-Layer, on the conceptual level, there are two kinds of connections: data streams and control connections. The wiring deals with how to define and control these connections, how they are to be detected by the builder and finally implemented by links in the render engine.
&rarr; see OutputManagement
&rarr; see OutputDesignation
&rarr; see OperationPoint
&rarr; see StreamType


!Stream connections
A stream connection should result in media data travelling from a source to a sink. Here we have to distinguish between the high-level view and the situation in the render engine. At the session level, and thus for the user, a //stream// is the elementary unit of "media content" flowing through the system. It can be described unambiguously by having a uniform StreamType &mdash; this doesn't exclude the stream from being inherently structured, like containing several channels. The HighLevelModel can be interpreted as //creating a system of stream connections,// which, more specifically, can be categorised into two kinds of connections -- the rigid and the flexible parts:
* the [[Pipes|Pipe]] are rather rigid ''processing chains'', limited to using one specific StreamType. <br/>Pipes are built by attaching processing descriptors (Effect ~MObjects), where the order of attachment is given by the placement.
* there are flexible interconnections or ''routing links'', including the ability to sum or overlay media streams, and the possibility of stream type conversions. They are controlled by the OutputDesignation ("wiring plug"), to be queried from the placement at the source side of the interconnection (i.e. at the exit point of a pipe)
Please note, the ''High-level model'' comprises a blueprint for constructing the render engine. There is no real data "flowing" through this model, thus any "wiring" may be considered conceptual. Within this context, any wiring specifications just express //the intention of getting things connected in a specific way.// Consider the example of a clip, which might find out (through a query of its placement) that it is intended to produce output for some destination, bus or subgroup called "Ambiance sound".

The ''Builder'', to the contrary, considers matters locally. It is confined to a given set of objects handed in for processing, and during that processing it will successively collect all the [[plugs|WiringPlug]] and [[claims|WiringClaim]] encountered. While the former denote the desired //destination//&nbsp; of data emerging from a pipe, the wiring claim expresses the fact that a given object //claims to be some output destination.// Typically, each [[global pipe or bus|GlobalPipe]] raises such a claim. Both plugs and claims are processed on a "just in case they exist" basis: when encountering a plug and //resolving//&nbsp; the corresponding claim, the builder drops off a WiringRequest accordingly. Note the additional resolution, which might be necessary due to virtual media and nested sequences (read: the output designation might need to be translated into another designation, using the media/channel mapping created by the virtual entity &rarr; see [[mapping of output designations|OutputMapping]])

The processing of such a wiring request drives the actual connection step. It is conducted by the OperationPoint, provided by and executing within a BuilderMould and controlled by a [[processing pattern|ProcPatt]]. This rather flexible setup allows for wiring summation lines, including faders, scale and offset changes, and various overlay modes. Thus the specific form of the connection wired in here depends on all the local circumstances visible at that point of operation:
* the mould is picked from a small number of alternatives, based on the general wiring situation.
* the processing pattern is queried to fit the mould, the stream type and additional placement specifications ({{red{TODO 11/10: work out details}}})
* the stream type system itself contributes in determining possible connections and conversions, introducing further processing patterns

The final result, within the ''Render Engine'', is a network of processing nodes. Each of these nodes holds a WiringDescriptor, created as a result of the wiring operation detailed above. This descriptor lists the predecessors, and (in somewhat encoded form) the other details necessary for the processing node to respond properly to the engine's calculation requests (read: those details are implementation bound and can be expected to be made to fit)

On a more global level, this LowLevelModel within the engine exposes a number of [[exit nodes|ExitNode]], each corresponding to a ModelPort, thus being a possible source to be handled by the OutputManager, which is responsible for mapping and connecting nominal outputs (the model ports) to actual output sinks (external connections and viewer windows). A model port isn't necessarily an absolute endpoint of connected processing nodes &mdash; it may as well reside in the middle of the network, e.g. as a ProbePoint. Besides the core engine network, there is also an [[output network|OutputNetwork]], built and extended on demand to prepare generated data for the purpose of presentation. This ViewConnection might necessitate scaling or interpolating video for a viewer, adding overlays with control information produced by plugins, or rendering and downmixing multichannel sound. By employing this output network, the same techniques used to control the wiring of the main path can be extended to control this output preparation step. ({{red{WIP 11/10}}} some important details are to be settled here, like how to control semi-automatic adaptation steps. But that is partially true also for the main network: for example, we don't know where to locate and control the faders generated as a consequence of building a summation line)

!!!Participants and Requirements
* the ~Pipe-ID is a universal key to denote connections, outputs and ports.
* output designation is just a ~Pipe-ID, actually a thin wrapper to make the intention explicit
* claims and plugs are to be implemented as LocatingPin
* as decided elsewhere, we get a [[Bus-MObject|BusMO]] as an attachment point
* we need OutputMapping as a new generic interface, allowing us to define ~APIs in terms of mappings
* builder moulds, operation point and processing patterns basically are in place, but need to be shaped
* the actual implementation in the engine will evolve as we go; necessary adaptations are limited to the processing patterns (which is intentional)
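As a rough illustration of the "thin wrapper" idea mentioned in this list (hypothetical types, not the actual declarations):
{{{
// Rough illustration: »output designation is just a Pipe-ID« -- a thin value
// wrapper to make the intention explicit. Hypothetical types, sketch only.
#include <string>
#include <utility>

struct PipeID {                       // universal key denoting connections, outputs, ports
    std::string sym;                  // e.g. "ambiance-sound"
};

class OutputDesignation {             // expresses: "route my output towards this pipe"
    PipeID target_;
public:
    explicit OutputDesignation (PipeID target) : target_{std::move (target)} {}
    PipeID const& getTarget()  const { return target_; }
};
}}}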


!Control connections
{{red{⚠ In-depth rework underway as of 7/2024...}}}
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
Each [[processing node|ProcNode]] contains a stateless ({{{const}}}) descriptor detailing the inputs, outputs and predecessors. Moreover, this descriptor contains the configuration of the call sequence yielding the &raquo;data pulled from predecessor(s)&laquo;. The actual type of this object is composed out of several building blocks (policy classes) and placed by the builder as a template parameter on the WiringDescriptor of the individual ProcNode. This happens in the WiringFactory in file {{{nodewiring.cpp}}}, which consequently contains all the possible combinations (pre)generated at compile time.

!building blocks
* ''Caching'': whether the result frames of this processing step will be communicated to the Cache and thus could be fetched from there instead of actually calculating them.
* ''Process'': whether this node does any calculations on its own or just pulls from a source
* ''Inplace'': whether this node is capable of processing the result "in-place", thereby overwriting the input buffer
* ''Multiout'': whether this node produces multiple output channels/frames in one processing step
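The following sketch merely hints at how a compile-time combination of such policy flags might look; it is a simplified model, not the actual code in {{{nodewiring.cpp}}}:
{{{
// Simplified model of composing a node's invocation strategy from policy flags --
// illustration only, not the actual WiringFactory code in nodewiring.cpp.
#include <iostream>

struct UseCache   { static void note() { std::cout << "results go to cache\n"; } };
struct NoCache    { static void note() { std::cout << "no caching\n"; } };
struct DoProcess  { static void note() { std::cout << "node calculates\n"; } };
struct PullSource { static void note() { std::cout << "node just pulls from source\n"; } };

template<class CachePolicy, class ProcessPolicy>
struct NodeInvocation {              // stands in for the composed call-sequence type
    static void describe() { CachePolicy::note(); ProcessPolicy::note(); }
};

int main()
{   // the builder would select one of the pre-generated combinations at runtime
    NodeInvocation<UseCache, DoProcess>::describe();
    NodeInvocation<NoCache, PullSource>::describe();
}
}}}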

!!implementation
!!!!Caching
When a node participates in ''caching'', a result frame may be pulled immediately from cache instead of calculating it. Moreover, //any output buffer//&nbsp; of this node will be allocated //within the cache.// Consequently, caching interferes with the ability of the next node to calculate "in-Place". In the other case, when ''not using the cache'', the {{{pull()}}} call immediately starts out with calling down to the predecessor nodes, and the allocation of output buffer(s) is always delegated to the parent state (i.e. the StateProxy pulling results from this node). 

Generally, buffer allocation requests from predecessor nodes (while being pulled by this node) will either be satisfied by using the "current state", or treated as if they were our own output buffers when this node is in-Place capable.

!!!!Multiple Outputs
Some simplifications are possible in the default case of a node producing just ''one single output'' stream. Otherwise, we'd have to allocate multiple output buffers, and then, after processing, select the one needed as a result and deallocate the superfluous ones.

!!!!in-Place capability
If a node is capable of calculating the result by ''modifying its input'' buffer(s), an important performance optimization is possible, because in a chain of in-place capable nodes we don't need any buffer allocations. On the other hand, this optimization may collide with the caching, because a frame retrieved from cache must not be modified.
Without this optimization, in the base case each processing needs an input and an output. Exceptionally, we could think of special nodes which //require// to process in-place, in which case we'd need to provide it with a copy of the input buffer to work on. 

!!!!Processing
If ''not processing'', we don't have any input buffers; instead we get our output buffers from an external source.
Otherwise, in the default case of actually ''processing'' our output, we have to organize input buffers, allocate output buffers, call the {{{process()}}} function of the WiringDescriptor and finally release the input buffers.
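As an illustration of this (now superseded) call sequence, the following skeleton sketches the steps in order; all helper names are hypothetical and buffer ownership handling is elided:
{{{
// Illustrative skeleton of the call sequence described above (superseded design).
// All helper names are hypothetical; buffer ownership handling is elided.
#include <vector>

struct Buffer { /* raw frame data */ };

struct State {                                       // stand-in for the StateProxy
    std::vector<Buffer*> obtainInputs()    { return {}; }            // pull from predecessors (stubbed)
    std::vector<Buffer*> allocateOutputs() { return {new Buffer}; }  // via cache or parent state (stubbed)
    void release (std::vector<Buffer*>&)   { }
};

struct WiringDescriptor {                            // holds the actual calculation function
    void process (std::vector<Buffer*> const&, std::vector<Buffer*> const&)  const { }
};

Buffer* pull (State& state, WiringDescriptor const& wiring)
{
    auto inputs  = state.obtainInputs();             // organise input buffers
    auto outputs = state.allocateOutputs();          // allocate output buffer(s)
    wiring.process (inputs, outputs);                // invoke the actual calculation
    state.release (inputs);                          // finally release the input buffers
    return outputs.front();                          // hand back the result frame
}
}}}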
{{red{This is an early draft as of 9/08}}}
Wiring requests rather belong to the realm of the high-level model, but play an important role in the build process, because the result of "executing" a wiring request will be to establish an actual low-level data connection. Wiring requests will be created automatically in the course of the build process, but they can be created manually and attached to the media objects in the high-level model as a ''wiring plug'', which is a special kind of LocatingPin (&rarr; [[Placement]])

Wiring requests are small stateful value objects. They will be collected, sorted and processed in order, so as to finish the build and yield a completely wired network of processing nodes. Obviously, there needs to be some reference or smart pointer to the objects to be wired, but this information remains opaque.

&rarr; ConManager
//A component  for uniform handling of zoom scale and visible interval on the timeline.//
Working with and arranging media requires a lot of //navigation// and changes of //zoom detail level.// More specifically, the editor is required to repeatedly return //to the same locations// and show arrangements at the same alternating scale levels. Most existing editing applications approach this topic naïvely, by just responding to some coarse-grained interaction controls -- thereby creating the need for a lot of superfluous and tedious search and navigation activities, causing constant grind for the user. And resolving these obnoxious shortcomings turns out to be a never-ending task, precisely due to the naïve and ad hoc approach initially taken.

Based on these observations, the design of the Lumiera UI calls for centralisation of all zoom- and navigation handling into a single component, instantiated once for every visible context, outfitted with the ability to capture and maintain a //history of zoom and navigation activities.// This component is called »''Zoom Window''«, since it represents a window-like local visible interval, embedded into a larger time span covering the whole timeline. The //current zoom state// is thus defined by
* the overall {{{TimeSpan}}} of the timeline canvas, including a start time (inclusive) and an end time (exclusive)
* the //visible interval// („window“), likewise modelled as {{{time::TimeSpan}}}
* the //scale// defined as pixels per second
** this is a //fractional value// -- allowing to specify multiples of frame sizes or sound sample durations
** maximum scale limited to 2px per µ-tick; widest zoom limited to overall canvas duration

!Requirement Analysis
The Timeline UI is {{red{as of 10/2022}}} specified sufficiently to serve as a framework for determining the requirements of a zoom handling component
!!!Usages
;~CanvasHook + ~DisplayMetric
:info | pull
:*Pixel / sec
:*Time ⟼ Pixel
:*Pixel ⟼ ~Time-Offset
;~BodyCanvas
:info | pull
:*window offset
:*Time ⟼ Pixel
:*overall range
;~TimelineLayout
:info | push
:*change to scale factor ⟶ zoom slider
:*any change ⟹ trigger ~DisplayEvaluation
:**visible window
:**scale factor
:**overall range
:mutate
:*in response to scrollbar
:**nudge
:**move window
:*in response to zoom slider
:**change scale
;~TimelineController
:mutate
:*overall range
:*window parameters (persistent UI state)
;Menu + Actions + ~SpotLocator
:mutate
:*zoom factor explicit
:*zoom in / out
:*Zoom history
:*goto
;Gestures~~(Controller)~~
:mutate
:*move window
:*set window new
:*nudge / navigate scale
!!!~Use-Cases
;initial setup
:preparation
:*~UI-state ➩ set metric
:initial content population received...
:#~TimelineController ➩ set overall range
:#~DisplayEvaluation prompts timeline elements to reflow
:#~BodyCanvas ⟵ window params
:#~DisplayMetric ⟵ metric params
;reposition
:UI interaction received...
:*~TimelineLayout ⟵ scrollbar
:*~TimelineLayout ⟵ zoom slider
:*Menu/Locator ⟵ intentional settings
:*Gestures ⟵ move
:*Gestures ⟵ select
:*Gestures ⟵ nudge
:handling sequence...
:#push ⟶ ~TimelineLayout
:#~DisplayEvaluation ⟹ reflow
:#*~BodyCanvas ⟵ window params
:#*~DisplayMetric ⟵ metric params
;model content
:changes received per MutationMessage
:#~TimelineController ➩ set overall range
:#push ⟶ ~TimelineLayout
:#~DisplayEvaluation ⟹ reflow
:#*~BodyCanvas ⟵ window params
:#*~DisplayMetric ⟵ metric params
!!!Interaction Procedure
#mutators
#*set metric
#*nudge metric
#*set overall range
#*set position visible
#*offset position
#*nudge position
#*select/set window
#*nav History
#coordinated adjustment
#*any change ⟹ trigger ~TimelineLayout ⟶ ~DisplayEvaluation
#read state
#*metric
#*translations
#*window offset
#*overall range
#*resulting window extension in pixels

!!!Semantics
The {{{overallSpan}}} corresponds to the whole canvas extension, and its location defines the origin in pixel coordinates.
The {{{visibleWindow}}} corresponds to the actually visible part, and the //metric// needs to be calibrated to make the window's extension in pixels match the actual size in the UI. All further manipulations are assumed to keep that pixel extension constant. The actual duration of the timeline is in no way limited, but usually you'd expect the timeline to fit onto the canvas -- the canvas may even be extended when the user wants to extend the timeline beyond existing bounds.
;Invariants
:after each change, a normalisation sequence is performed to (re)establish the following guarantees
:* windows are always non-empty and properly oriented
:* the given width in pixels is always retained
:* zoom metric factor < max zoom (2px / µ-tick)
:* visibleWindow ⊂ Canvas
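A simplified sketch of how such a normalisation step might enforce these invariants, using plain integers instead of Lumiera's time types (illustration only, not the actual ZoomWindow code):
{{{
// Simplified sketch of enforcing the invariants listed above, using plain
// integers instead of Lumiera's time types -- illustration only.
#include <algorithm>
#include <cstdint>

struct Span { int64_t start, end; };             // half-open interval [start, end) in µ-ticks

const int64_t MAX_PX_PER_TICK = 2;               // zoom limit: 2px per µ-tick

void normalise (Span const& canvas, Span& window, double& pxPerTick, int64_t pixelWidth)
{
    if (window.end < window.start)               // ensure proper orientation
        std::swap (window.start, window.end);

    window.start = std::max (window.start, canvas.start);     // visibleWindow ⊂ canvas
    window.end   = std::min (window.end,   canvas.end);
    if (window.end <= window.start)              // ensure the window is non-empty
        window.end = window.start + 1;

    pxPerTick = double(pixelWidth) / double(window.end - window.start);   // retain pixel width
    if (pxPerTick > MAX_PX_PER_TICK)             // clamp to the maximum zoom,
    {                                            // widening the window if necessary
        pxPerTick  = MAX_PX_PER_TICK;
        window.end = window.start + pixelWidth / MAX_PX_PER_TICK;
    }
}
}}}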



!Implementation
The above requirement analysis reveals a common handling scheme, which is best served by a conventional design with an object, mutator and getter methods, and encapsulated state. Moreover, a single client for push notification can be identified: the ~TimelineLayout (implementation of the [[DisplayManager interface|TimelineDisplayManager]]). It suffices to notify this collaboration partner; since at implementation level the ZoomWindow is itself embedded by mix-in into the ~TimelineLayout, the latter can easily read the resulting zoom parameters and react accordingly.

Lumiera uses a µs-grid as base for the internal time representation {{red{11/2022 might revisit this decision, see #1258}}}, which generally is a good balance between performance and the addressable value range. However, for these calculations aimed at pixel precise drawing, even this micro grid turns out to be not precise enough, especially for sample accurate sound automation. To work around this limitation, the ~ZoomWindow //metric// (scale factor) is represented as a fractional number (implemented as {{{boost::rational<int64_t>}}}). This allows internal calculations involving scale factors to be carried out with lossless integer arithmetic, and thus to retain a given window extension in screen pixels precisely. Since the temporal extension is still determined on a µs scale, sometimes the (fractional) zoom factor must be adjusted slightly to match the grid -- preference is always to reduce the factor or increase the window to the next tick.
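As a rough sketch of the fractional metric idea (simplified, not the actual ZoomWindow implementation; the conversion function shown here is hypothetical):
{{{
// Rough sketch of the fractional zoom metric: pixels per µ-tick as an exact ratio,
// so pixel positions can be computed without accumulating rounding error.
// Simplified illustration, not the actual ZoomWindow implementation.
#include <boost/rational.hpp>
#include <cstdint>
#include <iostream>

using Rat = boost::rational<int64_t>;

int64_t toPixel (int64_t offsetTicks, Rat const& pxPerTick)
{   // pixel position = time offset · metric, truncated to whole pixels
    return boost::rational_cast<int64_t> (Rat(offsetTicks) * pxPerTick);
}

int main()
{
    // e.g. 1280 px covering 3 seconds = 3'000'000 µ-ticks
    Rat pxPerTick {1280, 3'000'000};
    std::cout << toPixel (1'500'000, pxPerTick) << '\n';   // -> 640 (middle of the window)
}
}}}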

The purpose of automation is to vary a parameter of some data processing instance in the course of time while rendering. Thus, automation encompasses all the variability within the render network //which is not a structural change.//


!Parameters and Automation

[[Automation]] is treated as a function over time. Everything beyond this definition is considered an implementation detail of the [[parameter provider|ParamProvider]] used to yield the value. Thus automation is closely tied to the concept of a [[Parameter]], which also plays an important role in the communication with the GUI and while [[setting up and wiring the render nodes|BuildRenderNode]] in the course of the build process (&rarr; see [[tag:Builder|Builder]])
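A minimal sketch of this »function over time« notion, with hypothetical interface names that do not claim to match the actual Lumiera declarations:
{{{
// Minimal sketch of »automation is a function over time«: the parameter just asks
// its provider for a value; whether that value is constant, keyframed or computed
// is an implementation detail of the provider. Hypothetical names, illustration only.
#include <functional>
#include <iostream>
#include <utility>

using Time = double;                              // stand-in for Lumiera's time type

template<typename VAL>
class ParamProvider {                             // yields the parameter value for a given time
public:
    virtual ~ParamProvider() = default;
    virtual VAL getValue (Time t)  const = 0;
};

template<typename VAL>
class Automation : public ParamProvider<VAL> {    // automation == arbitrary function over time
    std::function<VAL(Time)> fun_;
public:
    explicit Automation (std::function<VAL(Time)> f) : fun_{std::move (f)} {}
    VAL getValue (Time t)  const override { return fun_(t); }
};

int main()
{
    Automation<double> fade {[](Time t){ return t < 1.0 ? t : 1.0; }};   // simple fade-in
    ParamProvider<double>& opacity = fade;
    std::cout << opacity.getValue (0.5) << '\n';  // -> 0.5
}
}}}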
Definition of commonly used terms and facilities...