//pattern of collaboration for loosely coupled entities, to be used for various purposes within Proc...//
Expecting Advice and giving Advice &mdash; this collaboration ranges somewhere between messaging and dynamic properties, while cross-cutting the primary, often hierarchical relation of dependencies. Always happening at a certain //point of advice,// this exchange of data gains a distinct, static nature -- it is more than just a convention or a protocol. On the other hand, Advice is deliberately kept optional and is received synchronously (albeit possibly within a continuation), thus allowing for loose coupling. This indirect, cross-cutting nature makes it possible to build integration points into library facilities without enforcing a strong dependency link.

!Specification
''Definition'': Advice is an optional, mediated collaboration between entities taking on the roles of advisor and advised, thereby passing a custom piece of advice data, managed by the advice support system. The possibility of advice is created by both of the collaborators entering the system, where the advised entity exposes a point of advice, while the advising entity provides an actual advice value.
[>img[Entities for Advice collaboration|uml/fig141445.png]]


!!Collaborators
* the ''advised'' entity 
* the ''advisor''
* ''point of advice''
* ''advice system''
* the ''binding''
* the ''advice''
Usually, the ''advised'' entity opens the collaboration by requesting an advice. The ''advice'' itself is a piece of data of a custom type, which needs to be //copyable.// Obviously, both the advised and the advisor need to share knowledge about the meaning of this advice data (in a more elaborate version we might allow the advisor to provide a subclass of the advice interface type).

The actual advice collaboration happens at a ''point of advice'', which needs to be derived first. To this end, two prerequisites are to be fulfilled (in no fixed sequence): the advised entity puts up an ''advice request'' by specifying its ''binding'', which is a pattern for matching. An entity about to give advice attaches a possible ''advice provision'', combined with an advisor binding, which similarly is a pattern. The ''advice system'' as mediator resolves both sides by matching (which in the most general case could be a unification). This process creates an ''advice point solution'' &mdash; allowing the advisor to feed the piece of advice into a kind of communication channel (&raquo;advice channel&laquo;), causing the advice data to be placed into the point of advice.

After passing a certain (implementation defined) break point, the advice leaves the influence of the advisor and gets exposed to the advised entities. In particular, this involves copying the advice data into a location managed by the advice system. In the standard case, the advised entity picks up the advice synchronously (and non-blocking). Of course, there could be a status flag to find out if there is new advice. Moreover, typically the advice data type is default constructible and thus there is always a basic form of advice available, thereby completely decoupling the advised entity from the timings related to this collaboration.
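
To make these roles concrete, here is a minimal usage sketch of both sides of the collaboration. The class and member names ({{{advice::Request}}}, {{{advice::Provision}}}, {{{defineBinding()}}}, {{{setAdvice()}}}, {{{getAdvice()}}}), the binding syntax and the header location are assumptions for illustration, not the verified interface:
{{{
#include <string>

#include "lib/advice.hpp"            // assumed header location of the advice facility

struct ProxyConfig                   // custom advice data: must be copyable;
  {                                  // default constructible ==> default advice available
    bool useProxyMedia;
    std::string proxyFormat;
    
    ProxyConfig() : useProxyMedia(false) { }
    ProxyConfig(bool use, std::string fmt) : useProxyMedia(use), proxyFormat(fmt) { }
  };

void
advisedEntity()                      // the "advised" side: puts up an advice request
  {
    advice::Request<ProxyConfig> request;
    request.defineBinding("stream(video), purpose(playback)");  // binding = conjunction of predicates
    
    ProxyConfig const& conf = request.getAdvice();   // synchronous, non-blocking pick-up;
    if (conf.useProxyMedia)                          // yields a default value while no solution exists
      { /* ...reconfigure this node accordingly... */ }
  }

void
advisorEntity()                      // the "advisor" side: places advice into the system
  {
    advice::Provision<ProxyConfig> provision;
    provision.defineBinding("stream(video), purpose(playback)");
    provision.setAdvice(ProxyConfig(true, "lowres-proxy"));  // copied into storage managed by the advice system
  }                                  // the advisor may go away; the advice remains available
}}}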

!!extensions
In a more elaborate scheme, the advised entity could provide a signal to be invoked either in the thread context of the advisor (being still blocked in the advice providing call), or in a completely separate thread. A third solution would be to allow the advised entity to block until receiving new advice. Both of these more elaborate schemes would also make it possible to create an advice queue &mdash; thereby developing the advice collaboration into a kind of messaging system. Following this route seems questionable though.

&rarr; AdviceSituations
&rarr; AdviceRequirements
&rarr; AdviceImplementation
[<img[Advice solution|uml/fig141573.png]]





The advice system is //templated on the advice type// &mdash; so basically any collaboration is limited to a distinct advice type. But currently (as of 5/2010), this typed context is kept on the interface level, while the implementation is built on top of a single lookup table (which might create contention problems in the future and thus may be changed without further notice). The advice system is a system-wide singleton service, but it is never addressed directly by the participants. Rather, instances of ~AdviceProvision and ~AdviceRequest act as points of access. But these aren't completely symmetric; while the ~AdviceRequest is owned by the advised entity, the ~AdviceProvision is a value object, a uniform holder used to introduce new advice into the system. ~AdviceProvision is copied into an internal buffer and managed by the advice system, as is the actual advice item, which is copied alongside.

In order to find matches and provide advice solutions, the advice system maintains an index data structure called ''~Binding-Index''. The actual binding predicates are represented by value objects stored within this index table. The matching process is triggered whenever a new possibility for an advice solution enters the system, which could be a new request, a new provision or a change in the specified bindings. A successful match causes a pointer to be set within the ~AdviceRequest, pointing to the ~AdviceProvision acting as solution. Thus, when a solution exists, the advised entity can access the advice value object by dereferencing this pointer. A new advice solution just results in setting a different pointer, which is atomic and doesn't need to be protected by locking. But note, omitting the locking means there is no memory barrier; thus the advised entity might not see any changed advice solution, until the corresponding thread(s) refresh their CPU cache. This might or might not be acceptable, depending on the context, and thus is configurable as policy. Similarly, the handling of default advice is configurable. Usually, advice is a default constructible value object. In this case, when there isn't any advice solution (yet), a pseudo solution holding the default constructed advice value is used to satisfy any advice access by the client (advised entity). The same can be used when the actual ~AdviceProvision gets //retracted.// As an alternative, when this default solution approach doesn't work, we can provide a policy either to throw or to wait blocking &mdash; but this alternative policy is similarly implemented with a //null object// (a placeholder ~AdviceProvision). Anyway, this implementation technique causes the advice system to collect some advice provisions, bindings and advice objects over time. It should use a pooling custom allocator in the final version. As the number of advisors is expected to be rather small, the storage occupied by these elements, which is effectively blocked until application exit, isn't considered a problem.
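
The pointer hand-over described above can be pictured as follows; this is a simplified, illustrative sketch (the actual entry classes and the policy mechanism differ), using {{{std::atomic}}} merely to make the memory-ordering trade-off explicit:
{{{
#include <atomic>

template<class AD>
class RequestEntry
  {
    std::atomic<const AD*> solution_;   // points into storage managed by the advice system
    
  public:
    RequestEntry (const AD* defaultSolution)
      : solution_(defaultSolution)      // never null: a default solution serves as placeholder
      { }
    
    void
    publishSolution (const AD* newSolution)   // invoked by the advice system on a match
      {
        solution_.store (newSolution, std::memory_order_relaxed);
        // relaxed store: cheap, but gives no visibility guarantee to other threads,
        // matching the "no memory barrier" caveat above; a stricter policy could use
        // memory_order_release here and memory_order_acquire on the reading side.
      }
    
    AD const&
    getAdvice()  const                        // non-blocking access by the advised entity
      {
        return *solution_.load (std::memory_order_relaxed);
      }
  };
}}}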

!organising the advice solution
This is the tricky part of the whole advice system implementation. A naive implementation will quickly degenerate in performance, as costs are of order ~AdviceProvisions * ~AdviceRequests * (average number of binding terms). But contrary to the standard solutions for rules based systems (either forward or backward chaining), in this case complete binding sets are always matched, which allows the effort to be reduced.

!!!solution mechanics
The binding patterns are organised by //predicate symbol and the lists are normalised.// A simple normalisation could be lexicographic ordering of the predicate symbols. Then the resulting representation can be //hashed.// When all predicates are constant, a match can be detected by hashtable lookup, otherwise, in case some of the predicates contain variable arguments ({{red{planned extension}}}), the lookup is followed by a unification. For this to work, we'll have to include the arity into the predicate symbols used in the first matching stage. Moreover, we'll create a //matching closure// (functor object), internally holding the arguments for unification. This approach allows for //actual interpretation of the arguments.// It is conceivable that in special cases we'll get multiple instances of the same predicate, just with different arguments. The unification of these terms needs to consider each possible pairwise combination (Cartesian product) &mdash; but working out the details of the implementation can safely be deferred until we actually hit such a special situation, thanks to the implementation by a functor.
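
For illustration, the normalisation and hashing could look roughly like this (an assumption-laden sketch, not the actual code): constant arguments are folded into the hashed text, while variable arguments would be left to a follow-up unification via the matching closure.
{{{
#include <cstddef>
#include <string>
#include <vector>
#include <algorithm>
#include <functional>

struct Predicate
  {
    std::string symbol;
    std::vector<std::string> args;    // constant arguments; variables would be handled by a matching closure
  };

std::size_t
hashBinding (std::vector<Predicate> const& binding)
  {
    // normalise: fold the arity into the symbol, render constant arguments,
    // then order the resulting terms lexicographically
    std::vector<std::string> terms;
    for (Predicate const& p : binding)
      {
        std::string term = p.symbol + "/" + std::to_string (p.args.size());
        for (std::string const& arg : p.args)
          term += "(" + arg + ")";
        terms.push_back (term);
      }
    std::sort (terms.begin(), terms.end());
    
    // combine into one hash value representing the whole normalised conjunction
    std::size_t hash = 0;
    std::hash<std::string> hashStr;
    for (std::string const& term : terms)
      hash ^= hashStr(term) + 0x9e3779b9 + (hash<<6) + (hash>>2);   // boost-style hash_combine
    return hash;
  }
}}}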

Fortunately, the calculation of these normalised patterns can be separated completely from the actual matching. Indeed, we don't even need to store the binding patterns at all within the binding index &mdash; storing the hash value is sufficient (and in case of patterns with arguments we'll attach the matching closure functor). Yet still we need to store a marker for each successful match, together with back-links, in order to handle changing and retracting of advice.

!!!storage and registrations
We have to provide dedicated storage for the actual advice provisions and for the index entries. Mostly, these objects to be managed are attached through a single link &mdash; and moreover the advice system is considered performance critical, so it doesn't make sense to implement the management of these entries by smart-ptr. This rules out ~TypedAllocationManager and prompts us to write a dedicated storage frontend, later to be backed by Lumiera's mpool facility.
* both the advice provision and the advice requests attach to the advice system after fulfilling some prerequisites; they need to detach automatically on destruction.
* in case of the provision, there is a cascaded relation: the externally maintained provision creates an internal provision record, which in turn attaches an index entry.
* both in case of the provision and the request, the relation of the index bears some multiplicity:
** a multitude of advice provisions can attach with the same binding (which could be the same binding pattern terms, but different variable arguments). Each of them could create a separate advice solution (at least when variable arguments are involved); it would be desirable to establish a defined LIFO order for any search for possibly matching advice.
** a multitude of advice requests can attach with the same binding, and each of them needs to be visited in case a match is detected.
* in both cases, any of these entries could be removed any time on de-registration of the corresponding external entity
* we need to track existing advice solutions, because we need to be able to overwrite with new advice and to remove all solutions bound to a given pattern about to leave the system. One provision could create a large number of solutions, while each registration always holds onto exactly one solution (which could be a default/placeholder solution though)

!!!!subtle variations in semantics
While generally advice has value semantics and there is no ownership or distinguishable identity, the actual implementation technique creates the possibility for some subtle semantic variations. At the time of this writing (5/2010) no external point of reference was available to decide upon the correct implementation variant. These variations get visible when advice is //retracted.// Ideally, a new advisor would re-attach to an existing provision and supersede the contained advice information with new data. Thus, after a chain of such new provisions all attaching with the identical binding, when finally the advice gets retracted, any advice provisions would be gone and we'd fall back onto the default solution. Thus, "retracting" would mean to void any advice given with this binding.
But there is another conceivable variation of semantics, which yields some benefits implementation-wise: Advice can be provided as "I don't care what was said, but here is new information". In this case, the mechanism of resolving and finding a match would be responsible for picking the latest addition, while the provisions would just be dumped into the system. Under this interpretation, "retracting" would mean just to cancel //one specific//&nbsp; piece of information and might cause an older advice solution to be uncovered and revived. The default (empty) solution would be used in this case only after retracting all advice provisions.

!!!!implementation variants with respect to attachment and memory management
Aside from the index, handling of the advice provisions turns out to be tricky.
* management by ref-count was ruled out due to contention and locality considerations
* the most straightforward implementation would be for the ~AdviceProvision within the advisor to keep a kind of "unofficial" link to "its" provision, allowing it to be modified and retracted during the lifetime of the advisor. When going away without retracting (the default behaviour), the provision, as added into the system, would remain there as a dangling entry. It is still reachable via the index, but not maintained in any further way. If memory usage turns out to be a problem, we'd need to enqueue these entries for clean-up.
* but as this simple solution contradicts the general advice semantics in a subtle way (see previous paragraph), we could insist on really re-capturing and retracting previous advice automatically on each new advice provision or modification. In this case, due to the requirement of thread safety, each addition, binding modification, placing of new advice or retraction would require an index search to find an existing provision with equivalent binding (same binding definition, not just a matching binding pattern). As a later provision could stomp upon an existing provision without the original advisor noticing this, we can't use the internal references anymore; we really need to search each time and also need a global lock during the modification transaction.
* an attempt to reduce this considerable overhead would be to use a back-link from the provision as added to the system to the original source (the ~AdviceProvision owned by the advisor). On modification, this original source would be notified and thus detached. Of course this is tricky to implement correctly, and also requires locking.
The decision for the initial implementation is to use the first variant and just accept the slightly imprecise semantics.
When copying a Provision, the hidden link to existing advice data is //not shared.//

!!!!de-allocation of advice data
It is desirable that the dtors of each piece of advice data be called eventually. But ensuring this reliably is tricky, because advice data may be of various types and is added to the system to remain available, even after the original {{{advice::Provision}}} went out of scope. Moreover, the implementation decision was //not//&nbsp; to employ a vtable for the advice collaborators and data holders, so we're bound to invoke the dtor with the correct specific type.
There are some special cases when de-allocation happens while the original provision is still alive (new advice, changed binding, retracting). But in any other case, responsibility for de-allocation has to be taken by the ~AdviceSystem, which unfortunately can't handle the specific type information. Thus the original provision needs to provide a deleter function, and there is no way to avoid storing a function pointer to this deleter within the ~AdviceSystem, together with the advice data holder.
It seems reasonable to create this deleter right away, and not to share the link to advice data when copying a provision, to keep responsibilities straight. {{red{Question: does this even work?? }}} To be verified: does the address of the advice data buffer alone really determine what is found as "existing" provision?
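
One conceivable shape of such a type-erased deleter, captured while the fully typed context is still available (purely an illustration of the idea, not the actual storage layout; error handling omitted):
{{{
#include <new>

struct AdviceDataHolder
  {
    void* buffer;                       // raw storage holding the copied advice value
    void (*deleter)(void*);             // type specific dtor invocation, stored alongside
  };

template<class AD>
void
destroyAdviceData (void* buff)          // instantiated while the concrete type is still known
  {
    static_cast<AD*> (buff) -> ~AD();
    ::operator delete (buff);
  }

template<class AD>
AdviceDataHolder
placeAdvice (AD const& advice)          // copy the advice into system-managed storage
  {
    void* storage = ::operator new (sizeof(AD));
    new(storage) AD(advice);            // placement copy construction
    AdviceDataHolder holder = { storage, &destroyAdviceData<AD> };
    return holder;
  }
}}}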

!!!lifecycle considerations
Behind the scenes, hidden within the {{{advice.cpp}}} implementation file, the ~AdviceSystem is maintained as singleton. According to a general lifecycle policy within Lumiera, no significant logic is allowed to execute in the shutdown phase of the application, once {{{main()}}} has exited. Thus, any advice related operations might throw {{{error::Logic}}} after that point. The {{{~AdviceSystem()}}} dtor is also a good place to free any buffers holding incorporated advice data, after having freed the index datastructure referring to this buffer storage, of course.

!!!!handling of default advice
Basically, the behaviour when requesting non-existing advice may be configured by policy. But the default policy is to return a reference to a default constructed instance of the advice type in that case. The (implementation related) problem is just that we return advice by {{{const&}}}, not by value, so we're bound to create and manage this piece of default advice during the lifetime of the ~AdviceSystem. The way the ~AdviceSystem is accessed (only through the frontend of {{{advice::Request}}} and {{{advice::Provision}}} objects), in conjunction with the desire to control this behaviour by policy, creates a tricky implementation situation.
* regarding the lifecycle (and also from the logical viewpoint) it would be desirable to handle this "default" or "no solution" case similarly to accessing an existing solution. But unfortunately doing so would require a fully typed context; thus basically on inserting a new request, when returning from the index search without a dedicated solution, we'd need to fabricate a fallback solution to insert it into the provision index, while still holding the index lock. At that point it is not determined if we ever need that fallback solution. Alternatively we could consider fabricating this fallback solution on first unsuccessful advice fetch. But this seems still worse, as it turns a (possibly even lock free) ptr access into an index operation. Having a very cheap advice access seems like an asset.
* on the other hand, using some separate kind of singleton bundle just for these default advice data (e.g. a templated version of //Meyer's Singleton...//), the fallback solution can be settled independently of the ~AdviceSystem, right in the {{{advice.hpp}}} and using static memory, but the downside is that the fallback solution might now be destroyed prior to shutdown of the ~AdviceSystem, as it lives in another compilation unit.
Thus the second approach looks favourable, but we should //note the fact that it is hard to secure this possible access to an already destroyed solution,// unless we decline using the advice feature after the end of {{{main()}}}. Such a policy seems to be reasonable anyway, as the current implementation also has difficulties preventing access to an already destroyed {{{advice::Provision}}}, incorporated in the ~AdviceSystem but accessed through a direct pointer in the {{{advice::Request}}}.
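
A sketch of this second approach, roughly as it could live in {{{advice.hpp}}} (assumed shape, using a function-local static in the spirit of Meyer's Singleton; the lifecycle caveat discussed above applies):
{{{
template<class AD>
AD const&
defaultAdvice()                 // one default constructed instance per advice type,
  {                             // kept in static storage apart from the AdviceSystem
    static AD theDefaultInstance;
    return theDefaultInstance;
  }
}}}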

!!!locking and exception safety
The advice system is (hopefully) written so as not to become corrupted in case an exception is thrown. Adding new requests, setting advice data on a provision and any binding change might fail due to exhausted memory. The advice system remains operational in this case, but the usual reaction would be //subsystem shutdown,// because the Advice facility typically is used in a very low-level manner, assuming it //just works.// As far as I can see, the other mutation operations can't throw.

The individual operations on the interface objects are //deliberately not thread-safe.// The general assumption is that {{{advice::Request}}} and {{{advice::Provision}}} will be used in a safe environment and not be accessed or modified concurrently. A notable exception to this rule is accessing Advice: as this just includes checking and dereferencing a pointer, it might be done concurrently. But note, //the advice system does nothing to ensure visibility of the solution within a separate thread.// If this thread still has the old pointer value in its local cache, it won't pick up the new solution. In case the old solution got retracted, this might even cause access to already released objects. You have been warned. So it's probably a good idea to ensure a read barrier happens somewhere in the enclosing usage context prior to picking up a possibly changed advice solution concurrently.
''Note'': the underlying operations on the embedded global {{{advice::Index}}} obviously need to be protected by locking the whole index table on each mutation, which also ensures a memory barrier and thus propagates changed solutions. While this settles the problem for the moment, we might be forced into a more fine-grained locking due to contention problems later on...

!!!index datastructure
It is clear by now that the implementation datastructure has to serve as a kind of //reference count.// Within this datastructure, any constructed advice solution needs to be reflected somehow, to prevent us from discarding an advice provision still accessible. Allowing lock-free access to the advice solution (planned feature) adds a special twist, because in this case we can't even tell for sure if an overwritten old solution is actually gone (or if it's still referenced from some thread's cached memory). This could be addressed by employing a transactional approach (which might be good anyway) -- but I tend to leave this special concern aside for now.

To start with, any advice matching and solution will //always happen within matching buckets of a hash based pattern organisation.// The first stage of each access involves using the correct binding pattern, and this binding pattern can be represented within the index data structure by the binding pattern's hash value. Since the advice typing can be translated at interface level into a further predicate within the binding, the use of these binding hashes as the first access step also limits access to advice with the proper type. All further access or mutation logic takes place within a sub structure corresponding to this top-level hash value. The binding index thus relies on two hashtables (one for the requests and one for the provisions), but using specifically crafted datastructures as buckets. The individual entries within these bucket sub structures will in both cases consist of a binding matcher (to determine if a match actually happens) and a back-link to the registered entity (provision or request). Given the special pattern of the advice solutions, existing solutions could be tracked within the entries at the request side. A rough structural sketch of this index follows after the list below.
* Advice provisions are expected to happen only in small numbers; they will be searched stack-like, starting from the newest provisions, until a match is found.
* Each advised entity basically creates an advice request, so there could be a larger number of request entries. In the typical search triggered from the provision side, each request entry will be visited and checked for match, which, if successful, causes a pointer to be set within the ~AdviceRequest object (located outside the realm of the advice system). While -- obviously -- multiple requests with similar binding match could be folded into a sub-list, we need actual timing measurements to determine the weight of these two calculation steps of matching and storing, which together comprise the handling of an advice solution.
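
A rough structural sketch of this binding index (illustrative; names, the {{{void*}}} back-links and the exact representation are assumptions, and locking is omitted):
{{{
#include <cstddef>
#include <vector>
#include <unordered_map>

struct Matcher { /* binding matcher closure: performs the actual match / unification */ };

struct BindingIndex
  {
    struct Entry
      {
        Matcher matcher;       // decides whether a concrete match actually happens
        void*   backLink;      // internal provision record resp. externally owned AdviceRequest
      };
    typedef std::vector<Entry> Bucket;
    
    std::unordered_map<std::size_t, Bucket> provisions_;  // keyed by binding hash; searched LIFO, newest first
    std::unordered_map<std::size_t, Bucket> requests_;    // keyed by binding hash; all entries visited on new provision
  };
}}}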

The above considerations don't fully solve the question of how to represent a computed solution within the index data structure, candidates being the index within the provision list, a direct pointer to the provision, or simply re-using the pointer stored into the ~AdviceRequest. My decision is to do the latter. Besides solutions found by matching, we need //fallback solutions// holding a default constructed piece of advice of the requested type. As these defaults aren't correlated at all to the involved bindings, but only to the advice type as such, it seems reasonable to keep them completely apart, like e.g. placing them into static memory managed by the ~AdviceProvision template instantiations.

!!!interactions to be served by the index
[>img[Advice solution|draw/AdviceBindingIndex1.png]]

;add request
:check existing provisions starting from top until match; use default solution in case no match is found; publish solution into the new request; finally attach the new request entry
;remove request
:just remove the request entry
;modify request
:handle as if newly added
;add provision
:push new provision entry on top; traverse all request entries and check for match with this new provision entry, publish new solution for each match
;retract provision
:remove the provision entry; traverse all request entries to find those using this provision as their advice solution, and treat these as if they were newly added requests
;modify provision
:add a new (copy of the) provision, followed by retracting the old one; actually these two traversals of all requests can be combined, so that a request which used the old provision but doesn't match the new one is treated like a new request
<<<
__Invariant__: each request has a valid solution pointer set (maybe pointing to a default solution). Whenever such a solution points to a registered provision, there is a match between the index entries and this is the top-most possible match to any provision entry for this request entry
<<<
Clearly, retracting advice (and consequently also the modification) is expensive. After finishing these operations, the old/retracted provision can be discarded (or put aside in case of non-locking advice access). Other operations don't cause de-allocation, as provisions remain within the system, even if the original advising entity is gone.
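
The interactions listed above, condensed into a deliberately simplified toy model (bindings reduced to plain strings, matching reduced to string equality, no hashing, no default-advice typing and no locking; purely illustrative):
{{{
#include <string>
#include <vector>
#include <algorithm>

struct Provision { std::string binding; std::string advice; };
struct Request   { std::string binding; const Provision* solution = nullptr; };

struct Index
  {
    std::vector<Provision*> provisions_;      // newest last; searched back-to-front (LIFO)
    std::vector<Request*>   requests_;
    Provision defaultProvision_;              // placeholder solution when nothing matches
    
    const Provision*
    solve (std::string const& binding)        // newest matching provision, else the default
      {
        for (auto p = provisions_.rbegin(); p != provisions_.rend(); ++p)
          if ((*p)->binding == binding)
            return *p;
        return &defaultProvision_;
      }
    
    void
    addRequest (Request& r)
      {
        r.solution = solve (r.binding);       // invariant: every request holds a valid solution
        requests_.push_back (&r);
      }
    
    void
    removeRequest (Request& r)
      {
        requests_.erase (std::remove (requests_.begin(), requests_.end(), &r), requests_.end());
      }
    
    void
    addProvision (Provision& p)
      {
        provisions_.push_back (&p);
        for (Request* r : requests_)
          if (r->binding == p.binding)
            r->solution = &p;                 // publish the new solution for every match
      }
    
    void
    retractProvision (Provision& p)
      {
        provisions_.erase (std::remove (provisions_.begin(), provisions_.end(), &p), provisions_.end());
        for (Request* r : requests_)
          if (r->solution == &p)
            r->solution = solve (r->binding); // re-solve as if newly added
      }
  };
}}}
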
From analysing a number of intended AdviceSituations, some requirements for an Advice collaboration and implementation can be extracted.

* the piece of advice is //not shared// between advisor and the advised entities; rather, it is copied into storage managed by the advice system
* the piece of advice can only be exposed {{{const}}}, as any created advice point solution might be shared
* the actual mode of advice needs to be configurable by policy &mdash; signals (callback functors) might be used on both sides transparently
* the client side (the advised entity) specifies initially whether a default answer is acceptable. If not, retrieving advice might block or fail
* on both sides, the collaboration is initiated specifying an advice binding, which is a conjunction of predicates, --optionally dynamic--^^no!^^
* there is a tension between matching performance and flexibility. The top level should be entirely static (advice type)
* the analysed usage situations provide no common denominator on the preferences regarding the match implementation.
* some cases require just a match out of a small number of tokens, while generally we might get even a double dispatch
* later, possible and partial solutions could be cached, similar to the Rete algorithm. Dispatching a solution should work lock-free
* advice can be replaced by new advice, which causes all matching advice solutions to behave as being overwritten.
* when locking is left out, we can't give any guarantee as to when a given advice gets visible to the advised entity
* throughput doesn't seem to be an issue, but picking up existing advice should be as fast as possible
* we expect a small number of advisors collaborating with a larger number of advised entities.

!!questions
;when does the advice collaboration actually happen?
:when there is both a client (advised) and a server (advisor) and their advice bindings match
;can there be multiple matches?
:within the system as a whole there can be multiple solutions
:but the individual partners never see more than one connection
:each point of advice has exactly one binding and can establish one advice channel
;but what happens when an attempt is made to transfer more information?
:both sides don't behave symmetrically, and thus the consequences are different
:on the client side, advice is just //available.// When there is a newer one, the previous advice is overwritten
:the server side doesn't //contain// advice &mdash; rather, it is placed into the system. After that, the advisor can go away
:thus, if an advisor places new advice into an existing advice provision, this effectively initiates a new collaboration
:if the new advice reaches the same destination, it overwrites; but it may as well reach a different destination this time
;can just one advice provision create multiplicity?
:yes, because of the matching process there could be multiple solutions. But neither the client nor the server is aware of that.
;can advice be changed?
:No. When inserted into the system, the advisor loses any direct connection to the piece of advice (it is copied)
:But an advisor can put up another piece of advice into the same advice provision, thereby effectively overwriting at the destination
;if advice is copied, what about ownership and identity?
:advice has //value semantics.// Thus it has no distinguishable identity beyond the binding used to attach it
:a provision does not "own" advice. It is a piece of information, and the latest information is what counts
;can the binding be modified dynamically?
:this is treated as if retracting the existing point of advice and opening a new one.
;what drives the matching?
:whenever a new point of advice is opened, search for a matching solution happens.
:thus, the actual collaboration can be initiated from both sides
:when a match happens, the corresponding advice point solution gets added into the system
;what about the lifetime of such a solution?
:it is tied to the //referral// &mdash; but there is an asymmetry between server and client
:referral is bound to the server-side / client-side point of advice still being in existence
:but the server-side point of advice is copied into the system, while the client-side one is owned by the client
:thus, when an advisor goes away without explicitly //retracting//&nbsp; the advice, any actual solution remains valid
:on the client side there is an asymmetry: actually, a new advice request can be opened, with an exactly identical binding
:in this case, existing connections will be re-used. But any differences in the binding will require searching a new solution
;is the search for an advice point solution exhaustive?
:from the server side, when a new advice provision / binding is put up, //any// possible advice channel will be searched
:contrary to this, at the client side, the first match found wins and will establish an advice channel.

!decisions
After considering the implementation possibilities, some not completely determined requirements can be narrowed down.
* we //do// support the //retracting of advice.//
* there is always an implicit //default advice solution.//
* advice //is not a messaging system// &mdash; no advice queue
* signals (continuations) are acceptable as an extension to be provided later
* retracting advice means to retract a specific solution. This might or might not uncover earlier solutions (undefined behaviour)
* we don't support any kind of dynamic re-evaluation of the binding match (this means not supporting the placement use case)
* the binding pattern is //interpreted strictly as a conjunction of logic predicates// &mdash; no partial match, but arguments are allowed
* we prepare for a later extension to //full unification of arguments,// and provide a way of accessing the created bindings as //advice parameters.//

Combining all these requirements and properties provides the foundation for the &rarr; AdviceImplementation

[[Advice]] is a pattern extracted from several otherwise unrelated constellations

!Proxy media in the engine
Without rebuilding the engine network, we need the ability to reconfigure some parts to adapt temporarily to low resolution place-holder media. The collaboration required to make this happen seems to ''cross-cut'' the normal processing logic. Indeed, the nature of the adjustments is highly context dependent &mdash; not every processing node needs to be adjusted. There is a dangerous interference with the ongoing render processes, calling for the possibility of picking up this information synchronously.
* the addressing and delivery of the advice is based on a mix of static (type) and dynamic information
* it is conceivable that the actual matching may even include a token present in the direct invocation context (but this possibility was ruled out by a later decision)
* the attempt to receive and pick up advice needs to be failsafe
* locking should be avoided by design

!Dependency injection for testing
While inversion of control is a guiding principle on all levels, the design of the Lumiera application deliberately stays just below the level of employing a dependency injection container. Instead, common services are accessible //by type// and the builder pattern is used more explicitly in places. Interestingly, the impact on writing unit tests was by far not as serious as one might expect, based on the usual reasoning of D.I. proponents. But there remain some situations where sharing a common test fixture would come in handy
* here the test depending on a fixture puts up a hard requirement for the actual advice to be there.
* thus, the advice support system can be used to communicate a need for advice
* but it seems unreasonable to extend it to actually transmit a control flow

!properties of placement
The placement concept plays a fundamental role within Lumiera's HighLevelModel. Besides just being a way of sticking objects together and defining the common properties of //temporal position and output destination,// we try to push this approach to enable a more general, open and generic use. "Placement" is understood as locating within a grid comprised of various degrees of freedom &mdash; where locating in a specific way might create additional dimensions to be included into the placement. The standard example is an output connection creating additional adjustable parameters controlling the way the connected object is embedded into a given presentation space (consider e.g. a sound object, which, just by connection, gains the ability of being //panned// by azimuth, elevation and distance)
* in this case, obviously the collaboration is n:m, while each partner preferably should see only a single advice link.
* advice is used here to negotiate a direct collaboration, which is then handed off to another facility (wiring a control connection)
* the possibility of an advice collaboration in this case is initiated rather from the side of the advisor
* deriving an advice point solution includes some kind of negotiation or active re-evaluation
* the possible advisors have to be queried according to their placement scope relations
* this querying might even trigger a resolution process within the advising placement.
__Note__: after detailed analysis, this use case was deemed beyond the scope of the [[Advice]] core concept and idea.
//As a use case, it was dropped.// But we retain some of the properties discovered by considering this scenario, especially the n:m relation, the symmetry in terms of opening the collaboration, and the possibility to have a specially implemented predicate in the binding pattern.

&rarr; AdviceRequirements
Memory management facility for the low-level model (render nodes network). The model is organised into temporal segments, which are considered to be structurally constant and uniform. The objects within each segment are strongly interconnected, and thus each segment is built in a single build process and is replaced or released as a whole. __~AllocationCluster__ implements memory management to support this usage pattern. It owns a number of object families of various types.[>img[draw/AllocationCluster.png]]
* [[processing nodes|ProcNode]] &mdash; probably with several subclasses (?)
* [[wiring descriptors|WiringDescriptor]]
* the input/output descriptor arrays used by the latter

For each of those families we expect an initially undetermined (but rather large) number of individual objects, which will typically be allocated within a short timespan and which are to be released cleanly on destruction of the AllocationCluster.

''Problem of calling the dtors''
Even if the low-level memory manager(s) may use raw storage, we require that the allocated objects' destructors be called. This means keeping track of at least the number of objects allocated (without wasting too much memory for bookkeeping). Besides, as the objects are expected to be interconnected, it may be dangerous to destroy a given family of objects while another family of objects might rely on the former in its destructors. //If we happen to get into this situation,// we need to define a priority order on the types and ensure the destruction sequence is respected.
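
A minimal sketch of this requirement (illustrative only, not the actual ~AllocationCluster): objects of arbitrary types are created into the cluster, the cluster merely remembers how to destroy them, and all dtors run when the cluster itself goes away. Error handling is omitted.
{{{
#include <new>
#include <utility>
#include <vector>

class AllocationCluster
  {
    struct Entry
      {
        void* obj;
        void (*destroy)(void*);            // type erased dtor call; a real implementation
      };                                   // would rather keep one record per object family
    
    std::vector<Entry> entries_;
    
  public:
    AllocationCluster() { }
    AllocationCluster (AllocationCluster const&) = delete;
    AllocationCluster& operator= (AllocationCluster const&) = delete;
    
    template<class T, typename... ARGS>
    T&
    create (ARGS&&... args)                // allocate and construct an object owned by the cluster
      {
        void* storage = ::operator new (sizeof(T));
        T* obj = new(storage) T(std::forward<ARGS>(args)...);
        entries_.push_back (Entry{ obj, [](void* o)
                                          {
                                            static_cast<T*>(o)->~T();
                                            ::operator delete (o);
                                          }});
        return *obj;
      }
    
   ~AllocationCluster()                    // releasing the segment: invoke all dtors
      {
        for (auto e = entries_.rbegin(); e != entries_.rend(); ++e)
          e->destroy (e->obj);             // reverse allocation order; a real implementation
      }                                    // might need a priority order between families
  };
}}}
A client would then write something like {{{cluster.create<ProcNode>(...)}}} instead of a plain {{{new}}}, relying on destruction of the cluster for clean-up.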

&rarr; see MemoryManagement
Asset management is a subsystem on its own. Assets are "things" that can be loaded into a session, like Media, Clips, Effects, Transitions. It is the "bookkeeping view", while the Objects in the Session relate to the "manipulation and process view". Some Assets can be //loaded// and a collection of Assets is saved with each Session. Besides, there is a collection of basic Assets always available by default.

The Assets are important reference points holding the information needed to access external resources. For example, a Clip asset can reference a Media asset, which in turn holds the external filename from which to get the media stream. For Effects, the situation is similar. Assets thus serve two quite distinct purposes. One is to load, list, group, search and browse them, and to provide an entry point for creating new or getting at existing MObjects in the Session; the other purpose is to provide attribute and property information to the inner parts of the engine, at the same time isolating and decoupling them from environmental details.

We can distinguish several different Kinds of Assets, each one with specific properties. While all these Kinds of Assets implement the basic Asset interface, they in turn are the __key abstractions__ of the asset management view. Mostly, their interfaces will be used directly, because they are quite different in behaviour. Thus it is common to see asset related operations being templated on the Asset Kind. 
&rarr; see also [[Creating and registering Assets|AssetCreation]]
[img[Asset Classess|uml/fig130309.png]]

!Media Asset
Some piece of Media Data accessible at some external Location and able to be processed by Lumiera. A Media File on hard disk can be considered the most basic form of Media Asset, with some important derived flavours, like a Placeholder for a currently unavailable Source, or Media available in different Resolutions or Formats. A rough sketch of the resulting dual interface can be found at the end of this section.
* __outward interface operations__ include querying properties, creating a Clip MObject, controlling processing policy (low res proxy placeholders, interlacing and other generic pre- and postprocessing)
* __inward interface operations__ include querying filename, codec, offset and any other information necessary for creating a source render node, getting additional processing policy decisions (handling of interlacing, aspect ratio).
&rarr; MediaAsset
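
As a rough illustration of this inward/outward split, here is a minimal sketch; all names and signatures are invented for illustration and do not reflect the actual {{{asset::Media}}} API.
{{{
// Hypothetical sketch of the dual (outward/inward) interface of a Media asset.
#include <string>

class ClipMO;                    // stand-in for the Clip-MObject

class MediaAssetSketch
  {
    std::string name_, filename_, codec_;
    long        length_ = 0, offset_ = 0;
    
  public:
    // outward interface: used from the session / GUI side
    std::string name()    const { return name_; }
    long        length()  const { return length_; }
    ClipMO*     createClip();    // factory creating a corresponding Clip-MObject
    
    // inward interface: used by the builder when creating source render nodes
    std::string sourceFile()   const { return filename_; }
    std::string codecID()      const { return codec_; }
    long        frameOffset()  const { return offset_; }
  };
}}}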

!Processing Asset
Some software component able to work on media data in the Lumiera Render engine Framework. This includes all sorts of loadable effects, as well as some of the standard, internal facilities (Mask, Projector). Note that Processing Assets typically provide some attachment Point or means of communication with GUI facilities.
* __outward interface operations__ include getting name and description, investigating the media types the processor is able to handle, causing the underlying module to be actually loaded...
* __inward interface operations__ include resolving the actual processing function.
&rarr; ProcAsset

!Structural Asset
Some of the building blocks providing the framework for the objects placed into the current Session. Notable examples are [[processing pipes|Pipe]] within the high-level-model, Viewer attachment points, Tracks, Sequences, Timelines etc.
* __outward interface operations__ include...
* __inward interface operations__ include...
&rarr; StructAsset {{red{still a bit vague...}}}

!Meta Asset
Any resources related to the //reflective recourse of the application on itself,// including parametrisation and customisation aspects and similar metadata, are categorised and tracked apart from the primary entities. Examples are types, scales and quantisation grids, decision rules, control data stores (automation data), annotations attached to labels, inventory entities etc.
* __outward interface operations__ include...
* __inward interface operations__ include...
&rarr; MetaAsset {{red{just emerging as of 12/2010}}}

!!!!still to be worked out..
is how to implement the relationship between [[MObject]]s and Assets. Do we use direct pointers, or do we prefer an ID + central registry approach? And how to handle the removal of an Asset.
&rarr; see also [[analysis of mem management|ManagementAssetRelation]]
&rarr; see also [[Creating Objects|ObjectCreation]], especially [[Assets|AssetCreation]]

  //9/07: currently implementing it as follows: use a refcounting-ptr from Clip-~MObject to asset::Media while maintaining a dependency network between Asset objects. We'll see if this approach is viable//

Assets are created by Factories returning smart pointers; Asset creation is bound to specific use cases and //only available// for these specific situations. There is no generic Asset Factory.

For every Asset we generate an __Ident tuple__ and a long ID (hash) derived from this Ident tuple. The constructor of the abstract base class {{{Asset}}} takes care of this step and automatically registers the new Asset object with the AssetManager. Typically, the factory methods for concrete Asset classes provide some shortcuts supplying sensible default values for some of the Ident tuple data fields. They may take additional parameters &mdash; for example the factory method for creating {{{asset::Media}}} takes a filename (and may at some point in the future apply "magic" based on examination of the file &rarr; LoadingMedia)
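
For illustration, a hedged sketch of this registration step; the field names, the hash combination and the class shape are invented and differ from the real Ident tuple and {{{Asset}}} base class.
{{{
// Sketch: every Asset derives a long ID (hash) from its Ident tuple and
// registers itself on construction.
#include <string>
#include <functional>
#include <utility>
#include <cstddef>

struct IdentSketch
  {
    std::string name;       // e.g. the (sanitised) file name
    std::string category;   // e.g. "VIDEO/source"
    std::string org;        // originating organisation / namespace
    int         version;
  };

inline size_t
deriveLongID (IdentSketch const& id)
  {
    std::hash<std::string> h;
    return h(id.name) ^ (h(id.category) << 1) ^ (h(id.org) << 2) ^ size_t(id.version);
  }

class AssetSketch
  {
  protected:
    IdentSketch ident_;
    size_t      longID_;
    
    AssetSketch (IdentSketch id)
      : ident_(std::move(id))
      , longID_(deriveLongID (ident_))
      {
        // here the real Asset ctor would register this object with the AssetManager
      }
  };
}}}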

Generally speaking, assets can be seen as the static part or view of the session and model. They form a global scope and are tied to the [[model root|ModelRootMO]] &mdash; which means they're going to be serialised and de-serialised alongside this model root scope. Especially the de-serialisation triggers (re)-creation of all assets associated with the session to be loaded.
{{red{TODO:}}} //there will be a special factory mechanism for this case, details pending definition as of 2/2010 //
The Asset Manager provides an Interface to an internal Database holding all Assets in the current Session and System state. It may be a real Database at some point (and for the moment it's a Hashtable). Each [[Asset]] is registered automatically with the Asset Manager; it can be queried either by its //identification tuple// or by its unique ID.

Conceptually, assets belong to the [[global or root scope|ModelRootMO]] of the session data model. A mechanism for serialising and de-serialising all assets alongside the session is planned as of 2/2010.
Conceptually, Assets and ~MObjects represent different views onto the same entities. Assets focus on bookkeeping of the contents, while the media objects allow manipulation and EditingOperations. Usually, on the implementation side, such closely linked dual views require careful consideration.

!redundancy
Obviously there is the danger of getting each entity twice, as Asset and as ~MObject. While such dual entities could be OK in conjunction with highly specialised processing, in the case of Lumiera's Proc-Layer most of the functionality is shifted to naming schemes, configuration and generic processing, leaving the actual objects almost empty and deprived of distinguishing properties. Thus, starting out from the required concepts, an attempt was made to join, reduce and straighten the design.
* type and channel configuration is concentrated to MediaAsset
* the accounting of structural elements in the model is done through StructAsset
* the object instance handling is done in a generic fashion by using placements and object references
* clips and labels appear as ~MObjects solely; on the asset side there is just a generic [[id tracking mechanism|TypedID]].
* tracks are completely deprived of processing functionality and become lightweight containers, also used as clip bins.
* timelines and sequences are implemented as façades to equivalent structures within the model
* this leaves us only with effects requiring both an object and asset implementation

[<img[Fundamental object relations used in the session|uml/fig138885.png]]

Placing an MObject relative to another object such that it should be handled as //attached//&nbsp; to the latter results in several design and implementation challenges. Actually, such an attachment creates a cluster of objects. The typical use case is that of an effect attached to a clip or processing pipe.
* attachment is not a globally fixed relation between objects, rather, it typically exists only for some limited time span (e.g. the duration of the basic clip the effect is attached to)
* the order of attachment is important and the attached placement may create a fork in the signal flow, so we need a way for specifying reproducibly how the resulting wiring should be
* when building, we access the information in reversed direction: we have the target object and need to query for all attachments

The first step towards a solution is to isolate the problem; obviously we don't need to store the objects differently, we just need //information about attached objects//&nbsp; for some quite isolated tasks (namely for creating a GUI representation and for combining attached objects into a [[Pipe]] when building). Resorting to a query (function call) interface should turn the rest of the problem into an implementation detail (a sketch of such an interface follows the list below). Thus
* for an __attachment head__ (= {{{Placement<MObject>}}} to which other objects have been attached) get the ordered list of attachments
* for an __attached placement__ (member of the cluster) get the placement of the corresponding attachment head
* retrieve and break the attachment when //deleting.//
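
A possible shape of such a query interface, as a sketch only; the names and signatures are hypothetical and merely restate the three operations listed above.
{{{
// Sketch of the attachment query interface (hypothetical signatures).
#include <list>

class PlacementSketch;                        // stand-in for Placement<MObject>
using PlacementRefSketch = PlacementSketch*;  // stand-in for PlacementRef

struct AttachmentIndexSketch
  {
    // for an attachment head: ordered list of attached placements
    virtual std::list<PlacementRefSketch> getAttachments (PlacementSketch const& head)  const  =0;
    
    // for an attached placement: the placement acting as attachment head
    virtual PlacementRefSketch getAttachmentHead (PlacementSketch const& attached)  const  =0;
    
    // break the attachment relation, e.g. prior to deleting an object
    virtual void detach (PlacementSketch& attached)  =0;
    
    virtual ~AttachmentIndexSketch() = default;
  };
}}}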

!!Implementation notes
Attachment is managed within the participating placements, mostly by special [[locating pins|LocatingPin]]. Attachment doesn't necessarily nail down an attached object to a specific position, rather, the behaviour depends on the type of the object and the locating pins actually involved, especially on their order and priority. For example, if a {{{Placement<Effect>}}} doesn't contain any locating pin defining a temporal position, then the attachment will result in the placement inheriting the temporal placement of the //attachment head// (i.e. the clip this effect has been attached to). But, if on the contrary the effect in question //does// have an additional locating pin, for example relative to another object or even to a fixed time position, this one will "win" and determine the start position of the effect &mdash; it may even move the effect out of the time interval covered by the clip, in which case the attachment has no effect on the clip's processing pipe.
The attachment relation is hierarchical and has a clearly defined //active// and //passive// side: The attachment head is the parent node in a tree, but plays the role of the passive partner, to which the child nodes attach. But note, this does not mean we are limited to a single attachment head. Actually, each placement has a list of locating pins and thus can attach to several other placements. For example, a transition attaches to at least two local pipes (clips). {{red{TODO: unresolved design problem; seems to contradict the PlacementScope}}}

!!!!Relation to memory management
Attachment in itself does //not// keep an object alive. Rather, it's implemented by an opaque ID entry (&rarr; PlacementRef), which can be resolved by the PlacementIndex. The existence of attachments should be taken into account when deleting an object, preferably removing any dangling attachments to prevent an exception from being thrown later on. On the other hand, contrary to the elements of the HighLevelModel, processing nodes in the render engine never depend on placements &mdash; they always refer directly to the MObject instance or even the underlying asset. In the case of MObject instances, the pointer from within the engine will //share ownership// with the placement (remember: both are derived from {{{boost::shared_ptr}}}).
Automation is treated as a function over time. It is always tied to a specific Parameter (which can thus be variable over the course of the timeline). All details of //how// this function is defined are completely abstracted away. The Parameter uses a ParamProvider to get the value for a given Time (point). Typically, this will use linear or bezier interpolation over a set of keyframes internally. Parameters can be configured to have different value ranges and distribution types (on-off, stepped, continuous, bounded).

[img[how to implement Automation|uml/fig129669.png]]
While generally automation is treated as a function over time, defining and providing such a function requires some //Automation Data.// The actual layout and meaning of this data is deemed an implementation detail of the [[parameter provider|ParamProvider]] used, but nevertheless an automation data set has object characteristics within the session (high-level-model), allowing it to be attached, moved and [[placed|Placement]] by the user.
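
To make the separation of Parameter and ParamProvider tangible, a condensed sketch with invented types; the real interfaces and the actual interpolation (e.g. bezier) differ.
{{{
// Sketch: a Parameter yields a value for a given time by delegating to a
// ParamProvider; here a trivial linear interpolation over keyframes.
#include <map>
#include <iterator>

using TimeSketch = double;    // stand-in for Lumiera's Time type

template<typename VAL>
struct ParamProviderSketch
  {
    virtual VAL getValueAt (TimeSketch t)  const  =0;
    virtual ~ParamProviderSketch() = default;
  };

template<typename VAL>
class KeyframeProvider
  : public ParamProviderSketch<VAL>
  {
    std::map<TimeSketch,VAL> keyframes_;
    
  public:
    void addKeyframe (TimeSketch t, VAL v) { keyframes_[t] = v; }
    
    VAL
    getValueAt (TimeSketch t)  const override
      {
        if (keyframes_.empty()) return VAL();
        auto hi = keyframes_.lower_bound (t);
        if (hi == keyframes_.end())   return keyframes_.rbegin()->second;  // after last keyframe
        if (hi == keyframes_.begin()) return hi->second;                   // before first keyframe
        auto lo = std::prev (hi);
        double f = (t - lo->first) / (hi->first - lo->first);              // linear blend factor
        return VAL (lo->second + f * (hi->second - lo->second));
      }
  };
}}}
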
Starting out from the concepts of Objects, Placement to Tracks, render Pipes and connection properties (&rarr; see [[here|TrackPipeSequence]]) within the session, we can identify the elementary operations occurring within the Builder. Overall, the Builder is organized as the application of //visiting tools// to a collection of objects, so finally we have to consider some kind of object appearing in the working function of the given builder tool, which at this moment holds some //context//. The job now is to organize this context such as to create a predictable build process from this //event-driven// approach.
&rarr;see also: BuilderPrimitives for the elementary situations used to carry out the building operations

!Builder working Situations
# any ''Clip'' (which at this point has been reduced already to a part of a simple elementary media stream &rarr; see [[Fixture]])
## yields a source reading node
## which needs to be augmented by the underlying media's [[processing pattern|ProcPatt]]
##* thus inserting codec(s) and source transformations
##* effectively this is an application of effects
## at this point we have to process (and maybe generate on-the-fly) the [[source port of this clip|ClipSourcePort]]
##* the output of the source reading and preprocessing defined thus far is delivered as input to this port, which is done by a ~WiringRequest (see below)
##* as every port, it is the entry point to a [[processing pipe|Pipe]], thus the source port has a processing pattern, typically inserting the camera (transformation effect) at this point
## followed by the application of effects
##* separately for every effect chain rooted (placed) directly onto the clip
##* and regarding the chaining order
## next we have to assess the [[pipes|Pipe]] to which the clip has been placed
## producing a [[wiring request|WiringRequest]] for every pair {{{(chainEndpoint, pipe)}}}
# [>img[draw/Proc.builder1.png]]   attaching an ''Effect'' is actually always an //insertion operation// which is done by //prepending// to the previously built nodes. Effects may be placed as attached to clips and pipes, which causes them to be included in the processing chain at the given location. Effects may as well be placed at an absolute time, which means they are to be applied to every clip that happens to be at this time &mdash; but this use case will be resolved when creating the Fixture, causing the effect to be attached to the clips in question. The same holds true for Effects put on tracks.
# treating a ''wiring request'' means
## detecting possible and impossible connections
## deriving additional possible "placement dimensions" generated by executing such a connection (e.g. connecting a mono source to a spatial sound system bus creates panning possibilities)
##* deriving parameter sources for these additional degrees of freedom
##* fire off insertion of the necessary effects to satisfy this connection request and implement the additional "placement dimensions" (pan, layer order, overlay mode, MIDI channel selection...)
# processing the effects and further placements ''attached to a Pipe'' is handled identically to the processing done with all attachments to individual clips.
# ''Transitions'' are to be handled differently according to their placement (&rarr; more on [[Transitions|TransitionsHandling]])
#* when placed normally to two (or N) clips, they are inserted at the exit node of the clip's complete effect chain.
#* otherwise, when placed to the source port(s) or when placed to some other pipes they are inserted at the exit side of those pipe's effect chains. (Note: this puts additional requirements on the transition processor, so not every transition can be placed this way)
After consuming all input objects and satisfying all wiring requests, the result is a set of [[exit nodes|ExitNode]] ready for pulling data. We call the network reachable from such an exit node a [[Processor]], together all processors of all segments and output data types comprise the render engine.

!!!dependencies
Pipes need to be there first, as everything else will be plugged (placed) to a pipe at some point. But, on the other hand, for the model as such, pipes are optional: We could create sequences with ~MObjects without configuring pipes (but then of course won't be able to build any render processor). Similarly, there is no direct relation between tracks and pipes. Each sequence comprises at least one root track, but this has no implications regarding any output pipe.

Effects can be attached only to already existing pipelines, starting out at some pipe's entry port or the source port of some clip. Besides that, all further parts can be built in any order and independent of each other. This is made possible by using [[wiring requests|WiringRequest]], which can be resolved later on. So, as long as we start out with the tracks (to resolve any pipe they are placed to), and further, if we manage to get any effect placed to some clip-MO //after// setting up and treating the clip, we are fine and can do the building quasi event-driven.

!!!building and resolving
Building the network for the individual objects thus creates a queue of wiring requests. Some of them may be immediately resolvable, but detecting this correctly can be nontrivial, and so it seems better to group all wiring requests based on the pipe and treat them groupwise. This is because &mdash; in the most general case &mdash; connecting includes the use of transforming and joining nodes, which can create additional wiring requests (e.g. for automation parameter data connections). Finally, once the network is complete, we could perform [[optimisations|RenderNetworkOptimisation]].
/***
|Name|BetterTimelineMacro|
|Created by|SaqImtiaz|
|Location|http://tw.lewcid.org/#BetterTimelineMacro|
|Version|0.5 beta|
|Requires|~TW2.x|
!!!Description:
A replacement for the core timeline macro that offers more features:
*list tiddlers with only a specific tag
*exclude tiddlers with a particular tag
*limit entries to any number of days, for example one week
*specify a start date for the timeline, only tiddlers after that date will be listed.

!!!Installation:
Copy the contents of this tiddler to your TW, tag with systemConfig, save and reload your TW.
Edit the ViewTemplate to add the fullscreen command to the toolbar.

!!!Syntax:
{{{<<timeline better:true>>}}}
''the param better:true enables the advanced features, without it you will get the old timeline behaviour.''

additional params:
(use only the ones you want)
{{{<<timeline better:true  onlyTag:Tag1 excludeTag:Tag2 sortBy:modified/created firstDay:YYYYMMDD maxDays:7 maxEntries:30>>}}}

''explanation of syntax:''
onlyTag: only tiddlers with this tag will be listed. Default is to list all tiddlers.
excludeTag: tiddlers with this tag will not be listed.
sortBy: sort tiddlers by date modified or date created. Possible values are modified or created.
firstDay: useful for starting timeline from a specific date. Example: 20060701 for 1st of July, 2006
maxDays: limits timeline to include only tiddlers from the specified number of days. If you use a value of 7 for example, only tiddlers from the last 7 days will be listed.
maxEntries: limit the total number of entries in the timeline.


!!!History:
*28-07-06: ver 0.5 beta, first release

!!!Code
***/
//{{{
// Return the tiddlers as a sorted array
TiddlyWiki.prototype.getTiddlers = function(field,excludeTag,includeTag)
{
          var results = [];
          this.forEachTiddler(function(title,tiddler)
          {
          if(excludeTag == undefined || tiddler.tags.find(excludeTag) == null)
                        if(includeTag == undefined || tiddler.tags.find(includeTag)!=null)
                                      results.push(tiddler);
          });
          if(field)
                   results.sort(function (a,b) {if(a[field] == b[field]) return(0); else return (a[field] < b[field]) ? -1 : +1; });
          return results;
}



//this function by Udo
function getParam(params, name, defaultValue)
{
          if (!params)
          return defaultValue;
          var p = params[0][name];
          return p ? p[0] : defaultValue;
}

window.old_timeline_handler= config.macros.timeline.handler;
config.macros.timeline.handler = function(place,macroName,params,wikifier,paramString,tiddler)
{
          var args = paramString.parseParams("list",null,true);
          var betterMode = getParam(args, "better", "false");
          if (betterMode == 'true')
          {
          var sortBy = getParam(args,"sortBy","modified");
          var excludeTag = getParam(args,"excludeTag",undefined);
          var includeTag = getParam(args,"onlyTag",undefined);
          var tiddlers = store.getTiddlers(sortBy,excludeTag,includeTag);
          var firstDayParam = getParam(args,"firstDay",undefined);
          var firstDay = (firstDayParam!=undefined)? firstDayParam: "00010101";
          var lastDay = "";
          var field= sortBy;
          var maxDaysParam = getParam(args,"maxDays",undefined);
          var maxDays = (maxDaysParam!=undefined)? maxDaysParam*24*60*60*1000: (new Date()).getTime() ;
          var maxEntries = getParam(args,"maxEntries",undefined);
          var last = (maxEntries!=undefined) ? tiddlers.length-Math.min(tiddlers.length,parseInt(maxEntries)) : 0;
          for(var t=tiddlers.length-1; t>=last; t--)
                  {
                  var tiddler = tiddlers[t];
                  var theDay = tiddler[field].convertToLocalYYYYMMDDHHMM().substr(0,8);
                  if ((theDay>=firstDay)&& (tiddler[field].getTime()> (new Date()).getTime() - maxDays))
                     {
                     if(theDay != lastDay)
                               {
                               var theDateList = document.createElement("ul");
                               place.appendChild(theDateList);
                               createTiddlyElement(theDateList,"li",null,"listTitle",tiddler[field].formatString(this.dateFormat));
                               lastDay = theDay;
                               }
                  var theDateListItem = createTiddlyElement(theDateList,"li",null,"listLink",null);
                  theDateListItem.appendChild(createTiddlyLink(place,tiddler.title,true));
                  }
                  }
          }

          else
              {
              window.old_timeline_handler.apply(this,arguments);
              }
}
//}}}
Binding-~MObjects are used to associate two entities within the high-level model.
More specifically, such a binding serves 
* to outfit any top-level [[Timeline]] with real content, which is contained within a [[Sequence]]
* to build a VirtualClip, that is to link a complete sequence into another sequence, where it appears like a new virtual media or clip.

!Properties of a Binding
Binding is a relation entity, maintaining a link between parts of the session. Actually this link is achieved somewhat indirectly: The binding itself is an MObject, but it points to a [[sequence asset|Sequence]]. Moreover, in case of the (top-level) timelines, there is a timeline asset acting as a frontend for the ~BindingMO.
* the binding exposes special functions needed to implement the timeline {{red{planned as of 11/10}}}
* similarly, the binding exposes functions allowing to wrap up the bound sequence as VirtualMedia (when acting as VirtualClip).
* the Binding holds an OutputMapping -- allowing to specify, resolve and remember [[output designations|OutputDesignation]]

Note: there are other binding-like entities within the model, which are deliberately not subsumed under this specification, but rather implemented standalone.
&rarr; see also SessionInterface



!Implementation
On the implementation side, we use a special kind of MObject, acting as an anchor and providing a unique identity. As with any ~MObject, it is actually a placement that establishes the connection and the scope, and typically constitutes a nested scope (e.g. the scope of all objects //within// the sequence to be bound into a timeline).

Binding can be considered an implementation object, rarely to be created directly. Yet it is part of the high-level model.
{{red{WIP 11/10}}}: it is likely that -- in case of creating a VirtualClip -- BindingMO will be hooked up behind another façade asset, acting as ''virtual media''

!!!channel / output mapping {{red{WIP 11/10}}}
The Binding-~MObject stores an OutputMapping. Basically this, together with OutputDesignation, implements the mapping behaviour
* during the build process, output designation(s) will be retrieved for each pipe.
* indirect and relative designations are to be resolved; the relative ones are forwarded to the next enclosing binding
* in any case, the result is a direct WiringPlug, which can then be matched up and wired
* but in case of implementing a virtual clip, in addition to the direct wiring...
** a relative output designation (the N^^th^^ channel of this kind) is carried over to the target scope to be re-resolved there.
** any output designation specification yields a summation pipe at the binding, i.e. a position corresponding to the global pipes when using the same sequence as timeline.
** The output of these summation pipes is treated like media channels
** but for each of those channels, an OutputDesignation is //carried over//&nbsp; into the target (virtual clip)
*** now, if a distinct output designation can be determined at this placement of the virtual clip, it will be used for the further routing
*** otherwise, we try to re-evaluate the original output designation carried over. <br/>Thus, routing will be done as if the original output designation was given at this placement of the virtual clip.
There is some flexibility in the HighLevelModel, allowing the same [[Sequence]] to be attached to multiple [[timelines|Timeline]] or even into a [[meta-clip|VirtualClip]]. Thus, while there is always a containment relation which can be used to define the current PlacementScope, we can't always establish a unique path from any given location up to the model root. In the most general case, we have to deal with a DAG, not a tree.

!solution idea
Transform the DAG into a tree by //focussing//&nbsp; on the current situation and context. Have a state containing the //current path.//  &rarr; QueryFocus

Incidentally, this problem is quite similar to file system navigation involving ''symlinks''.
* under which circumstances shall discovery follow symlinks?
* where do you get by {{{cd ..}}} &mdash; especially when you went down following a symlink?
This leads us to building our solution here to match a similar behaviour pattern, according to the principle of least surprise. That is, discovery shall follow the special [[bindings|BindingMO]] only when requested explicitly, but the current "shell" (QueryFocus) should maintain a virtual/effective path to root scope.

!!detail decisions
!!!!how to represent scoping
We use a 2-layer approach: initially, containment is implemented through a registration in the PlacementIndex. Building on that, scope is layered on top as an abstraction, which uses the registered containment relation, but takes the current access path into account at the critical points (where a [[binding|BindingMO]] comes into play).

!!!!the basic containment tree
Each Placement (with the exception of the root) gets registered as contained in yet another Placement. This registration forms a pure tree, and thus differs from the real scope nesting at the Sequence level: The scopes constituting Sequences and Timelines are registered as siblings, immediately below the root. This has some consequences
# Sequences as well as Timelines can be discovered as contents of the model root
# ScopePath diverges from the basic containment tree at the Sequence level

!!!!locating a placement
Constituting the effective logical position of a placement poses a sort of chicken-and-egg problem: we already need a logical position to start with. In practice, this is resolved by recurring on the QueryFocus, which is a stack-like state and automatically follows the access or query operations on the session. //Locating a placement//&nbsp; is done by //navigating the current query focus.//

!!!!navigating a scope path location
As the current query focus stack top always holds a ScopePath, the ''navigating operation'' on ScopePath is the key for managing this logical view onto the "current" location.
* first, try to locate the new scope in the same sequence as the current scope, resulting in a common path prefix
* in case the new scope belongs to a different sequence, this sequence might be connected to the current one as a meta-clip, again resulting in a common prefix
* otherwise use the first possible binding according to the ordering of timelines as a prefix
* use the basic containment path as a fallback if no binding exists

!!{{red{WIP 9/10}}}Deficiencies
To buy us some time, analysing and implementing the gory details of scope path navigation and meta-clips has been skipped for now.
Please note the shortcomings and logical contradictions in the solution currently in code:
* the {{{ScopePath::navigate()}}}-function was chosen as the location to implement the translation logic. //But actually this translation logic is missing.//
* so basically we're just using the raw paths of the basic containment tree; more specifically, the BindingMO (=Timeline) isn't part of the derived ScopePath
* this will result in problems even way before we implement meta-clips (because the Timeline is assumed to provide output routing information to the Placements)
* QueryFocus, with the help of ScopeLocator, exposes the query services of the PlacementIndex. So actually it's up to the client code to pick the right functions. This might get confusing
* The current design rather places the implementation according to the roles of the involved entities, which causes some ping-pong on the implementation level. Especially the ScopeLocator singleton can be accessed multiple times. This is the usual clarity vs. performance tradeoff. Scope resolution is assumed rather to be //not performance critical.//
All rendering, transformations and output of media data require using ''data buffers'' -- but the actual layout and handling of these buffers is closely related to the actual implementation of these operations. As we're relying heavily on external libraries and plug-ins for performing these operations, there is no hope of getting away with just one single {{{Buffer}}} data type definition. Thus, we need to confine ourselves to a common denominator of basic operations regarding data buffers and abstract the access to these operations through a BufferProvider entity. Beyond these basic operations, mostly we just need to assure that //a buffer exists as a distinguishable element// -- which in practice boils down to pushing around {{{void*}}} variables.

Obviously, overloading a pointer with semantic meaning isn't exactly a brilliant idea -- and the usual answer is to embed this pointer into a smart handle, which also yields the nice side-effect of explaining this design to the reader. Thus a buffer handle
* can only be obtained from a BufferProvider
* can be used to identify a buffer
* can be dereferenced
* can be copied 

!design question: buffer type information
To perform anything useful with such a buffer handle, the client code needs some additional information, which can be generalised into a //type information:// Either the client needs to know the size and kind of data to expect in the buffer, maybe just assuming to get a specific buffer with suitable dimensions, or the client needs to know which buffer provider to contact for any management operations on that buffer (handle). And, at some point there needs to be a mechanism to verify the validity of a handle. But all of this doesn't mean that it's necessary to encode or embed this information directly into the handle -- it might also be stored in a registration table (which has the downside of creating contention), or it might just be attached implicitly to the invocation context.

Just linking this type information to the context is certainly the most elegant solution, but also by far the most difficult to achieve -- not to mention the implicit dependency on a very specific invocation situation. So for now (9/2011) it seems best to stick to the simple and explicit implementation, just keeping that structural optimisation in mind. And the link to this buffer type information should be made explicit within the definition anyway, even if we choose to employ another design tradeoff later.
* thus the conclusion is: we introduce a ''descriptor object'', which will be stored within the handle
* each BufferProvider exposes a ''descriptor prototype''; it can be specialised and used to [[organise implementation details|BufferMetadata]] (a handle/descriptor sketch follows below)
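
Taken together, a buffer handle could then look roughly like the following sketch; names and layout are invented, only the combination of raw pointer plus descriptor is the point being illustrated.
{{{
// Sketch: a copyable smart handle combining the raw buffer pointer
// with a descriptor identifying provider and buffer type.
#include <cstddef>

struct BuffDescrSketch
  {
    size_t typeID = 0;   // links back to the issuing BufferProvider and buffer type
    size_t size   = 0;
  };

class BuffHandleSketch
  {
    void*           buffer_;
    BuffDescrSketch descriptor_;
    
  public:
    BuffHandleSketch (void* buff, BuffDescrSketch descr)
      : buffer_(buff), descriptor_(descr) { }
    
    void*  get()     const { return buffer_; }      // dereference to raw storage
    size_t size()    const { return descriptor_.size; }
    bool   isValid() const { return buffer_ != nullptr; }
    // handles are freely copyable; only a BufferProvider creates valid ones
  };
}}}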


!sanity checks
there are only limited sanity checks, and they can be expected to be optimised away for production builds.
Basically the client is responsible for sane buffer access.
Buffers are used to hold the media data for processing and output. Within the Lumiera RenderEngine and [[Player]] subsystem, we use some common concepts to handle the access and allocation of working buffers. Yet this doesn't imply having only one central authority in charge of every buffer -- such an approach wouldn't be possible (due to collaboration with external systems) and wouldn't be desirable either. Rather, there are some common basic usage //patterns// -- and there are some core interfaces used throughout the organisation of the rendering process.

Mostly, the //client code,// i.e. code in need of using buffers, can access some BufferProvider, thereby delegating the actual buffer management. This binds the client to adhere to a kind of //buffer access protocol,// comprised of the ''announcing'', ''locking'', optionally ''attaching'' and finally the ''releasing'' steps. Here, the actual buffer management within the provider is a question of implementation and will be configured during build-up of the scope in question.

!usage situations
;rendering
:any calculations and transformations of media data typically require an input- and output buffer. To a large extent, these operations will be performed by specialised libraries, resulting in a call to some plain-C function receiving pointers to the required working buffers. Our invocation code has the responsibility to prepare and provide those pointers, relying on a BufferProvider in turn.
;output
:almost any of the existing libraries for handling external output require the client to adhere to some specific protocol. Often, this involves some kind of callback invoked at the external library's discretion, thus forcing our engine to prepare data within an intermediary buffer. Alternatively, the output system might provide some mechanism to gain limited direct access to the output buffers, and such an access can again be exposed to our internal client code through the BufferProvider abstraction.

!primary implementations
;memory pool
:in all those situations, where we just need a working buffer for some time, we can rely on our internal custom memory allocator.
:{{red{~Not-Yet-Implemented as of 9/11}}} -- as a fallback we just rely on heap allocations through the language runtime
;frame cache
:whenever a calculated result may be of further interest, beyond the immediate need triggering the calculation, it might be eligible for caching.
:The Lumiera ''frame cache'' is a special BufferProvider, maintaining a larger pool of buffers which can be pinned and kept around for some time,
:accommodating limited resources and current demand for fresh result buffers.

the generic BufferProvider implementation exposes a service to attach and maintain additional metadata with individual buffers. Using this service is not mandatory -- a concrete buffer provider implementation may choose to maintain more specific metadata right on the implementation level, especially if more elaborate management is necessary within the implementation anyway (e.g. the frame index). We can expect most buffer provider implementations to utilise at least the generic buffer type id service though.

!buffer types and descriptors
Client code accesses buffers through [[smart buffer handles|BuffHandle]], including some kind of buffer type information, encoded into a type ID within the ''buffer descriptor''. These descriptors are used like prototypes, relating the type-~IDs hierarchically. Obviously, the most fundamental distinction is the BufferProvider in charge of that specific buffer. Below that, the next mandatory level of distinction is the ''buffer size''. In some cases, additional distinctions can be necessary. Each BufferProvider exposes a service to yield unique type ~IDs governed by such a hierarchical scheme.
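
A toy illustration of such a prototype-style chaining of type-~IDs; the hashing scheme and function name are assumptions, not the actual BufferMetadata service.
{{{
// Toy illustration: type-IDs chained hierarchically provider -> size -> further
// specialisation, each level derived from its parent key.
#include <cstddef>
#include <functional>

inline size_t
chainTypeID (size_t parentKey, size_t distinction)
  {
    // combine parent key and the new distinction into a derived key
    return std::hash<size_t>{} (parentKey) ^ (std::hash<size_t>{} (distinction) << 1);
  }

// usage sketch:
//   size_t providerKey = chainTypeID (0, providerInstanceID);
//   size_t sizeKey     = chainTypeID (providerKey, bufferSize);
//   size_t specialKey  = chainTypeID (sizeKey, furtherDistinction);
}}}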

!state and metadata for individual buffers
Beyond that, it can be necessary to associate at least a state flag with //individual buffers.// Doing so requires the buffer to be in //locked state,// otherwise it wouldn't be distinguishable as a separate entity (a client is able to access the buffer memory address only after "locking" this buffer). Especially when using a buffer provider in conjunction with an OutputSlot, these states and transitions are crucial for performing an orderly handover of generated data from the producer (render engine) to the consumer (external output sink).

__Note__: while the API to access this service is uniform, conceptually there is a difference between just using the (shared) type information and associating individual metadata, like the buffer state. Type-~IDs, once allocated, will never be discarded (within the lifetime of a Lumiera application instance -- buffer associations aren't persistent). By contrast, individual metadata //will be discarded// when releasing the corresponding buffer. According to the ''prototype pattern'', individual metadata is treated as a one-way-off specialisation.
It turns out that -- throughout the render engine implementation -- we never need direct access to the buffers holding actual media data. Buffers are just some entity to be //managed,// i.e. "allocated", "locked" and "released"; the //actual meaning of these operations can be left to the implementation.// The code within the render engine just pushes around ''smart-ptr like handles''. These [[buffer handles|BuffHandle]] act as a front-end, being created by and linked to a buffer provider implementation. There is no need to manage the lifecycle of buffers automatically, because the use of buffers is embedded into the render calculation cycle, which follows a rather strict protocol anyway. Relying on the [[capabilities of the scheduler|SchedulerRequirements]], the sequence of individual jobs in the engine ensures...
* that the availability of a buffer was ensured prior to planning a job ("buffer allocation")
* that a buffer handle was obtained ("locked") prior to any operation requiring a buffer
* that buffers are marked as free ("released") after doing the actual calculations.

!operations
While BufferProvider is an interface meant to be backed by various different kinds of buffer and memory management approaches, there is a common set of operations to be supported by any of them; a client-side usage sketch follows below the list.
;announcing
:client code may announce beforehand that it expects to get a certain number of buffers. Usually this causes some allocations to happen right away, or it might trigger similar mechanisms to ensure availability; the BufferProvider will then return the actual number of buffers guaranteed to be available. This announcing step is optional and can happen any time before or even after using the buffers, and it can be repeated with different values to adjust to changing requirements. Thus the announced number of buffers always denotes //additional buffers,// on top of what is actively used at the moment. This safety margin of available buffers is usually accounted separately for each distinct kind of buffer (buffer type). There is no tracking as to which specific client requested buffers, beyond the buffer type.
;locking
:this operation actually makes a buffer available for a specific client and returns a [[buffer handle|BuffHandle]]. The corresponding buffer is marked as used and can't be locked again unless released. If necessary, at that point the BufferProvider might allocate memory to accommodate the request (especially when the buffers weren't announced beforehand). The locking may fail and raise an exception. You may expect failure to be unlikely when buffers have been //announced beforehand.// To support additional sanity checks, the client may provide a token-ID with the lock-operation. This token may be retrieved later and it may be used to ensure the buffer is actually locked for //this token.//
;attaching
:optionally the client may attach an object to a locked buffer. This object is placement-constructed into the buffer and will be destroyed automatically when releasing the buffer. Alternatively, the client may provide a pair of constructor- / destructor-functors, to be invoked in a similar way. This allows e.g. to install descriptor structures within the buffer, as required by an external library.
;releasing
:buffers need to be released explicitly by the client code. This renders the corresponding BuffHandle invalid, (optionally) invokes a destructor function of an attached object and maybe reclaims the buffer memory
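
Seen from the client side, one complete cycle through this protocol could look roughly as follows. The API names are invented stand-ins; the real BufferProvider interface differs in detail, and the attaching step is omitted here.
{{{
// Sketch of the client-side buffer access protocol:
// announce -> lock -> use -> release.
#include <cstring>

struct BuffHandleS { void* buff; };            // stand-in for BuffHandle

struct BufferProviderS                         // stand-in for BufferProvider
  {
    virtual unsigned    announce (unsigned cntBuffers, size_t buffSize)  =0;
    virtual BuffHandleS lockBuffer (size_t buffSize)                     =0;
    virtual void        releaseBuffer (BuffHandleS)                      =0;
    virtual ~BufferProviderS() = default;
  };

inline void
renderStep (BufferProviderS& provider)
  {
    provider.announce (2, 4096);               // hint: we'll need two 4k buffers
    
    BuffHandleS in  = provider.lockBuffer (4096);
    BuffHandleS out = provider.lockBuffer (4096);
    
    std::memcpy (out.buff, in.buff, 4096);     // placeholder for the actual calculation
    
    provider.releaseBuffer (in);               // input no longer needed
    provider.releaseBuffer (out);              // in practice, output is committed downstream
  }
}}}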

!!type metadata service
In addition to the basic operations, clients may associate BufferMetadata with individual buffers;
in the basic form, this means just maintaining a type tag describing the kind of buffer, while optionally this service might be extended to e.g. associating a state flag.

__see also__
&rarr; OutputSlot relying on a buffer provider to deal with frame output buffers
&rarr; more about BufferManagement within the RenderEngine and [[Player]] subsystem
&rarr; RenderMechanics for details on the buffer management within the node invocation for a single render step
The invocation of individual [[render nodes|ProcNode]] uses a ''buffer table'', an internal helper data structure to encapsulate technical details of the allocation, use, re-use and freeing of data buffers for the media calculations. Here, the management of the physical data buffers is delegated through a BufferProvider, which typically is implemented relying on the ''frame cache'' in the backend. Yet some partially quite involved technical details need to be settled for each invocation: We need input buffers, maybe provided as external input, while in other cases to be filled by a recursive call. We need storage to prepare the (possibly automated) parameters, and finally we need a set of output buffers. All of these buffers and parameters need to be rearranged for invoking the (external) processing function, followed by releasing the input buffers and committing the output buffers to be used as result.

Because there are several flavours of node wiring, the building blocks comprising such a node invocation will be combined depending on the circumstances. Performing all these various steps is indeed the core concern of the render node -- with the help of BufferTable to deal with the repetitive, tedious and technical details.

!requirements
The layout of the buffer table will be planned beforehand for each invocation, alongside planning the individual invocation jobs for the scheduler. At that point, a generic JobTicket for the whole timeline segment is available, describing the necessary operations in an abstract way, as determined by the preceding planning phase. Jobs are prepared chunk-wise, some time in advance (but not all jobs at once). Jobs will be executed concurrently. Thus, buffer tables need to be created repeatedly and placed into a memory block accessed and owned exclusively by the individual job; a data layout sketch follows below the list.
* within the buffer table, we need a working area for the output handles, the input handles and the parameter descriptors
* actually, these can be seen as pools holding handle objects which might even be re-used, especially for a chain of effects calculated in-place.
* each of these pools is characterised by a common //buffer type,// represented as buffer descriptor
* we need some way to integrate with the StateProxy, because some of the buffers need to be marked especially, e.g. as result
* there should be convenience functions to release all pending buffers, forwarding the release operation to the individual handles
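
A data layout sketch matching these requirements; purely illustrative, with invented member names, and ignoring the job-private memory placement.
{{{
// Sketch: per-invocation buffer table, grouping handles into pools per buffer type.
#include <vector>
#include <cstddef>

struct BuffHandleT { void* buff = nullptr; };   // stand-in for BuffHandle
struct BuffDescrT  { size_t typeID = 0; };      // stand-in for the buffer descriptor

struct BufferTableSketch
  {
    // pools of handles, one pool per buffer type ("descriptor")
    BuffDescrT                outType, inType, paramType;
    std::vector<BuffHandleT>  outHandles;
    std::vector<BuffHandleT>  inHandles;
    std::vector<BuffHandleT>  paramHandles;
    
    // marking of special buffers, e.g. the final result of this invocation
    std::vector<size_t>       resultSlots;
    
    void
    releaseAll()                                // convenience: drop all pending handles
      {
        outHandles.clear();
        inHandles.clear();
        paramHandles.clear();                   // a real implementation would notify the BufferProvider
      }
  };
}}}
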
//Building the fixture is actually at the core of the [[builder's operation|Builder]]//
{{red{WIP as of 11/10}}} &rarr; see also the [[planning page|PlanningBuildFixture]]

;Resolving the DAG[>img[Steps towards creating a Segmentation|draw/SegmentationSteps1.png]]
Because of the possibility of binding a Sequence multiple times, and maybe even nested as virtual clip, the [[high-level model|HighLevelModel]] actually constitutes a DAG, not a tree. This leads to quite some tricky problems, which we try to resolve by //rectifying the DAG into N virtual trees.// (&rarr; BindingScopeProblem)

Relying on this transformation, each Timeline spans a sub-tree virtually separated from all other timelines; the BuildProcess is driven by [[visiting|VisitorUse]] all the //tangible// objects within this subtree. In the example shown to the right, Sequence-β is both bound as VirtualClip into Sequence-α, as well as bound independently as top-level sequence into Timeline-2. Thus it will be visited twice, but the QueryFocus mechanism ensures that each visitation »sees« the proper context.

;Explicit Placements
Each tangible object placement (relevant for rendering), which is encountered during that visitation, gets //resolved// into an [[explicit placement|ExplicitPlacement]]. If we see [[Placement]] as a positioning within a multi-dimensional configuration space, then the resolution into an explicit placement is like the creation of an ''orthogonal base'': Within the explicit placement, each LocatingPin corresponds exactly to one degree of freedom and can be considered independently from all other locating pins. This resolution step removes any fancy dynamic behaviour and all scoping and indirect references. Indeed, an explicit placement is a mere //value object;// it isn't part of the session core (PlacementIndex), isn't typed and can't be referred to indirectly.

;Segmentation of Time axis
This simple and explicit positioning thus allows to arrange all objects as time intervals on a single axis. Any change and especially any overlap is likely to create a different wiring configuration. Thus, for each such configuration change, we fork off a new //segment// and //copy over// all partially touched placements. The resulting seamless sequence of non-overlapping time intervals provides the backbone of the datastructure called [[Fixture]].

;Building the Network
From this backbone, the actual [[building mechanism|BuilderMechanics]] proceeds as an ongoing visitation and resolution, resulting in the growth of a network of [[render nodes|ProcNode]] starting out from the source reading nodes and proceeding up through the local pipes, the transitions and the global pipes. When this build process is exhausted, besides the actual network, the result is a //residuum of nodes not connected any further.// Any of these [[exit nodes|ExitNode]] can be associated to a ~Pipe-ID in the high-level model. Within each segment, there should be one exit node per pipe-ID at max. These are the [[model ports|ModelPort]] resulting from the build process, keyed by their corresponding ~Pipe-ID.
&rarr; see [[Structure of the Fixture|Fixture]]
All decisions on //how // the RenderProcess has to be carried out are concentrated in this rather complicated Builder Subsystem. The benefit of this approach is, besides decoupling of subsystems, to keep the actual performance-intensive video processing code as simple and transparent as possible. The price, in terms of increased complexity &mdash; to pay in the Builder &mdash; can be handled by making the Build Process generic to a large degree. Using a Design By Contract approach we can decompose the various decisions into small decision modules without having to trace the actual workings of the Build Process as a whole.

[>img[Outline of the Build Process|uml/fig129413.png]]
The building itself will be broken down into several small tool application steps. Each of these steps has to be mapped to the MObjects found on the [[Timeline]]. Remember: the idea is that the so called "[[Fixture]]" contains only [[ExplicitPlacement]]s which in turn link to MObjects like Clips, Effects and [[Automation]]. So it is sufficient to traverse this list and map the build tools to the elements. Each of these build tools has its own state, which serves to build up the resulting Render Engine. So far I see two steps to be necessary:
* find the "Segments", i.e. the locations where the overall configuration changes
* for each segment: generate a ProcNode for each found MObject and wire them accordingly
Note, //we still have to work out how exactly building, rendering and playback work// together with the backend-design. The build process as such doesn't overly depend on these decisions. It is easy to reconfigure this process. For example, it would be possible as well to build for each frame separately (as Cinelerra2 does), or to build one segment covering the whole timeline (and handle everything via [[Automation]]).
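
To make the first step a bit more concrete, a toy sketch of the segmentation idea: collect all start and end points of the explicit placements and cut the time axis at every point where the set of active objects may change. This is a simplification (adjacent segments with identical configuration would be merged in practice), and all names are invented; the actual SegmentationTool operates on the Fixture.
{{{
// Sketch: derive non-overlapping segments from a list of time intervals.
#include <set>
#include <vector>
#include <utility>

using TimeT    = long;                                 // stand-in for Lumiera's Time
using Interval = std::pair<TimeT,TimeT>;               // [start,end) of an explicit placement

inline std::vector<Interval>
segmentTimeline (std::vector<Interval> const& placements)
  {
    std::set<TimeT> cuts;                              // every start or end is a potential change point
    for (auto const& iv : placements)
      {
        cuts.insert (iv.first);
        cuts.insert (iv.second);
      }
    std::vector<Interval> segments;
    TimeT prev = 0;
    bool  first = true;
    for (TimeT t : cuts)
      {
        if (!first) segments.emplace_back (prev, t);   // one segment per span between change points
        prev = t;
        first = false;
      }
    return segments;
  }
}}}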

&rarr;see also: [[Builder Overview|Builder]]
&rarr;see also: BasicBuildingOperations
&rarr;see also: BuilderStructures
&rarr;see also: BuilderMechanics
&rarr;see also: PlanningBuildFixture
&rarr;see also: PlanningSegementationTool
&rarr;see also: PlanningNodeCreatorTool

[img[Collaborations in the Build Process|uml/fig128517.png]]
Actually setting up and wiring a [[processing node|ProcNode]] involves several issues and is carried out at the lowest level of the build process.
It is closely related to &rarr; [[the way nodes are operated|NodeOperationProtocol]] and the &rarr; [[mechanics of the render process|RenderMechanics]]

!!!object creation
The Nodes are small polymorphic objects, carrying configuration data, but no state. They are [[specially allocated|ManagementRenderNodes]], and the object creation is accessible by means of the NodeFactory solely. They //must not be deallocated manually.// The decision of what concrete node type to create depends on the actual build situation and is worked out by the combination of [[mould|BuilderMould]] and [[processing pattern|ProcPatt]] at the current OperationPoint, issuing a call to one of NodeFactory's {{{operator()}}}

!!!node, plugin and processing function
It's a good idea to distinguish clearly between those concepts. A plugin is a piece of (possibly external) code we use to carry out operations. We have to //discover its properties and capabilities.// We don't have to discover anything regarding nodes, because we (Lumiera builder and renderengine) are creating, configuring and wiring them to fit the specific purpose. Both are to be distinguished from processing functions, which do the actual calculations on the media data. Every node typically encompasses at least one processing function, which may be an internal function in the node object, a library function from Lumiera or GAVL, or external code loaded from a plugin.

!!!node interfaces
As a consequence of these distinctions, in conjunction with a processing node, we have to deal with three different interfaces
* the __build interface__ is used by the builder to set up and wire the nodes. It can be full blown C++ (including templates)
* the __operation interface__ is used to run the calculations, which happens in cooperation of Proc-Layer and Backend. So a function-style interface is preferable.
* the __inward interface__ is accessed by the processing function in the course of the calculations to get at the necessary context, including in/out buffers and param values.

!!!wiring data connections
A node //knows its predecessors, but not its successors.// When being //pulled//&nbsp; in operation, it can expect to get a frame provider for accessing the in/out buffer locations (some processing functions may be "in-place capable", but that's only a special case of the former). At this point, the ''pull principle'' comes into play: the node may request input frames from the frame provider, passing its predecessors as a ''continuation''.
With regard to the build process, the wiring of data connections translates into providing the node with its predecessors and preconfiguring the possible continuations. While in the common case a node has just one input/output and pulls from its predecessor a frame for the same timeline position, the general case can be more contrived. A node may process N buffers in parallel and may require several different time positions for its input, even at a differing framerate. So the actual source specification is (predNode,time,frameType). The objective of the wiring done in the build process is to factor out the parts known in advance, while in the render process only the variable part needs to be filled in. Or to put it differently: wiring builds a higher-order function (time)->(continuation), where the continuation can be invoked to get the desired input frame.
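
The gist of »wiring builds a higher-order function (time)->(continuation)« can be sketched as follows; this is illustrative only, the actual node invocation and WiringDescriptor differ considerably.
{{{
// Sketch: the builder fixes predecessor and frame relation in advance;
// at render time only the frame time is supplied, yielding a continuation
// which can be invoked to obtain the desired input frame.
#include <functional>

using TimeP        = long;                        // stand-in for a frame time position
using Frame        = void*;                       // stand-in for a data frame
using Continuation = std::function<Frame()>;

struct NodeS                                      // stand-in for ProcNode
  {
    virtual Frame pull (TimeP t)  =0;             // recursively pull this node for time t
    virtual ~NodeS() = default;
  };

// performed once by the builder: bind predecessor and (fixed) time offset
inline std::function<Continuation(TimeP)>
wireInput (NodeS& predecessor, TimeP timeOffset)
  {
    return [&predecessor, timeOffset] (TimeP t) -> Continuation
           {
             return [&predecessor, t, timeOffset] { return predecessor.pull (t + timeOffset); };
           };
  }
}}}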

!!!wiring control connections
In many cases, the parameter values provided by these connections aren't frame-based data; rather, the processing function needs a call interface to get the current value (value for a given time), which is provided by the parameter object. Here, the wiring needs to link to the suitable parameter instance, which is located within the high-level model (!). As an additional complication, calculating the actual parameter value may require a context data frame (typically for caching purposes to speed up the interpolation). While these parameter context data frames are completely opaque for the render node, they have to be passed in and out similar to the state needed by the node itself, and the wiring has to prepare for accessing these frames too.
The Builder takes some MObject/[[Placement]] information (called Timeline) and generates out of this a Render Engine configuration able to render these Objects. It makes all decisions and retrieves the current configuration of all objects and plugins, so the Render Engine can just process them straight forward.

The Builder is the central part of the [[Builder Pattern|http://en.wikipedia.org/wiki/Builder_pattern]]
<br/>
As the builder [[has to create a render node network|BuilderModelRelation]] implementing most of the features and wiring possible with the various MObject kinds and placement types, it is a rather complicated piece of software. In order to keep it manageable, it is broken down into several specialized sub components:
* clients access builder functionality via the BuilderFacade
* the [[Proc-Layer-Controller|Controller]] initiates the BuildProcess and does the overall coordination of scheduling edit operations, rebuilding the fixture and triggering the Builder
* to carry out the building, we use several primary tools (SegmentationTool, NodeCreatorTool,...),  together with a BuilderToolKit to be supplied by the [[tool factory|BuilderToolFactory]]
* //operating the Builder// can be viewed from two different angles, either emphasizing the [[basic building operations|BasicBuildingOperations]] employed to assemble the render node network, or focussing rather on the [[mechanics|BuilderMechanics]] of cooperating parts while processing.
* besides, we can identify a small set of elementary situations we call [[builder primitives|BuilderPrimitives]], to be covered by the mentioned BuilderToolKit; by virtue of [[processing patterns|ProcPatt]] they form an [[interface to the rule based configuration|BuilderRulesInterface]].
* the actual building (i.e. the application of tools to the timeline) is done by the [[Assembler|BuilderAssembler]], which is basically a collection of functions (but has a small amount of global configuration state)
* any non-trivial wiring of render nodes, tracks, pipes and [[automation|Automation]] is done by the services of the [[connection manager|ConManager]]
The cooperation of several components creates a context of operation for the primary builder working tool, the [[node creator|PlanningNodeCreatorTool]]:
* the BuilderToolFactory acts as the "builder for the builder tools", i.e. we can assume to be able to retrive all needed primary tools and elementary tools from this factory, completely configured and ready to use.
* the [[Assembler|BuilderAssembler]] has the ability to consume objects from the high level model and feed them to the node creator (which translates into a dispatch of individual operations suited to the objects to be treated). This involves some sort of scheduling or ordering of the operaions, which is the only means to direct the overall process such as to create a sensible and usable result. //This is an fundamental design decision:// the actual working tools have no hard wired knowledge of the "right process", which makes the whole Builder highly configurable ("open").
* the [[connection manager|ConManager]] on the contrary is a passive service provider. Fed with [[wiring requests|WiringRequest]], it can determine if a desired connection is possible, and what steps to take to implement it; the latter recursively creates further building requests to be satisfied by the assembler, and possibly new wiring requests.

!!pattern of operation
The working pattern of this builder mechanics can be described as triggering, enqueuing, prioritising, recursing and exhausting. Without the prioritising part, it would be a depth-first call graph without any context state, forcing us to have all cross-reference information available at every node or element to be treated. We prefer to avoid this overhead by ordering the operations into several phases, and within these phases into correlated entities, with the help of a ''weighting function'' and scheduling with a ''priority queue''.
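For illustration, here is a minimal, self-contained sketch of this enqueue/prioritise/exhaust cycle (hypothetical types, not actual Builder code), using a weighting function over phase and correlation together with a standard priority queue:
{{{
// Hypothetical sketch of the enqueue -- prioritise -- exhaust cycle; not actual Builder code.
#include <queue>
#include <vector>
#include <iostream>
#include <functional>

struct BuildOperation {
    int phase;                                  // building proceeds in several phases...
    int correlation;                            // ...and within a phase, correlated entities stay together
    std::function<void()> perform;
};

// the "weighting function": lower weight means earlier execution
inline long weight (BuildOperation const& op) { return op.phase * 1000L + op.correlation; }

struct ByWeight {                               // ordering for the priority queue
    bool operator() (BuildOperation const& a, BuildOperation const& b) const
        { return weight(a) > weight(b); }       // inverted, so the smallest weight is popped first
};

int main()
{
    std::priority_queue<BuildOperation, std::vector<BuildOperation>, ByWeight> queue;
    queue.push ({2, 5, []{ std::cout << "wire pipe connection\n"; }});   // triggering & enqueuing
    queue.push ({1, 1, []{ std::cout << "create source chain\n";  }});
    queue.push ({1, 2, []{ std::cout << "attach effect\n";        }});
    while (!queue.empty())                      // ...processing until the queue is exhausted
    {
        queue.top().perform();                  // in the real Builder, this step may enqueue
        queue.pop();                            //  further (recursive) wiring requests
    }
}
}}}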

!!call chain
After preparing the tools with the context state of this build process, the assembler drives the visitation process in the right order. The functions embedded within the visitor (NodeCreatorTool) for treating specific kinds of objects in turn use the toolkit (=the fully configured tool factory) to get the mould(s) for the individual steps they need to carry out. This involves preparing the mould (with the high-level object currently in-the-works, a suitable processing pattern and additional references), followed by operating the mould. The latter "plays" the processing pattern in the context of the mould, which, especially with the help of the operation point, carries out the actual building and/or connecting step. While doing so, the node factory will be invoked, which in turn invokes the wiring factory and thus pre-determines the node's prospective mode of operation when later called for rendering.
[>img[Builder creating the Model|uml/fig132868.png]]
The [[Builder]] uses different kinds of tools for creating a network of render nodes from a given high-level model. When breaking down this (necessarily complex) process into small manageable chunks, we arrive at [[elementary building situations|BuilderPrimitives]]. For each of these there is a specialized tool. We denote these tools as "moulds" because they are a rather passive holder for the objects to be attached and wired up. They are shaped according to the basic form the connections have to follow for each of these basic situations:
* attaching an effect to a pipe
* combining pipes via a transition
* starting out a pipe from a source reader
* general connections from the exit node of a pipe to the port of another pipe
In all those cases, the active part is provided by [[processing patterns|ProcPatt]] &mdash; sort of micro programs executed within the context of a given mould: the processing pattern defines the steps to take (in the standard/basic case this is just "attach"), while the mould holds and provides the location where these steps will operate. Actually, this location is represented as an OperationPoint, provided by the mould and abstracting the details of making multi-channel connections.
While assembling and building up the render engines node network, a small number of primitive building situations is encountered repeatedly. The BuilderToolKit provides a "[[mould|BuilderMould]]" for each of these situations, typically involving parametrisation and the application of a [[processing pattern|ProcPatt]].

The ''Lifecycle'' of such a mould starts out by arming it with the object references involved into the next building step. After conducting this building step, the resulting render nodes can be found &mdash; depending on the situation &mdash; attached either to the same mould, or to another kind of mould, but in any case ready to be included in the next building step. Thus, //effectively//&nbsp; the moulds are //used to handle the nodes being built,// due to the fact that the low-level model (nodes to be built) and the high-level model (objects directing what is to be built) are //never connected directly.//

!List of elementary building situations
!!!inserting an Effect or Plugin
[>img[draw/builder-primitives1.png]]
The __~PipeMould__ is used to chain up the effects attached to a clip (=local pipe) or a global pipe (=bus).
* participating: a Pipe and an Effect
* point of reference: current exit node of the pipe
* result: Effect appended at the pipe's exit node
* returns: ~PipeMould holding onto the new exit node

@@clear(right):display(block):@@


!!!attaching a transition 
[>img[draw/builder-primitives2.png]]
After having completed N pipes' node chains, a __~CombiningMould__ can be used to join them into a [[transition|TransitionsHandling]]
* participating: the N pipes' exit nodes, transition
* point of reference: N exit nodes corresponding to (completed) pipes
* result: transition has been attached to the pipes' exit nodes, new wiring requests created and attached to the transition's exit node(s)
* returns: ~WiringMould, connected with the created wiring request
Using this mould implicitly "closes" the involved pipes, which means that we give up any reference to the exit node and can't build any further effects attached to these pipes. Generally speaking, an "exit node" isn't a special kind of node; rather, it's a node we are currently holding on to. Similarly, there is nothing directly corresponding to a pipe within the render node network after we are done with building the part of the network corresponding to that pipe; the pipe rather serves as a blueprint for building, but isn't an entity in the resulting low-level model.
Actually, a more general (and complicated) kind of transition is {{red{planned}}}, which can be inserted into N data connections without joining them together into one single output, as the standard transitions do. The ~CombiningMould can handle this case too by just returning N wiring moulds as a result.

@@clear(right):display(block):@@

!!!building a source connection
[>img[draw/builder-primitives3.png]]
The __~SourceChainMould__ is used as a starting point for any further building, as it results in a local pipe (=clip) rooted at the clip source port. This reflects the fact that the source readers (=media access points) are the //leaf nodes// in the node graph we are about to build.
* participating: source port of a clip, media access point, [[processing pattern|ProcPatt]]
* point of reference: //none//
* result: processing pattern has been //executed//, resulting in a chain of nodes from the source reader to the clip source port
* returns: ~PipeMould holding onto the new exit node (of a yet-empty pipe)

@@clear(right):display(block):@@

!!!wiring a general connection
Any wiring (outside the chain of effects within a pipe) is always done from exit nodes to the port of another pipe, requiring a [[wiring request|WiringRequest]] already checked and deemed resolvable. Within the __~WiringMould__ the actual wiring is conducted, possibly adding a summation node (called an "overlayer" in the case of video) and typically a fader element (the specific setup to be used is subject to configuration by processing patterns)
* participating: already verified connection request, providing a Pipe and an exit node; a processing pattern and a Placement
* points of reference: exit node and (optionally) starting point of a pipe's chain (in case there are already other connections)
* result: summation node prepended to the port of the pipe, processing pattern has been //executed// for building the connection from the exit node to the pipe's port, ParamProvider has been set up in [[accordance|PlacementDerivedDimension]] with the Placement.
* returns: ~PipeMould holding onto the destination pipe's exit node, ~WiringMould holding onto the port side of the same pipe, i.e. the destination where further connections will insert summation nodes. {{red{TODO how to handle the //empty//-case?}}}
[>img[draw/builder-primitives4.png]]

@@clear(right):display(block):@@
* the MObjects implement //Buildable//
* each Buildable can "receive" a Tool object and apply it
* the different Tool objects are iterated/mapped onto the list of MObjects in the [[Timeline]]
* __Rationale__
** the MObject class hierarchy is rather fixed (it is unlikely that we will be adding many new MObject subclasses)
** so this design makes it easy to add new Tool subclasses, and within each Tool subclass, all operations on the different MObject classes are grouped together, so it is easy to see what is going on.
** a given Tool instance can carry state while being iterated, so we don't need any global (or object-global) variables to hold the result of the build process

This programming technique is often referred to as [["double dispatch" or "visitor"|VisitorUse]]. We use a specialized library implementation of this pattern &mdash; heavily inspired by the [[Loki library|http://loki-lib.sourceforge.net/]]. We use this approach not only for the builder, but also for carrying out operations on the objects in the session in a typesafe manner. 
It is the low level foundation of the actual [[building operations|BasicBuildingOperations]] necessary to create render nodes starting from the given high level model. 
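To make the mechanism tangible, here is a stripped-down sketch of this double dispatch (hypothetical class names; the actual library implementation is generic and far more elaborate):
{{{
// Simplified sketch of the visitor / double-dispatch scheme; the real library is far more generic.
#include <iostream>
#include <vector>
#include <memory>

class Clip; class Effect;

class Tool {                                    // a Tool groups the operations for all MObject kinds
public:
    virtual ~Tool() = default;
    virtual void treat (Clip&)   = 0;
    virtual void treat (Effect&) = 0;
};

class Buildable {                               // every MObject is "Buildable"
public:
    virtual ~Buildable() = default;
    virtual void apply (Tool&) = 0;             // first dispatch: on the concrete MObject type
};

class Clip   : public Buildable { public: void apply (Tool& t) override { t.treat (*this); } };
class Effect : public Buildable { public: void apply (Tool& t) override { t.treat (*this); } };

class NodeCreatorTool : public Tool {           // second dispatch: the matching treat() overload
    int nodeCount_ = 0;                         // tools may carry state while being iterated
public:
    void treat (Clip&)   override { ++nodeCount_; std::cout << "build source chain for clip\n"; }
    void treat (Effect&) override { ++nodeCount_; std::cout << "append effect node\n"; }
    int nodesBuilt() const { return nodeCount_; }
};

int main()
{
    std::vector<std::unique_ptr<Buildable>> timeline;
    timeline.emplace_back (std::make_unique<Clip>());
    timeline.emplace_back (std::make_unique<Effect>());

    NodeCreatorTool creator;                    // map the tool onto all objects in the timeline
    for (auto& mobject : timeline)
        mobject->apply (creator);
    std::cout << creator.nodesBuilt() << " nodes built\n";
}
}}}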
[img[Entities cooperating in the Builder|uml/fig129285.png]]

!Collaborations

While building, the application of such a visiting tool (especially the [[NodeCreatorTool|PlanningNodeCreatorTool]]) is embedded into an execution context formed by the BuilderToolFactory providing our BuilderToolKit, the [[Assembler|BuilderAssembler]] and the [[connection manager|ConManager]]. The collaboration of these parts can be seen as the [[mechanics of the builder|BuilderMechanics]] &mdash; sort of the //outward view,// contrary to the //inward aspects// visible when focussing on how the nodes are put together.
    
[img[Colaborations in the Build Process|uml/fig128517.png]]

Besides the primary working tool within the builder (namely the [[Node Creator Tool|PlanningNodeCreatorTool]]), on a lower level, we encounter several [[elementary building situations|BuilderPrimitives]] &mdash; and for each of these elementary situations we can retrieve a suitable "fitting tool" or [[mould|BuilderMould]]. The palette of these moulds is called the ''tool kit'' of the builder. It is subject to configuration by rules.


!!addressing a mould
All mould instances are owned and managed by the [[tool factory|BuilderToolFactory]], and can be referred to by their type (PipeMould, CombiningMould, SourceChainMould, WiringMould) and a concrete object instance (of suitable type). The returned mould (instance) acts as a handle to stick together the given object instance (from the high-level model) with the corresponding point in the low-level node network under construction. As a consequence of this approach, the tool factory instance holds a snapshot of the current building state, including all the active spots in the build process. As the latter is driven by objects from the high-level model appearing (in a sensible order &rarr; see BuilderMechanics) within the NodeCreatorTool, new moulds will be created and fitted as necessary, and existing moulds will be exhausted when finished, until the render node network is complete.

!!configuring a mould
As each mould kind is different, it has a {{{prepare(...)}}} function with suitably typed parameters. The rest is intended to be self-configuring (for example, a ~CombiningMould will detect the actual kind of Transition and select the internal mode of operation), so that it's sufficient to just call {{{operate()}}}.

!!sequence of operations
When {{{operate()}}} doesn't throw, the result is a list of //successor moulds// &mdash; you shouldn't use the original mould after triggering its operation, because it may have been retracted as a result and reused for another purpose by the tool factory. It is not necessary to store these resulting moulds either (as they can be retrieved as described above), but they can be used right away for the next building step if applicable. In the state they are returned from a successful building step (mould operation = execution of a contained [[processing pattern|ProcPatt]]), they are usually already holding a reference to the part of the network just created and need to be configured only with the next high-level object (effect, placement, pipe, processing pattern or similar, depending on the concrete situation) in order to carry out the next step.
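As a summary of this lifecycle, a deliberately simplified sketch (hypothetical classes, not the actual Builder code) of the prepare-operate-continue cycle:
{{{
// Hypothetical, highly simplified sketch of the mould lifecycle; not the actual Builder classes.
#include <string>
#include <vector>
#include <iostream>
#include <stdexcept>

struct Effect   { std::string name; };
struct ProcPatt { std::string steps; };          // "micro program", here just a label

class Mould {
    bool armed_ = false;
    std::string exitNode_ = "source";            // the point in the low-level network we hold onto
public:
    void prepare (Effect const& e, ProcPatt const&)   // arm with the objects for the next step
    {
        exitNode_ = e.name;                      // (real moulds hold references into both models)
        armed_ = true;
    }
    std::vector<Mould> operate()                 // conduct the step, yield successor mould(s)
    {
        if (!armed_) throw std::logic_error ("mould not prepared");
        std::cout << "attach node, new exit point: " << exitNode_ << "\n";
        Mould successor;
        successor.exitNode_ = exitNode_;         // successor holds onto the node just created
        return { successor };
    }
};

int main()
{
    Mould pipeMould;                             // in reality obtained from the BuilderToolFactory
    pipeMould.prepare (Effect{"blur"}, ProcPatt{"attach"});
    std::vector<Mould> next = pipeMould.operate();   // don't reuse pipeMould afterwards
    next[0].prepare (Effect{"crop"}, ProcPatt{"attach"});
    next[0].operate();
}
}}}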

!!single connection step
at the lowest level within the builder there is the step of building a //connection.// This step is executed by the processing pattern with the help of the mould. Actually, making such a connection is more complicated, because in the standard case it will connect N media streams simultaneously (N=2 for stereo sound or 3D video, N=6 for 5.1 Surround, N=9 for 2nd order Ambisonics). These details are encapsulated within the OperationPoint, which is provided by the mould and exhibits a common interface for the processing pattern to express the connecting operation.
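A small sketch (hypothetical names) of the idea behind the OperationPoint: the processing pattern expresses a single connecting step, while the operation point fans it out over the N channels behind the scenes:
{{{
// Hypothetical sketch: the OperationPoint hides the N-fold (multi-channel) nature of a connection.
#include <vector>
#include <utility>
#include <iostream>

struct NodeRef { int id; };

class OperationPoint {                          // provided by the mould to the processing pattern
    std::vector<NodeRef> channels_;             // N nodes, one per media channel (e.g. N=6 for 5.1)
public:
    explicit OperationPoint (std::vector<NodeRef> chans) : channels_(std::move(chans)) { }

    void attach (char const* nodeType)          // one step of the processing pattern ("attach")
    {
        for (NodeRef& ch : channels_)           // fans out to every channel behind the scenes
            std::cout << "connect " << nodeType << " after node " << ch.id << "\n";
    }
};

int main()
{
    std::vector<NodeRef> sixChannels { {1},{2},{3},{4},{5},{6} };   // e.g. a 5.1 surround connection
    OperationPoint point (sixChannels);
    point.attach ("fader");                     // the pattern just says "attach", once
}
}}}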

&rarr;see also: BuilderPrimitives for the elementary working situations corresponding to each of these [[builder moulds|BuilderMould]]
''Bus-~MObjects'' create a scope and act as attachment point for building up [[global pipes|GlobalPipe]] within each timeline. While a [[Sequence]] is a frontend -- actually implemented by attaching a root-[[Track]] object -- for //each global pipe// a BusMO is attached as a child scope of the [[binding object|BindingMO]], which in turn actually implements either a timeline or a [[meta-clip|VirtualClip]].
* each global pipe corresponds to a bus object, which thus refers to the respective ~Pipe-ID
* bus objects may be nested, forming a //subgroup//
* the placement of a bus holds a WiringClaim, denoting that this bus //claims to be the corresponding pipe.//
* by default, a timeline is outfitted with one video and one sound master bus
A calculation stream is an organisational unit used at the interface level of the Lumiera engine.
Representing a //stream of calculations,// delivering generated data within //timing constraints,// it is used
*by the [[play process(es)|PlayProcess]] to define and control properties of the output generation
*at the engine backbone to feed the [[Scheduler]] with individual [[render jobs|RenderJob]] to implement this stream of calculations
Calculation stream objects are stateless, constant chunks of definition -- any altering of playback or rendering parameters just causes the respective descriptors to be superseded. The presence of a CalcStream (being alive within the denoted time span) implies that using any of the associated jobs, dispatcher tables, node and wiring descriptors is safe.

!lifecycle
Calculation stream descriptors can be default constructed, representing a //void calculation.// You can't do anything with these.
Any really interesting calculation stream needs to be retrieved from the EngineFaçade. Additionally, an existing calculation stream can be chained up or superseded, yielding a new CalcStream based on the parameters of the existing one, possibly with some alterations.

!purpose
When a calculation stream is retrieved from the EngineFaçade it is already registered and attached there and represents an ongoing activity. Under the hood, several further collaborators will hold a copy of that calculation stream descriptor. While, as such, a CalcStream has no explicit state, at any time it //represents a current state.// In case the running time span of that stream is limited, it becomes superseded automatically, just by the passing of time.

Each calculation stream refers to a relevant [[frame dispatcher table|FrameDispatcher]]. Thus, for the engine (interface level), the calculation stream makes it possible to produce the individual [[render jobs|RenderJob]] to enqueue with the [[Scheduler]]. This translation step is what links and relates nominal time with running wall clock time, thereby obeying the [[timing constraints|Timings]] given when initially defining the calculation stream.

Additionally, each calculation stream knows how to access a //render environment closure,// allowing to re-schedule and re-adjust the setup of this stream. Basically, this closure is comprised of several functors (callbacks), which could be invoked to perform management tasks later on. Amongst others, this allows the calculation stream to redefine, supersede or "cancel itself", without the need to access a central registration table at the engine interface level.
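The following sketch (hypothetical types, not the actual engine interfaces) summarises these properties: a calculation stream is an immutable descriptor which can translate frame numbers into render jobs, can be superseded into a new descriptor, and carries a closure of callbacks for later management tasks:
{{{
// Hypothetical sketch of a calculation stream descriptor; not the actual engine interface.
#include <functional>
#include <utility>

struct Timings   { long start, end; };           // nominal time span plus timing constraints
struct RenderJob { long frameNr; };

struct RenderEnvironmentClosure {                // bundle of callbacks for later management tasks
    std::function<void()>        cancel;
    std::function<void(Timings)> reschedule;
};

class CalcStream {                               // stateless, constant chunk of definition
    Timings                  timings_;
    RenderEnvironmentClosure env_;
public:
    CalcStream (Timings t, RenderEnvironmentClosure env)
      : timings_(t), env_(std::move(env)) { }

    RenderJob nextJob (long frameNr) const       // translate nominal time into a schedulable job
    {
        return RenderJob{ frameNr };             // real code consults the frame dispatcher table
    }
    CalcStream supersede (Timings newTimings) const   // altering parameters yields a new descriptor
    {
        return CalcStream (newTimings, env_);
    }
    void cancel() const { if (env_.cancel) env_.cancel(); }   // "cancel itself" via the closure
};

int main()
{
    CalcStream stream ({0, 250}, { []{ /* cancel playback */ }, [](Timings){ /* re-adjust */ } });
    RenderJob job = stream.nextJob (42);         // would be handed over to the Scheduler
    CalcStream longer = stream.supersede ({0, 500});
    (void)job; (void)longer;
}
}}}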

&rarr; NodeOperationProtocol
Within Proc-Layer, a Command is the abstract representation of a single operation or a compound of operations mutating the HighLevelModel.
Thus, each command is a ''Functor'' and a ''Closure'' ([[command pattern|http://en.wikipedia.org/wiki/Command_pattern]]), allowing commands to be treated uniformly, enqueued in a [[dispatcher|ProcDispatcher]], logged to the SessionStorage and registered with the UndoManager.

Commands are //defined// using a [[fluent API|http://en.wikipedia.org/wiki/Fluent_interface]], just by providing appropriate functions. Additionally, the Closure necessary for executing a command is built by binding to a set of concrete parameters. After reaching this point, the state of the internal representation could be serialised by plain-C function calls, which is important for integration with the SessionStorage.

&rarr; see CommandDefinition
&rarr; see CommandHandling
&rarr; see CommandLifecycle
&rarr; see CommandUsage
Commands can be identified and accessed //by name// &mdash; consequently there needs to be an internal command registry, including a link to the actual implementing function, thus allowing the connection between command and implementing functions to be re-established when de-serialising a persisted command. To create a command, we need to provide the following information
* operation function actually implementing the command
* function to [[undo|UndoManager]] the effect of the command
* function to capture state to be used by UNDO.
* a set of actual parameters to bind into these functions (closure).

!Command definition object
The process of creating a command by providing these building blocks is governed by a ~CommandDef helper object. According to the [[fluent definition style|http://en.wikipedia.org/wiki/Fluent_interface]], the user is expected to invoke a chain of definition functions, finally leading to the internal registration of the completed command object, which then might be dispatched or persisted. For example
{{{
CommandDefinition ("test.command1")
     .operation (command1::operate)          // provide the function to be executed as command
     .captureUndo (command1::capture)        // provide the function capturing Undo state
     .undoOperation (command1::undoIt)       // provide the function which might undo the command
     .bind (obj, val1,val2)                  // bind to the actual command parameters (stores command internally)
     .executeSync();                         // convenience call, forwarding the Command to dispatch.
}}}

!Operation parameters
While generally there is //no limitation// on the number and type of parameters, the set of implementing functions and the {{{bind(...)}}} call are required to match. Inconsistencies will be detected by the compiler. In addition to taking the //same parameters as the command operation,// the {{{captureUndo()}}} function is required to return (by value) a //memento//&nbsp; type, which, in case of invoking the {{{undo()}}}-function, will be provided as additional parameter. To summarise:
|!Function|>|!ret(params)|
| operation| void |(P1,..PN)|
| captureUndo| MEM |(P1,..PN)|
| undoOperation| void |(P1,..PN,MEM)|
| bind| void |(P1,..PN)|
Usually, parameters should be passed //by value// &mdash; with the exception of target object(s), which are typically bound as MObjectRef, causing them to be resolved at command execution time (late binding).
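For illustration, a set of functions matching this signature scheme (the operation itself is a made-up example; the memento type is chosen freely):
{{{
// Illustration of the required signature scheme; the operation itself is a made-up example.
#include <string>

struct Clip { std::string name; long length = 0; };

namespace command1 {
    using Memento = long;                              // the MEM type returned by captureUndo

    void operate (Clip& target, long newLength)        // void (P1,..PN)
    {
        target.length = newLength;
    }
    Memento capture (Clip& target, long)               // MEM  (P1,..PN)
    {
        return target.length;                          // remember the state to be restored
    }
    void undoIt (Clip& target, long, Memento oldLen)   // void (P1,..PN, MEM)
    {
        target.length = oldLen;
    }
}

int main()
{
    Clip clip{"intro", 100};
    auto mem = command1::capture (clip, 25);     // capture undo state first...
    command1::operate (clip, 25);                // ...then perform the mutation
    command1::undoIt (clip, 25, mem);            // undo restores the captured state
}
}}}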
Organising any ''mutating'' operations executable by the user (via GUI) by means of the [[command pattern|http://en.wikipedia.org/wiki/Command_pattern]] can be considered //state of the art//&nbsp; today. First of all, it allows distinguishing the specific implementation operations to be called on one or several objects within the HighLevelModel from the operation requested by the user, the latter being rather a concept. A command can be labelled clearly and executed under controlled circumstances, allowing transactional behaviour.


!Defining commands
[>img[Structure of Commands|uml/fig134021.png]] Basically, a command could contain arbitrary operations, but we'll assume that it causes a well defined mutation within the HighLevelModel, which can be ''undone''. Thus, when defining (&rarr;[[syntax|CommandDefinition]]) a command, we need to specify not only a function doing the mutation, but also another function which might be called later to reverse the effect of the action. Besides, the action operates on a number of ''target'' objects and additionally may require a set of ''parameter'' values. These are to be stored explicitly within the command object, thus creating a ''closure'' &mdash; the operation //must not//&nbsp; rely on other hidden parameters (maybe with the exception of very generic singleton system services?).

Theoretically, defining the "undo" operation might utilise two different approaches:
* specifying an //inverse operation,// known to cancel out the effect of the command
* capturing a //state memento,// which can later be played back to restore the state found prior to executing the command.
While obviously the first solution is much simpler to implement on behalf of the command framework, the second solution has distinct advantages, especially in the context of an editing application: there might be rounding or calculation errors, the inverse might be difficult to define correctly, the effect of the operation might depend on circumstances, be random, or might even trigger a resolution operation to yield the final result. Thus the decision within Lumiera's Proc-Layer is to //favour state capturing// &mdash; but in a modified, semi-manual and not completely exclusive way.

!Undo state
While the usual »Memento« implementation might automatically capture the whole model (resulting in a lot of data to be stored and some uncertainty about the scope of the model to be captured), in Lumiera we rely instead on the client code to provide a ''capture function''&nbsp;and a ''playback function'' alongside the actual operation. To help with this task, we provide a set of standard handlers for common situations. This way, operations might capture very specific information, might provide an "intelligent undo" to restore a given semantic instead of just a fixed value &mdash; and moreover the client code is free to actually employ the "inverse operation" model in special cases where it just makes more sense than capturing state.

!Handling of commands
A command may be [[defined|CommandDefinition]] completely from scratch, or it might just serve as a CommandPrototype with specific targets and parameters. The command could then be serialised and later be recovered and re-bound with the parameters, but usually it will be handed over to the ProcDispatcher, pending execution. When ''invoking'', the handling sequence is to [[log the command|SessionStorage]], then call the ''undo capture function'', followed by calling the actual ''operation function''. After success, the logging and [[undo registration|UndoManager]] are completed. In any case, finally the ''result signal'' (a functor previously stored within the command) is emitted. {{red{10/09 WIP: not clear if we actually implement this concept}}}

By design, commands are single-serving value objects; executing an operation repeatedly requires creating a collection of command objects, one for each invocation. While nothing prevents you from invoking the command operation functor several times, each invocation will overwrite the undo state captured by the previous invocation. Thus, each command instance should be seen as the promise (or later the trace) of a single operation execution. In a similar vein, the undo capturing should be defined to be self-sufficient, so that invoking just the undo functor of a single command performs any necessary steps to restore the situation found before invoking the corresponding mutation functor &mdash; of course only //with respect to the topic covered by this command.// So, while commands provide a lot of flexibility and allow a multitude of uses, there is certainly an intended CommandLifecycle.
&rarr; command [[definition|CommandDefinition]] and [[-lifecycle|CommandLifecycle]]
&rarr; more on possible [[command usage scenarios|CommandUsage]]
&rarr; more details regarding [[command implementation|CommandImpl]]
Commands are separated into a handle (the {{{control::Command}}} object), to be used by the client code, and an implementation level, which is managed transparently behind the scenes. Client code is assumed to build a CommandDefinition at some point, and from then on to access the command ''by ID'', yielding the command handle.
Binding of arguments, invocation and UNDO all are accessible through this frontend.

!Infrastructure
To support this handling scheme, some infrastructure is in place:
* a command registry maintains the ID &harr; Command relation.
* indirectly, through a custom allocator, the registry is also involved in the allocation of the command implementation frame
* this implementation frame combines
** an operation mutation and an undo mutation
** a closure, implemented through an argument holder
** an undo state capturing mechanism, based on a capturing function provided on definition
* performing the actual execution is delegated to a handling pattern object, accessed by name.
[<img[Structure of Commands|uml/fig135173.png]]
While generally the command framework was designed to be flexible, to allow many different use cases and execution paths, and to serve various goals, there is an ''intended lifecycle'' &mdash; commands are expected to go through several distinct states.

The handling of a command starts out with a ''command ID'' provided by the client code. Command ~IDs are unique (human readable) identifiers and should be organised in a hierarchical fashion. When provided with an ID, the CommandRegistry tries to fetch an existing command definition. In case this fails, we enter the [[command definition stage|CommandDefinition]], which includes specifying functions to implement the operation, state capturing and UNDO. When all of this information is available, the entity is called a ''command definition''. Conceptually, it is comparable to a //class// or //meta object.//

By ''binding'' to specific operation arguments, the definition is //armed up//&nbsp; and becomes a real ''command''. This is similar to creating an instance from a class. Behind the scenes, storage is allocated to hold the argument values and any state captured to create the ability to UNDO the command's effect later on.

A command is operated or executed by passing it to an ''execution pattern'' &mdash; there is a multitude of possible execution patterns to choose from, depending on the situation. 
{{red{WIP... details of ~ProcDispatcher not specified yet}}}

When a command has been executed (and maybe undone), it's best to leave it alone, because the UndoManager might hold a reference. Anyway, a ''clone of the command'' could be created, maybe bound with different arguments and treated separately from the original command.

!State predicates
* fetching a non-existent command raises ~LUMIERA_ERROR_INVALID_COMMAND
* a command definition becomes //valid// ({{{bool true}}}) when all necessary functions are specified. Technically this coincides with the creation of a CommandImpl frame behind the scenes, which also causes the Command (frontend/handle object) to evaluate to {{{true}}} in bool context from then on.
* when, in addition to the above, the command arguments are bound, it becomes //executable.//
* after the (first) execution, the command also becomes //undo-able.//
State predicates are accessible through the Command (frontend); additionally there are static query functions in class {{{Command}}}
//for now (7/09) I'll use this page to collect ideas how commands might be used...//

* use a command for getting a log entry and an undo possibility automatically
* might define, bind and then execute a command at once
* might define it and bind it to a standard set of parameters, to be used as a prototype later.
* might just create the definition, leaving the argument binding to the actual call site
* execute it and check the success/failure result
* just enqueue it, without caring for successful execution
* place it into a command sequence bundle
* repeat the execution

!!!a command definition....
* can be created from scratch, by ID
* can be re-accessed, by ID
* can't be modified once defined (this is to prevent duplicate definitions with the same ID)
* but can be dropped, which doesn't influence already existing dependent command instances
* usually will be the starting point for creating an actual command by //binding//

!!!a command instance....
* normally emerges from a definition by binding arguments
* the first such binding will create a named command registration
* but subsequent accesses by ID or new bindings create an anonymous clone
* which in turn might then be registered explicitly with a new ID
* anonymous command instances are managed by referral and ref-counting

!!!an execution pattern....
* can only be defined in code by a class definition, not at runtime
* subclasses the ~HandlingPattern interface and uses a predefined ID (enum).
* a singleton instance is created on demand, triggered by referring to the pattern's ID
* is conceptually //stateless// &mdash; of course there can be common configuration values
* is always invoked providing a concrete command instance to execute
* is configured into the command instance, to implement the command's invocation
* returns a duck-typed //result// object

The Connection Manager is a service for wiring connections and for querying information and deriving decisions regarding various aspects of data streams and the possibility of connections. The purpose of the Connection Manager is to isolate the [[Builder]], which is the client of these information and decision services, from the often convoluted details of type information and of organising a connection.

!control connections
My intention was that it would be sufficient for the builder to pass a connection request, and the Connection Manager would handle the details of establishing a control/parameter link.
{{red{TODO: handling of parameter values, automation and control connections still need to be designed}}}

!data connections
Connecting data streams of differing type involves a StreamConversion. Mostly, this aspect is covered by the [[stream type system|StreamType]]. The intended implementation will rely partially on [[rules|ConfigRules]] to define automated conversions, while other parts need to be provided by hard wired logic. Thus, regarding data connections, the ConManager can be seen as a specialized Facade and will delegate to the &rarr; [[stream type manager|STypeManager]]
* retrieve information about capabilities of a stream type given by ID 
* decide if a connection is possible
* retrieve a //strategy// for implementing a connection
This index refers to the conceptual, more abstract and formally specified aspects of the Proc-Layer and Lumiera in general.
More often than not, these emerge from immediate solutions which are perceived as especially expressive and, when taken on, yield guidance by themselves. Some others, [[Placements|Placement]] and [[Advice]] to mention here, immediately substantiate the original vision.
Configuration Queries are requests to the system to "create or retrieve an object with //this and that // capabilities". They are resolved by a rule based system ({{red{planned feature}}}) and the user can extend the rules used for each Session. Syntactically, they are stated in ''prolog'' syntax as a conjunction (=logical and) of ''predicates'', for example {{{stream(mpeg), pipe(myPipe)}}}. Queries are typed to the kind of expected result object: {{{Query<Pipe> ("stream(mpeg)")}}} requests a pipe accepting/delivering mpeg stream data &mdash; and it depends on the current configuration what "mpeg" means. If there is any stream data producing component in the system which advertises to deliver {{{stream(mpeg)}}}, and a pipe can be configured or connected with this component, then the [[defaults manager|DefaultsManagement]] will create/deliver a [[Pipe|PipeHandling]] object configured accordingly.
&rarr; [[Configuration Rules system|ConfigRules]]
&rarr; [[accessing and integrating configuration queries|ConfigQueryIntegration]]
* planning to embed a YAP Prolog engine
* currently just integrated by a table driven mock
* the baseline is a bit more clear by now (4/08)

&rarr; see also ConfigRules
&rarr; see also DefaultsManagement

!Use cases
[<img[when to run config queries|uml/fig131717.png]]

The key idea is that there is a Rule Base &mdash; partly contained in the session (building on a stock of standard rules supplied with the application). Now, whenever there is the need to get a new object, for adding it to the session or for use associated with another object &mdash; then instead of creating it by a direct hard wired ctor call, we issue a ConfigQuery requesting an object of the given type with some //capabilities// defined by predicates. The same holds true when loading an existing session: some objects won't be loaded back blindly, rather they will be re-created by issuing the config queries again. An especially important use case is (re)creating a [[processing pattern|ProcPatt]] to guide the wiring of a given media's processing pipeline.

At various places, instead of requiring a fixed set of capabilities, it is possible to request a "default configured" object instead, specifying just those capabilities we really need to be configured in a specific way. This is done by using the [[Defaults Manager|DefaultsManagement]] accessible on the [[Session]] interface. Such a default object query may either retrieve an already existing object instance, run further config queries, and finally result in the invocation of a factory for creating new objects &mdash; guarded by rules to suit current necessities.

@@clear(left):display(block):@@

!Components and Relations
[>img[participating classes|uml/fig131461.png]]

<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>

Access point is the interface {{{ConfigRules}}}, which allows resolving a ConfigQuery, resulting in an object with properties configured such as to fulfil the query. This whole subsystem employs quite some generic programming, because actually we don't deal with "objects", but rather with similar instantiations of the same functionality for a collection of different object types. For the purpose of resolving these queries, the actual kind of object is not so important, but on the caller side we of course want to deal with the result of the queries in a typesafe manner.
Examples for //participating object kinds// are [[pipes|Pipe]], [[processing patterns|ProcPatt]], effect instances, [[tags|Tag]], [[labels|Label]], [[automation data sets|AutomationData]],...
@@clear(right):display(block):@@
For this to work, we need each of the //participating object types// to provide an implementation of a generic interface {{{TypeHandler}}}, which allows access to actual C/C++ implementations for the predicates usable on objects of this type within the Prolog rules. The implementation has to make sure that, alongside each config query, there are additional //type constraints// to be regarded. For example, if the client code runs a {{{Query<Pipe>}}}, an additional //type guard// (implemented by a predicate {{{type(pipe)}}}) has to be inserted, so only rules and facts in accordance with this type will be used for resolution.


!when querying for a [["default"|DefaultsManagement]] object
[<img[colaboration when issuing a defaults query|uml/fig131845.png]]

@@clear(left):display(block):@@
Many features can be implemented by specifically configuring and wiring some unspecific components. Rather than tie the client code in need of some given feature to these configuration internals, in Lumiera the client can //query // for some kind of object providing the //needed capabilities. // Right from the start (summer 2007), Ichthyo had the intention to implement such a feature using sort of a ''declarative database'', e.g. by embedding a Prolog system. By adding rules to the basic session configuration, users should be able to customise the semi-automatic part of Lumiera's behaviour to a great extent.

[[Configuration Queries|ConfigQuery]] are used at various places, when creating and adding new objects, as well as when building or optimising the render engine node network.
* Creating a [[pipe|PipeHandling]] queries for a default pipe or a pipe with a certain stream type
* Adding a new [[track|TrackHandling]] queries for some default placement configuration, e.g. the pipe it will be plugged to.
* when processing a [[wiring request|WiringRequest]], connection possibilities have to be evaluated.
* actually building such a connection may create additional degrees of freedom, like panning for sound or layering for video.

!anatomy of a Configuration Query
The query is given as a number of logic predicates which are required to be true. Syntactically, it is a string in Prolog syntax, e.g. {{{stream(mpeg)}}}, where "stream" is the //predicate, // meaning here "the stream type is...?", and "mpeg" is a //term // denoting an actual property, object, thing, number etc. &mdash; the actual kind of stream in the given example. Multiple comma separated predicates are combined with logical "and". Terms may initially be //variable,// which is denoted syntactically by starting them with an uppercase letter. But, through the computation of a solution, any variable term needs to be //bound//&nbsp; to some known fixed value, otherwise the query counts as failed. A failed query is treated as a local failure, which may cause some operation to be aborted or just some other possibility to be chosen.
Queries are represented by instantiations of the {{{Query<TYPE>}}} template, because their actual meaning is "retrieve or create an object of TYPE, configured such that...!". At the C++ side, this ensures type safety and fosters programming against interfaces, while being implemented rule-wise by silently prepending the query with the predicate {{{object(X, type)}}}, which reads as "X is an object with this type"
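A rough sketch (simplified, hypothetical signatures) of what such a typed query might look like on the C++ side, including the silently prepended type guard:
{{{
// Simplified, hypothetical sketch of a typed configuration query; not the actual interfaces.
#include <string>
#include <memory>
#include <utility>
#include <iostream>

struct Pipe { std::string streamID; };

template<class TYPE>
class Query {
    std::string predicates_;
public:
    explicit Query (std::string q) : predicates_(std::move(q)) { }

    std::string asProlog (std::string const& typeName) const
    {                                            // the type guard is prepended silently
        return "object(X, " + typeName + "), " + predicates_;
    }
    std::string const& predicates() const { return predicates_; }
};

// a resolver would run the goal against the rule base; here we just show the generated goal
std::shared_ptr<Pipe> resolve (Query<Pipe> const& q)
{
    std::cout << "resolving: " << q.asProlog ("pipe") << "\n";
    return std::make_shared<Pipe> (Pipe{"mpeg"});   // pretend the rules yielded an mpeg pipe
}

int main()
{
    std::shared_ptr<Pipe> pipe = resolve (Query<Pipe> ("stream(mpeg)"));
    std::cout << "got pipe with stream type " << pipe->streamID << "\n";
}
}}}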

!!!querying for default
A special kind of configuration query includes the predicate {{{default(X)}}}, which is kind of an existential quantification of the variable {{{X}}}. Using this predicate states that a suitable //default-configured// object exists somehow. This behaviour can be used as a fallback within a config query, allowing us always to return a solution. The latter can be crucial when it comes to integrating the query subsystem into an existing piece of implementation logic, which requires us to give some guarantees.
&rarr; see DefaultsManagement on details how to access these captured defaults

But note, the exact relation of configuration queries and query-for-default still needs to be worked out, especially when to use which flavour. 

! {{red{WIP 2012}}} front-end and integration
The overall architecture of this rules based configuration system remains to be worked out
* the generic front-end to unite all kinds of configuration queries
* how to configure the various [[query resolvers|QueryResolver]]
* how to pick and address a suitable resolver for a given query situation
* the taxonomy of queries:
** capability queries
** retrieval queries
** defaults queries
** discovery queries
** connection queries
** ...
{{red{WIP 10/2010 &rArr; see Ticket [[#705|http://issues.lumiera.org/ticket/705]]}}}

!executing a Configuration Query
Actually posing such a configuration query, for example to the [[Defaults Manager in the Session|DefaultsManagement]], may trigger several actions: first it is checked against internal object registries (depending on the target object type), which may cause the delivery of an already existing object (as reference, clone, or smart pointer). Otherwise, the system tries to figure out a viable configuration for a newly created object instance, possibly by issuing recursive queries. In the most general case this may silently impose additional decisions onto the //execution context // of the query &mdash; by default the session.

!Implementation
At start and for debugging/testing, there is a ''dummy'' implementation using a map with predefined queries and answers. But for the real system, the idea is to embed a ''YAP Prolog'' engine to run the queries. This includes the task of defining and loading a set of custom predicates, so the rule system can interact with the object oriented execution environment, for example by transforming some capability predicate into virtual calls to a corresponding object interface. We need a way for objects to declare some capability predicates, together with a functor that can be executed on an object instance (and further parameters) in the course of the evaluation of some configuration query. Type safety and diagnostics play an important role here, because effectively the rule base is a pool of code open for arbitrary additions from the user session.
&rarr; [[considerations for a Prolog based implementation|QueryImplProlog]]
&rarr; [[accessing and integrating configuration queries|ConfigQueryIntegration]]
&rarr; see {{{src/common/query/mockconfigrules.cpp}}} for the table with the hard wired (mock) answers

!!!fake implementation guidelines
The fake implementation should follow the general pattern planned for the Prolog implementation, that is: find, query, create, query. Thus, such a config query can be considered existentially quantified. The interplay with the factories is tricky, because factories might issue config queries themselves, while a factory invocation driven from within this rule pattern //must not// re-invoke the factory. Besides, a factory invocation should create, not fetch an existing solution (?)

{{red{WARN}}} there is an interference with the (planned) Undo function: a totally correct implementation of "Undo" would need to remember and restore the internal state of the query system (similar to backtracking). But, more generally, such a correct implementation is not achievable, because we are never able to capture and control the whole state of a real world system doing such advanced things as video and sound processing. Seemingly we have to accept that after undoing an action, there is no guarantee we can re-do it the same way as it was done the first time.

Here, in the context of the Render Engine, the Controller component is responsible for triggering and coordinating the build process and for activating the backend and the Render Engine configuration created by the Builder to carry out the actual rendering. There is another Controller in the backend, the ~PlaybackController, which is in charge of the playback/rendering/display state of the application, and consequently will call this (Proc-Layer) Controller to get the necessary Render Engine.

!Facade
This is a very important external Interface, because it links together all three Layers of our current architecture. It can be used by the backend to initiate [[Render Processes (=StateProxy)|StateProxy]] and it will probably be used by the Dispatcher for GUI actions as well...

The question is where to put all the state-like information [[associated with the current session|SessionOverview]], because this is certainly "global", but may depend on the session or need to be configured differently when loading another session. At the moment (9/07) Ichthyo considers the following solution:
* represent all configuration as [[Asset]]s
* find a way {{red{TODO}}} how to reload the contents of the [[AssetManager]].
* completely hide the Session object behind a ''~PImpl'' smart pointer, so the session object can be switched when reloading (see the sketch after this list).
* the [[Fixture]] acts as isolation layer, and all objects referred to from the Fixture are refcounting smart pointers. So, even when the session gets switched, the old objects remain valid as long as needed.
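A minimal sketch of that ~PImpl idea (hypothetical names, not the actual session code): the session contents are switched by replacing the implementation object, while refcounting smart pointers keep previously handed-out objects alive:
{{{
// Minimal sketch of the PImpl idea for switching sessions; hypothetical names, not the actual code.
#include <memory>
#include <string>
#include <iostream>

struct Clip { std::string name; };               // stands in for objects referred to from the Fixture

class Session {
    struct Impl {                                // the actual session contents live in the Impl
        std::shared_ptr<Clip> clip = std::make_shared<Clip> (Clip{"intro"});
    };
    std::shared_ptr<Impl> pImpl_ = std::make_shared<Impl>();
public:
    std::shared_ptr<Clip> getClip() const { return pImpl_->clip; }

    void reload()                                // switching the session: just swap the Impl
    {
        pImpl_ = std::make_shared<Impl>();
    }
};

int main()
{
    Session session;
    std::shared_ptr<Clip> held = session.getClip();   // e.g. the Fixture holds onto this clip
    session.reload();                                  // the session gets switched...
    std::cout << held->name << " still valid\n";       // ...but the old object remains valid
}
}}}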
A ''frame of data'' is the central low-level abstraction when dealing with media data and media processing.

Deliberately we avoid relying on any special knowledge regarding such data frames, beyond the fact
* that a frame resides within a ''memory buffer'' &rarr; [[buffer provider abstraction|BufferProvider]]
* that the frame has a ''frame number'', which can be related to a ''time span'' &rarr; [[time quantisation framework|TimeQuant]]
* that the frame belongs to an abstract media stream, which can be described through our &rarr; [[stream type system|StreamType]]
* and that some external ''media handling library'' knows to deal with the data in this frame
[[ProcLayer and Engine]]
As detailed in the [[definition|DefaultsManagement]], {{{default(Obj)}}} is sort of a joker along the lines of "give me a suitable object and I don't care for further details". Actually, default objects are implemented by the {{{mobject::session::DefsManager}}}, which remembers and keeps track of anything labelled as "default". This defaults manager is a singleton and can be accessed via the [[Session]] interface, meaning that the record of defaults is part of the session state. Accessing an object via a query for a default actually //tags// this object (storing a weak ref in the ~DefsManager). Alongside each object successfully queried via "default", the degree of constriction is remembered, i.e. the number of additional conditions contained in the query. This enables us to search for default objects starting with the most unspecific.

!Skeleton
# ''search'': using the predicate {{{default(X)}}} enumerates existing objects of suitable type
#* candidates are delivered starting with the least constrained default
#* the argument is unified
#** if the rest of the query succeeds we've found our //default object// and are happy.
#** otherwise, if all enumerated solutions are exhausted without success, we enter
# ''default creation'': try to get an object fulfilling the conditions and remember this situation
#* we issue a ConfigQuery with the query terms //minus// the {{{default(X)}}} predicate
#* it depends on the circumstances how this query is handled. Typically the query resolution first searches existing objects and then creates a new instance to match the required capabilities. Usually, this process succeeds, but there can be configurations leading to failure.
#** failing the ~ConfigQuery is considered a (non-critical) exception (throws), as defaults queries are supposed to succeed
#** otherwise, the newly created object is remembered (tagged) as new default, together with the degree of constriction

!!!Implementation details
Taken precisely, the "degree of constriction" yields only a partial ordering &mdash; but as the "default" predicate is sort of an existential quantification anyway, its sole purpose is to avoid polluting the session with unnecessary default objects, and we don't need to care for absolute precision. A suitable approximation is to count the number of predicate terms in the query and use a (sorted) set (separate for each type) to store weak refs to the objects tagged as "default".
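A possible implementation sketch of this bookkeeping (hypothetical names, not the actual ~DefsManager code), approximating the degree of constriction by counting predicate terms and keeping weak refs sorted accordingly:
{{{
// Hypothetical sketch of the defaults registry; counting comma separated predicates as an
// approximation for the "degree of constriction". Not the actual DefsManager implementation.
#include <map>
#include <memory>
#include <string>
#include <algorithm>

struct Pipe { std::string streamID; };

inline int degreeOfConstriction (std::string const& query)
{                                                // number of predicate terms in the query
    return query.empty() ? 0
                         : 1 + int(std::count (query.begin(), query.end(), ','));
}

class DefaultsRegistry {                         // one such table would exist per object type
    std::multimap<int, std::weak_ptr<Pipe>> defaults_;   // sorted: least constrained first
public:
    void tag (std::string const& query, std::shared_ptr<Pipe> const& obj)
    {
        defaults_.emplace (degreeOfConstriction (query), obj);
    }
    std::shared_ptr<Pipe> searchDefault()        // enumerate candidates, least constrained first
    {
        for (auto& entry : defaults_)
            if (std::shared_ptr<Pipe> candidate = entry.second.lock())
                return candidate;                // real code would unify the rest of the query here
        return nullptr;
    }
};

int main()
{
    DefaultsRegistry registry;
    auto pipe = std::make_shared<Pipe> (Pipe{"mpeg"});
    registry.tag ("stream(mpeg), pipe(master)", pipe);   // remembered with constriction degree 2
    return registry.searchDefault()? 0 : 1;
}
}}}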
{{red{WARN}}} there is an interference with the (planned) Undo function. This is a general problem of the config queries; just ignoring this issue seems reasonable.

!!!Problems with the (preliminary) mock implementation
As we don't have a Prolog interpreter on board yet, we utilise a mock store with preconfigured answers (see MockConfigQuery). As this preliminary solution lacks the ability to create new objects, we need to resort to some trickery here (please look away). The overall logic is quite broken, not to say outright idiotic, because the system isn't capable of doing any real resolution &mdash; if we ignore this fact, the rest of the algorithm can be implemented, tested and used right now.
For several components and properties there is an implicit default value or configuration; it is stored alongside the session. The intention is that defaults never create an error; instead, they are to be extended silently on demand. Objects configured according to these defaults can be retrieved at the [[Session]] interface by a set of overloaded functions {{{Session::current->default(Query<TYPE> ("query string"))}}}, where the //query string // defines a capability query similar to what is employed for pipes, stream types, codecs etc. This query mechanism is implemented by [[configuration rules|ConfigRules]]

!!!!what is denoted by {{{default}}}?
{{{default(Obj)}}} is a predicate expressing that the object {{{Obj}}} can be considered the default setup under the given conditions. Using the //default// can be considered a shortcut for actually finding an exact and unique solution. The latter would require specifying all sorts of detailed properties, up to the point where only one single object can satisfy all conditions. On the other hand, leaving some properties unspecified would yield a set of solutions (and the user code issuing the query would have to provide means for selecting one solution from this set). Just falling back on the //default// means that the user code actually doesn't care for any additional properties (as long as the properties it //does// care for are satisfied). Nothing is said specifically about //how//&nbsp; this default gets configured; actually there can be rules //somewhere,// and, additionally, anything encountered once while asking for a default can be re-used as default under similar circumstances.
&rarr; [[implementing defaults|DefaultsImplementation]]
Along the way of working out various [[implementation details|ImplementationDetails]], decisions need to be made on how to understand the different facilities and entities and how to tackle some of the problems. This page is mainly a collection of keywords, summaries and links to further the discussion. The various decisions should always be read as proposals to solve some problem at hand...

''Everything is an object'' &mdash; yes of course, that's a //no-brainer,// today. What is important, rather, is to note what is //not// "an object", meaning it can't be arranged arbitrarily:
* we have one and only one global [[Session]] which directly contains a collection of multiple [[Timelines|Timeline]] and is associated with a globally managed collection of [[assets|Asset]]. We don't utilise scoped variables here (no multi-tenancy); if a media has been //opened,// it is just plain //globally known//&nbsp; as an asset.
* the [[knowledge base|ConfigRules]] is just available globally. Obviously, the session gets a chance to install rules into this knowledge base, but we don't stress ownership here.
* we create a unique [[Fixture]] which acts as isolation layer towards the render engine and is (re)built automatically.
The high-level view of the tangible entities within the session is unified into a ''single tree'' -- with the notable exception of [[external outputs|OutputManagement]] and the [[assets|Asset]], which are understood as representing a //bookkeeping view// and kept separate from the //things to be manipulated// (MObjects).

We ''separate'' processing (rendering) and configuration (building). The [[Builder]] creates a network of [[render nodes|ProcNode]], to be processed by //pulling data // from some [[Pipe]]

''Objects are [[placed|Placement]] rather'' than assembled, connected, wired, attached. This is more of a rule-based approach and gives us one central metaphor and abstraction, allowing us to treat everything in an uniform manner. You can place it as you like, and the builder tries to make sense out of it, silently disabling what doesn't make sense.
A [[Sequence]] is just a collection of configured and placed objects (and has no additional, fixed structure). [[Tracks|Track]] form a mere organisational grid; they are grouping devices, not first-class entities (a track doesn't "have" a pipe or "is" a video track and the like; it can be configured to behave in such a manner by using placements though). [[Pipes|Pipe]] are hooks for making connections and are the only facility to build processing chains. We have global pipes, and each clip is built around a local [[source port|ClipSourcePort]] &mdash; and that's all. No special "media viewer" and "arranger", no special role for media sources, no commitment to some fixed media stream types (video and audio). All of this is pushed down to be configuration, represented as an asset of some kind. For example, we have [[processing pattern|ProcPatt]] assets to represent the way of building the source network for reading from some media file (including codecs, treated like effect plugin nodes)

Actual ''media data and handling'' is abstracted rigorously. Media is conceived as being stream-like data of distinct StreamType. When it comes to more low-level media handling, we build on the DataFrame abstraction. Media processing isn't the focus of Lumiera; we organise the processing but otherwise ''rely on media handling libraries.'' In a similar vein, multiplicity is understood as type variation. Consequently, we don't build an audio and video "section" and we don't even have audio tracks and video tracks. Lumiera uses tracks and clips, and clips build on media, but we're able to deal with [[multichannel|MultichannelMedia]] mixed-typed media natively.

Lumiera is not a connection manager, it is not an audio-visual real time performance instrument, and it doesn't aim at running presentations. It's an ''environment for assembling and building up'' something (an edit, a session, a piece of media work). This decision is visible at various levels and contexts, like a reserved attitude towards hardware acceleration (it //will// be supported, but reliable proxy editing has a higher priority), or the decision, not to incorporate system level ports and connections directly into the session model (they are mapped to [[output designations|OutputDesignation]] rather)

''State'' is rigorously ''externalized'' and operations are to be ''scheduled'', to simplify locking and error handling. State is either treated similar to media stream data (as addressable and cacheable data frame), or is represented as "parameter" to be served by some [[parameter provider|ParamProvider]]. Consequently, [[Automation]] is just another kind of parameter, i.e. a function &mdash; how this function is calculated is an encapsulated implementation detail (we don't have "bezier automation", and then maybe a "linear automation", a "mask automation" and yet another way to handle transitions)
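Illustrating the notion that automation is "just another kind of parameter, i.e. a function", a tiny sketch (hypothetical types): plain values and automated values share the same call interface, and how the function is calculated remains an implementation detail:
{{{
// Tiny sketch of the "automation is just a function" idea; hypothetical types.
#include <functional>
#include <initializer_list>
#include <iostream>

struct Time { double secs; };

using Param = std::function<double(Time)>;       // a parameter provider yields a value per time

double linearFade (Time t) { return t.secs < 2.0 ? t.secs / 2.0 : 1.0; }   // one possible implementation

int main()
{
    Param constant  = [](Time) { return 0.8; };   // a plain (non automated) parameter value
    Param automated = linearFade;                 // automation: same interface, different function

    for (double s : {0.0, 1.0, 3.0})
        std::cout << "t=" << s << "  gain=" << automated(Time{s}) << "\n";
    (void)constant;
}
}}}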

Deliberately there is a limitation on the flexibility of what can be added to the system via plugins. We allow configuration and parametrisation to be extended and we allow processing and data handling to be extended, but we disallow extensions to the fundamental structure of the system by plugins. They may provide new implementations for already known subsystems, but they can't introduce new subsystems not envisioned in the general design of the application.
* these fixed assortments include: the possible kinds of MObject and [[Asset]], the possible automation parameter data types, the supported kinds of plugin systems
* while plugins may extend: the supported kinds of media, the [[media handling libraries|MediaImplLib]] used to deal with those media, the session storage backends, the behaviour of placements
This __proc-Layer__ and ~Render-Engine implementation started out as a design-draft by [[Ichthyo|mailto:Ichthyostega@web.de]] in summer 2007. The key idea of this design-draft is to use the [[Builder Pattern|http://en.wikipedia.org/wiki/Builder_pattern]] for the Render Engine, thus separating completely the //building// of the Render Pipeline from //running,// i.e. doing the actual Render. The Nodes in this Pipeline should process Video/Audio and do nothing else. No more decisions, tests and conditional operations when running the Pipeline. Move all of this out into the configuration of the pipeline, which is done by the Builder.

!Why doesn't the current Cinelerra-2 Design succeed?
The design of Cinelerra-2 basically follows a similar design, but __fails for two reasons__
# too much differentiation is put into the class hierarchy instead of configuring instances differently.<br>This causes overly heavy use of virtual functions and -- in order to ameliorate this -- falling back to hard-wired branching
# far too much coupling and back-coupling to the internals of the »EDL«, forcing an overly rigid structure on the latter

!Try to learn from the Problems of the current Cinelerra-2 codebase
* build up a [[Node Abstraction|ProcNode]] powerful enough to express //all necessary operations// without the need to resort to the actual object type
* need to redesign the internals of the Session in a far more open manner. Everything is an [[M-Object|MObjects]] which is [[placed|Placement]] in some manner
* strive for a StrongSeparation between Session and Render Engine, encapsulate and always implement against interfaces


!Design Goals
As always, the main goal is //to cut down complexity// by the usual approach to separate into small manageable chunks.

To achieve this, here we try to separate ''Configuration'' from ''Processing''. Further, in Configuration we try to separate the ''high level view'' (the user's view when editing) from the ''low level view'' (the actual configuration effective for the calculations). Finally, we try to factor out and encapsulate ''State'' in order to make State explicit.

The main tool used to implement this separation is the [[Builder Pattern|http://en.wikipedia.org/wiki/Builder_pattern]]. Here especially we move all decisions and parametrization into the BuildProcess. The Nodes in the render pipeline should process Video/Audio and do nothing else. All decisions, tests and conditional operations are factored out of the render process and handled as configuration of the pipeline, which is done by the Builder. The actual color model and the number of ports are configured in by a pre-built wiring descriptor. All nodes are on an equal footing with each other, able to be connected freely within the limitations of the necessary input and output. OpenGL and renderfarm support can be configured in as an alternate implementation of some operations together with an alternate signal flow (usable only if the whole pipeline can be built up to support this changed signal flow), thus factoring out all the complexities of managing the data flow between core and hardware accelerated rendering. We introduce separate control data connections for the [[automation data|Automation]], separating the case of true multi-channel-effects from the case where one node just gets remote controlled by another node (or the case of two nodes using the same automation data).
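To illustrate this separation of //building// from //running,// consider the following minimal sketch (the class names are purely illustrative and not the actual Lumiera types): the builder takes all decisions and fixes the wiring once, while the resulting pipeline just processes frames without any further branching.
{{{
// minimal sketch of the intended separation (illustrative names, not the real Lumiera classes)
#include <vector>
#include <functional>
#include <utility>

using Frame     = std::vector<float>;               // stand-in for one frame of media data
using ProcessFn = std::function<void(Frame&)>;      // pure processing step, no decisions inside

struct Pipeline {
    std::vector<ProcessFn> nodes;                   // fixed wiring, established by the builder
    void pull(Frame& frame) const {                 // the "run" phase: no tests, no branching
        for (auto const& process : nodes)
            process(frame);
    }
};

class PipelineBuilder {                             // all decisions and parametrisation happen here
    Pipeline pipeline_;
public:
    PipelineBuilder& append(ProcessFn fn) {
        pipeline_.nodes.push_back(std::move(fn));
        return *this;
    }
    Pipeline build() { return std::move(pipeline_); }
};
}}}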

Another pertinent theme is to make the basic building blocks simpler, while on the other hand gaining much more flexibility for combining these building blocks. For example we try to unfold any "internal-multi" effects into separate instances (e.g. the possibility of having an arbitrary number of single masks at any point of the pipeline instead of having one special masking facility encompassing multiple sub-masks). Similarly, we treat the objects in the Session in a more uniform manner and gain the possibility to [[place|Placement]] them in various ways.
//Currently (5/2011) this page is used to collect and build up a coherent design for the player subsystem of Lumiera.//

!Starting point
The intention is to start out with the design of the PlayerDummy and to //transform//&nbsp; it into the full player subsystem.
* the ~DisplayService in that dummy player design moves down into Proc-Layer and becomes the OutputManager
* likewise, the ~DisplayerSlot is transformed into the interface OutputSlot, with various implementations to be registered with the OutputManager
* the core idea of having a play controller act as the front-end and handle to a PlayProcess is retained.

!!Idea
Introduce a new kind of MObject, a GeneratorMO, to represent what the current dummy player does: generate some kind of test data. As a first step, we might retro-fit the existing dummy player to this structure, and then expand it to have a minimal render engine setup for tests and diagnostics. That is, there will be some kind of GeneratorNode to produce test data.

!!Stages of transition
Reworking the dummy player into the [[Player subsystem|PlayService]] is a larger undertaking, and best broken up into several //stages.//
# just moving the existing entities into the final location (typically down in the layer hierarchy)
# introduce a GeneratorMO and a GeneratorNode, while providing a shortcut to get the GeneratorNode without requiring the (not yet implemented) Builder.
# rework the ~ProcessImpl and the ~TickService to become the [[calculation stream|CalcStream]] and use the real Engine Interface, but still with a mock implementation hooked up behind
# extend and elaborate the handling of output, build the foundation of the real OutputManager
# switch to the real Scheduler and Engine implementation

!!!!new entities to build {{red{#805}}}
* the ~DisplayService needs to be generalised into an OutputManager interface
* we need to introduce a new kind of StructAsset, the ViewerAsset.
* then, of course, the PlayService interface needs to be created
* several interfaces need to be shaped: the ''engine façade'' and a ''scheduler frontend''.
* the realms of player and rendering are interconnected within the [[dispatch operation|FrameDispatcher]]
* as a temporary solution, a ''~DummyPlayConnection'' will be created hard-wired, within the ~DummyPlayer service. This dummy container is a test setup, explicitly creating an OutputManager implementation, a GeneratorMO, a GeneratorNode (thus omitting the not-yet-implemented Builder) and the corresponding ModelPort. This way, the ~DummyPlayer service can be changed to use the real PlayService for creating the dummy generated output data. It will then pass back the resulting PlayController to the existing GUI setup.

The method of pushing frames to the viewer widget changes fundamentally during that transition: previously, in the PlayerDummy design study, the GUI opened a Display façade interface -- now the GUI rather has to register an OutputSlot implementation with the global OutputManager. Such a registration requires an explicit deregistration on shutdown, and this can be made to block until no further data will be delivered. Using this approach we hope to circumvent problems with occasional crashes on closing the application, due to dispatching frame output signals in the event thread while shutdown is already in progress.
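The intended registration contract could be sketched roughly as follows; the method names and the blocking logic shown here are assumptions for illustration, not the actual interface:
{{{
// rough sketch of the registration contract (method names and details are assumptions)
#include <mutex>
#include <condition_variable>

class OutputSlot {
public:
    virtual ~OutputSlot() = default;
    virtual void put(const void* frame) = 0;        // deliver one frame for display
};

class OutputManager {
    OutputSlot* slot_ = nullptr;
    int pending_ = 0;
    std::mutex mtx_;
    std::condition_variable idle_;
public:
    void registerSlot(OutputSlot& slot) {
        std::lock_guard<std::mutex> lock(mtx_);
        slot_ = &slot;
    }
    void deregisterSlot() {                          // blocks until no further data will be delivered
        std::unique_lock<std::mutex> lock(mtx_);
        idle_.wait(lock, [this]{ return pending_ == 0; });
        slot_ = nullptr;
    }
    void deliver(const void* frame) {                // called from the render / playback side
        std::unique_lock<std::mutex> lock(mtx_);
        if (!slot_) return;                          // no slot registered: frame is dropped
        ++pending_;
        OutputSlot* slot = slot_;
        lock.unlock();
        slot->put(frame);                            // dispatch outside the lock
        lock.lock();
        --pending_;
        idle_.notify_all();
    }
};
}}}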

The Dispatcher Tables are hosted within the [[Segmentation]] and serve as a ''strategy'' governing the play process.
For each [[calculation stream|CalcStream]] there is a concrete implementation of the [[dispatcher|FrameDispatcher]] interface, based on a specific configuration of dispatcher tables.
LayerSeparationInterface provided by the GUI.
Access point especially for playback. A render or playback process uses the DisplayFacade to push media data up to the GUI for display within a viewer widget or full-screen display. This can be thought of as a callback mechanism. In order to use the DisplayFacade, client code needs a DisplayerSlot (handle), which needs to be set up by the UI first and will be provided when starting the render or playback process.

!Evolving the real implementation {{red{TODO 1/2012}}}
As it stands, the ~DisplayFacade is a placeholder for parts of the real &rarr; OutputManagement, to be implemented in conjunction with the [[player subsystem|Player]] and the render engine. As of 1/2012, the intention is to turn the DisplayService into an OutputSlot instance -- following this line of thought, the ~DisplayFacade might become some kind of OutputManager, possibly to be [[implemented within a generic Viewer element|ViewerPlayConnection]]
A service within the GUI to manage output of frames generated by the lower layers of the application.
*providing the actual implementation of the DisplayFacade
*creating and maintaining [[displayer slots|DisplayerSlot]], connected to viewer widgets or similar

{{red{TODO 1/2012}}} to be generalised into a service exposing an OutputSlot for display.
An output port wired up to some display facility or viewer widget within the UI. For the client code, each slot is represented by a handle, which can be used to lock into this slot for a continuous output process. Managed by the DisplayService
''EDL'' is a short-hand for __E__dit __D__ecision __L__ist. The use of this term can be confusing; for the usual meaning see the definition in [[Wikipedia|http://en.wikipedia.org/wiki/Edit_decision_list]]

Cinelerra uses this term in a related manner but with a somewhat shifted focus: In Cinelerra the EDL is comprised of the whole set of clips and other media objects arranged onto the tracks by the user. It is the result of the user's //editing efforts.//

In this usage, the EDL in most cases will be almost synonymous to &raquo;the session&laquo;, just the latter emphasizes more the state aspect. While the Lumiera project started out using the same terminology, later on, when support for multiple "containers" within the session and for [[meta-clips|VirtualClip]] was determined to be of much importance, the new term &raquo;[[Sequence]]&laquo; was preferred.
These are the tools provided to any client of the Proc layer for handling and manipulating the entities in the Session. When defining such operations, //the goal should be to arrive at some uniformity in the way things are done.// Ideally, when writing client code, one should be able to guess how to achieve some desired result.

!guiding principle
The approach taken to define any operation is based primarily on the ''~OO-way of doing things'': entities operate themselves. You don't advise some manager, session or other &raquo;god class&laquo; to manipulate them. And, secondly, the scope of each operation will be as large as possible, but not larger. This often means performing quite a number of elementary operations &mdash; sometimes a convenience shortcut provided by the higher levels of the application may come in handy &mdash; and basically this gives rise to several different paths of doing the same thing, all of which need to be equivalent.

!main tasks
* you ''create a clip'' either from a source media or from another clip (copy, maybe clone?). The new clip always reflects the full size (and other properties) of the source used for creating.
* you can request a clip to ''resize'', which always relates to its current dimensions.
* you can ''place or attach'' the clip to some point or other entity by creating a [[Placement]] from the clip. (&rarr; [[handling of Placements|PlacementHandling]])
* you can ''adjust'' a placement relative to some other placement, meta object (i.e. selection), label or media, and this may cause the placement to resize or even effectively remove the clip.
* you can perform ''adjustments'' on a Sequence as a whole.
All these operations propagate to directly dependent objects and may schedule global reconfigurations.

!state, locking, policies
While performing any manipulative task, the state of the object is considered inconsistent, but it is guaranteed to be consistent after any such operation, irrespective of the result. There is no automatic atomicity and, consequently, each propagation is considered a separate operation.

!!parallelism
At the level of fine grained operations on individual entities, there is ''no support for parallelism''. Individual entities don't lock themselves. Tasks perform locking and are to be scheduled. Thus, we need an isolation layer towards all inherently multithreaded parts of the system, like the GUI or the render engine.

This has some obvious and some subtle consequences. Of course, manipulating //tasks// will be queued and scheduled somewhere. But as we rely on object identity and reference semantics for the entities in the session, some value changes are visible to operations going on in parallel threads (e.g. rendering) and each operation on the session entities //has to be aware of this.//
Consequently, sometimes a kind of ''read-copy-update'' needs to be performed, i.e. self-replacement by a copy followed by manipulation of the new copy, while ongoing processes use the unaltered original object until they receive some sort of reset.
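As an illustration of this read-copy-update idea, here is a hedged sketch (class and member names are made up): readers pin a snapshot, while a mutation swaps in a modified copy.
{{{
// sketch of »read-copy-update« style self-replacement (illustrative names)
#include <memory>
#include <string>

struct ClipData {
    std::string name;
    long startFrame = 0;
};

class ClipHolder {
    std::shared_ptr<const ClipData> current_ = std::make_shared<ClipData>();
public:
    std::shared_ptr<const ClipData> snapshot() const {   // readers (e.g. rendering) pin a snapshot
        return std::atomic_load(&current_);
    }
    void rename(std::string newName) {                   // writer: copy, modify, then swap in
        auto copy = std::make_shared<ClipData>(*std::atomic_load(&current_));
        copy->name = std::move(newName);
        std::atomic_store(&current_, std::shared_ptr<const ClipData>(std::move(copy)));
    }
};
}}}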

!!undo
Basically, each elementary operation has to record the information necessary to be undone. It does so by registering a Memento with some central UndoManager facility. This Memento object contains a functor pre-bound with the needed parameter values. (Besides, the UndoManager is free to implement a second level of security by taking independent state snapshots). 
{{red{to be defined in more detail later...}}}
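As a first approximation, the Memento approach might be sketched like this (the UndoManager shown here is an assumption, reduced to the bare idea): the undo functor is pre-bound with exactly the values needed to revert the operation.
{{{
// minimal sketch of the Memento idea (UndoManager and names are assumptions)
#include <functional>
#include <vector>

class UndoManager {
    std::vector<std::function<void()>> mementos_;    // each memento is a pre-bound undo functor
public:
    void capture(std::function<void()> memento) { mementos_.push_back(std::move(memento)); }
    void undoLast() {
        if (mementos_.empty()) return;
        mementos_.back()();                          // invoke the pre-bound revert operation
        mementos_.pop_back();
    }
};

// usage: an elementary operation records how to revert itself
struct Clip { long start = 0; };

void moveClip(Clip& clip, long newStart, UndoManager& undo) {
    long oldStart = clip.start;                      // capture the parameter value needed for undo
    undo.capture([&clip, oldStart]{ clip.start = oldStart; });
    clip.start = newStart;
}
}}}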
We have to deal with effects on various different levels. One thing is to decide if an effect is applicable, another question is to find out about variable and automatable parameters, find a module which can be used as an effect GUI and finally to access a processing function which can be executed on a buffer filled with suitable data.

The effect asset (which is a [[processing asset|ProcAsset]]) provides a unified interface for loading external processing modules and querying their properties. As usual with assets, this is the "bookkeeping view" to start with, but we can create a //processor// from such an asset, i.e. an instance of the processing facility which can be used and wired into the model. Such a processor is an MObject and can be placed into the session (high level model); moreover it holds a concrete wiring for the various control parameters and it has an active link to the effect GUI module. Like every ~MObject, it could be placed multiple times to affect several pipes, but all those different Placements would behave as being linked together (with respect to the control parameter sources and the GUI).

When building the low-level model, the actual processing code is resolved and a processing function is installed into the processing node representing the effect. This step includes checking the actual [[media type information|StreamType]], which may result in selecting a specifically tailored implementation function. The parameter wiring on the other hand is //shared// between the effect ~MObject, the corresponding GUI module and all low-level processing nodes. Actually, parameters are more of a reference than a value: they provide a {{{getValue()}}} function, which also works with automation. This way, any parameter or automation changes are reflected immediately in the produced output.

Initially, only the parameter descriptors are present on the effect ~MObject, while the actual [[parameter providers|ParamProvider]] are created or wired (by the ConManager) on demand.
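A hedged sketch of that parameter wiring (the names and the time argument are assumptions): the node only sees a value source; whether the value is constant or automated remains an encapsulated detail.
{{{
// illustrative sketch: a parameter is a reference to a value source; automation is just another source
#include <functional>
#include <utility>

using Time = long;                                   // stand-in for the internal time type

template<typename VAL>
class Param {
    std::function<VAL(Time)> source_;                // constant, keyframe curve, ... -- encapsulated
public:
    explicit Param(std::function<VAL(Time)> source) : source_(std::move(source)) {}
    VAL getValue(Time t) const { return source_(t); }
};

// a fixed value and an automated (time varying) value look identical to the processing node:
Param<double> gainFixed([](Time)  { return 0.8; });
Param<double> gainFade ([](Time t){ return t < 100 ? t / 100.0 : 1.0; });
}}}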
The primary interface used by the upper application layers to interact with the render engine, to create and manage ongoing [[calculation streams|CalcStream]].

Below this facade, there is a thin adaptation and forwarding layer, mainly talking to
* the individual [[Calculation Streams|CalcStream]] created for each PlayProcess.
* the FrameDispatcher, which translates such streams into a series of RenderJob entries
* the [[Scheduler]], which is responsible for performing these jobs in a timely fashion

!Quality of Service
Within the Facade, there is the definition of the {{{EngineService::Quality}}} tag, alongside several pre-defined quality settings.
Actually this interface is a strategy, allowing quite specific quality levels to be defined, in case we need that. Clients can usually just use
these ~QoS-tags like enum values (they are copyable), without caring about engine implementation details.
A general identification scheme, combining a human-readable symbolic name, unique within a //specifically typed context,// and a machine-readable hash ID (LUID). ~Entry-IDs allow for asset-like position accounting and for type-safe binding between configuration rules and model objects. They allow creating an entry with a symbolic ID and distinct type, combined with a derived hash value, without the overhead in storage and instance management imposed by using a full-fledged Asset.
                                                                                      
Similar to an Asset, an identification tuple is available (generated on the fly), as is a unique LUID and total ordering. The type information is attached as template parameter, but included into the hash calculation. All instantiations of the EntryID template share a common baseclass, usable for type erased common registration.
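The following is a simplified illustration of that structure (not the actual implementation): the symbolic name and the type tag both enter the hash, and the common baseclass permits type-erased handling.
{{{
// simplified sketch of the EntryID idea (illustrative, not the real implementation)
#include <string>
#include <functional>
#include <typeinfo>
#include <utility>
#include <cstddef>

class BareEntryID {                                   // type-erased common baseclass
protected:
    std::string symbol_;
    std::size_t hash_;
    BareEntryID(std::string sym, std::size_t hash) : symbol_(std::move(sym)), hash_(hash) {}
public:
    const std::string& getSym() const { return symbol_; }
    std::size_t getHash() const { return hash_; }
    bool operator<(const BareEntryID& o) const { return hash_ < o.hash_; }   // total ordering
};

template<class TY>
struct EntryID : BareEntryID {
    explicit EntryID(const std::string& sym)
      : BareEntryID(sym,
                    std::hash<std::string>()(sym) ^ typeid(TY).hash_code())  // type enters the hash
    { }
};

struct Pipe { };                                      // example of a typed context
EntryID<Pipe> videoMaster("video-master");
}}}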

~Entry-IDs as such do not provide any automatic registration or ensure uniqueness of any kind, but they can be used to that end. Especially, they're used within the TypedID registration framework for addressing individual entries {{red{planned as of 12/2010}}}
&rarr; TypedLookup
&rarr; TypeHandler
&rarr; MetaAsset
The &raquo;Timeline&laquo; is a sequence of ~MObjects -- here clips -- together with an ExplicitPlacement, locating each clip at a given time and track. (Note: I simplified the time format and wrote frame numbers to make it clearer)
[img[Example1: Objects in the Session/Fixture|uml/fig128773.png]]

----
After being processed by the Builder, we get the following Render Engine configuration
{{red{note: please take this only as a "big picture", the implementation details got a lot more complicated as of 6/08}}}
[img[Example1: generated Render Engine|uml/fig129029.png]]
{{red{TODO: seemingly this example is slightly outdated, as the implementation of placements is now indirect via LocatingPin objects}}}
This example shows the //high level// Sequence as well. This needs to be transformed into a Fixture by some facility still to be designed. Basically, each [[Placement]] needs to be queried for this to get the corresponding ExplicitPlacement. The difficult part is to handle possible Placement constraints, e.g. one clip can't be placed at a timespan covered by another clip on the same track. In the current Cinelerra2, all of this is done directly by the GUI actions.

The &raquo;Timeline&laquo; is a sequence of ~MObjects -- note: using the same object instances -- but now with the calculated ExplicitPlacement, locating the clip at a given time and track. The effect is located absolutely in time as well, but because it is the same instance, it has the pointer to the ~RelativePlacement, which basically attaches the effect to the clip. This structure may look complicated, but is easy to process if we go "backward" and just rely on the information contained in the ExplicitPlacement.
[img[Example2: Clip with Effect and generated Fixture for this Sequence|uml/fig128901.png]]

----
After being processed by the Builder, we get a Render Engine configuration.<br>
It has to be segmented at least at every point with changes in the configuration, but some variations are possible, e.g. we could create a Render Engine for every Frame (as Cinelerra2 does) or we could optimize out some configurations (for example the effect extended beyond the end of the clip)
{{red{note: as of 6/08 this can be taken only as the "big picture". Implementation will differ in details, and is more complicated than shown here}}}
[img[Example2: generated Render Engine|uml/fig129157.png]]
!MObject assembly
To make the intended use of the classes more clear, consider the following two example Object graphs:
* a video clip and an audio clip placed (explicitly) on two tracks &rarr;[[Example1]]
* a video clip placed relatively, with an attached HUE effect &rarr;[[Example2]]
a special ProcNode which is used to pull the finished output of one Render Pipeline (Tree or Graph). This term is already used in the Cinelerra2 codebase. I am unsure at the moment if it is a distinct subclass or rather a specially configured ProcNode (a general design rule tells us to err in favour of the latter if in doubt).

The render nodes network is always built separately for each [[timeline segment|Segmentation]], which is //constant in wiring configuration.// Thus, while exit node(s) are per segment, the corresponding exit nodes of consecutive segments together belong to a ModelPort, which in turn corresponds to a global pipe (master bus not connected any further). These relations guide the possible configuration for an exit node: it may still provide multiple channels -- but all those channels are bound to belong to a single logical stream -- same StreamPrototype, always handled as a bundle, connected and routed in one step. For example, when there is a 5.1 Audio master bus with a single fader, then "5.1 Audio" would be a prototype and these 6 channels will always be handled together; in such a case it makes perfect sense to access these 6 audio channels through a single exit node, which is keyed (identified) by the same PipeID as used at the corresponding ModelPort and the corresponding [[global pipe|GlobalPipe]] ("5.1 Audio master bus").
A special kind (subclass) of [[Placement]]. As such it is always linked to a //Subject//, i.e. an MObject. But contrary to the (standard) placements, which may exhibit all kinds of fancy dynamic and scope dependent behaviour, within an explicit placement all properties are resolved and materialised. While the (standard) placement may contain an arbitrary list of LocatingPin objects, the resolution into an explicit placement performs a kind of »orthogonalisation«: each remaining LocatingPin defines exactly one degree of freedom, independent of all others. Most notably, the explicit placement always specifies an absolute time and [[output designation|OutputDesignation]] for locating the Subject. Explicit placements are ''immutable''.

!!Implementation considerations
Explicit placements are just created and never mutated, but copying and storage might become a problem.
It would thus be desirable to have a fixed-sized allocation, able to hold the placement body as well as the (fixed) locating pins inline.
The use of factories separates object creation, configuration and lifecycle from the actual usage context. Hidden behind a factory function
* memory management can be delegated to a centralised facility
* complex internal details of service implementation can be turned into locally encapsulated problems
* the actual implementation can be changed, e.g. by substituting a test mock

!common usage pattern
Within Lumiera, factories are frequently placed as a static field right into the //service interface.// Common variable names for such an embedded factory are {{{instance}}} or {{{create}}}.
The actual fabrication function is defined as function operator -- this way, the use of the factory reads like a simple function call. At usage site, only the reference to the //interface// or //kind of service// is announced.
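A small sketch of this usage pattern (implementation details are assumptions): the factory is embedded as a static field in the service interface, and its function operator makes usage read like a plain call.
{{{
// sketch of the embedded factory convention (details are assumptions)
#include <memory>

class SessionService {                                // the service *interface*
public:
    virtual ~SessionService() = default;
    virtual void doSomething() = 0;

    class Factory {                                   // embedded factory
        std::unique_ptr<SessionService> singleton_;
    public:
        SessionService& operator()();                 // fabrication function: usage reads like a call
    };
    static Factory instance;                          // by convention named »instance« or »create«
};

class SessionServiceImpl : public SessionService {    // hidden implementation, exchangeable for a mock
    void doSomething() override { /* ... */ }
};

SessionService::Factory SessionService::instance;

SessionService& SessionService::Factory::operator()() {
    if (!singleton_)
        singleton_.reset(new SessionServiceImpl);     // created lazily on first access
    return *singleton_;
}

// at the usage site, only the interface is mentioned:
//     SessionService::instance().doSomething();
}}}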

!flavours
* A special kind of factory used at various places is the ''singleton'' factory for access to application wide services and facade interfaces. All singleton factories for the same target type share a single instance of the target, which is created lazily on first usage. Actually, this is our very special version of [[dependency injection|DependencyFactory]]
* whenever there is a set of objects of some kind, which require registration and connection to a central entity, client code gets (smart) handles, which are emitted by a factory, which connects to the respective manager for registration automatically.
* to bridge the complexities of using an external (plug-in based) interface, proxy objects are created by a factory, wired to invoke the external calls to forward any client invoked operation.
* the ''configurable factory'' is used to define a family of production lines, which are addressed by the client by virtue of some type-ID. Runtime data and criteria are used to form this type-ID and thus pick the suitable factory function
A grouping device within an ongoing [[playback or render process|PlayProcess]].
Any feed corresponds to a specific ModelPort, which in turn typically corresponds to a given GlobalPipe.
When starting playback or render, a play process (with a PlayController front-end for client code) is established to coordinate the processing. This ongoing data production might encompass multiple media streams, i.e. multiple feeds pulled from several model ports and delivered into several [[output slots|OutputSlot]]. Each feed in turn might carry structured MultichannelMedia, and is thus further structured into individual [[streams of calculation|CalcStream]]. Since the latter are //stateless descriptors,// while the player and play process obviously is stateful, it's the feed's role to mediate between a state-based (procedural) and a stateless (functional and parallelised) organisation model -- ensuring a seamless data feed even during modification of the playback parameters.
a specially configured view -- joining together high-level and low-level model.
The Fixture acts as //isolation layer// between the two models, and as //backbone to attach the render nodes.//
* all MObjects have their position, length and configuration set up ready for rendering.
* any nested sequences (or other kinds of indirections) have been resolved.
* every MObject is attached by an ExplicitPlacement, which declares a fixed position (Time, [[Pipe|OutputDesignation]])
* these ~ExplicitPlacements are contained immediately within the Fixture, ordered by time
* besides, there is a collection of all effective, possibly externally visible [[model ports|ModelPortRegistry]]

As the builder and thus the render engine //only consult the fixture,// while all editing operations finally propagate to the fixture as well, we get an isolation layer between the high level part of the Proc layer (editing, object manipulation) and the render engine. [[Creating the Fixture|BuildFixture]] is an important first step and side effect of running the [[Builder]] when creating the [[render engine network|LowLevelModel]].
''Note'': all of the especially managed storage of the LowLevelModel is hooked up behind the Fixture
&rarr; FixtureStorage
&rarr; FixtureDatastructure

!{{red{WIP}}} Structure of the fixture
[<img[Structure of the Fixture|draw/Fixture1.png]]

The fixture is like a grid, where one dimension is given by the [[model ports|ModelPortRegistry]], and the other dimension extends in time. Within the time dimension there is a grouping into [[segments|Segmentation]] of constant structure.

;Model Ports
:The model ports share a single uniform and global name space: actually they're keyed by ~Pipe-ID
:Model ports are derived as a result of the build process, as the //residuum// of all nodes not connected any further
:Each port belongs to a specific Timeline and is associated with the [[Segmentation]] of that timeline.

;Segmentation
:The segmentation partitions the time axis of a single timeline into segments of constant (wiring) configuration
:Together, the segments form a seamless sequence of time intervals. They contain a copy of each (explicit) placement of a visible object touching that time interval. Besides that, segments are the top level grouping device of the render engine node graph; individual segments are immutable, they are built and discarded as a whole chunk.
:Segments (and even a different Segmentation) may be //hot swapped// into an ongoing render.

;Exit Nodes
:Each segment holds an ExitNode for each relevant ModelPort of the corresponding timeline.
:Thus the exit nodes are keyed by ~Pipe-ID as well (and consequently have a distinct [[stream type|StreamType]]) -- each model port corresponds to {{{<number_of_segments>}}} separate exit nodes, but of course an exit node may be //mute//&nbsp; in some segments.
Generally speaking, the datastructure to implement the ''Fixture'' (&rarr; see a more general description [[here|Fixture]]) is comprised of a ModelPortRegistry and a set of [[segmentations|Segmentation]], one per Timeline.
This page focusses on the actual data structure and usage details on that level. See also &rarr; [[storage|FixtureStorage]] considerations.

!transactional switch
A key point to note is the fact that the fixture is frequently [[re-built|BuildFixture]] by the [[Builder]], while render processes may be going on in parallel. Thus, when a build process is finished, a transactional ''commit'' happens to ''hot swap'' the new parts of the model. This is complemented by a clean-up of tainted render processes; finally, storage can be reclaimed.

To support this usage pattern, the Fixture implementation makes use of the [[PImpl pattern|http://c2.com/cgi/wiki?PimplIdiom]]
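Such a transactional switch might look roughly as follows (a hedged sketch, names are illustrative): the builder prepares a complete new implementation object, and the commit is a single pointer swap, while ongoing render processes keep using their pinned snapshot until it can be reclaimed.
{{{
// sketch of the transactional »hot swap« via PImpl (illustrative names)
#include <memory>

struct FixtureImpl {                                  // monolithic implementation holder
    // ...model ports, segmentations, dispatcher tables...
};

class Fixture {
    std::shared_ptr<const FixtureImpl> pImpl_ = std::make_shared<FixtureImpl>();
public:
    std::shared_ptr<const FixtureImpl> access() const {       // render processes pin the current impl
        return std::atomic_load(&pImpl_);
    }
    void commit(std::shared_ptr<const FixtureImpl> rebuilt) { // builder commits a complete replacement
        std::atomic_store(&pImpl_, std::move(rebuilt));
        // the superseded impl is reclaimed once the last pinned reference is dropped
    }
};
}}}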

!Collecting usage scenarios {{red{WIP 12/10}}}
* ModelPort access
** get the model port for a given ~Pipe-ID
** enumerate the model ports for a Timeline
* rendering frame dispatch
** get or create the frame dispatcher table
** dispatch a single frame to yield the corresponding ExitNode
* (re)building
** create a new implementation transaction
** create a new segmentation
** establish what segments actually need to be rebuilt
** dispatch a newly built segment into the transaction
** schedule superseded segments and tainted process for clean-up
** commit a transaction
 

!Conclusions about the structure {{red{WIP 12/10}}}
* the ~PImpl needs to be a single (language) pointer. This necessitates having a monolithic Fixture implementation holder
* moreover, this necessitates a tight integration down to implementation level, both with the clean-up and the render processes themselves
The Fixture &rarr; [[data structure|FixtureDatastructure]] acts as umbrella to hook up the elements of the render engine's processing nodes network (LowLevelModel).
Each segment within the [[Segmentation]] of any timeline serves as ''extent'' or unit of memory management: it is built up completely during the corresponding build process and becomes immutable thereafter, finally to be discarded as a whole when superseded by a modified version of that segment (new build process) -- but only after all related render processes (&rarr; CalcStream) are known to be terminated.

Each segment owns an AllocationCluster, which in turn manages all the numerous small-sized objects comprising the render network implementing this segment -- thus the central question is when to //release the segment.//
* for one, we easily detect the point when a segment is swapped out of the segmentation; at this point we also have to detect the //tainted calculation streams.//
* but those render processes (calc streams) terminate asynchronously, and that forces us to do some kind of registration and deregistration.

!!question: is ref-counting acceptable here?
//Not sure yet.// Of course it would be the simplest approach. KISS.
Basically the concern is that each new CalcStream would have to access the shared counts of all segments it touches.

''Note'': {{{shared_ptr}}} is known to be implemented by a lock-free algorithm (yes it is, at least since boost 1.33. Don't believe what numerous FUD spreaders have written). Thus lock contention isn't a problem, but at least a memory barrier is involved (and if I&nbsp;judge GCC's internal documentation right, currently their barriers extend to //all// globally visible variables)

!!question: alternatives?
There are. As the builder is known to be run again and again, no one forces us to deallocate as soon as we could. That's the classical argument exploited by any garbage collector too. Thus we could just note the fact that a calculation stream is done and re-evaluate all those noted results on later occasion. Obviously, the [[Scheduler]] is in the best position for notifying the rest of the system when this and that [[job|RenderJob]] has terminated, because the Scheduler is the only facility required to touch each job reliably. Thus it seems favourable to add basic support for either termination callbacks or for guaranteed execution of some notification jobs to the [[Scheduler's requirements|SchedulerRequirements]].

!!exploiting the frame-dispatch step
Irrespective of the decision in favour of or against ref-counting, it seems reasonable to make use of the //frame dispatch step,// which is necessary anyway. The idea is to give each render process (maybe even each CalcStream) a //copy//&nbsp; of a dispatcher table object -- basically just a list of time breaking points and a pointer to the relevant exit node. If we keep track of those dispatcher tables, add some kind of back-link to identify the process and require the process in turn to deregister, we get a tracking of tainted processes for free.
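Such a dispatcher table might look roughly like this (structure as described above, all names assumed):
{{{
// illustrative sketch of a per-process dispatcher table
#include <vector>
#include <algorithm>

using Time = long;
struct ExitNode { /* entry point into one segment's render network */ };

struct DispatcherTable {
    struct Entry { Time start; ExitNode* node; };     // segment start time -> exit node
    std::vector<Entry> entries;                       // sorted ascending by start time

    ExitNode* exitNodeFor(Time t) const {             // per-frame lookup: binary search
        auto pos = std::upper_bound(entries.begin(), entries.end(), t,
                                    [](Time t, const Entry& e){ return t < e.start; });
        return pos == entries.begin() ? nullptr : (pos - 1)->node;
    }
};
}}}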

!!assessment {{red{WIP 12/10}}}
But the primary question here is to judge the impact of such an implementation. What would be the costs?
# creating individual dispatcher tables trades memory for simplified parallelism
# the per-frame lookup is efficient and negligible compared with just building the render context (StateProxy) for that frame
# when a process terminates, we need to take out that dispatcher and do deregistration stuff for each touched segment (?)

!!!Estimations
* number of actually concurrent render processes is at or below 30
* depending on the degree of cleverness on behalf of the scheduler, the throughput of processes might be multiplied (dull means few processes)
* the total number of segments within the Fixture could range into several thousand
* but playback especially is focussed on a limited region, which makes a count of rather several hundred tainted segments more likely

&rArr; we should try quickly to dispose of the working storage after render process termination and just retain a small notification record
&rArr; so the frame dispatcher table should be allocated //within//&nbsp; the process' working storage; moreover it should be tiled
&rArr; we should spend a second thought about how actually to find and process the segments to be discarded

!!!identifying tainted and disposable segments
The above estimation hints at the necessity of frequently finding some 30 to 100 segments to be disposed of, out of thousands, assuming the original reason for triggering the build process was a typically local change in the high-level model. We can only discard when all related processes are finished, but there is a larger number of segments as candidates for eviction. These candidates are rather easy to pinpoint -- they will be uncovered during a linear comparison pass prior to committing the changed Fixture. Usually, the number of candidates will be low (localised changes), but global manipulations might invalidate thousands of segments.
* if we frequently pick the segments actually to be disposed, there is the danger of performance degeneration when the number of segments is high
* the other question is if we can afford just to keep all of those candidates around, as all of them are bound to get discardable eventually
* and of course there is also the question how to detect //when// they're due.

;Model A
:use a logarithmic datastructure, e.g. a priority queue. Possibly together with LRU ordering
:problem here is that the priorities change, which either means shared access or a lot of "superseded" entries
;Model B
:keep all superseded segments around and track the tainted processes instead
:problem here is how to get the tainted processes precisely and with low overhead
//currently {{red{12/10}}} I tend to prefer Model B...// while the priority queue remains to be investigated in more detail for organising the actual build process.
But actually I'm stuck here, because of the still limited knowledge about those render processes....
* how do we //join// an aborted/changed rendering process to its successor, without creating a jerk in the output?
* is it even possible to continue a process when parts of the covered time-range are affected by a build?
If the latter question is answered with "No!", then the problem becomes simple to solve, but maybe memory consuming: in that case, //all//&nbsp; processes linked to a timeline get affected and thus tainted; we'd just dump them onto a pile and delay releasing all of the superseded segments until all of them are known to be terminated.

!!re-visited {{red{WIP 1/13}}}
the last conclusions drawn above were confirmed by the further development of the overall design. Yes, we do //supersede// frequently and liberally. This isn't much of a problem, since the preparation of new jobs, i.e. the [[frame dispatch step|FrameDispatcher]], is performed chunk wise. A //continuation job// is added at the end of each chunk, and this continuation will pick up the task of job planning in time.

At the 1/2013 developer's meeting, Cehteh and myself had a longer conversation regarding the topic of notifications and superseding of jobs within the scheduler. The conclusion was to give ''each play process a separate LUID'' and treat this as ''job group''. The scheduler interface will offer a call to supersede all jobs within a given group.

Some questions remain though
* what are those nebulous dispatcher tables?
* do they exist per CalcStream ?
* is it possible to file this dedicated dispatch information gradually?
* how to store and pass on the control information for NonLinearPlayback?

since the intention is to have dedicated dispatch tables, these would implement the {{{engine.Dispatcher}}} interface and incorporate some kind of strategy corresponding to the mode of playback. The chunk wise continuation of the job planning process would rather have to be reformulated in terms of //real wall clock time// -- since the relation of the playback process to nominal time can't be assumed to be a simple linear progression in all cases.

The situation focussed by this concept is when an API needs to expose a sequence of results, values or objects, instead of just yielding a function result value. As the naive solution of passing a pointer or array creates coupling to internals, it was superseded by the ~GoF [[Iterator pattern|http://en.wikipedia.org/wiki/Iterator]]. Iteration can be implemented by convention, polymorphically or by generic programming; we use the latter approach.

!Lumiera Forward Iterator concept
''Definition'': An Iterator is a self-contained token value, representing the promise to pull a sequence of data
* rather than deriving from a specific interface, anything behaving appropriately //is a Lumiera Forward Iterator.//
* the client finds a typedef at a suitable, nearby location. Objects of this type can be created, copied and compared.
* any Lumiera forward iterator can be in //exhausted// &nbsp;(invalid) state, which can be checked by {{{bool}}} conversion.
* especially, default constructed iterators are fixed to that state. Non-exhausted iterators may only be obtained by API call.
* the exhausted state is final and can't be reset, meaning that any iterator is a disposable one-way-off object.
* when an iterator is //not//&nbsp; in the exhausted state, it may be //dereferenced// ({{{*i}}}), yielding the "current" value
* moreover, iterators may be incremented ({{{++i}}}) until exhaustion.

!!Discussion
The Lumiera Forward Iterator concept is a blend of the STL iterators and iterator concepts found in Java, C#, Python and Ruby. The chosen syntax should look familiar to C++ programmers and indeed is compatible with STL containers and ranges. On the contrary, while an STL iterator can be thought of as being just a disguised pointer, the semantics of Lumiera Forward Iterators is deliberately reduced to a single, one-way-off forward iteration; they can't be reset, manipulated by any arithmetic, and the result of assigning to a dereferenced iterator is unspecified, as is the meaning of post-increment and stored copies in general. You //should not think of an iterator as denoting a position// &mdash; just a one-way off promise to yield data.

Another notable difference to the STL iterators is the default ctor and the {{{bool}}} conversion. The latter allows using iterators painlessly within {{{for}}} and {{{while}}} loops; a default constructed iterator is equivalent to the STL container's {{{end()}}} value &mdash; indeed any //container-like// object exposing Lumiera Forward Iteration is encouraged to provide such an {{{end()}}}-function, additionally enabling iteration by {{{std::for_each}}} (or Lumiera's even more convenient {{{util::for_each()}}}).
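A minimal, self-contained example of the concept (deliberately simplified, not using the real Lumiera helpers): the {{{bool}}} conversion drives the loop, dereferencing yields the current value, and a default constructed iterator marks the exhausted state.
{{{
// minimal example of an iterator satisfying the Lumiera Forward Iterator concept (simplified)
#include <iostream>

class CountDown {                        // "container-like" facility exposing forward iteration
    int top_;
public:
    explicit CountDown(int top) : top_(top) {}

    class iterator {                     // the nearby typedef / iterator type
        int val_ = 0;                    // 0 == exhausted state
    public:
        iterator() = default;            // default constructed == exhausted
        explicit iterator(int v) : val_(v) {}
        explicit operator bool() const { return val_ > 0; }        // check for exhaustion
        int operator*() const { return val_; }                     // yield the "current" value
        iterator& operator++() { if (val_ > 0) --val_; return *this; }
        friend bool operator==(iterator a, iterator b) { return a.val_ == b.val_; }
        friend bool operator!=(iterator a, iterator b) { return !(a == b); }
    };
    iterator begin() const { return iterator(top_); }
    iterator end()   const { return iterator(); }    // equivalent to a default constructed iterator
};

int main() {
    for (CountDown::iterator i = CountDown(3).begin(); i; ++i)     // bool conversion drives the loop
        std::cout << *i << '\n';                                    // prints 3 2 1
}
}}}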

!!Implementation notes
''iter-adapter.hpp'' provides some helper templates for building Lumiera Forward Iterators.
* __~IterAdapter__ is the most flexible variant, intended for use by custom facilities. An ~IterAdapter maintains an internal back-link to a facility exposing an iteration control API, which is accessed through free functions as extension point. This iteration control API is similar to the one found in C#, allowing the client to advance to the next result and to check the current iteration state.
* __~RangeIter__ wraps two existing iterators &mdash; usually obtained from {{{begin()}}} and {{{end()}}} of an STL container embedded within the implementation. This allows for iterator chaining.
* __~PtrDerefIter__ works similar, but can be used on an STL container holding //pointers,// to be dereferenced automatically on access

Similar to the STL habits, Lumiera Forward Iterators should expose typedefs for {{{pointer}}}, {{{reference}}} and {{{value_type}}}.
Additionally, they may be used for resource management purposes by embedding a ref-counting facility, e.g. allowing a snapshot or result set to be kept around until it can't be accessed anymore.
This term has //two meanings,// so care has to be taken not to confuse them.
# in general use, a Frame means one full image of a video clip, i.e. an array of rows of pixels. For interlaced footage, one Frame contains two half-images, commonly called Fields. (Cinelerra2 confuses these terms)
# here in this design, we use Frame as an abstraction for a buffer of raw media data to be processed. If in doubt, we should label this "DataFrame".
#* one video Dataframe contains a single video frame
#* one audio Dataframe contains a block of raw audio samples
#* one OpenGL Dataframe could contain raw texture data (but I am lacking expertise for this topic)
An entity within the RenderEngine, responsible for translating a logical [[calculation stream|CalcStream]] (corresponding to a PlayProcess) into a sequence of individual RenderJob entries, which can then be handed over to the [[Scheduler]]. Performing this operation involves a special application of [[time quantisation|TimeQuant]]: after establishing a suitable starting point, a typically contiguous series of frame numbers need to be generated, together with the time coordinates for each of those frames.

The dispatcher works together with the job ticket(s) and the scheduler; actually these are the //core abstractions//&nbsp; the process of ''job planning'' relies on. While the actual scheduler implementation lives within the backend, the job tickets and the dispatcher are located within the [[Segmentation]], which is the backbone of the [[low-level model|LowLevelModel]]. More specifically, the dispatcher interface is //implemented//&nbsp; by a set of &rarr; [[dispatcher tables|DispatcherTables]] within the segmentation.

!defining the dispatcher interface
The purpose of this interface is to support the planning of new jobs for a given CalcStream. From time to time, a chunk of new RenderJob entries will be prepared for the [[Scheduler]]. Each job knows its frame number and the actual ProcNode to operate. So, to start planning jobs, we need to translate time &rarr; frame number &rarr; segment &rarr; real exit node.

!!!Invocation situation
* our //current starting point//&nbsp; is given as ''time anchor'' closure
* we want to address a specific frame, offset by a given number of frames relative to the current position
* &rArr; the dispatcher returns the //fundamental coordinates// of that frame (''frame coordinates'', see the sketch below this list), comprised of
** the absolute nominal time
** the absolute frame number
** a concrete ExitNode to pull for this frame
** focussed context information:
*** the relevant [[segment|Segmentation]] responsible for producing this frame
*** the corresponding JobTicket to use at this point
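The following sketch illustrates these frame coordinates and the shape of the dispatch call (all names are illustrative assumptions, not the actual interface):
{{{
// hedged sketch of the frame coordinates and the dispatcher interface (illustrative names)
#include <cstdint>

using Time    = std::int64_t;            // nominal (internal) time
using FrameNr = std::int64_t;

struct ExitNode  { /* concrete node to pull for this frame */ };
struct JobTicket { /* planning context, per segment and feed */ };

struct FrameCoord {
    Time       nominalTime;              // absolute nominal time of the frame
    FrameNr    frameNr;                  // absolute frame number
    ExitNode*  exitNode;                 // concrete exit node to pull
    JobTicket* ticket;                   // job ticket within the relevant segment
};

class Dispatcher {                       // interface implemented by the dispatcher tables
public:
    virtual ~Dispatcher() = default;
    virtual FrameCoord locateRelative(const FrameCoord& anchor, std::int64_t frameOffset) = 0;
};
}}}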

!!!the ~TimeAnchor
In fact, the whole process of playback or rendering is a continued series of exploration and evaluation. The outline of what needs to be calculated is determined continuously, proceeding in chunks of evaluation. The evaluation structure of the render engine is quite similar to the //fork-join//-pattern, just with the addition of timing constraints. This leads to an invocation pattern, where a partial evaluation happens from time to time. Each of those evaluations establishes a breaking point in time: everything //before// this point is settled and planned thus far. So, this point is an ''anchor'' or closure to root the next partial evaluation. More specifically, this ~TimeAnchor closure is the definitive binding between the abstract logical time of the session timeline, and the real wall-clock time forming the deadline for render evaluation.

!!!related timelines and the frame grid
The frame dispatch step joins and combines multiple time axes. Through the process of //scheduling,// the output generation is linked to real ''wall clock time'' and the dispatch step establishes the deadlines, taking the ''engine latency'' into account. As such, any render or playback process establishes an ''output time grid'', linking frame numbers to nominal output time or timecode, based on the ''output frame rate'' -- and both the framerate and the actual progression (speed) might be changed dynamically. But beyond all of this there is a third, basically independent temporal structure involved: the actual content to render, the ''effective session timeline''. While encoded in nominal, absolute, internal time values not necessarily grid aligned, in practice at least the //breaking points,// the temporal location where the content or structure of the pipeline changes, are aligned //to a grid used while creating the edit.// Yet this session timing structure need not be related in any way to the playback grid, nor is it necessarily the same as the ''source grid'' defined by the media data used to feed the pipeline.

These complex relationships are reflected in the invocation structure leading to an individual frame job. The [[calculation stream|CalcStream]] provides the [[render/playback timings|Timings]], while the actual implementation of the dispatcher, backed by the [[Fixture]] and thus linked to the session models, gets to relate the effective nominal time, the frame number, the exit node and the //processing function.//

!!!controlling the planning process
New render jobs are planned as an ongoing process, proceeding in chunks of evaluation. Typically, to calculate a single frame, several jobs are necessary -- to find out which and how, we'll have to investigate the model structures corresponding to this frame, resulting in a tree of prerequisites. Basically, the planning for each frame is seeded by establishing the nominal time position, in accordance to the current [[mode of playback|NonLinearPlayback]]. Conducted by the [[play controller|PlayController]], there is a strategy to define the precise way of spacing and sequence of frames to be calculated -- yet for the actual process of evaluating the prerequisites and planning the jobs, those details are irrelevant and hidden behind the dispatcher interface, as is most of the model and context information. The planning operation just produces a sequence of job definitions, which can then be associated with real time (wall clock) deadlines for delivery. The relation between the spacing and progression of the nominal frame time (as controlled by the playback mode) and the actual sequence of deadlines (which is more or less dictated by the output device) is rather loose and established anew for each planning chunk, relying on the ''time anchor''. The latter in turn uses the [[timings record|Timings]] of the [[calculation stream|CalcStream]] currently being planned, and these timings act as a strategy to represent the underlying timing grid and playback modalities.

While the sequence of frame jobs to be planned is possibly infinite, the actual evaluation is confined to the current planning chunk. When done with planning such a chunk of jobs, an additional ''continuation job'' is included to prepare a re-invocation of the planning function for preparation of the next chunk. Terminating playback is equivalent to not including or not invoking this continuation job. Please note that planning proceeds independently for each [[Feed]] -- in Lumiera the //current playback position//&nbsp; is just a conceptual projection of wall clock time to nominal time; there is no such thing as a synchronously proceeding "Playhead".

!!!producing actual jobs
The JobTicket is created on demand, specialised for a single [[segment|Segmentation]] of the timeline and a single [[feed|Feed]] of data frames to be pulled from a ModelPort. Consequently this means that all frames and all channels within that realm will rely on the same job ticket -- which is a //higher order function,// a function producing another function: when provided with the actual channel number and the specific frame coordinates, the job ticket produces a [[concrete job definition|RenderJob]], which itself is a function to be invoked by the [[scheduler|Scheduler]] just in time.
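A sketch of this //higher order//&nbsp; nature (names are assumptions): given the channel number and the frame coordinates, the ticket yields the concrete job closure, which the scheduler will invoke just in time.
{{{
// illustrative sketch: the job ticket is a function producing another function
#include <functional>
#include <cstdint>

using Time = std::int64_t;
struct FrameCoord { Time nominalTime; std::int64_t frameNr; };

using RenderJob = std::function<void()>;             // what the scheduler invokes just in time

class JobTicket {                                     // one per segment and feed
public:
    RenderJob createJobFor(unsigned channel, FrameCoord frame) const {
        return [channel, frame] {
            // ...pull the exit node for (channel, frame) and fill the output buffer...
            (void)channel; (void)frame;
        };
    }
};
}}}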
The ''Gmerlin Audio Video Library''. &rarr; see [[homepage|http://gmerlin.sourceforge.net/gavl.html]]
Used within Lumiera as a foundation for working with raw video and audio media data
__G__roup __of__ __P__ictures: several compressed video formats don't encode single frames. Normally, such formats are considered mere //delivery formats,// but it was one of the key strengths of Cinelerra from the start to be able to do real non-linear editing on such formats (like the ~MPEG2-ts used in HDV video). The problem of course is that the data backend needs to decode the whole GOP to serve single raw video frames.

For this Lumiera design, we could consider making GOP just another raw media data frame type and integrate this decoding into the render pipeline, similar to an effect based on several source frames for every calculated output frame.

&rarr;see in [[Wikipedia|http://en.wikipedia.org/wiki/Group_of_pictures]]
Each [[Timeline]] has an associated set of global [[pipes|Pipe]] (global busses), similar to the subgroups of a sound mixing desk.
In the typical standard configuration, there is (at least) a video master and a sound master pipe. Like any pipe, ingoing connections attach to the input side, attached effects form a chain, where the last node acts as exit node. The PipeID of such a global bus can be used to route media streams, allowing the global pipe to act as a summation bus bar.
&rarr; discussion and design rationale of [[global pipes|GlobalPipeDesign]]

!Properties and decisions
* each timeline carries its own set of global pipes, as each timeline is a top-level element on its own
* like all [[pipes|Pipe]] the global ones are kept separated per stream (proto)type
* any global pipe //not// connected to another OutputDesignation automatically creates a ModelPort
* global pipes //do not appear automagically just by sending output to them// -- they need to be set up explicitly
* the top-level (BusMO) of the global pipes isn't itself a pipe. Thus the top-level of the pipes forms a list (typically a video and sound master)
* below, a tree-like structure //may// be created, building upon the same scope based routing technique as used for the tracks
//This page serves to shape and document the design of the global pipes//
Many aspects regarding the global pipes turned out while clarifying other parts of ~Proc-Layer's design. For some time it wasn't even clear if we'd need global pipes -- common video editing applications get on without them. Mostly it was the usefulness of the layout found on sound mixing desks, and a vague notion of separating time-dependent from global parts, which finally led me to favouring such a global facility. This decision then helped in separating the concerns of timeline and sequence, making the //former// a collection of non-temporal entities, while the latter concentrates on time varying aspects.

!Design problem with global Pipes
actually building up the implementation of global pipes seems to pose a rather subtle design problem: it is difficult to determine how to do it //right.//
To start with, we need the ability to attach effects to global pipes. There is already an obvious way to attach effects to clips (=local pipes), and thus it's desirable to handle it the same way for global pipes. At least there should be a really good reason //not//&nbsp; to do it the same way. Thus, we're going to attach these effects by placement into the scope of another placed MObject. And, moreover, this other object should be part of the HighLevelModel's tree, to allow using the PlacementIndex as implementation. So this reasoning brings us to re-using or postulating some kind of object, while lacking a point of reference //outside these design considerations//&nbsp; to justify the existence of the corresponding class or to shape its properties in its own right. Which means &mdash; from a design point of view &mdash; we're entering slippery ground.
!!!~Model-A: dedicated object per pipe
Just for the sake of symmetry, for each global pipe we'd attach some suitable ~MObject as child of the BindingMO representing the timeline. If no further specific properties or functionality is required, we could use Track objects, which are generally used as containers within the model. Individual effects would then be attached as children, while output routing could be specified within the attaching placement, the same way as it's done with clips or tracks in general. As the pipe asset itself already stores a StreamType reference, all we'd need is some kind of link to the pipe asset or pipe-ID, and maybe façade functions within the binding to support the handling.
!!!~Model-B: attaching to the container
Acknowledging the missing justification, we could instead use //just something to attach// &mdash; and actually handle the real association elsewhere. The obvious "something" in this case would be the BindingMO, which already acts as implementation of the timeline (which is a façade asset). Thus, for this approach, the bus-level effects would be attached as direct children of the {{{Placement<BindingMO>}}}, just for the sake of being attached and stored within the session, with an additional convention for the actual ordering and association to a specific pipe. The Builder would then rather query the ~BindingMO to discover and build up the implementation of the global pipes in terms of the render nodes.

!!!Comparison
While there might still be some compromises or combined solutions &mdash; to support the decision, the following table details the handling in each case
|        |!~Model-B|!~Model-A |
|Association: | by plug in placement|by scope |
|Output:      | entry in routing table|by plug in placement |
|Ordering:    |>| stored in placement |
|Building:    | sort children by plug+order<br/>query output from routing|build like a clip |
|Extras:      | ? |tree-like bus arrangement,<br/> multi-stream bus |

So through this detailed comparison ''~Model-A looks favourable'': while the other model requires us to invent a good deal of the handling specifically for the global pipes, the former can be combined from patterns and solutions already used in other parts of the model, plus it allows some interesting extensions. 
On a second thought, the fact that the [[Bus-MObject|BusMO]] is rather void of any specific meaning doesn't weight so much: As the Builder is based on the visitor pattern, the individual objects can be seen as //algebraic data types.// Besides, there is at least one little bit of specific functionality: a Bus object actually needs to //claim//&nbsp; to be the OutputDesignation, by referring to the same ~Pipe-ID used in other parts of the model to request output routing to this Bus. Without this match on both ends, an ~OutputDesignation may be mentioned at will, but no connection whatsoever will happen.


!{{red{WIP}}} Structure of the global pipes
;creating global pipes automatically?
:defining the global bus configuration is considered a crucial part of each project setup. Lumiera isn't meant to support fiddling around thoughtlessly. The user should be able to rely on crucial aspects of the global setup never being changed without notice.
;isn't wiring and routing going to be painful then?
:routing is scope based and we employ a hierarchical structure, so subgroups are routed automatically. Moreover, wiring is done based on best match regarding the stream type. We might consider feeding all non-connected output designations to the GUI after the build process, to allow short-cuts for creating further buses.
;why not making buses just part of the track tree?
:anything on a track has a temporal extension and may vary -- while it's the very nature of the global pipes to be static anchor points.
;why not having one grand unified root, including the outputs?
:you might consider that a matter of taste (or better, common sense). Things different in nature should not be forced into uniformity.
;should global pipes be arranged as list or tree?
:sound mixing desks use a list-style arrangement, and this has proven quite viable when combined with the ability to //send over// output from one mixer strip to the input of another, allowing arbitrarily complex filter matrices to be built. On the other hand, organising a mix in //subgroups// can be considered best practice. This leads to arranging the pipes //as a forest:// by default and at top level as a list, optionally expanding into a subtree with automatic rooting, augmented by the ability to route any output to any input (cycles being detected and flagged as errors).
All communication between Proc-Layer and GUI has to be routed through the respective LayerSeparationInterfaces. Following a fundamental design decision within Lumiera, these interfaces are //intended to be language agnostic// &mdash; forcing them to stick to the least common denominator. This creates the additional problem of how to achieve a smooth integration without forcing the architecture into a functional decomposition style. To solve this problem, we rely on the well-known solution of using a __business facade__ and delegation proxies.
Thus, the Proc-Layer exposes (one or several) facade interfaces for the GUI to use its functionality, and similarly the GUI provides a [[notification facade|GuiNotificationFacade]] for pushing back status information created as a result of the edit operations, the build process and the render tasks.

!anatomy of the Proc/GUI interface
* the GuiFacade is used as a general lifecycle facade to start up the GUI and to set up the LayerSeparationInterfaces.
* GuiFacade is implemented by a class //in core// and exposes a notification proxy implementing GuiNotificationFacade, which actually is wired through the InterfaceSystem to forward to the corresponding implementation facade object within the GUI
* similarly, starting the fundamental subsystems within proc installs Interfaces into the InterfaceSystem and exposes them through proxy objects

A special LayerSeparationInterface, whose main purpose is to load the GuiStarterPlugin, thus bringing up the Lumiera GTK UI at application start.
Considering how to interface to and integrate with the GUI Layer. Running the GUI is //optional,// but it needs to be [[started up|GuiStart]], installing the necessary LayerSeparationInterfaces. As an example of how to integrate the GUI with the lower layers, in 1/2009 we created a PlayerDummy, which "pulls" dummy frames from the (not yet existing) engine and displays them within an XV viewer widget.

Probably the most important aspect regarding the GUI integration is how to get [[access to and operate on the Session|SessionInterface]]. More specifically, this includes [[referring to individual objects|MObjectRef]].
LayerSeparationInterface provided by the GUI.
Access point for the lower layers to push information and state changes (asynchronously) to the GUI. Actually, most operations within Lumiera are initiated by the user through the GUI. In the course of such actions, the GUI uses the services of the lower layers and typically receives a synchronous response. In some exceptional cases, these operations may cause additional changes to happen asynchronously from the GUI's perspective. For example, an edit operation might trigger a re-build of the low-level model, which then detects an error.
Starting up the GUI is optional and is considered part of the Application start/stop and lifecycle.
* main and AppState activate the lifecycle methods on the ~GuiSubsysDescriptor, accessible via the GuiFacade
* loading a GuiStarterPlugin actually
** loads the GUI (shared lib)
** creates instances of all the GUI services available through LayerSeparationInterfaces
*** GuiNotificationFacade
*** DisplayFacade
** Finally, the ~GTK-main() is invoked.

!public services
The GUI provides a small number of public services, callable through LayerSeparationInterfaces. Besides that, the main purpose of the GUI of course is user interaction. Effectively the behaviour of the whole system is driven by GUI events to a large extent. These events are executed within the event handling thread (~GTK-main-Thread) and may in turn invoke services of the lower layers, again through the respective LayerSeparationInterfaces.
But the question of special interest here is how the //public services// of the GUI are implemented and made accessible for the lower layers. Layer isolation is an issue here. If implemented in a rigorous manner, no facility within one layer may invoke implementation code of another layer directly. In practice, this tends to put quite some additional burden on the implementer, without an obvious benefit. Thus we decided to lower the barrier somewhat: while we still require that all service invoking calls are written against a public LayerSeparationInterface, the GUI (shared lib) is actually //linked// against the respective shared libs of the lower layers, thus especially enabling the exchange of iterators, closures and functor objects.

Note that we retain strict isolation in the other direction: no part of the lower layers is allowed to call directly into the GUI. Thus it's especially interesting how access to some GUI public service from the lower layers works in detail.
* when the GUI plugin starts, instances of the Services implementing those public service interfaces are created.
* these service objects in turn hold an ~InstanceHandle, which takes care of registering and opening the corresponding C Language Interface
* additionally, this InstanceHandle is configured so as to create a "facade proxy" object, which is implemented within liblumieracommon.so
Now, when invoking an operation on some public interface, the code in the lower layers actually executes an implementation of this operation //on the facade proxy,// which in turn forwards the call through the CL interface into the GUI, where it is actually implemented by the corresponding service object instance.
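The following sketch illustrates this call path with invented, simplified names (it is not the actual Lumiera code): the proxy implements the same C++ facade the client sees, but each member function merely forwards through a C function table, behind which the real service object within the GUI does the work.
{{{
#include <iostream>
#include <string>

// the abstract facade, as seen by client code in the lower layers
struct GuiNotification {
    virtual ~GuiNotification() = default;
    virtual void displayInfo (std::string const& text) = 0;
};

// hypothetical C language interface descriptor, as it would be registered in the InterfaceSystem
struct LumieraInterface_GuiNotification {
    void (*displayInfo) (const char* text);
};

// the actual service implementation -- lives within the GUI (shared lib)
namespace gui {
    void display_info_impl (const char* text) { std::cout << "GUI shows: " << text << '\n'; }
}

// facade proxy -- lives in the common lib; each call is forwarded through the CL interface
class GuiNotificationProxy : public GuiNotification {
    LumieraInterface_GuiNotification const& cli_;
public:
    explicit GuiNotificationProxy (LumieraInterface_GuiNotification const& cli) : cli_{cli} {}
    void displayInfo (std::string const& text) override { cli_.displayInfo (text.c_str()); }
};

int main()
{
    LumieraInterface_GuiNotification cli { &gui::display_info_impl };  // normally opened via an InstanceHandle
    GuiNotificationProxy proxy {cli};
    GuiNotification& facade = proxy;             // client code sees only the abstract facade
    facade.displayInfo ("rebuild finished");
}
}}}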
A specially configured LumieraPlugin, which actually contains or loads the complete code of the (GTK) GUI, and additionally is linked dynamically against the application core lib. During the [[UI startup process|GuiStart]], loading of this plugin is triggered from {{{main()}}}; this causes the GTK event thread to be spawned and the GTK main loop to be executed.
While the low-level model holds the data used for carrying out the actual media data processing (=rendering), the high-level model is what the user works upon when performing edit operations through the GUI (or script driven in &raquo;headless mode&laquo;). Its building blocks and combination rules determine largely what structures can be created within the [[Session]].
On the whole, it is a collection of [[media objects|MObjects]] stuck together and arranged by [[placements|Placement]].

Basically, the structure of the high-level model is a very open and flexible one &mdash; every valid connection of the underlying object types is allowed &mdash; but the transformation into a low-level node network for rendering follows certain patterns and only takes into account objects reachable while processing the session data in accordance with these patterns. Based on the parameters and the structure of the objects visited while building, the low-level render node network is configured in detail.

The fundamental metaphor or structural pattern is to create processing ''pipes'': a linear chain of data processing modules, starting from a source port and providing an exit point. [[Pipes|Pipe]] are a //concept or pattern,// they don't exist as objects. Each pipe has an input side and an output side and is in itself something like a Bus treating a single [[media stream|StreamType]] (though this stream may still have an internal structure, e.g. several channels related to a spatial audio system). Other processing entities like effects and transitions can be placed (attached) at the pipe, causing them to be appended to form this chain. Optionally, there may be a ''wiring plug'', requesting the exit point to be connected to another pipe. When omitted, the wiring will be figured out automatically.
Thus, when making a connection //to// a pipe, output data will be sent to the //source port// (input side) of the pipe, whereas when making a connection //from// a pipe, data from its exit point will be routed to the destination. Incidentally, the low-level model and the render engine employ //pull-based processing,// but this is of no relevance for the high-level model.



[img[draw/high-level1.png]]
Normally, pipes are limited to a //strictly linear chain// of data processors ("''effects''") working on a single data stream type, and consequently there is a single ''exit point'' which may be wired to a destination. As an exception to this rule, you may insert wire tap nodes (probe points), which may explicitly send data to an arbitrary input port; they are never wired automatically. It is possible to create cyclic connections by such arbitrary wiring, which will be detected by the builder and flagged as an error.

While pipes have a rather rigid and limited structure, it is allowed to make several connections to and from any pipe &mdash; even connections requiring a stream type conversion. It is not even necessary to specify //any// output destination, because then the wiring will be figured out automatically by searching the context and finally using some general rule. Connecting multiple outputs to the input of another pipe automatically creates a ''mixing step'' (which optionally can be controlled by a fader). Several pipes may be joined together by a ''transition'', which in the general case simultaneously treats N media streams. Of course, the most common case is to combine two streams into one output, thereby also mixing them. Most available transition plugins belong to this category, but, as said, the model isn't limited to this simple case; moreover, it is possible to attach several overlapping transitions covering the same time interval.

Individual media objects are attached, located or joined together by ''Placements''. A [[Placement]] is a handle for a single MObject (implemented as a refcounting smart-ptr) and contains a list of placement specifications, called LocatingPin. Adding a placement to the session acts as if creating an //instance// (it behaves like a clone in case of multiple placements of the same object). Besides absolute and relative placement, there is also the possibility for a placement to stick directly to another MObject's placement, e.g. for attaching an effect to a clip or for connecting an automation data set to an effect. This //stick-to placement// creates a sort of loose clustering of objects: it will derive the position from the placement it is attached to. Note that while the length and the in/out points are a //property of the ~MObject,// its actual location depends on how it is //placed// and thus can be maintained quite dynamically. Note further that effects can have a length of their own, thus by using these attachment mechanics, the wiring and configuration within the high-level model can be quite time-dependent.

[>img[draw/high-level2.png]]



Actually, a ''clip'' is handled as if it were comprised of local pipe(s). In the example shown here, a two-channel clip has three effects attached, plus a wiring plug. Each of those attachments is used only if applicable to the media stream type the respective pipe will process. As the clip has two channels (e.g. video and audio), it will have two ''source ports'' pulling from the underlying media. Thus, as shown in the drawing to the right, by chaining up any attached effect applicable to the respective stream type defined by the source port, effectively each channel (sub)clip gets its own specifically adapted processing pipe.

@@clear(right):display(block):@@
!!Example of a complete Session
[img[draw/high-level3.png]]
The Session contains several independent [[sequences|Sequence]] plus an output bus section (''global Pipes'') attached to the [[Timeline]]. Each sequence holds a collection of ~MObjects placed within a ''tree of tracks''. 
Within Lumiera, tracks are a rather passive means for organising media objects, but aren't involved in the data processing themselves. The possibility of nesting tracks allows for easy grouping. Like the other objects, tracks are connected together by placements: a track holds the list of placements of its child tracks. Each sequence holds a single placement pointing to the root track. 

As placements have the ability to cooperate and derive any missing placement specifications, this creates a hierarchical structure throughout the session, where parts on any level behave similarly where applicable. For example, when a track is anchored to some external entity (label, sync point in sound, etc.), all objects placed relative to this track will adjust and follow automatically. This relation between the track tree and the individual objects is especially important for the wiring, which, if not defined locally within an ~MObject's placement, is derived by searching up this track tree and utilising the wiring plug locating pins found there, if applicable. In the default configuration, the placement of a sequence's root track contains a wiring plug for video and another wiring plug for audio. This setup is sufficient for getting every object within this sequence wired up automatically to the correct global output pipe. Moreover, by adding another wiring plug to some sub-track, we can intercept and reroute the connections of all objects creating output of this specific stream type within this track and all its child tracks.

Besides routing to a global pipe, wiring plugs can also connect to the source port of a ''meta-clip''. In this example session, the outputs of ~Seq-2, as defined by locating pins in its root track's placement, are directed to the source ports of a [[meta-clip|VirtualClip]] placed within ~Seq-1. Thus, within ~Seq-1, the contents of ~Seq-2 appear like a pseudo-media, from which the (meta) clip has been taken. They can be adorned with effects and processed further, just like a real clip.

Finally, this example shows an ''automation'' data set controlling some parameter of an effect contained in one of the global pipes. From the effect's POV, the automation is simply a ParamProvider, i.e. a function yielding a scalar value over time. The automation data set may be implemented as a bézier curve, by a mathematical function (e.g. sine or fractal pseudo-random) or by some captured and interpolated data values. Interestingly, in this example the automation data set has been placed relative to the meta-clip (albeit on another track), thus it will follow and adjust when the latter is moved.
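Since a ParamProvider is essentially a function of time, a minimal sketch may help to picture it (hypothetical names and a plain {{{double}}} time value, not the actual interface): an automation data set built from captured key values, linearly interpolated, serving an effect parameter.
{{{
#include <map>
#include <iterator>
#include <iostream>

using Time = double;    // simplification; the real model uses quantised Lumiera time values

// a ParamProvider yields a scalar parameter value for any given point in time
struct ParamProvider {
    virtual ~ParamProvider() = default;
    virtual double getValue (Time t) const = 0;
};

// automation data set: captured key values, linearly interpolated
class Automation : public ParamProvider {
    std::map<Time,double> keys_;
public:
    void addKey (Time t, double v) { keys_[t] = v; }

    double getValue (Time t) const override
    {
        auto hi = keys_.lower_bound (t);
        if (hi == keys_.begin()) return hi->second;             // before the first key
        if (hi == keys_.end())   return std::prev(hi)->second;  // after the last key
        auto lo = std::prev (hi);                               // interpolate between neighbours
        double f = (t - lo->first) / (hi->first - lo->first);
        return lo->second + f * (hi->second - lo->second);
    }
};

int main()
{
    Automation fade;                   // e.g. controlling the gain of a bus effect
    fade.addKey (0.0, 0.0);
    fade.addKey (2.0, 1.0);
    std::cout << fade.getValue (1.0) << '\n';   // 0.5
}
}}}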
This wiki page is the entry point to detail notes covering technical decisions, details and problems encountered in the course of the implementation of the Lumiera render engine, the Builder and related parts.

* [[Packages, Interfaces and Namespaces|InterfaceNamespaces]]
* [[Memory Management Issues|MemoryManagement]]
* [[Creating and registering Assets|AssetCreation]]
* [[Creating new Objects|ObjectCreation]]
* [[Multichannel Media|MultichannelMedia]]
* [[Editing Operations|EditingOperations]]
* [[Handling of the current Session|CurrentSession]]
* [[collecting Ideas for Implementation Guidelines|ImplementationGuidelines]]
* [[using the Visitor pattern?|VisitorUse]] -- resulting in [[»Visiting-Tool« library implementation|VisitingToolImpl]]
* [[Handling of Tracks and render Pipes in the session|TrackPipeSequence]]: [[Handling of Tracks|TrackHandling]] and [[Pipes|PipeHandling]]
* [[getting default configured|DefaultsManagement]] Objects relying on [[rule-based Configuration Queries|ConfigRules]]
* [[integrating the Config Query system|ConfigQueryIntegration]]
* [[identifying the basic Builder operations|BasicBuildingOperations]] and [[planning the Implementation|PlanningNodeCreatorTool]]
* [[how to handle »attached placement«|AttachedPlacementProblem]]
* working out the [[basic building situations|BuilderPrimitives]] and [[mechanics of rendering|RenderMechanics]]
* how to classify and [[describe media stream types|StreamType]] and how to [[use them|StreamTypeUse]]
* considerations regarding [[identity and equality|ModelObjectIdentity]] of objects in the HighLevelModel
* the [[identification of frames and nodes|NodeFrameNumbering]]
* the relation of [[Project, Timelines and Sequences|TimelineSequences]]
* how to [[start the GUI|GuiStart]] and how to [[communicate|GuiCommunication]]
* build the first LayerSeparationInterfaces
* create a uniform pattern for [[passing and accessing object collections|ForwardIterator]]
* decide on SessionInterface and create [[Session datastructure layout|SessionDataMem]]
* shaping the GUI/~Proc-Interface, based on MObjectRef and the [[Command frontend|CommandHandling]]
* defining PlacementScope in order to allow for [[discovering session contents|Query]]
* working out a [[Wiring concept|Wiring]] and the foundations of OutputManagement
* shaping the foundations of the [[player subsystem|Player]]
* detail considerations regarding [[time and time quantisation|TimeQuant]]
* [[Timecode]] -- especially the link of [[TC formats and quantisation|TimecodeFormat]]
* designing how to [[build|BuildFixture]] the [[Fixture]] (...{{red{WIP}}}...)
* from [[play process|PlayProcess]] to [[frame dispatching|FrameDispatcher]] and [[node invocation|NodeInvocation]]
!Observations, Ideas, Proposals
''this page is a scrapbook for collecting ideas'' &mdash; please don't take anything noted here too literally. While writing code, I observe that I (ichthyo) follow certain informal guidelines, some of which I'd like to note down because they could evolve into general style guidelines for the Proc-Layer code.
* ''Inversion of Control'' is the leading design principle.
* but deliberately we stay just below the level of using Dependency Injection. Singletons and call-by-name are good enough. We're going to build //one// application, not any conceivable application.
* write error handling code only if the error situation can be actually //handled// at this place. Otherwise, be prepared for exceptions just passing by and thus handle any resources by "resource acquisition is initialisation" (RAII). Remember: error handling defeats decoupling and encapsulation.
* (almost) never {{{delete}}} an object directly, use {{{new}}} only when some smart pointer is at hand.
* clearly distinguish ''value objects'' from objects with ''reference semantics'', i.e. objects having a distinct //object identity.//
* when user/client code is intended to create reference-semantics objects, make the ctor protected and provide a factory member called {{{create}}} instead, returning a smart pointer
* similarly, when we need just one instance of a given service, make the ctor protected and provide a factory member called {{{instance}}}, to be implemented by the [[dependency factory|DependencyFactory]].
* whenever possible, prefer this (lazy initialised [[dependency|DependencyFactory]]) approach and avoid static initialisation magic
* avoid doing anything non-local during the startup phase or shutdown phase of the application, especially avoid doing substantial work in any dtor.
* avoid assuming anything that can't be enforced by types, interfaces or signatures; this means: be prepared for open possibilities
* prefer {{{const}}} and initialisation code over assignment and active changes (inspired by functional programming)
* code is written for ''being read by humans''; code shall convey its meaning //even to the casual reader.//
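To illustrate the {{{create}}} factory guideline from the list above (just a sketch along the lines of these guidelines, not actual Proc-Layer code): reference-semantics objects hide their ctor and hand out a smart-ptr, so client code can neither create nor delete them directly.
{{{
#include <memory>
#include <string>
#include <utility>
#include <iostream>

// reference semantics: the object has an identity and is not copied around as a value
class Clip {
    std::string name_;
    Clip (std::string name) : name_{std::move(name)} {}        // ctor not publicly accessible
public:
    static std::shared_ptr<Clip> create (std::string name)     // factory returns a smart-ptr
    {
        return std::shared_ptr<Clip> (new Clip{std::move(name)});
    }
    std::string const& name() const { return name_; }
};

int main()
{
    auto clip = Clip::create ("interview-take-3");
    std::cout << clip->name() << '\n';
    // Clip c{"x"};          // won't compile: ctor is private
    // delete clip.get();    // never done: the smart-ptr owns the object
}
}}}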
An RAII class, used to manage a [[facade interface between layers|LayerSeparationInterface]].
The InstanceHandle is created by the service implementation and will automatically
* either register and open a ''C Language Interface'', using Lumiera's InterfaceSystem
* or load and open a LumieraPlugin, also by using the InterfaceSystem
* and optionally also create and manage a facade proxy to allow transparent access for client code

&rarr; see [[detailed description here|LayerSeparationInterfaces]]
Because we rely on strong decoupling and separation into self-contained components, there is not much need for a common quasi-global namespace. Operations needing the cooperation of another subsystem will be delegated or even dispatched; consequently, implementation code only needs the service access points of its "direct cooperation partner" subsystems. Hierarchical scopes besides classes are needed only when multiple subsystems share a set of common abstractions. Interface and implementation use separate namespaces.

!common definitions
# surely there will be the need to use some macros (albeit code written in C++ can successfully avoid macros to some extent)
# there will be one global ''Application State'' representation (and some application services)
# we have some lib facilities, especially a common [[Time]] abstraction
For these we have one special (bilingual) header file __lumiera.h__, which, in its C++ part, places some declarations into __namespace lumiera__. These declarations should be pulled into the specific namespaces or (better still) into the implementation //on demand//. There is no nesting (we deliberately don't want a symbol appearing //automatically// in every part of the system).

!subsystem interface and facade
These large scale interfaces reside in special namespaces "~XXX_interface" (where XXX is the subsystem). The accompanying //definitions// depend on the implementation namespace and are placed in the top-level source folder of the corresponding subsystem.

 &rarr; Example: [[Interfaces/Namespaces of the Session Subsystem(s)|InterfacesSession]]

!contract checks and test code
From experience with other middle-scale projects, I prefer having the test code in a separate tree (because test code easily doubles the number of source files). But of course it should be placed into the same namespace as the code being checked, or (better?) into a nested namespace "test". It is especially desirable to have good coverage of the contracts of the subsystem interfaces and major components (while it is not always feasible or advisable to cover every implementation detail).

&rarr; see also [[testsuite documentation in the main wiki|index.html#TestSuite]]

* Subdir src/proc contains Interface within namespace proc_interface
* Subdir src/proc/mobject contains commonly used entities (namespace mobject)
** nested namespace controller
** nested namespace builder
** nested namespace session
* Subdir src/proc/engine (namespace engine) uses directly the (local) interface components StateProxy and ParamProvider; triggering of the render process is initiated by the controller and can be requested via the controller facade. Normally, the playback/render controller located in the backend will just use this interface and won't be aware of the build process at all.

[img[Example: Interfaces/Namespaces of the ~Session-Subsystems|uml/fig130053.png]]
The actual media data is rendered by [[individually scheduled render jobs|RenderJob]]. Together, all these calculations implement a [[stream of calculations|CalcStream]], as demanded and directed by the PlayProcess. During the preparation of playback, a ''node planning phase'' is performed to arrange for [[dispatching|FrameDispatcher]] the individual calculations per frame. The goal of these //preparations//&nbsp; is to find out
* what channel(s) to pull
* what prerequisites to prepare
* what parameters to provide
The result of this planning phase is the {{{JobTicket}}}, a complete ''execution plan''.
This planning is uniform for each [[segment|Segmentation]] and treated for all channels together, resulting in a nested tree structure of sub job tickets, allocated and stored alongside the processing nodes and wiring descriptors to form the segment's data and descriptor network. Job tickets are //higher order functions:// entering a concrete frame number and channel into a given job ticket will produce an actual job descriptor, which in itself is again a function, to be invoked through the scheduler when it's time to trigger the actual calculations.

!Structure of the Render Jobs created
To be more precise: in the general case, invoking the ~JobTicket with a given frame number will produce //multiple jobs// -- typically each frame rendering will require at least one further source media frame; and because Lumiera render jobs will //never block waiting on IO,// this source media access will be packaged as a separate [[resource retrieving job|ResourceJob]], to be treated specially by the scheduler.

To support the generation of multiple dependent jobs, a ~JobTicket might refer to further ~JobTickets corresponding to the prerequisites. These prerequisite ~JobTickets are the result of a classical recursive descent call into the top level ProcNode to perform the planning step. There is always a 1:1 relation between the actual jobs generated and the corresponding tickets. More precisely, for each possible job to generate there is a suitable ''job closure'', representing exactly the context information necessary to get that job started. To stress that point: a ProcNode might be configured so as to perform a series of recursive invocations of other prerequisite nodes right away (especially when those recursive invocations correspond just to further CPU bound calculations) -- yet still there is only //one//&nbsp; ~JobTicket, because there will be only one Job to invoke the whole sequence. On the other hand, when there is a prerequisite requiring an operation to be scheduled separately, a corresponding separate {{{JobClosure}}} will be referenced.
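The &raquo;higher order function&laquo; nature of the job tickets can be pictured roughly as follows (deliberately simplified, invented types -- not the actual engine code): closing a ticket over a concrete frame number yields jobs, each of which is itself just a function to be invoked later through the scheduler.
{{{
#include <functional>
#include <vector>
#include <iostream>

using FrameNr = long;
using Job     = std::function<void()>;        // what the scheduler eventually invokes

// a JobTicket is an execution plan: given a frame number, it produces the concrete job(s)
struct JobTicket {
    std::vector<JobTicket*>      prerequisites;   // e.g. a separate resource retrieving job
    std::function<void(FrameNr)> calculation;     // the actual CPU bound render step

    // "higher order function": frame number in, executable jobs out
    std::vector<Job> createJobsFor (FrameNr frame) const
    {
        std::vector<Job> jobs;
        for (auto* pre : prerequisites)                          // prerequisites come first
            for (auto& job : pre->createJobsFor (frame))
                jobs.push_back (job);
        jobs.push_back ([this,frame]{ calculation (frame); });   // then the frame job itself
        return jobs;
    }
};

int main()
{
    JobTicket loadSource  { {},            [](FrameNr f){ std::cout << "fetch media frame " << f << '\n'; } };
    JobTicket renderFrame { {&loadSource}, [](FrameNr f){ std::cout << "render frame "      << f << '\n'; } };

    for (auto& job : renderFrame.createJobsFor (42))
        job();                             // the scheduler would invoke these when due
}
}}}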
A major //business interface// &mdash; used by the layers for interfacing to each other; also to be invoked externally by scripts.
&rarr; [[overview and technical details|LayerSeparationInterfaces]]
Lumiera uses a 3-layered architecture. Separation between the layers is crucial. Any communication between the layers regarding the normal operation of the application should //at least be initiated// through ''Layer abstraction interfaces'' (facade interfaces). This is a low-impact version of layering because, aside from this triggering, direct cooperation of parts within the single Lumiera process is allowed, under the condition that it is implemented using additional abstractions (interfaces with implementation-level granularity). We stick to the policy of //disallowing direct coupling of implementations located in different layers.//

[>img[Anatomy of a Layer Separation Interface|uml/fig132869.png]]
The goal is for the interface to remain fairly transparent for the client and to get an automatic lifecycle management. 
To implement such a structure, each layer separation interface actually is comprised of several parts:
* a C Language Interface ("''CL Interface''") to be installed into the InterfaceSystem
* a ''facade interface'' defining the respective abstractions in terms of the client side impl. language (C or C++)
* a ''service implementation'' directly addressed by the implementation instantiated within the InterfaceSystem
* a ''facade proxy'' object on the client side, which usually is given inline alongside the CL interface definition.

!opening and closing
Handling the lifecycle can be tricky, because the client and service side need to carry out the opening and closing operations in sync, observing a specific call order, to ensure calls get blocked already on the client side unless the whole interface compound is really up and running. To add to this complexity, plugins and built-in interfaces are handled differently regarding the question of who is in charge of the lifecycle: interfaces are installed on the service side, whereas the loading of plugins is triggered from the client side by requesting the plugin from the loader.
Anyway, interfaces are resources which are best managed automatically. At least within the C++ part of the application we can ensure this by using the InstanceHandle template. This way the handling of plugins and interfaces can be unified, and the opening and closing of the facade proxy happens automatically.

The general idea is that each facade interface actually provides access to a specific service; there will always be a single implementation object somewhere, which can be thought of as acting as "the" service. This service-providing object will then contain the mentioned InstanceHandle; thus, its lifecycle becomes identical with the service lifecycle.
* when the service relies on a [[plugin|LumieraPlugin]], this service providing object (containing the InstanceHandle) needs to sit at the service accessing side (as the plugin may not yet be loaded and someone has to pull it up).
* otherwise, when the service is just an interface to an already loaded facility, the service providing object (containing the InstanceHandle) will live on the service providing side, not the client side. Then the ctor of the service providing object needs to be able to access an interface instance definition (&rarr; InterfaceSystem); the only difference to the plugin case is to create the InstanceHandle with a different ctor, passing the interface and the implementing instance.
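The two cases differ only in how the InstanceHandle is created. The following sketch shows the general shape for the built-in case, where interface descriptor and implementing instance are handed to the handle; all names and signatures here are simplified stand-ins, not the actual InstanceHandle API.
{{{
#include <iostream>
#include <string>
#include <utility>

// stand-ins for the real InterfaceSystem types (illustration only)
struct Interface      { std::string name; };
struct Implementation { void greet() { std::cout << "service call\n"; } };

// simplified InstanceHandle: opens the interface on construction, closes it on destruction (RAII)
template<class I, class IMP>
class InstanceHandle {
    I    iface_;
    IMP& impl_;
public:
    InstanceHandle (I iface, IMP& impl)        // built-in case: pass interface + implementing instance
      : iface_{std::move(iface)}, impl_{impl}
    { std::cout << "open interface "  << iface_.name << '\n'; }

   ~InstanceHandle()
    { std::cout << "close interface " << iface_.name << '\n'; }

    IMP& operator*() { return impl_; }
};

// the service-providing object: its lifecycle *is* the service lifecycle
class NotificationService {
    Implementation impl_;
    InstanceHandle<Interface,Implementation> handle_;
public:
    NotificationService() : impl_{}, handle_{Interface{"GuiNotification"}, impl_} {}
    void use() { (*handle_).greet(); }
};

int main()
{
    NotificationService service;   // opens the interface; closing happens automatically on destruction
    service.use();
}
}}}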

!outline of the major interfaces
|!Layer|>|!Interface |
|GUI|GuiFacade|UI lifecycle &rarr; GuiStart|
|~|GuiNotificationFacade|status/error messages, asynchronous object status change notifications, trigger shutdown|
|~|DisplayFacade|pushing frames to a display/viewer|
|Proc|SessionFacade|session lifecycle|
|~|EditFacade|edit operations, object mutations|
|~|PlayerDummy|player mockup, maybe move to backend?|
|//Lumiera's major interfaces//|c


An implementation facility used by the session manager implementation to ensure a consistent lifecycle. A template-method-like skeleton of operations can be invoked by the session manager to go through the various lifecycle stages, while the actual implementation functionality is delegated to facilities within the session. The assumption is that the lifecycle advisor is executed within a controlled environment, as a single instance and single-threaded.

---------------------
!Implementation notes
The goal on the implementation level is to get a consolidated view on the whole lifecycle.
Reading the source code should convey a complete picture about what is going on with respect to the session lifecycle.

!!entrance points
;close
: the shutdown sequence cleanly unwinds any contents and returns into //uninitialised state.//
;reset
: is comprised of two phases: shutdown and startup of an empty new session
: shutdown will be skipped automatically when uninitialised
: startup has to inject default session content
;load
: again, first an existing session is brought down cleanly
: the startup sequence behaves similarly to __reset__, but injects serialised content
Note that, while causing a short //freeze period,// __saving__ and __(re-)building__ aren't considered lifecycle operations.

!!operation sequences
The ''pull up'' sequence performs basic initialisation of the session facilities and then, after the pImpl switch, executes the various loading and startup phases. Any previous session switched away is assumed to unwind automatically; nothing is done specifically to disconnect such an existing session &mdash; we simply require the session subsystem to be in a pristine state initially.
The ''shut down'' sequence does exactly that: halt processing and rendering, disconnect an existing session, if any, and get back into the initial state. It doesn't care about unwinding session contents, which is assumed to happen automatically when references to previous session contents go out of scope.
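The template-method-like character of these sequences can be sketched as follows (invented method names; the real lifecycle advisor differs in detail): the fixed order of steps lives in the skeleton, while the individual phases are delegated and may be varied, e.g. for loading serialised content.
{{{
#include <iostream>

// skeleton of the session lifecycle: the order of steps is fixed, the details are delegated
class LifecycleAdvisor {
protected:  // individual phases, actually implemented by facilities within the session
    virtual void initialiseFacilities() { std::cout << "init session facilities\n"; }
    virtual void injectContent()        { std::cout << "inject default session content\n"; }
    virtual void disconnect()           { std::cout << "halt rendering, disconnect session\n"; }
public:
    virtual ~LifecycleAdvisor() = default;

    void pullUp()                       // the fixed "pull up" sequence
    {
        initialiseFacilities();
        injectContent();
    }
    void shutDown()                     // the fixed "shut down" sequence
    {
        disconnect();                   // unwinding of contents happens automatically (RAII)
    }
};

// variation used for loading: replaces the default content with serialised content
class LoadSessionAdvisor : public LifecycleAdvisor {
    void injectContent() override { std::cout << "inject de-serialised session content\n"; }
};

int main()
{
    LoadSessionAdvisor advisor;
    advisor.shutDown();   // bring down any existing session first
    advisor.pullUp();     // then start up again with the loaded content
}
}}}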
Opening and accessing media files on disk poses several problems, most of which belong to the domain of Lumiera's data backend. Here, we focus on the questions related to making media data available to the session and the render engine. Each media will be represented by a MediaAsset object, which may indeed be a compound object (in the case of MultichannelMedia). Building this asset object thus includes getting information from the real file on disk. For delegating this to the backend, we use the following query interface:
* {{{queryFile(char* name)}}} requests accessing the file and yields some (opaque) handle when successful.
* {{{queryChannel(fHandle, int)}}} will then be issued in sequence with ascending index numbers, until it returns {{{NULL}}}.
* the returned struct (pointer) will provide the following information:
** some identifier which can be used to create a name for the corresponding media (channel) asset
** some identifier characterizing the access method (codec) needed to get at the media data. This should be rather a high level description of the media stream type, e.g. "H264"
** some (opaque) handle usable for accessing this specific stream. When the render engine later on pulls data for this channel, it will pass this handle down to the backend.
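Assuming hypothetical names and stub definitions for the backend functions, the intended usage on the asset-building side could look roughly like this:
{{{
#include <iostream>

// illustration of the backend query interface described above (made-up names and types)
using FileHandle = const void*;
struct ChannelDesc {
    const char* ident;       // used to derive the name of the media (channel) asset
    const char* codec;       // high level stream type description, e.g. "H264"
    const void* access;      // opaque handle; later passed down when pulling frames
};

// stubs standing in for the real backend (cf. MediaAccessFacade / MediaAccessMock)
FileHandle queryFile (const char* name) { return name; }
const ChannelDesc* queryChannel (FileHandle, int index)
{
    static const ChannelDesc channels[] = { {"video-1","H264",nullptr}, {"audio-L","PCM",nullptr} };
    return index < 2 ? &channels[index] : nullptr;
}

// building a (possibly compound) media asset: enumerate channels until NULL is returned
void buildMediaAsset (const char* filename)
{
    FileHandle file = queryFile (filename);
    if (!file) return;                                        // file not accessible
    for (int i=0; const ChannelDesc* chan = queryChannel (file,i); ++i)
        std::cout << "channel " << i << ": " << chan->ident
                  << " (" << chan->codec << ")\n";            // one channel asset each
}

int main() { buildMediaAsset ("example.mov"); }
}}}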

{{red{to be defined in more detail later...}}}

&rarr; see "~MediaAccessFacade" for a (preliminary) interface definitioin
&rarr; see "~MediaAccessMock" for a mock/test implementaion
Used to actually implement the various kinds of [[Placement]] of ~MObjects. ~LocatingPin is the root of a hierarchy of different kinds of placing, constraining and locating a Media Object. Basically, this is an instance of the ''state pattern'': The user sees one Placement object with value semantics, but when the properties of the Placement are changed, actually a ~LocatingPin object (or rather a chain of ~LocatingPins) is changed within the Placement. Subclasses of ~LocatingPin implement different placing/constraining behaviour:
* {{{FixedLocation}}} places a MObject to a fixed temporal position and track
* {{{RelativeLocation}}} is used to attach the MObject to some other anchor MObject
* //additional constraints, placement objectives, range restrictions, pattern rules will follow...//
All sorts of "things" to be placed and manipulated by the user in the session. This interface abstracts the details and just supposes
* the media object has a duration
* it is always //placed// in some manner, i.e. it is always accessed via a [[Placement]]
* {{red{and what else?}}}

&rarr; [[overview of the MObject hierarchy|MObjects]]
''The Problem of referring to an [[MObject]]'' stems from the object //as a concept// encompassing a wider scope than just the current implementation instance. If the object were just a runtime entity in memory, we could use a simple (language) reference or pointer. Actually, this isn't sufficient, as the object reference will pass LayerSeparationInterfaces, will be handed over to code not written in the same implementation language, will be included in an ''UNDO'' record for the UndoManager, and thus will need to be serialised and stored permanently within the SessionStorage.
Moreover, [[MObject instances|MObject]] have a 2-level structure: the core object holds just the properties in a strict sense, i.e. the properties which the object //owns.// Any properties due to putting the object into a specific context, i.e. all relation properties, are represented as the [[Placement]] of the object. Thus, when viewed from the client side, a reference to a specific ~MObject //instance// actually denotes a //specific//&nbsp; Placement of this object into the Session.

!Requirements
* just the reference alone is sufficient to access the placement //and//&nbsp; the core object.
* the reference needs to be valid even after internal restructuring in the object store.
* there must be a way to pass these references through serialisation/deserialisation
* we need a plain-C representation of the reference, which ideally should be incorporated into the complete implementation
* references should either be integrated into memory management, or at least it should be possible to detect dangling references
* it should be possible to make handling of the references completely transparent, if the implementation language supports doing so.

!Two models
For the implementation of the object references, as linked to the memory management in general, there seem to be the following approaches
* the handle is a simple table index. Consequently we'll need a garbage collection to deal with deleted objects. This model has several flavours
** using a generation ID to tag the index handle, keeping a translation table on each GC, mapping old indices to new ones and raising an error when encountering an outdated handle
** use a specific datastructure allowing the handles to remain valid in case of cleanup/reorganisation. (Hashtable or similar)
** keep track of all index handles and rewrite them on GC
* handles are refcounting smart pointers. This solution is C++ specific, albeit more elegant and minimalistic. As all the main interfaces should be accessible via C bindings, we'd need to use a replacement mechanism on the LayerSeparationInterfaces, which could be mapped to or handled by a C++ smart-ptr. (We used a similar approach for the PlayerDummy design study.)

Obviously, the second approach has quite some appeal &mdash; but in order to use it, we'd have to mitigate its drawbacks: it bears the danger of creating a second, separate code path for C language based clients, presumably receiving less care, maintenance and stress testing. The solution worked out earlier this year for the player mockup (1/2009) tries at least partially to integrate the C functionality: the actual implementation is derived from the C struct used as handle, allowing the same pointers to be used for both kinds of interface, and de-allocation is done by a call through the C dtor function, which is installed as deleter function with the boost::smart_ptr used as base class of the handle. Conceptually, the interface on this handle is related to the actual implementation referred to by the handle (handle == smart_ptr) as if the latter were a subclass of the former. But on the level of the implementation there is no inheritance relation between the handle and the referent, and this especially allows defining the handle's interface in terms of abstract interface types usable on the client side, while the referent's interface operates on the types of the service implementation. Thus, the drawback of using a C language interface is turned into the advantage of completely separating implementation and client.

!Implementation concept
Presumably, neither of the two models is usable as-is; rather we try to reconstruct the viable properties of both, starting out from the more elegant second model. Thus, basically the ''reference is a smart-ptr'' referring to the core object. Additionally, it incorporates a ''systematic ID denoting the location of the placement''. This ID without the smart-ptr part is used for the C implementation, making the full handle implementation a shortcut for an access sequence which first queries the placement from the Session, followed by dereferencing the placement to get at the core object. Thus, the implementation builds upon another abstraction, the &rarr; PlacementRef, which in turn assumes an index within the implementation of the [[session datastructure|SessionDataMem]] to track and retrieve the actual Placement.
[img[Structure of MObjectRef and PlacementRef|uml/fig136581.png]]


!using ~MObject references
~MObject references have a distinct lifecycle: usually, they are created //empty// (invalid ref, inactive state), and are then activated by attaching them to an existing placement within the session. Later on, the reference can be closed (detached, deactivated). Activation can be done either directly by a {{{Placement<MO>&}}}, or indirectly by any {{{Placement::ID}}} tag or even a plain LUID denoting a placement known to the PlacementIndex embedded within the [[Session]]. Activation can fail, because the validity of the reference is checked. In this respect, they behave exactly like PlacementRef, and indeed are implemented on top of the latter. From this point onward, the referred ~MObject is kept alive by the reference &mdash; but note, this doesn't extend to the placement, which still may be modified or removed from the session without further notice. {{red{TODO investigate synchronisation guarantees or a locking mechanism}}} Thus, client code should never store direct references to the placement.

~MObject references have value semantics, i.e. they don't have an identity and can be copied freely. Dereferencing yields a direct (language) ref to the MObject, while the placement can be accessed through a separate function {{{getPlacement()}}}. Moreover, the MObjectRef instance provides a direct API for some of the most common query functions one could imagine calling on both the object and the placement (i.e. to find out about the start time, length, ....)
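A much simplified, self-contained sketch of this two-level structure (toy types, not the actual Lumiera classes) -- the reference couples a refcounting smart-ptr to the core object with an ID locating the placement within the session index:
{{{
#include <memory>
#include <unordered_map>
#include <iostream>

// --- toy stand-ins for the real session types --------------------------------
struct MObject   { double length = 0; };
struct Placement { std::shared_ptr<MObject> object; double startTime = 0; };

using PlacementID = unsigned;                               // plain-C representable
std::unordered_map<PlacementID,Placement> sessionIndex;     // cf. PlacementIndex

// --- simplified MObjectRef ----------------------------------------------------
class MObjectRef {
    PlacementID              id_ = 0;       // locates the placement within the session
    std::shared_ptr<MObject> obj_;          // keeps the core object alive
public:
    bool activate (PlacementID id)          // may fail: validity is checked
    {
        auto pos = sessionIndex.find (id);
        if (pos == sessionIndex.end()) return false;
        id_  = id;
        obj_ = pos->second.object;
        return true;
    }
    bool       isValid() const   { return bool(obj_); }
    MObject&   operator*()       { return *obj_; }
    Placement& getPlacement()    { return sessionIndex.at (id_); }   // may meanwhile be gone!
    double     startTime()       { return getPlacement().startTime; }
};

int main()
{
    sessionIndex[1] = Placement{ std::make_shared<MObject>(), 25.0 };

    MObjectRef ref;                         // created empty (inactive)
    if (ref.activate (1))
        std::cout << "starts at " << ref.startTime()
                  << ", length "  << (*ref).length << '\n';
}
}}}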
The ~MObjects Subsystem contains everything related to the [[Session]] and the various media objects placed within it. It is complemented by the Asset Management (see &rarr; [[Asset]]). Examples of [[MObjects |MObject]](&rarr; def) are:
* audio/video clips
* [[effects and plugins|EffectHandling]]
* special facilities like mask and projector
* [[Automation]] sets
* labels and other (maybe functional) markup

This design strives to achieve a StrongSeparation between the low-level structures used to carry out the actual rendering and the high-level entities living in the session and being manipulated by the user. In this high-level view, the objects are grouped and located by [[Placements|Placement]], providing a flexible and open way to express different groupings, locations and ordering constraints between the media objects.
&rarr; EditingOperations
&rarr; PlacementHandling
&rarr; SessionOverview


[img[Classes related to the session|uml/fig128133.png]]
The HighLevelModel consists of MObjects, which are attached to one another through their [[Placement]]. While this is a generic scheme to arrange objects in a tree of [[scopes|PlacementScope]], some attachments are handled specially and may trigger side effects.

{{red{drafted feature as of 6/2010}}}

* a [[binding|BindingMO]] attached to root is linked to a [[Timeline]]
* a [[Track]] attached to root corresponds to a [[Sequence]]

&rarr; see ModelDependencies
''[[Lumiera|http://Lumiera.org/documentation]]''
[[Proc-Layer|ProcLayer and Engine]]
[[MObjects]]
[[Wiring]]
[[Implementation|ImplementationDetails]]
[[Admin]]
The problem is: when removing an Asset, all corresponding MObjects need to disappear. This means, besides the obvious ~Ref-Link (an MObject referring to an asset), we need backlinks or some sort of registry. And still worse: we need to remove the affected MObjects from the object network in the session and rebuild the Fixture...

&rarr; for a general design discussion see [[Relation of Clip and Asset|RelationClipAsset]]

//Currently// Ichthyo considers the following approach:
* all references between MObjects and Assets are implemented as __refcounting__ boost::shared_ptr
* the opposite direction is also a __strong reference__, effectively keeping the clip-MO alive even if it is no longer in use in the session (note this is a cyclic dependency that needs to be actively broken on deletion). This design decision is based on logical considerations (&rarr; see "deletions, Model-2" [[here|RelationClipAsset]]). This back-link is implemented by a Placement stored internally within the asset::Clip; it is needed for clean deletion of Assets, for GUI search functions and for adding the Clip to the session (again after having been removed, or multiple times as if cloned).
* MObjects and Assets implement an {{{unlink()}}} function releasing any internal links causing circular dependencies. It is always implemented such as to drop //optional// associations while retaining those associations mandatory for fulfilling the object's contract.
* Instead of a delete, we call this unlink() function and let the shared_ptr handle the actual deletion. Thus, even if the object is already unlinked, it is still valid and usable as long as some other entity holds a smart-ptr. An ongoing render process, for example, can still use a clip asset and the corresponding media asset linked as parent, but this media asset's link to the dependent clip has already been cleared (and the media is no longer registered with the AssetManager, of course).
* so the back-link from the dependent asset up to the parent asset is mandatory for the child asset to be functional, but preferably it should be {{{const}}} (only used for information retrieval)
* the whole hierarchy has to be crafted accordingly, but this isn't much of a limitation
* we don't use a registry; rather we model the real dependencies by individual dependency links. So a MediaAsset gets links to all Clips created from this Asset, and by traversing this tree we can handle the deletion
* after the deletion, the Fixture needs to be rebuilt.
* but any render processes may still hold pointers to the Asset to be removed, and the shared_ptr will ensure that the referred objects stay alive as long as needed.
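A stripped-down sketch of this {{{unlink()}}} pattern (illustrative only, not the actual asset classes): the media asset drops its //optional// down-links, while an ongoing user keeps the objects alive through its shared_ptr.
{{{
#include <memory>
#include <vector>
#include <iostream>

struct MediaAsset;      // forward declaration

struct ClipAsset {
    std::shared_ptr<const MediaAsset> parent;        // mandatory up-link (const: info retrieval only)
    void unlink() { /* nothing optional to drop */ }
};

struct MediaAsset {
    std::vector<std::shared_ptr<ClipAsset>> clips;   // optional down-links to dependent clips
    void unlink() { clips.clear(); }                 // drop optional links; breaks the cycle
};

int main()
{
    auto media = std::make_shared<MediaAsset>();
    auto clip  = std::make_shared<ClipAsset>();
    clip->parent = media;
    media->clips.push_back (clip);                   // cyclic dependency: must be broken actively

    std::shared_ptr<ClipAsset> renderProcess = clip; // e.g. an ongoing render still holds a ref

    // "deletion": unlink instead of delete, then drop our own references
    media->unlink();
    clip->unlink();
    media.reset();
    clip.reset();

    // the clip stays alive through the render process, and its parent media through the clip
    std::cout << "clip still usable, parent media present: "
              << bool(renderProcess->parent) << '\n';
}
}}}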

{{red{let's see if this approach works...}}}
Contrary to the &rarr;[[Assets and MObjects|ManagementAssetRelation]], the usage pattern for [[render nodes|ProcNode]] is quite simple: all nodes are created together every time a new segment of the network is being built, and will be used together until this segment is re-built, at which point they can be thrown away altogether. While it would be easy to handle the nodes automatically by smart-ptr (creation is accessible only through the {{{NodeFactory}}} anyway), it //seems advisable to provide a bulk allocation/deallocation here.// The reason is not so much the amount of memory (which is expected to be moderate), but the fact that the build process can be triggered repeatedly, several times a second, when tweaking the session, which could lead to fragmentation and memory pressure.

__10/2008__: the allocation mechanism can surely be improved later, but for now I am going for a simple implementation based on heap allocated objects owned by a vector of smart-ptrs. For each segment of the render nodes network, we have several families of objects, each of which will be maintained by a separate low-level memory manager (as said, for now implemented as a vector of smart-ptrs). Together, they form an AllocationCluster; all objects contained in such a cluster will be destroyed together.
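As a rough sketch of this interim solution (simplified, not the actual AllocationCluster code): every object created for a segment is owned by the cluster, and destroying the cluster releases the whole segment's node network in one go.
{{{
#include <memory>
#include <vector>
#include <utility>
#include <iostream>

// simplified AllocationCluster: owns every object created for one segment of the node network
class AllocationCluster {
    std::vector<std::shared_ptr<void>> allocations_;   // one entry per object, released together
public:
    template<class OBJ, class... ARGS>
    OBJ& create (ARGS&&... args)                        // cf. creation through the NodeFactory
    {
        auto obj = std::make_shared<OBJ> (std::forward<ARGS>(args)...);
        allocations_.push_back (obj);
        return *obj;
    }
    auto size() const { return allocations_.size(); }
};  // destroying the cluster destroys every object allocated through it

struct ProcNode { int id; };

int main()
{
    AllocationCluster segment;                  // one cluster per (re)built segment
    ProcNode& a = segment.create<ProcNode> (ProcNode{1});
    ProcNode& b = segment.create<ProcNode> (ProcNode{2});
    std::cout << "nodes in this segment: " << segment.size()
              << " (" << a.id << "," << b.id << ")\n";
}   // segment goes out of scope: bulk deallocation of the whole node network
}}}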
The Interface asset::Media is a //key abstraction.// It ties together several concepts and makes it possible to deal with them uniformly on the interfaces. Besides, like every Asset kind, it belongs rather to the bookkeeping view: an asset::Media holds the specific properties and parametrisation of the media source it stands for. Regarding the __inward interface__ (as used from within the [[model|HighLevelModel]] or the [[render nodes|ProcNode]]), it is irrelevant whether any given asset::Media object stands for a complete media source, just a clip taken from this source, or whether a placeholder version of the real media source is used instead.
[img[Asset Classes|uml/fig130437.png]]
{{red{NOTE 3/2010:}}} Considering a significant change here, especially collapsing clip-asset and clip-MO into a single entity using multiple inheritance
&rarr; regarding MultichannelMedia (see the notes at bottom)

&rarr; see also LoadingMedia
The Proc-Layer is designed such as to avoid unnecessary assumptions regarding the properties of the media data and streams. Thus, for anything which is not completely generic, we rely on an abstract [[type description|StreamTypeDescriptor]], which provides a ''Facade'' to an actual library implementation. This way, the fundamental operations can be invoked, like allocating a buffer to hold media data.

In the context of Lumiera and especially in the Proc-Layer, __media implementation library__ means
* a subsystem which allows working with media data of a specific kind
* such as to provide the minimal set of operations
** allocating a frame buffer
** describing the type of the data within such a buffer
* and this subsystem or external library has been integrated for use within Lumiera by writing adaptation code for accessing these basic operations through the [[implementation facade interface|StreamTypeImplFacade]] (see the sketch after this list)
* such a link to a type implementation is registered and maintained by the [[stream type manager|STypeManager]]
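To make the facade idea more tangible, here is a hypothetical, heavily simplified shape of such an implementation facade. All names and the two operations shown are merely illustrative; the real StreamTypeImplFacade interface is richer:
{{{
#include <cstddef>
#include <memory>
#include <string>

// hypothetical, minimal shape of an implementation facade for one media library
class StreamTypeImplFacade
{
public:
    virtual ~StreamTypeImplFacade() = default;

    // describe the kind of data held in a buffer of this type
    virtual std::string describeType() const = 0;

    // allocate a frame buffer suitable for this media type
    virtual std::unique_ptr<unsigned char[]> allocateFrameBuffer(std::size_t& size) const = 0;
};

// adaptation code for a concrete library would derive from the facade;
// here an invented adapter for raw 8-bit RGBA video frames
class RgbaVideoAdapter : public StreamTypeImplFacade
{
    std::size_t width_, height_;
public:
    RgbaVideoAdapter(std::size_t w, std::size_t h) : width_(w), height_(h) {}

    std::string describeType() const override
    {   return "video/RGBA8 " + std::to_string(width_) + "x" + std::to_string(height_); }

    std::unique_ptr<unsigned char[]> allocateFrameBuffer(std::size_t& size) const override
    {
        size = width_ * height_ * 4;
        return std::make_unique<unsigned char[]>(size);
    }
};

int main()
{
    RgbaVideoAdapter facade{1920, 1080};
    std::size_t size = 0;
    auto buffer = facade.allocateFrameBuffer(size);   // 1920*1080*4 bytes
    (void)buffer;
}
}}}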

!Problem of the implementation data types
Because we deliberately won't make any assumptions about the implementation library (besides the ones imposed indirectly by the facade interface), we can't integrate the library's data types as first-class citizens into the type system. All we can do is to use marker types and rely on the builder to have checked the compatibility of the actual data beforehand.
It would be possible to circumvent this problem by requiring all supported implementation libraries to be known at compile time, because then the actual media implementation type could be linked to a facade type by generic programming. Indeed, Lumiera follows this route with regard to the possible kinds of MObject or [[Asset]] &mdash; but to the contrary, for the problem in question here, being able to include support for a new media data type just by adding a plugin by far outweighs the benefits of compile-time checked implementation type selection. So, as a consequence of this design decision, we have to //accept the possibility that the media file type discovery code is misconfigured// and selects the //wrong implementation library at runtime.// Thus the render engine needs to be prepared for the source reading node of any pipe to flounder completely, and must protect the rest of the system accordingly.
Of course: Cinelerra currently leaks memory and crashes regularly. For the newly written code, besides retaining the same level of performance, a main goal is to use methods and techniques known to support the writing of quality code. So, besides the MultithreadConsiderations, a solid strategy for managing the ownership of allocated memory blocks is necessary right from the start.

!Problems
# Memory management needs to work correctly in a //fault tolerant environment//. That means we need to be prepared to //handle on a non-local scale// some sorts of error conditions (without aborting the application). To be more precise: some error condition arises locally, which leads to a local abort and just the disabling/failing of some subsystem, without affecting the application as a whole. This can happen on a regular basis (e.g. rendering fails) and thus is __no excuse for leaking memory__
# Some (not all) parts of the core application are non-deterministic. That means we can't tie the memory management to any assumptions about the execution path

!C++ solution
First of all -- this doesn't concern //every// allocation. Rather, there are certain //dangerous areas// which need to be identified. Anyhow, instead of carrying the inherent complexities of the problem into the solution, we should rather look for common solution pattern(s) which help factoring out complexity.

For the case in question here this seems to be the __R__esource __A__llocation __I__s __I__nitialisation pattern (''RAII''), which basically boils down to never using bare pointers when ownership is concerned. Client code always gets to use a wrapper object, which cannot be obtained unless going through some well defined construction site. As an extension to the basic RAII pattern, C++ allows us to build //smart wrapper objects//, thereby delegating any kind of de-registration or resource freeing automatically. This usage pattern doesn't necessarily imply (and in fact isn't limited to) just ref-counting.
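A generic illustration of that idea, with invented names: the handle can only be obtained through a defined construction site, and de-registration happens automatically when the wrapper goes out of scope:
{{{
#include <iostream>
#include <memory>

// sketch: client code never sees a bare pointer; the handle does the cleanup
class Registry
{
public:
    using Handle = std::shared_ptr<int>;    // the "resource" is just an int here

    Handle acquire(int value)               // the only construction site
    {
        ++active_;
        return Handle{new int(value),
                      [this](int* p) { --active_; delete p; }};   // de-registration in the deleter
    }                                        // note: the registry must outlive all handles
    int active() const { return active_; }

private:
    int active_ = 0;
};

int main()
{
    Registry reg;
    {
        auto h = reg.acquire(42);
        std::cout << "active: " << reg.active() << '\n';    // 1
    }                                                        // handle released, auto de-registered
    std::cout << "active: " << reg.active() << '\n';        // 0
}
}}}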

!!usage scenarios
# __existence is being used__: Objects just live for being referred to in an object network. In this case, use refcounting smart-pointers for every ref. (note: problem with cyclic refs)
# __entity bound ownership__: Objects can be tied to some long living entity in the program, which holds the smart-pointer
#* if the existence of this ref-holding entity can be //guaranteed// (as if by contract), then the other users can build an object network with conventional pointers
#* otherwise, when the ref-holding entity //can disappear// in a regular program state, we need weak-refs and checking (because by our postulate the controlled resource needs to be destructed immediately, otherwise we would have the first case, existence == being used)

!!!dangerous uses
* the render nodes &rarr; [[detail analysis|ManagementRenderNodes]] {{red{TODO}}}
* the MObjects in the session &rarr; [[detail analysis|ManagementMObjects]] {{red{TODO}}}
* Asset - MObject relationship. &rarr; [[detail analysis|ManagementAssetRelation]] {{red{TODO}}}

!!!rather harmless
* Frames (buffers), because they belong to a given [[RenderProcess (=StateProxy)|StateProxy]] and are just passed into the individual [[ProcNode]]s. This can be handled consistently with conventional methods.
* each StateProxy belongs to one top-level call to the [[Controller-Facade|Controller]]
* similar for the builder tools, which belong to a build process. Moreover, they are pooled and reused.
* the [[sequences|Sequence]] and the defined [[assets|Asset]] belong together to one [[Session]]. If the Session is closed, this means an internal shutdown of the whole ProcLayer, i.e. closing of all GUI representations and terminating all render processes. If these calls are implemented as blocking operations, we can assert that as long as any GUI representation or any render process is running, there is a valid session and model.

!using Factories
And, last but not least, doing large scale allocations is the job of the backend. Exceptions being long-lived objects, like the session or the sequences, which are created once and don't bear the danger of causing memory pressure. Generally speaking, client code shouldn't issue {{{new}}} and {{{delete}}} wherever it happens to be convenient. Questions of setup and lifecycle should always be delegated, typically through the use of some [[factory|Factories]], which might return the product conveniently wrapped into a RAII style handle. Memory allocation is crucial for performance, and needs to be adapted to the actual platform -- which is impossible unless abstracted and treated as a separate concern.
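As a hedged illustration of this factory-plus-handle idea (all names invented, not taken from the code base): the factory is the only construction site, and the allocation strategy remains a private concern that can be changed without touching client code.
{{{
#include <memory>
#include <string>

struct Session { std::string name; };      // placeholder for a long-lived product

// invented factory: the only place where the product is allocated;
// clients never call new/delete, they just hold the returned handle
class SessionFactory
{
public:
    static std::shared_ptr<Session> create(std::string name)
    {
        // the allocation scheme (pooling, clustering, ...) is a private concern
        // of the factory and could be swapped without affecting client code
        return std::make_shared<Session>(Session{std::move(name)});
    }
};

int main()
{
    auto session = SessionFactory::create("default");
}   // handle goes out of scope, product freed by whatever scheme the factory chose
}}}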
This category comprises the various aspects of the way the application controls and manages its own behaviour. They are related to the way the application behaves, as opposed to the way the edited data is structured and organised (which is the realm of [[structural assets|StructAsset]])
* StreamType &rarr; a type system for describing and relating media data streams
* ScaleGrid &rarr; to manage time scales and frame alignment

!accessing meta assets
It turns out that all meta assets follow a distinct usage pattern: //they aren't built as individual entities.// Either, they are introduced into the system as part of a larger scale configuration activity, or they are //derived from category.// The latter fits in with a prototype-like approach; initially, the individual entry just serves to keep track of a categorisation, while at some point, such a link into a describing category may evolve into a local differentiation of some settings.

Another distinct property of meta assets is to be just a generic front-end to some very specific data entry, which needs to be allocated and maintained and provided on demand. Consider for example configuration rules, which have both a textual and an AST representation and will be assembled and composed into an effective rule set, depending on usage scope. Another example would be the enormous amount of parameter data created by parameter automation in the session. While certainly the raw data needs to be stored and retrieved somehow, the purpose of the corresponding meta asset is to access and manipulate this data in a structured and specific fashion.

!!!self referential structure
These observations lead to a design relying on a self referential structure: each meta asset is a {{{meta::Descriptor}}}. In the most basic version -- as provided by the generic implementation in {{{asset::Meta}}} -- this descriptor is just the link to another descriptor, which represents a category. Thus, meta assets are created or accessed by
* just an EntryID, which implicitly also establishes a type, the intent being "get me yet another of this kind"
* a descriptor and an EntryID, to get a new element with a more distinct characterisation.

!!!mutating meta assets
Meta assets are ''immutable'' -- but they can be //superseded.//
For each meta asset instance, initially a //builder// is created for setting up the properties; when done, the builder will "drop off" the new meta asset instance. The same procedure is used for augmenting or superseding an existing element.
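A sketch of that builder usage with invented names (the actual {{{asset::Meta}}} builder API may differ): the builder is mutable, the resulting meta asset is immutable, and superseding starts a new builder from the existing instance.
{{{
#include <memory>

// invented, simplified: an immutable meta asset plus its builder
class TimeGrid
{
    double fps_;
    explicit TimeGrid(double fps) : fps_(fps) {}
public:
    double fps() const { return fps_; }

    class Builder
    {
        double fps_ = 25.0;
    public:
        Builder& setFps(double f) { fps_ = f; return *this; }

        // "drop off" the finished, immutable meta asset
        std::shared_ptr<const TimeGrid> commit() const
        {   return std::shared_ptr<const TimeGrid>(new TimeGrid(fps_)); }
    };

    // superseding: start a new builder from an existing (immutable) instance
    Builder rebuild() const { Builder b; b.setFps(fps_); return b; }
};

int main()
{
    auto grid25 = TimeGrid::Builder{}.setFps(25).commit();
    auto grid50 = grid25->rebuild().setFps(50).commit();   // supersedes, doesn't mutate
}
}}}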
Lumiera's Proc-Layer is built around //two interconnected models,// mediated by the [[Builder]]. Basically, the &rarr;[[Session]] is an external interface to the HighLevelModel, while the &rarr;RenderEngine operates the structures of the LowLevelModel.
Our design of the models (both [[high-level|HighLevelModel]] and [[low-level|LowLevelModel]]) relies partially on dependent objects being kept consistently in sync. Currently (2/2010), __ichthyo__'s assessment is to consider this topic not important and pervasive enough to justify building a dedicated solution, like e.g. a central tracking and registration service. An important point to consider with this assessment is the fact that the session implementation is kept mostly single-threaded. Thus, lacking one central place to handle this issue, care has to be taken to capture and treat all the relevant individual dependencies properly at the implementation level.

!known interdependencies
[>img[Fundamental object relations used in the session|uml/fig136453.png]]
* the session API relies on two kinds of facade like assets: [[Timeline]] and [[Sequence]], linked to the BindingMO and Track objects within the model respectively.
* conceptually, the DefaultsManagement and the AssetManager count as being part of the [[global model scope|ModelRootMO]], but, due to their importance, these facilities are accessible through a singleton interface.
* currently (as of 2/2010) the exact dependency of the automation calculation during the render process on the automation definitions within the HighLevelModel remains to be specified.

!!Timelines and Sequences
While implemented as StructAsset, additionally we need to ensure that every instance gets linked to the relevant parts of the model and registered with the session. Contrast this with other kinds of assets, which may just remain enlisted, but never actually used.

;the Session
:...is linked 1:1 with timelines and sequences. Registration and deregistration is directly tied to creation and destruction.
: __created__ &rArr; default timeline
: __destroy__ &rArr; discard all timelines, discard all sequences

;Timeline
:acts as facade and is implemented by a root-attached BindingMO. Can't exist in isolation.
: __created__ &rArr; create a dedicated new binding, either using an existing sequence, or a newly created empty sequence
: __destroy__ &rArr; remove binding, while the previously bound sequence remains in model.
;root-placed Binding
:while generally a Binding can exist in the model, when attached to root, a Timeline will be created
: __created__ &rArr; an existing sequence might be given on creation, otherwise a default configured sequence is created
: __destroy__ &rArr; implemented by detaching from root (see below) prior to purging from the model.
: __attached__ to root &rArr; invoke Timeline creation
: __detached__ from root &rArr; will care to destroy the corresponding timeline

;Sequence
:is completely dependent on a root-scoped track; can optionally be bound into one or multiple timelines or a VirtualClip, or remain unbound
: __created__ &rArr; mandates specification of a track-MO, (necessarily) placed into root scope &mdash; {{red{TODO: what to do with non-root tracks?}}}
: __destroy__ &rArr; destroy any binding using this sequence, purge the corresponding track from model, if applicable, including all contents
: __querying__ &rArr; forwards to creating a root-placed track, unless the queried sequence exists already
;root-placed Track
:attachment of a track to root scope is detected magically and causes creation of a Sequence
: __attached__ to root &rArr; invoke sequence creation
: __detached__ from root &rArr; find and detach from corresponding sequence, which is then destroyed. //Note:// contents remain unaffected
: irrespective of whether the track exists, is empty, or gets purged entirely, only the connection to root scope counts; thus relocating a track by [[Placement]] might cause its scope, with all nested contents, to become a sequence of its own or become part of another sequence. As sequences aren't required to be bound into a timeline, they may be present in the model as invisible, passive containers

!!Magic attachments
While generally the HighLevelModel allows all kinds of arrangements and attachments, certain connections are [[detected automatically|ScopeTrigger]] and may trigger special actions, like the creation of Timeline or Sequence façade objects as described above. The implementation of such [[magic attachments|MagicAttachment]] relies on the PlacementIndex.
When it comes to addressing and distinguishing object instances, there are two different models of treatment, and usually any class can be related to one of these: An object with ''value semantics'' is completely defined through this "value", and not distinguishable beyond that. Usually, value objects can be copied, handled and passed freely, without any ownership. To the contrary, an object with ''reference semantics'' has a unique identity, even if otherwise completely opaque. It is rather like a facility, "living" somewhere, often owned and managed by another object (or behaving specially in some other way). Usually, client code deals with such objects through a reference token (which has value semantics). Care has to be taken with //mutable objects,// as any change might influence the object's identity. While this usually is acceptable for value objects, it is prohibited for objects with reference semantics. These are typically created by //factories// &mdash; and this fabrication is the only process to define the identity.

!Assets
Each [[Asset]] holds an identification tuple; the hash derived from this constant tuple is used as ~Asset-ID.
* the {{{Asset::Ident}}} tuple contains the following information
*# a __name-ID__, which is a human understandable but sanitised word
*# a tree-like classification of the asset's __category__, comprised of
*#* asset kind, one of {{{AUDIO, VIDEO, EFFECT, CODEC, STRUCT, META}}}
*#* a path in a virtual classification tree. Some &raquo;folders&laquo; have magic meanings
*# an __origin-ID__ to denote the origin, authorship or organisation, acting like a namespace
*# a __version number__.
Of these, the tuple {{{(org,category,name)}}} defines the unique asset identity, and is hashed into the asset-ID. At most one asset with a given ident-tuple (including version) can exist in the whole system. Any higher version is supposed to be fully backwards compatible with all previous versions. Zero is reserved for internal purposes. {{red{1/10: shouldn't we extend the version to (major,minor), to match the plug-in versioning?}}} Thus, the version can be incremented without changing the identity, but the system won't allow co-existence of multiple versions.
* Assets are ''equality'' comparable (even ''ordered''), based on their //identity// &mdash; sans version (see the sketch after this list).
* Categories are ''equality'' comparable value objects
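A toy version of such an identity tuple and the comparisons derived from it; the real {{{Asset::Ident}}} differs in detail, and the hash combination shown here is just a common generic recipe:
{{{
#include <functional>
#include <iostream>
#include <string>
#include <tuple>

// toy version of the identity tuple: (org, category, name) defines identity,
// the version is excluded from identity and hash
struct Ident
{
    std::string org, category, name;
    unsigned    version = 1;

    std::size_t hash() const
    {
        std::size_t h = std::hash<std::string>{}(org);
        h ^= std::hash<std::string>{}(category) + 0x9e3779b9 + (h << 6) + (h >> 2);
        h ^= std::hash<std::string>{}(name)     + 0x9e3779b9 + (h << 6) + (h >> 2);
        return h;
    }
    bool operator==(Ident const& o) const
    {   return org == o.org && category == o.category && name == o.name; }   // sans version
    bool operator<(Ident const& o) const
    {   return std::tie(org, category, name) < std::tie(o.org, o.category, o.name); }
};

int main()
{
    Ident a{"lumiera.org", "AUDIO/tracks", "intro", 1};
    Ident b{"lumiera.org", "AUDIO/tracks", "intro", 2};
    std::cout << std::boolalpha
              << (a == b) << ' '                     // true: identity ignores the version
              << (a.hash() == b.hash()) << '\n';     // true: same asset-ID
}
}}}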

!~MObjects
As of 1/10, MObjects are mostly placeholders or dummies, because the actual SessionLogic still has to be defined in detail.

The following properties can be considered as settled:
* reference semantics
* non-copyable, created by MObjectFactory, managed automatically
* each ~MObject is associated n:1 to an asset.
* besides that, it has an opaque instance identity, which is never made explicit.
* this identity springs from the way the object is created and manipulated. It is //persistent.//
* because of the ~MObject-identity's nature, there is //no point in comparing ~MObjects.//
* ~MObjects are always attached to the session by a placement, which creates kind-of an //instance//
* thus, because placements act as a subdivision of ~MObject identification, in practice it is always placements that will be compared.

!Placements
[[Placements|Placement]] are somewhat special, as they mix value and reference semantics. First off, they are configuration values and copyable smart-pointers, referring to a primary subject (clip, effect, track, label, binding,....). But, //by adding a placement to the session,// we create a unique instance-identity. This is implemented by copying the placement into the internal session store and thereby creating a new hash-ID, which is then registered within the PlacementIndex. Thus, a ''placement into the model'' has a distinct identity.
* Placements are ''equality'' comparable, based on this instance identity (hash-ID)
* besides, there is an equivalence relation regarding the "placement specification" contained in the [[locating pins|LocatingPin]] of the Placement.
** they can be compared for ''equivalent definition'': the contained definitions are the same and in the same order
** alternatively, they can be checked for ''effective equivalence'': both placements to be compared resolve to the same position
note: the placement equality relation carries over to ~PlacementRef and ~MObjectRef

!Commands
{{red{WIP}}} For now, commands are denoted by a unique, human-readable ID, which is hard-coded in the source. We might add an LUID and a version numbering scheme later on.
Commands are used as ''prototype object'' &mdash; thus we face special challenges regarding the identity, which haven't yet been addressed.

!References and Handles
These are used as token for dealing with other objects and have no identity of their own. PlacementRef tokens embody a copy of the referred placement's hash-ID. MObjectRef handles are built on top of the former, additionally holding a smart-ptr to the primary subject.
* these reference handles simply reflect the equality relation on placements, by virtue of comparing the hash-ID.
* besides, we could build upon the placement locating chain equivalence relations to define a semantic equivalence on ~MObjectRefs

Any point where output might possibly be produced. Model port entities are located within the [[Fixture]] &mdash; model port as a concept spans the high-level and low-level view. A model port is associated with a pipe in the HighLevelModel, while at the same time denoting a set of corresponding [[exit nodes|ExitNode]] within the [[segments|Segmentation]] of the render nodes network.

A model port is rather derived than configured; it emerges when a pipe [[claims|WiringClaim]] an output destination, while some other entity at the same time actually //uses this designation as a target,// either directly or indirectly. This match of provision and usage is detected during the build process and produces an entry in the fixture's model port table. These model ports in the fixture are keyed by ~Pipe-ID, thus each model port has an associated StreamType.

Model ports are the effective, resulting outputs of each timeline; additional ports result from [[connecting a viewer|ViewConnection]]. Any render or display process happens at a model port.

!formal specification
Model port is a //conceptual entity,// denoting the possibility to pull generated data of a distinct (stream)type from a specific bus within the model -- any possible output produced or provided by Lumiera is bound to appear at a model port. The namespace of model ports is global, each being associated with a ~Pipe-ID.

Model ports are represented by small non-copyable descriptor objects with distinct identity, which are owned and managed by the [[Fixture]]. Clients are mandated to resolve a model port on each usage, as configuration changes within the model might cause ports to appear and disappear. To stress this usage pattern, {{{ModelPort}}} instances actually are small copyable value objects (smart handles), which can be used to access data within an opaque registry. Each model port belongs to a specific Timeline and is aware of this association, as is the timeline, allowing to get the collection of all ports of a given timeline. Besides, within the Fixture each model port refers to a specific [[segmentation of the time axis|Segmentation]] (relevant for this specific timeline). Thus, with the help of this segmentation, a model port can yield an ExitNode to pull frames for a given time.
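A condensed sketch of that handle idea, with invented registry internals: the handle is a small copyable value, resolves against the current configuration on each use, and can signal that the port has vanished after a rebuild:
{{{
#include <map>
#include <stdexcept>
#include <string>

// invented internals: the Fixture owns a table of port descriptors, keyed by Pipe-ID
struct PortDescriptor { std::string streamType; };
static std::map<std::string, PortDescriptor> currentPortTable;   // "the" current configuration

// small copyable value object; every access resolves against the current table
class ModelPort
{
    std::string pipeID_;
public:
    explicit ModelPort(std::string pipeID) : pipeID_(std::move(pipeID)) {}

    explicit operator bool() const                       // "has a port" check
    {   return currentPortTable.count(pipeID_) != 0; }

    PortDescriptor const& resolve() const                // may fail after a rebuild
    {
        auto it = currentPortTable.find(pipeID_);
        if (it == currentPortTable.end())
            throw std::logic_error("stale model port: " + pipeID_);
        return it->second;
    }
};

int main()
{
    currentPortTable["video-master"] = PortDescriptor{"video/RGBA"};
    ModelPort port{"video-master"};
    if (port) port.resolve();          // fine right now

    currentPortTable.clear();          // builder swapped in a new configuration
    bool stale = !port;                // handle detects that the port has vanished
    (void)stale;
}
}}}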

Model ports are conceptual entities, denoting the points where output might possibly be produced &rarr; see [[definition|ModelPort]].
But there is an actual representation, a collection of small descriptor objects managed by the Fixture and organised within the model port table data structure. Because model ports are discovered during the build process, we need the ability to (re)build this table dynamically, finally swapping in the modified configuration with a transactional switch. Only the builder is allowed to perform such mutations, while for client code the model ports are immutable.

!supported operations
* get the model port by ~Pipe-ID
* a collection of model ports per timeline
* sub-grouping by media kind, possibly even by StreamType
* possibility to enumerate model ports in distinct //order,// defined within the timeline
* with the additional possibility to filter by media kind or suitable stream type.
* a way to express the fact of //not having a model port.//

!!!mutating and rebuilding
Model ports are added once and never changed. The corresponding timeline and pipe are known at setup time, but the overall number of model ports is determined only as a result of completing the build process. At that point, the newly built configuration is swapped in transactionally to become the current configuration.

!Implementation considerations
The transactional switch creates a clear partitioning in the lifespan of the model port table. //Before// that point, entries are just added, but not accessed in any way. //After// that point, no further mutation occurs, but lookup is frequent and happens in a variety of different configurations and transient orderings.

This observation leads to the idea of using //model port references// as frontend to provide all kinds of access, searching and reordering. These encapsulate the actual access by silently assuming reference to "the" global current model port configuration. This way the actual model port descriptors could be bulk allocated in a similar manner as the processing nodes and wiring descriptors. Access to stale model ports could be detected by the port references, allowing also for a {{{bool}}} checkable "has no port" information.

A model port registry, maintained by the builder, is responsible for storing the discovered model ports within a model port table, which is then swapped in after completing the build process. The {{{builder::ModelPortRegistry}}} acts as management interface, while client code accesses just the {{{ModelPort}}} frontend. A link to the actual registry instance is hooked into that frontend when bringing up the builder subsystem.
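A minimal illustration of that transactional switch, heavily simplified and with invented names; in the real code the swap would additionally need to be guarded against concurrent access:
{{{
#include <map>
#include <string>
#include <utility>

struct PortDescriptor { std::string streamType; };
using PortTable = std::map<std::string, PortDescriptor>;

// invented sketch of the registry: the builder mutates a pending table,
// clients only ever see the committed one
class ModelPortRegistry
{
    PortTable current_, pending_;
public:
    void definePort(std::string pipeID, PortDescriptor d)   // builder only
    {   pending_[std::move(pipeID)] = std::move(d); }

    void commit()                                            // transactional switch
    {
        current_ = std::move(pending_);                      // swap in the new configuration
        pending_.clear();
    }
    PortTable const& currentTable() const { return current_; }
};

int main()
{
    ModelPortRegistry registry;
    registry.definePort("video-master", {"video/RGBA"});
    // ... build process runs, clients still see the old (empty) table ...
    registry.commit();                                       // now the new ports become visible
}
}}}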
A special kind of MObject, serving as a marker or entry point at the root of the HighLevelModel. As any ~MObject, it is attached to the model by a [[Placement]]. And in this special case, this placement forms the ''root scope'' of the model, thus containing any other PlacementScope (e.g. tracks, clips with effects,...)

This special ''session root object'' provides a link between the model part and the &raquo;bookkeeping&laquo; part of the session, i.e. the [[assets|Asset]]. It is created and maintained by the session (implementation level) &mdash; allowing to store and load the asset definitions as contents of the model root element.

__Note__: nothing within the PlacementIndex requires the root object to be of a specific type; the index just assumes a {{{Placement<MObject>}}} (or subclass) to exist as root element. And indeed, for several unit tests we create an Index mock with a tree of dummy ~MObjects and temporarily shadow the "real" PlacementIndex by this mock (&rarr; see SessionServices for the API allowing to access this //mock index// functionality)

Based on practical experiences, Ichthyo tends to consider Multichannel Media as the base case, while counting media files providing just one single media stream as exotic corner cases. This may seem counter-intuitive at first sight; you should think of it as an attempt to avoid, right from the start, some of the common shortcomings found in many video editors, especially
* having to deal with keeping a "link" between audio and video clips
* silly limitations on the supported audio setups (e.g. "sound is mono, stereo or Dolby-5.1")
* unnecessary complexity when dealing with more realistic setups, esp. when working on dialogue scenes
* inability to edit stereoscopic (3D) video in a natural fashion

!Compound Media
[>img[Outline of the Build Process|uml/fig131333.png]]
Basically, each [[media asset|MediaAsset]] is considered to be a compound of several elementary media (tracks), possibly of various different media kinds. Adding support for placeholders (''proxy clips'') at some point in future will add still more complexity (because then there will be even dependencies between some of these elementary media). To handle, edit and render compound media, we need to impose some structural limitations. But anyhow, we try to configure as much as possible already at the "asset level" and make the rest of the proc layer behave just according to the configuration given with each asset.

!!Handling within the Model
* from a Media asset, we can get a [[Processing Pattern (ProcPatt)|ProcPatt]] describing how to build a render pipeline for this media
* we can create a Clip (MObject) from each Media, which will be linked back to the media asset internally.
* media can be compound internally, but will be handled as a single unit as long as possible. Even for compound media, we get a single clip.
* within the assets, we get a distinct ChannelConfig asset to represent the structural assembly of compound media based on elementary parts.
* sometimes it's necessary to split and join the individual channels in processing, for technical reasons. Clearly the goal is to hide this kind of accidental complexity and treat it as an implementation detail. At HighLevelModel scope, conceptually there is just one "stream" for each distinct kind of media, and thus there is only one [[Pipe]]. But note, compound media can be structured beyond having several channels. The typical clip is a compound of image and sound. While still handled as one unit, this separation can't be concealed entirely; some effects can be applied to sound or image solely, and routing needs to separate sound and image at least when reaching the [[global pipes|GlobalPipe]] within the timeline.
* the Builder gets at the ProcPatt (descriptor) of the underlying media for each clip and uses this description as a template to build the render pipeline. That is, the ProcPatt specifies the codec asset and maybe some additional effect assets (deinterlace, scale) necessary for feeding media data corresponding to this clip/media into the render nodes network.

!!Handling within the engine
While initially it seems intuitive just to break down everything into elementary stream processing within the engine, unfortunately a more in-depth analysis reveals that this approach isn't viable. There are some crucial kinds of media processing, which absolutely require having //all channels available at the actual processing step.// Notable examples being sound panning, reverb and compressors. The same holds true for processing of 3D (stereoscopic) images. In some cases we can get away with replicating identical processor nodes over the multiple channels and applying the same parameters to all of them. The decision which approach to take here is a tricky one, and requires much in-depth knowledge about the material to be processed -- typically the quality and the image or sound fidelity depend on this kind of minute distinction. Much otherwise fine existing software falls short in this domain. Making such fine points accessible through [[processing rules|Rules]] is one of the core goals of the Lumiera project.

As an immediate consequence of not being able to reduce processing to elementary stream processing, we need to introduce a [[media type system|StreamType]], allowing us to reason and translate between the conceptual unit at the model level, the compound of individual processing chains in the builder and scheduling level, and the still necessary individual render jobs to produce the actual data for each channel stream. Moreover it is clear that the channel configuration needs to be flexible, and not automatically bound to the existence of a single media with that configuration. And last but not least, through this approach, we also enable handling of nested sequences as virtual clips with virtual (multichannel) media.

&rArr; conclusions
* either the ~ClipMO refers directly to a ChannelConfig asset &mdash; or in case of the VirtualClip a BindingMO takes this role.
* as the BindingMO is also used to implement the top-level timelines, the treatment of global and local pipes is united
* every pipe (bus) should be able to carry multiple channels, //but with the limitation to only a single media StreamType//
* this "multichannel-of-same-kind" capability carries over to all entities within the build process, including ModelPort entries and even the OutputSlot elements
* in playback / rendering, within each "Feed" (e.g. for image and sound) we get [[calculation streams|CalcStream]] matching the individual channels
* thus, only as late as when allocating / "opening" an OutputSlot for actual rendering, we get multiple handles for plain single channels.
* the PlayProcess serves to join both views, providing a single PlayController front-end, while [[dispatching|FrameDispatcher]] to single channel processing.
Various aspects of the individual [[render node|ProcNode]] are subject to configuration and may influence the output quality or the behaviour of the render process.
* the selection //which// actual implementation (plugin) to use for a formally defined &raquo;[[Effect|EffectHandling]]&laquo;
* the intermediary/common StreamType to use within a [[Pipe]]
* the render technology (CPU, hardware accelerated {{red{&rarr; Future}}})
* the ScheduleStrategy (possibly subdividing the calculation of a single frame)
* if this node becomes a possible CachePoint or DataMigrationPoint in RenderFarm mode
* details of picking a suitable [[operation mode|RenderImplDetails]] of the node (e.g. utilising "in-place" calculation)
~NodeCreatorTool is a [[visiting tool|VisitorUse]] used as second step in the [[Builder]]. Starting out from a [[Fixture]], the builder first [[divides the Timeline into segments|SegmentationTool]] and then processes each segment with the ~NodeCreatorTool to build a render nodes network (Render Engine) for this part of the timeline. While visiting individual Objects and Placements, the ~NodeCreatorTool creates and wires the necessary [[nodes|ProcNode]]
!Problem of Frame identification

!Problem of Node numbering
In the most general case the render network may be just a DAG (not just a tree). Especially, multiple exit points may lead down to the same node, and following each of these possible paths the node may be at a different depth on each. This rules out a simple counter starting from the exit level, leaving us with the possibility of either employing a rather convoluted addressing scheme or using arbitrary ID numbers. {{red{...which is what we do for now}}}
The [[nodes|ProcNode]] are wired to form a "Directed Acyclic Graph"; each node knows its predecessor(s), but not its successor(s). The RenderProcess is organized according to the ''pull principle'', thus we find an operation {{{pull()}}} at the core of this process, meaning that there isn't a central entity to invoke nodes consecutively. Rather, the nodes themselves contain the detailed knowledge regarding prerequisites, so the calculation plan is worked out recursively. Yet still there are some prerequisite resources to be made available for any calculation to happen. So the actual calculation is broken down into atomic chunks of work, resulting in a 2-phase invocation whenever "pulling" a node. For this to work, we need the nodes to adhere to a specific protocol (a code sketch follows after the description below):
;planning phase
:when a node invocation is foreseeable to be required for getting a specific frame for a specific nominal and actual time, the engine has to find out the actual operations to happen
:# the planning is initiated by issuing a "get me output" request, finally resulting in a JobTicket
:# recursively, the node propagates "get me output" requests for its prerequisites
:# after retrieving the planning information for these prerequisites, the node encodes specifics of the actual invocation situation into a closure called StateAdapter <br/>{{red{TODO: why not just labeling this &raquo;~StateClosure&laquo;?}}}
:# finally, all this information is packaged into a JobTicket representing the planning results.
;pull phase
:now the actual node invocation is embedded within a job, activated through the scheduler to deliver //just in time.//
:# Node is pulled, with a StateProxy object as parameter (encapsulating BufferProvider for access to the required frames or buffers)
:# Node may now retrieve current parameter values, using the state accessible via the StateProxy
:# to prepare for the actual {{{process()}}} call, the node now has to retrieve the input prerequisites
:#* when the planning phase determined availability from the cache, then just these cached buffer(s) are now retrieved, dereferencing a BuffHandle
:#* alternatively the planning might have arranged for some other kind of input to be provided through a prerequisite Job. Again, the corresponding BuffHandle can now be dereferenced
:#* Nodes may be planned to have a nested structure, thus directly invoking {{{pull()}}} call(s) to prerequisite nodes without further scheduling
:# when input is ready prior to the {{{process()}}} call, output buffers will be allocated by locking the output [[buffer handles|BuffHandle]] prepared during the planning phase
:# once all buffers and prerequisites are available, the Node may now prepare a frame pointer array and finally invoke the external {{{process()}}} to kick off the actual calculations
:# finally, when the {{{pull()}}} call returns, "parent" state originating the pull holds onto the buffers containing the calculated output result.
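A very rough pseudo-implementation of this two-phase protocol; the real interfaces (JobTicket, StateAdapter, StateProxy, BuffHandle) are considerably more involved, and all names below are simplified stand-ins:
{{{
#include <functional>
#include <iostream>
#include <vector>

// heavily simplified stand-ins for JobTicket / buffers
using Buffer    = std::vector<float>;
using JobTicket = std::function<Buffer()>;           // planning result: "how to produce this frame"

struct Node
{
    std::vector<Node*> prerequisites;                 // wired by the builder, predecessors only

    // planning phase: recursively work out what has to happen, package it as a ticket
    JobTicket plan(long frameNr) const                // frameNr would pick cache entries; unused here
    {
        std::vector<JobTicket> inputPlans;
        for (Node* pre : prerequisites)
            inputPlans.push_back(pre->plan(frameNr)); // propagate "get me output"

        return [this, inputPlans]                     // the closure plays the role of the StateAdapter
        {
            std::vector<Buffer> inputs;
            for (auto const& getInput : inputPlans)
                inputs.push_back(getInput());         // pull prerequisites
            Buffer output(16, 0.f);                   // allocate the output buffer
            process(inputs, output);                  // the actual calculation
            return output;
        };
    }

    void process(std::vector<Buffer> const& in, Buffer& out) const
    {   out.assign(16, float(in.size())); }           // dummy calculation
};

int main()
{
    Node source, exitNode;
    exitNode.prerequisites = {&source};

    JobTicket ticket = exitNode.plan(42);    // planning phase
    Buffer frame = ticket();                 // pull phase, normally run as a scheduled job
    std::cout << "frame[0] = " << frame[0] << '\n';
}
}}}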
{{red{WIP as of 9/11  -- many details here are still to be worked out and might change as we go}}}

{{red{Update  8/13  -- work on this part of the code base has stalled, but now the plan is to get back to this topic when coding down from the Player to the Engine interface and from there to the NodeInvocation. The design as outlined above was mostly coded in 2011, but never really tested or finished; you can expect some reworkings and simplifications, but basically this design looks OK}}}

some points to note:
* the WiringDescriptor is {{{const}}} and precalculated while building (remember another thread may call in parallel)
* when a node is "inplace-capable", input and output buffer may actually point to the same location
* but there is no guarantee for this to happen, because the cache may be involved (and we can't overwrite the contents of a cache frame)
* nodes in general may require N inputs and M output frames, which are expected to be processed in a single call
* some of the technical details of buffer management are encapsulated within the BufferTable of each invocation

&rarr; the [["mechanics" of the render process|RenderMechanics]]
&rarr; more fine grained [[implementation details|RenderImplDetails]]
The calculations for rendering and playback are designed with a base case in mind: calculating a linear sequence of frames consecutive in time.
But there are several important modes of playback, which violate that assumption...
* jump-to / skip
* looping
* pause
* reversed direction
* changed speed
** slow-motion
** fast-forward/backward shuffling 
* scrubbing
* free-wheeling (as fast as possible) 

!search for a suitable implementation approach {{red{WIP 1/2013}}}
The crucial point seems to be when we're picking a starting point for the planning related to a new frame. &rarr; {{{PlanningStepGenerator}}}
Closely related is the question when and how to terminate a planning chunk, what to dispose as a continuation, and when to cease planning altogether.

!requirement analysis
These non linear playback modes do pose some specific challenges on the overall control structure distributed over the various collaborators within the play and render subsystem. The following description treats each of the special modes in its relation to this engine context
;jumping
:creates a discontinuity in //nominal time,// while the progress of real wall clock time deadlines remains unaffected
:we need to distinguish two cases
:* a //pre planned jump// can be worked into the frame dispatch just like normal progression. It doesn't create any additional challenge on timely delivery
:* to the contrary, a //spontaneous re-adjustment of playback position// deprives the engine of any delivery headroom, forcing us to catch up anew.<br/>&rarr; we should introduce a configurable slippage offset, to be added on the real time deadlines in this case, to give the engine some head start
:since each skip might create an output discontinuity, the de-clicking facility in the output sink should be notified explicitly (implying that we need an API, reachable from within the JobClosure)
;looping
:looped playback is implemented as repeated skip at the loop boundary; as such, this case always counts as pre planned jump.
;pausing
:paused playback represents a special state of the engine, where we expect playback to be able to commence right away, with minimal latency
:&rarr;we need to take several coordinated measures to make this happen
:* when going to paused state, previously scheduled jobs aren't cancelled, rather rescheduled to background rendering, but in a way which effectively pins the first frames within cache
:* additionally, the OutputSlot needs to provide a special mode where output is //frozen// and any delivery is silently blocked. The reason is, we can't cancel already triggered output jobs
:* on return to normal playback, we need to ensure that the availability of cached results will reliably prevent superfluous prerequisite jobs from being scheduled at all &rarr; we need conditional prerequisites
:The availability of such a pausing mechanism has several ramifications. We could conceive an auto-paused mode, switching to playback after sufficient background pre rendering to ensure the necessary scheduling headroom. Since the link to wall clock time and thus the real time deadlines are established the moment actual playback starts, we might even transition through auto paused mode whenever playback starts from scratch into some play mode.
;single stepping
:this can be seen as application of paused mode: we'd schedule a single play-out job, as if resuming from paused state, but we re-enter paused state immediately
;reversed play direction
:while basically trivial to implement, the challenge lies in possible naive implementation decisions assuming monotonic ascending frame times. Additionally, media decoders might need some hinting...
:however, reversed (and speed adjusted) sound playback is surprisingly tricky -- even the most simplistic solution forces us to insert an effect processor into the calculation path.
;speed variations
:the relation between nominal time and real wall clock time needs to include a //speed factor.//
;fast cueing
:the purpose of cueing is to skip through a large amount of material to spot some specific parts. For this to work, the presented material needs to be still recognisable in some way. Typically this is done by presenting small continuous chunks of material interleaved with regular skips. For editing purposes, this method is often perceived as lacking, especially by experienced editors. The former mechanical editing systems, to the contrary, had the ability to run with actually increased frame rate, without skipping any material.
:&rarr; for one, this is a very specific configuration of the loop play mode.
:&rarr; it is desirable to improve the editor's working experience here. We might consider actually increasing the frame rate, given the increased availability of high-framerate capable displays. Another approach would be to apply some kind of granular synthesis, dissolving several consecutive segments of material. The latter would prompt to include a specific buffering and processing device not present in the render path for normal playback. Since we do dedicated scheduling for each playback mode, we're indeed in a position to implement such facilities.
;scrubbing
:the actual scrubbing facility is an interactive device, typically even involving some kind of hardware control interface. But the engine needs to support scrubbing, which translates into chasing a playback target, which is re-adjusted regularly. The implementation facilities discussed thus far are sufficient to implement this feature, combined with the ability to make live changes to the playback mode.
;free-wheeling
:at first sight, this feature is trivial to implement. All it takes is to ignore any real time scheduling targets, so it boils down to including a flag into the [[timing descriptor|Timings]]. But there is a catch. Since free-wheeling only makes sense for writing to a file-like sink, the engine might actually be overrunning the consumer. In the end, we get to choose between the following alternatives: do we allow the output jobs to block in that case, or do we want to implement some kind of flow regulation?

!essential implementation level features
Drawing from this requirement analysis, we might identify some mandatory implementation elements necessary to incorporate these playback modes into the player and engine subsystem.
;for the __job planning stage__:
:we need a way to interact with a planning strategy, included when constituting the CalcStream
:* ability for planned discontinuities in the nominal time of the "next frame"
:* ability for placing such discontinuities repeatedly, as for looped playback
:* allow for interleaved skips and processed chunks, possibly with increased speed
:* ability to inject meta jobs under specific circumstances
:* placing callbacks into these meta jobs, e.g. for auto-paused mode
;for the __timings__:
:we need flexibility when establishing the deadlines (see the sketch after this list)
:* allow for an added offset when re-establishing the link between nominal and real time on replanning and superseding of planned jobs
:* flexible link between nominal and real time, allowing for reversed playback and changed speed
:* configurable slippage offset
;for the __play controller__:
:basically all changes regarding non linear playback modes are initiated and controlled here
:* a paused state
:* re-entering playback by callback
:* re-entering paused state by callback
:* a link to the existing feeds and calculation streams for superseding the current planning
:* use a strategy for fast-cueing (interleaved skips, increased speed, higher framerate, change model port to use a preconfigured granulator device)
;for the __scheduler interface__:
:we need some structural devices actually to implement those non-standard modes of operation
:* conditional prerequisites (prevent evaluation, re-evaluate later)
:* special "as available" delivery, both for free-wheeling and background
:* a special way of "cancelling" jobs, which effectively re-schedules them into background.
:* a way for hinting the cache to store background frames with decreasing priority, thus ensuring the foremost availability of the first frames when picking up playback again
;for the __output sinks__:
:on the receiver side, we need some support to generate smooth and error free output delivery
:* automated detection of timing glitches, activating the discontinuity handling (&raquo;de-click facility&laquo;)
:* low-level API for signalling discontinuity to the OutputSlot. This information pertains to the currently delivered frame -- this is necessary when this frame //is actually being delivered timely.//
:* high-level API to switch any ~OutputSlot into "frozen mode", disabling any further output, even in case of accidental delivery of further data by jobs currently in progression.
:* ability to detect and signal overload of the receiver, either through blocking or for flow-control
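To give an idea of the flexible nominal-to-real time link demanded for the timings, here is a hedged sketch; all names and the packaging of the formula are invented and not taken from the actual Timings implementation:
{{{
#include <cstdio>

// invented sketch: link between nominal frame time and wall-clock deadline,
// covering speed factor, reversed playback and a configurable slippage offset
struct Timings
{
    double frameDuration = 1.0 / 25;   // seconds per frame at nominal speed
    double speedFactor   = 1.0;        // 2.0 = fast forward, -1.0 = reversed, 0.5 = slow motion
    double playbackStart = 0.0;        // wall-clock time when playback was (re)anchored
    long   anchorFrame   = 0;          // nominal frame corresponding to playbackStart
    double slippage      = 0.0;        // head start added after a spontaneous jump

    double deadlineFor(long frameNr) const
    {
        long steps = frameNr - anchorFrame;                  // nominal distance in frames
        return playbackStart + slippage
             + steps * frameDuration / speedFactor;          // real-time distance
    }
};

int main()
{
    Timings t;
    t.speedFactor = -1.0;              // reversed playback
    t.anchorFrame = 100;
    std::printf("deadline for frame 99: %.3f s\n", t.deadlineFor(99));   // lies in the future
}
}}}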
We have to consider carefully how to handle the creation of new class instances. Because, when done naively, it can defeat all efforts of separating subsystems, or &mdash; the other extreme &mdash; lead to a //switch-on-typeID// programming style. We strive for a solution somewhere in the middle by utilizing __Abstract Factories__ on Interface or key abstraction classes, but providing specialized overloads for the different use cases. So in each use case we have to decide if we want to create an instance of some general concept (Interface), or if we have a direct collaboration and thus need the Factory to provide a more specific sub-Interface or even a concrete type.

!Object creation use cases
!![[Assets|Asset]]
|!Action|!creates|!notes|
|loading a media file|asset::Media, asset::Codec| |
|viewing media|asset::Sequence, session::Clip and Placement (on hold)| for the whole Media, if not already existent|
|mark selection as clip|session::Clip, Placement with unspec. LocatingPin| doesn't add to session|
|loading Plugin|asset::Effect| usually at program startup|
|create Session|asset::Sequence, asset::Timeline, asset::Pipe| |
&rarr; [[creating and registering Assets|AssetCreation]]
&rarr; [[loading media|LoadingMedia]]

!![[MObjects|MObject]]
|!Action|!creates|!notes|
|add media to sequence|session::Clip, Placement with unspecified LocatingPin| creating whole-media clip on-the-fly |
|add Clip to sequence|copy of Placement| creates independent Placement of existing ~Clip-MO|
|attach Effect|session::Effect, Placement with RelativeLocation| |
|start using Automation|session::Auto, asset::Dataset, RelativeLocation Placement| |

!Invariants
When creating Objects, certain invariants have to be maintained, because creating an Object can be considered an atomic operation and must not leave any related objects in an inconsistent state. Each of our interfaces implies some invariants:
* every Placement has a Subject it places
* MObjects are always created to be placed in some way or the other
* [[Assets|Asset]] manage a dependency graph. Creating a derived Object (e.g. a Clip from a Media) implies a new dependency. (&rarr; [[memory management|ManagementAssetRelation]] relies on this)
Cinelerra2 introduced OpenGL support for rendering previews. I must admit, I am very unhappy with this, because
* it just supports some hardware
* it makes building difficult
* it can't handle all color models Cinelerra is capable of
* it introduces a separate codepath including some complicated copying of video data into the textures (and back?)
* it can't be used for rendering

So my judgement would be: contrary to a realtime/gaming application, for quality video editing it is not worth the effort implementing OpenGL support in all details and with all its complexity. I would accept ~OpenGL as an option, if it could be pushed down into a Library, so it can be handled and maintained transparently and doesn't bind our limited developer manpower.

But because I know the opinions on this topic are varying (users tend to be delighted if they hear "~OpenGL", because it seems to be linked to the notion of "speed" and "power" these days) &mdash; I try to integrate ~OpenGL as a possibility into this design of the Render Engine. But I insist on putting up the requirement that it //must not jeopardize the code structure.//

My proposed approach is to treat OpenGL as a separate video raw data type, requiring separate and specialized [[Processing Nodes|ProcNode]] for all calculations. Thus the Builder could connect OpenGL nodes if it is possible to cover the render path wholly or partially, or maybe even just for preview.
A low-level abstraction within the [[Builder]] &mdash; it serves to encapsulate the details of making multi-channel connections between the render nodes: In some cases, a node can handle N channels internally, while in other cases we need to replicate the node N times and wire each channel individually. As it stands, the OperationPoint marks the ''borderline between high-level and low-level model'': it is invoked in terms of ~MObjects and other entities of the high-level view, but internally it manages to create ProcNode and similar entities of the low-level model.

The operation point is provided by the current BuilderMould and used by the [[processing pattern|ProcPatt]] executing within this mould and conducting the current build step. The operation point's interface allows //to abstract//&nbsp; these details, as well as to //gain additional control//&nbsp; if necessary (e.g. addressing only one of the channels). The most prominent build instruction used within the processing patterns (which is the instruction {{{"attach"}}}) relies on the aforementioned //approach of abstracted handling,// letting the operation point determine automatically how to make the connection.

This is possible because the operation point has been provided (by the mould) with information about the media stream type to be wired, which, together with information accessible at the [[render node interface|ProcNode]] and from the [[referred processing assets|ProcAsset]], with the help of the [[connection manager|ConManager]] allows to figure out what's possible and how to do the desired connections. Additionally, in the course of deciding about possible connections, the PathManager is consulted to guide strategic decisions regarding the [[render node configuration|NodeConfiguration]], possible type conversions and the rendering technology to employ.
An ever recurring problem in the design of Lumiera's ~Proc-Layer is how to refer to output destinations, and how to organise them.
Wiring the flexible interconnections between the [[pipes|Pipe]] should take into account both the StreamType and the specific usage context ([[scope|PlacementScope]]) -- and the challenge is to avoid hard-linking of connections and tangling with the specifics of the target to be addressed and connected. This page, started __6/2010__ by collecting observations to work out the relations, arrives at defining a //key abstraction// of output management.

!Observations
* effectively each [[Timeline]] is known to expose a set of global Pipes
* when connecting a Sequence to a Timeline or a VirtualClip, we also establish a mapping
* this mapping translates possible media stream channels produced by the sequence into (real) output slots located in the enclosing entity
* uncertainty on who has to provide the global pipes, implementation wise &mdash;
** as Timeline is just a façade, BindingMO has to expose something which can be referred to for attaching effects (to global pipes)
** when used as VirtualClip, there is somehow a channel configuration, either as asset, or exposed by the BindingMO
* Placements always resolve at least two dimensions: time and output. The latter means that a [[Placement]] can figure out where to connect
* the resolution ability of Placements could help to overcome the problems in conjunction with a VirtualClip: missing output destination information could be inherited down....
* expanding on the basic concept of a Placement in N-dimensional configuration space, this //figuring out// would denote the ability to resolve the final output destination
* this resolution to a final destination is explicitly context dependent. We engage into quite some complexities to make this happen (&rarr; BindingScopeProblem)
* [[processing patterns|ProcPatt]] are used for creating nodes on the source network of a clip, and similarly for fader, overlay and mixing into a summation pipe
* in case the track tree of a sequence doesn't contain specific routing advice, connections will be done directly to the global pipes in order and by matching StreamType (i.e. typically video to video master, audio to stereo audio master). When a monitor (viewer window) is attached to a timeline, similar output connections are made from the timeline's global pipes, i.e. the video display will take the contents of the first video (master) bus, and the first stereo audio pipe will be pulled and sent to system audio out.
* a mismatch between the system output possibilities and the stream type of a bus to be monitored should result in the same adaptation mechanism to kick in, as is used internally, when connecting an ~MObject to the next bus. Possibly we'll use separate rules in this case (allow 3D to flat, stereo to mono, render 5.1 into Ambisonics...)

!Conclusions
* there are //direct, indirect//&nbsp; and //relative//&nbsp; referrals to an output designation
* //indirect// means to derive the destination transitively (looking at the destination's output designation and so on)
* //relative// in this context means that we refer to "the N^^th^^ of this kind" (e.g. the second video out)
* we need to attach some metadata with an output; especially we need an associated StreamType
* the referral to an output designation can be observed on and is structured into several //levels://
** within the body of the model, mostly we address output destinations relatively
** at some point, we'll address a //subgroup// within the global pipes, which acts like a summation sink
** there might be //master pipe(s),// collecting the output of various subgroups
** finally, there is the hardware output or the distinct channel within the rendered result &mdash; never to be referenced explicitly
!!!relation to Pipes
In almost every case mentioned above, the output designation is identical with the starting point of a [[Pipe]]. This might be a global pipe or a ClipSourcePort. Thus it sounds reasonable to use pipe-~IDs directly as output designation. Pipes, as an accountable entity (=asset), just //leap into existence by being referred.// On the other hand, the //actual//&nbsp; pipe is a semantic concept, a specific structural arrangement of objects. Basically it means that some object acts as attachment point and thereby //claims//&nbsp; to be the entrance side of a pipe, while other processor objects chain up in sequence.
!!!system outputs
System level output connections seem to be an exception to the above rule. There is no pipe at an external port, and these externally visible connection points can appear and disappear, driven by external conditions. Yet the question is whether system outputs should be directly addressable at all as an output designation. Generally speaking, Lumiera is not intended to be a system wide connection manager or a real time performance software. Thus it's advisable to refrain from direct referrals to system level connections from within the model. Rather, there should be a separate OutputManagement to handle external outputs and displays, both in windows or full screen. So these external outputs become rather a matter of application configuration &mdash; and for all the other purposes we're free to ''use pipe-~IDs as output designation''.
!!!consequences of mentioning an output designation
The immediate consequence is that a [[Pipe]] with the given ID exists as an accountable entity. Only if &mdash; additionally &mdash; a suitable object within the model //claims to root this pipe,// is a connection to this designation wired, using an appropriate [[processing pattern|ProcPatt]]. A further consequence then is for the mentioned output designation to become a possible candidate for a ModelPort, an exit node of the built render nodes network. By default, only those designations without further outgoing connections actually become active model ports (but an existing and wired pipe exit node can be promoted to a model port explicitly).
&rarr; OutputManagement

!!Challenge: mapping of output designations
An entire sequence can be embedded within another sequence as a VirtualClip. While for a simple clip there is a Clip-~MObject placed into the model, holding a reference to a media asset, in case of a virtual clip a BindingMO takes on the role of the clip object. Note that, within another context, BindingMO also acts as [[Timeline]] implementation &mdash; indeed even the same sequence might be bound simultaneously as a timeline and into a virtual clip. This flexibility creates a special twist, as the contents of this sequence have no way of finding out whether they're used as a timeline or as an embedded virtual clip. So parts of this sequence might mention an OutputDesignation which, when using the sequence as virtual clip, needs to be translated into a virtual media channel, which can be treated in the same manner as any channel (video, audio,...) found within real media. Especially, a new output designation has to be derived in a sensible way, which largely depends on how the original output designation has been specified.
&rarr; see OutputMapping regarding details and implementation of this mapping mechanism




!Output designation definition
OutputDesignation is a handle, denoting a target [[Pipe]] to connect.
It exposes a function to resolve to a Pipe, and to retrieve the StreamType of that resolved output. It can be ''defined'' either explicitly by ~Pipe-ID, or by an indirect or relative specification. The latter cases are resolved on demand only (which may be later and might even change the meaning, depending on the context). It's done this way intentionally, to gain flexibility and avoid hard wiring (=explicit ~Pipe-ID).

!!!Implementation notes
Thus the output designation needs to be a copyable value object, but opaque beyond that. Mandated by the various ways to specify an output designation, a hidden state arises regarding the partial resolution. The implementation solves that dilemma by relying on the [[State|http://en.wikipedia.org/wiki/State_pattern]] pattern in combination with an opaque in-place buffer.
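The following is a minimal sketch of what such a value object could look like, assuming made-up names ({{{PipeID}}} and the spec kinds); a {{{std::variant}}} serves as stand-in for the actual polymorphic state held in the opaque in-place buffer.
{{{
// minimal sketch, assuming PipeID and the concrete spec kinds as shown here;
// the real implementation keeps a polymorphic state object in an opaque
// in-place buffer -- a std::variant stands in for that mechanism
#include <variant>

struct PipeID { unsigned id; };

class OutputDesignation
  {
    struct Absolute { PipeID target; };               // explicit Pipe-ID
    struct Relative { unsigned seqNr; };              // "the N-th of this kind"
    struct Indirect { PipeID referred; };             // derive from another designation

    std::variant<Absolute,Relative,Indirect> spec_;   // hidden partial-resolution state

  public:
    explicit OutputDesignation (PipeID target)   : spec_{Absolute{target}} { }
    explicit OutputDesignation (unsigned seqNr)  : spec_{Relative{seqNr}}  { }

    /** resolution happens on demand, relative to the pipe currently asking */
    PipeID
    resolve (PipeID origin)  const
      {
        if (auto a = std::get_if<Absolute> (&spec_)) return a->target;
        if (auto r = std::get_if<Relative> (&spec_)) return PipeID{origin.id + r->seqNr};
        return std::get<Indirect> (spec_).referred;   // placeholder: real code would recurse
      }
  };
}}}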


!Use of output designations
While data frames are actually //pulled,// on a conceptual level data is assumed to "flow" through the pipes from source to output. This conceptual ("high level") model of data flow is comprised of the pipes (which are rather rigid), and flexible interconnections. The purpose of an output designation is to specify where the data should go, especially through these flexible interconnections. Thus, when reaching the exit point of a pipe, an output designation will be //queried.// Finding and using a suitable output designation involves several steps:
* first of all, we need to know //what to route// -- kind of the traits of the data. This is given by the //current pipe.//
* then we'll need to determine an output designation //suitable for this data.// This is given by a "Plug" (WiringRequest) in the placement, and may be derived.
* finally, this output designation will be //resolved// -- at least partially, resulting in a target pipe to be used for the wiring
As both of these specifications are given by [[Pipe]]-~IDs, the actual designation information may be reduced. Much can be inferred from the circumstances, because any pipe includes a StreamType, and an output designation for an incompatible stream type is irrelevant (e.g. an audio output when the pipe currently in question deals with video).
//writing down some thoughts//

* ruled out the system outputs as OutputDesignation.
* thus, any designation is a [[Pipe]]-ID.
* consequently, it is not obviously clear whether such a designation is the final exit point
* please note the [[Engine interface proposal|http://lumiera.org/documentation/devel/rfc_pending/EngineInterfaceOverview.html]]
* this introduces the notion of a ModelPort: //a point in the (high level) model where output can be produced//
* thus obviously we need an OutputManager element to track the association of OutputDesignation to OutputSlot

Do we get a single [[Fixture]] &mdash; guess yes

From the implementation side, the only interesting exit nodes are the ones to be //actually pulled,// i.e. those immediately corresponding to the final output.
* __rendering__ happens immediately at the output of a GlobalPipe (i.e. at a [[Timeline]], which is top-level)
* __playback__ always happens at a viewer element

!Attaching and mapping of exit nodes
Initially, [[Output designations|OutputDesignation]] are typically just local or relative references to another OutputDesignation; yet after some resolution steps, we'll arrive at an OutputDesignation //defined absolute.// Basically, these are equivalent to a [[Pipe]]-ID chosen as target for the connection -- and they become //real//&nbsp; by some object //claiming to root this pipe.// The applicability of this pattern is figured out dynamically while building the render network, resulting in a collection of [[model ports|ModelPort]] as part of the current [[Fixture]]. A RenderProcess can be started to pull from these -- and only from these -- active exit points of the model. Besides, when the timeline enclosing these model ports is [[connected to a viewer|ViewerPlayConnection]], an [[output network|OutputNetwork]] //is built to allow hooking exit points to the viewer component.// Both cases encompass a mapping of exit nodes to actual output channels. Usually, this mapping just relies on relative addressing of the output sinks, starting to allocate connections with the //first of each kind// (e.g. "connect to the first usable audio output destination").

We should note that in both cases this [[mapping operation|OutputMapping]] is controlled, driven and constrained by the output side of the connection: A viewer has fixed output capabilities, and rendering targets a specific container format -- again with fixed and pre-settled channel configuration ({{red{TODO 9/11}}} when configuring a render process, it might be necessary to pre-compute the //possible kinds of output streams,// so as to provide a sensible pre-selection of possible output container formats for the user to select from). Thus, as a starting point, we'll create a default configured mapping, assigning channels in order. This mapping then should be exposed to modification and tweaking by the user. For rendering, this is part of the render options dialog, while in case of a viewer connection, a switch board is created to allow modifying the default mapping.

!Connection to external outputs
External output destinations are never addressed directly from within the model. This is a design decision. Rather, model parts connect to an OutputDesignation, and these in turn may be [[connected to a viewer element|ViewerPlayConnection]]. At this point, related to the viewer element, there is a mapping to external destination(s): for images, a viewer typically has an implicit, natural destination (read: actually there is a corresponding viewer window or widget), while for sound we use a mapping rule, which could be overridden locally in the viewer.

Any external output sink is managed as a [[slot|DisplayerSlot]] in the ~OutputManager. Such a slot can be opened and allocated for a playback process, which allows the latter to push calculated data frames to the output. Depending on the kind of output, there might be various, often tight requirements on the timed delivery of output data, but any details are abstracted away &mdash; any slot implementation provides a way to handle time-outs gracefully, e.g. by just showing the last video frame delivered, or by looping and fading sound
&rarr; the OutputManager interface describes handling this mapping association
&rarr; see also the PlayService

!the global output manager
Within the model, routing is done mostly just by referring to an OutputDesignation -- but at some point finally we need to map these abstract designations to real output capabilities. This happens at the //output managing elements.// This interface, OutputManager, exposes these mappings of logical to real outputs and allows managing and controlling them. Several elements within the application, most notably the [[viewers|ViewerAsset]], provide an implementation of this interface -- yet there is one primary implementation, the ''global output manager'', known as OutputDirector. It can be accessed through the {{{Output}}} façade interface and is the final authority when it comes to allocating and mapping of real output possibilities. The OutputDirector tracks all the OutputSlot elements currently installed and available for output.

The relation between the central OutputDirector and the peripheral OutputManager implementations is hierarchical. Because output slots are usually registered at some peripheral output manager implementation, a direct mapping from OutputDesignation (i.e. global pipe) to these slots is created primarily at that peripheral level. Resolving a global pipe into an output slot is the core concern of any OutputManager implementation. Thus, when there is a locally preconfigured mapping, like e.g. for a viewer's video master pipe to the output slot installed by the corresponding GUI viewer element, then this mapping will be picked up first to resolve the video master output.

For a viewer widget in the GUI this yields exactly the expected behaviour, but in other cases, e.g. for sound output, we need more general, more globally scoped output slots. In these cases, when a local mapping is absent, the query for output resolution is passed on up to the OutputDirector, drawing on the collection of globally available output slots for that specific kind of media.
{{red{open question 11/11: is it possible to retrieve a slot from another peripheral node?}}}

!!!output modes
Most output connections and drivers embody some kind of //operation mode:// Display is characterised by resolution and colour depth, sound by number of channels and sampling rate, amongst others. There might be a mismatch with the output expectations represented by [[output designations|OutputDesignation]] within the model. Nonetheless we limit those actual operation modes strictly to the OutputManager realm. They should not leak out into the model within the session.
In practice, this decision might turn out to be rather rigid, but some additional mechanisms allow for more flexibility
* when [[connecting|ViewerPlayConnection]] timeline to viewer and output, stream type conversions may be added automatically or manually
* since resolution of an output designation into an OutputSlot is initiated by querying an output manager, this query might include additional constraints, which //some// (not all) concrete output implementations might evaluate to provide a more suitably configured output slot variant.
The term &raquo;''Output Manager''&laquo; might denote two things: first, there is a {{{proc::play::OutputManager}}} interface, which can be exposed by several components within the application, most notably the [[viewer elements|ViewerAsset]]. And then, there is "the" global output manager, the OutputDirector, which finally tracks all registered OutputSlot elements and thus is the gatekeeper for any output leaving the application.

&rarr; see [[output management overview|OutputManagement]]
&rarr; see OutputSlot
&rarr; see ViewerPlayConnection

!Role of an output manager
The output manager interface describes an entity handling two distinct concerns, tied together within a local //scope.//
* a ''mapping'' to resolve a ModelPort (given as ~Pipe-ID) into an OutputSlot
* the ''registration'' and management of possible output slots, thereby creating a preferred local mapping for future connections

Note that an OutputSlot acts as a unit for registration and also for allocating / "opening" an output, while generally there might still be multiple physical outputs grouped into a single slot. This is relevant especially for sound output. A single slot is just the ability to allocate output ports up to a given limit (e.g. 2 for a stereo device, or 6 for a 5.1 device). These multiple channels are always connected following a natural channel ordering. Thus the mapping is a simple 1:1 association from pipe to slot, assuming that the media types are compatible (and this has been checked already on a higher level).

The //registration//&nbsp; of an output slot installs a functor or association rule, which later on allows claiming and connecting up to a preconfigured number of channels. This allocation or usage of a slot is exclusive (i.e. only a single client at a time can allocate a slot, even if not using all the possible channels). Each output manager instance may or may not be configured with a //fall-back rule:// when no association or mapping can be established locally, the connection request might be passed on to the global OutputDirector. Again, we can expect this to be the standard behaviour for sound, while video likely will rather be handled locally, e.g. within a GUI widget (but we're not bound to configure it exactly this way).
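A rough sketch of such an interface, with the local mapping and the fall-back to the global OutputDirector -- method names and signatures are assumptions, not the actual {{{proc::play::OutputManager}}}:
{{{
// rough sketch under assumed names; the actual proc::play::OutputManager
// interface may differ -- OutputSlot here is only a stand-in type
#include <map>
#include <memory>
#include <functional>

struct PipeID { unsigned id; bool operator< (PipeID o) const { return id < o.id; } };
class OutputSlot { public: virtual ~OutputSlot() { } };     // stand-in

class OutputManager
  {
  public:
    virtual ~OutputManager() { }

    /** mapping concern: resolve a model port (given as Pipe-ID) into an output slot */
    virtual OutputSlot& resolve (PipeID modelPort)  =0;

    /** registration concern: install a slot, creating a preferred local mapping */
    virtual void registerSlot (PipeID designation, std::shared_ptr<OutputSlot> slot)  =0;
  };

/** a peripheral output manager, escalating to the global OutputDirector as fall-back */
class LocalOutputManager
  : public OutputManager
  {
    std::map<PipeID, std::shared_ptr<OutputSlot>> local_;
    std::function<OutputSlot&(PipeID)> fallback_;           // e.g. bound to the OutputDirector

  public:
    explicit
    LocalOutputManager (std::function<OutputSlot&(PipeID)> fallback)
      : fallback_(std::move (fallback))
      { }

    OutputSlot&
    resolve (PipeID port)  override
      {
        auto hit = local_.find (port);
        return hit != local_.end()? *hit->second            // locally preconfigured mapping wins
                                  : fallback_(port);        // otherwise pass the query on up
      }

    void
    registerSlot (PipeID designation, std::shared_ptr<OutputSlot> slot)  override
      {
        local_[designation] = std::move (slot);
      }
  };
}}}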
An output mapping serves to //resolve//&nbsp; [[output designations|OutputDesignation]].

!Mapping situations
;external outputs
:external outputs aren't addressed directly. Rather we set up a default (channel) mapping, which then can be overridden by local rules.
:Thus, in this case we query with an internal OutputDesignation as parameter and expect an OutputSlot
;viewer connections
:any Timeline produces a number of [[model ports|ModelPort]]. On the other hand, any viewer exposes a small number of output designations, representing the image and sound output(s).
:Thus, in this case we resolve similar to a bus connection, possibly overridden by already pre-existing or predefined connections.
;switch board
:a viewer might receive multiple outputs and overlays, necessitating a user operated control to select what's actually to be displayed
:Thus, in this case we need a backwards resolution at the lower end of the output network, to connect to the model port as selected through the viewer's SwitchBoard
;global pipes or virtual media
:when binding a Sequence as Timeline or VirtualClip, a mapping from output designations used within the Sequence to virtual channels or global pipes is required
:Thus, in this case we need to associate output designations with ~Pipe-IDs encountered in the context according to some rules &mdash; again maybe overridden by pre-existing connections

!Conclusions
All these mapping steps are listed here, because they exhibit a common pattern.
* the immediately triggering input key is a [[Pipe]]-ID (and thus provides a stream type); additional connection hints may be given.
* the mapped result draws from a limited selection of elements, which again can be keyed by ~Pipe-IDs
* the mapping is initialised based on default mapping rules acting as fallback
* the mapping may be extended by explicitly setting some associations
* the mapping itself is a stateful value object
* there is an //unconnected//&nbsp; state.

!Implementation notes
Thus the mapping is a copyable value object, using an associative array. It may be attached to a model object and persisted alongside. The mapping is assumed to run a defaults query when necessary. To allow for that, it should be configured with a query template (string). Frequently, special //default pipe// markers will be used at places where no distinct pipe-ID is specified explicitly. Besides that, invocations might supply additional predicates (e.g. {{{ord(2)}}} to point at "the second stream of this kind"), thereby hinting the defaults resolution. Moreover, the mapping needs a way to retrieve the set of possible results, allowing the rules-based defaults to be filtered. Mappings might be defined explicitly. Instead of storing a //bottom value,// an {{{isDefined()}}} predicate might be preferable.
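A usage-level sketch of such a mapping value object (the member names and the way the defaults query is triggered are assumptions based on the notes above):
{{{
// usage-level sketch; the member names and the defaults-query handling are
// assumptions modelled on the notes above, not the actual interface
#include <map>
#include <string>

struct PipeID { unsigned id; bool operator< (PipeID o) const { return id < o.id; } };

class Mapping
  {
    std::map<PipeID,PipeID> assoc_;             // the associative array backing the mapping
    std::string queryTemplate_;                 // used to build the defaults query

  public:
    explicit Mapping (std::string queryTemplate) : queryTemplate_(std::move (queryTemplate)) { }

    /** preferable to storing a bottom value for the "unconnected" state */
    bool isDefined (PipeID key)  const { return assoc_.count (key); }

    /** define an association explicitly */
    void define (PipeID key, PipeID target) { assoc_[key] = target; }

    /** resolve: use the explicit association, else fall back to a defaults query */
    PipeID
    operator[] (PipeID key)
      {
        auto hit = assoc_.find (key);
        if (hit != assoc_.end()) return hit->second;
        PipeID resolved = runDefaultsQuery (key);
        assoc_[key] = resolved;                 // the mapping is stateful: remember the result
        return resolved;
      }

  private:
    PipeID
    runDefaultsQuery (PipeID key)
      {
        // placeholder: the real mapping issues a rules query built from queryTemplate_,
        // possibly hinted by additional predicates like ord(2)
        return key;
      }
  };
}}}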

First and foremost, mapping can be seen as a //functional abstraction.// As it's used at implementation level, encapsulation of detail types isn't the primary concern, so it's a candidate for generic programming: For each of those use cases outlined above, a distinct mapping type is created by instantiating the {{{OutputMapping<DEF>}}} template with a specifically tailored definition context ({{{DEF}}}), which takes on the role of a strategy. Individual instances of this concrete mapping type may be default created and copied freely. This instantiation process includes picking up the concrete result type and building a functor object for resolving on the fly. Thus, in the way typical for generic programming, the more involved special details are moved out of sight, while being still in scope for the purpose of inlining. But there //is// a concern better encapsulated and concealed at the usage site, namely accessing the rules system. Thus mapping lends itself to the frequently used implementation pattern where there is a generic frontend as header, calling into opaque functions embedded within a separate compilation unit.
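The instantiation pattern might look roughly like this -- all names are assumed for illustration, and the rules system is reduced to a stand-in function:
{{{
// sketch of the instantiation pattern, under assumed names; DEF acts as the
// strategy fixing the result type, while the rules system access stays behind
// an opaque function (in the real code located in a separate compilation unit)
struct PipeID { unsigned id; };

inline PipeID
queryRulesSystem (PipeID p)               // stand-in: really an opaque, out-of-line function
  {
    return p;
  }

template<class DEF>
class OutputMapping
  : private DEF                           // pick up the definition context (strategy)
  {
  public:
    using Target = typename DEF::Target;

    Target
    resolve (PipeID designation)
      {
        // the generic frontend stays header-only and inlinable,
        // while the involved details are concealed behind queryRulesSystem()
        return DEF::output (queryRulesSystem (designation));
      }
  };

// example definition context for one of the use cases above (assumed)
struct ViewerConnectionDEF
  {
    using Target = PipeID;
    Target output (PipeID resolvedPipe) { return resolvedPipe; }
  };

using ViewerMapping = OutputMapping<ViewerConnectionDEF>;
}}}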
Within the Lumiera player and output subsystem, actually sending data to an external output requires allocating an ''output slot''.
This is the central metaphor for the organisation of actual (system level) outputs; using this concept allows to separate and abstract the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities (video in GUI window, video fullscreen, sound, Jack, rendering to file) can be added and removed dynamically from various components (backend, GUI), all using the same resolution and mapping mechanisms (&rarr; OutputManagement)

!Properties of an output slot
Each OutputSlot is a unique and distinguishable entity. It corresponds explicitly to an external output, or a group of such outputs (e.g. left and right soundcard output channels), or an output file or similar capability accepting media content. First off, an output slot needs to be provided, configured and registered, using an implementation for the kind of media data to be output (sound, video) and the special circumstances of the output capability (render a file, display video in a GUI widget, send video to a full screen display, establish a Jack port, just use some kind of "sound out"). An output slot is always limited to a single kind of media, and to a single connection unit, but this connection may still be comprised of multiple channels (stereoscopic video, multichannel sound).

In order to be usable as //output sink,// an output slot needs to be //allocated,// i.e. tied to and locked for a specific client. At any time, there may be only a single client using a given output slot this way. To stress this point: output slots don't provide any kind of inherent mixing capability; any adaptation, mixing, overlaying and sharing needs to be done within the nodes network producing the output data fed to the slot. (in special cases, some external output capabilities -- e.g. the Jack audio connection system -- may still provide additional mixing capabilities, but that's beyond the scope of the Lumiera application)

[>img[Outputslot implementation structures|uml/fig151685.png]]
Once allocated, the output slot returns a set of concrete ''sink handles'' (one for each physical channel expecting data). The calculating process feeds its results into those handles. Size and other characteristics of the data frames are assumed to be suitable, which typically won't be verified at that level anymore (but the sink handle provides a hook for assertions). Besides that, the allocation of an output slot reveals detailed ''timing expectations''. The client is required to comply with these [[timings|Timings]] when ''emitting'' data -- he's even required to provide a //current time specification,// along with the data. Based on this information, the output slot has the ability to handle timing failures gracefully; the concrete output slot implementation is expected to provide some kind of de-click or de-flicker facility, which kicks in automatically when a timing failure is detected.

!!!usage and implementation
Clients retrieve just a reference to an output slot by asking a suitable OutputManager for an output possibility supporting a specific format. Usually, they just "claim" this slot by invoking {{{allocate()}}}, which behind the scenes causes building of the actual output connections and mechanisms. For each such connection -- corresponding to a single channel within the media format handled by this ~OutputSlot -- the client gets a smart-handle {{{DataSink}}}. The concrete ~OutputSlot implementation performs operations quite specific to the kind of output and external interface in question. All this specific handling is embodied within the concrete connection implementation used by the concrete ~OutputSlot.
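A sketch of this client-side call sequence, under assumed types and signatures ({{{allocate()}}}, {{{DataSink}}} and {{{emit()}}} follow the description above):
{{{
// sketch of the client-side call sequence; allocate(), DataSink and emit()
// follow the text above, the concrete signatures and types are assumptions
#include <vector>
#include <cstddef>

struct Time   { long long micros; };
struct Buffer { void* data; };

class DataSink                                  // smart-handle for one channel connection
  {
  public:
    void
    emit (Time nominalTime, Buffer const& buff)
      {
        (void)nominalTime; (void)buff;          // hand-over to the concrete connection impl
      }
  };

class OutputSlot
  {
  public:
    struct Allocation
      {
        std::vector<DataSink> sinks;            // one sink handle per physical channel
      };                                        // ...plus the detailed timing expectations

    virtual Allocation allocate()  =0;          // "claim" the slot exclusively for this client
    virtual ~OutputSlot() { }
  };

/** feed one calculated frame (one buffer per channel) into an already allocated slot */
void
feedFrame (OutputSlot::Allocation& out, Time nominalTime, std::vector<Buffer> const& channels)
  {
    for (std::size_t ch = 0; ch < out.sinks.size() && ch < channels.size(); ++ch)
      out.sinks[ch].emit (nominalTime, channels[ch]);   // client supplies the nominal time
  }
}}}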

!!!timing expectations
Besides the sink handles, allocation of an output slot defines some timing constraints, which are binding for the client. These [[Timings]] are detailed and explicit, including a grid of deadlines for each frame to deliver, plus a fixed //latency.// Within this context, &raquo;latency&laquo; means the requirement to be ahead of the nominal time by a certain amount, to compensate for the processing time necessary to propagate the media to the physical output pin. The output slot implementation itself is bound by external constraints to deliver data at a fixed framerate and aligned to an externally defined timing grid, plus the data needs to be handed over ahead of these time points by a time amount given by the latency. Depending on the data exchange model, there is an additional time window limiting the buffer management.

The assumption is for the client to have elaborate timing capabilities at his disposal. More specifically, the client is assumed to be a job running within the engine scheduler and thus can be configured to run //after// another job has finished, and to run within certain time limits. Thus the client is able to provide a //current nominal time// -- which is suitably close to the actual wall clock time. The output slot implementation can be written such as to work out from this time specification if the call is timely or overdue -- and react accordingly.
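As a small illustration, the deadline arithmetic and the timeliness check could look roughly as follows (the {{{Timings}}} fields and the microsecond representation are assumptions):
{{{
// small sketch of the timing arithmetic described above; the Timings fields
// and the microsecond representation are assumptions
struct Time { long long micros; };

struct Timings
  {
    Time      origin;           // anchor of the externally defined timing grid
    long long frameDuration;    // grid spacing (microseconds)
    long long latency;          // data must be handed over this much ahead of the grid point
  };

/** deadline for handing over frame #frameNr: grid point minus latency */
inline Time
deadline (Timings const& t, long long frameNr)
  {
    return Time{ t.origin.micros + frameNr * t.frameDuration - t.latency };
  }

/** the slot can decide from the client-supplied nominal time whether a call is timely */
inline bool
isTimely (Timings const& t, long long frameNr, Time currentNominalTime)
  {
    return currentNominalTime.micros <= deadline(t, frameNr).micros;
  }
}}}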

!!!output modes
some concrete output connections and drivers embody a specific operation mode (e.g. sample rate or number of channels). The decision and setup of this operational configuration is initiated together with the [[resolution|OutputMapping]] of an OutputDesignation within the OutputManager, finally leading to an output slot (reference), which can be assumed to be suitably configured before the client allocates this slot for active use. Moreover, an individual output sink (corresponding to a single channel) may just remain unused -- until there is an {{{emit()}}} call and successful data handover, this channel will just feature silence or remain black. (More flexible systems, e.g. Jack, allow generating an arbitrary number of output pins -- Lumiera will support this by allowing to set up additional output slots and attach this information to the current session &rarr; SessionConfigurationAttachment)

!!!Lifecycle and storage
The concrete OutputSlot implementation is owned and managed by the facility actually providing the output possibility in question. For example, the GUI provides viewer widgets, while some sound output backend provides sound ports. The associated OutputSlot implementation object is required to stay alive as long as it's registered with some OutputManager. It needs to be de-registered explicitly prior to destruction -- and this deregistration may block until all clients using this slot have terminated. Beyond that, an output slot implementation is expected to handle all kinds of failures gracefully -- preferably just emitting a signal (callback functor).
{{red{TODO 7/11: Deregistration is an unsolved problem....}}}

!!!unified data exchange cycle
The planned delivery time of a frame is used as an ID throughout that cycle
# within a defined time window prior to delivery, the client can ''allocate and retrieve the buffer'' from the BufferProvider.
# the client has to ''emit'' within a (short) time window prior to deadline
# now the slot gets exclusive access to the buffer for output, signalling the buffer release to the buffer provider when done.
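A sketch of one client job walking through this cycle; the BufferProvider method names are assumptions, while using the planned delivery time as ID follows the steps above:
{{{
// one client job walking through the three steps above; BufferProvider's
// method names are assumptions, the frame's planned delivery time acting
// as ID is taken from the text
struct Time   { long long micros; };
struct Buffer { void* data; };

class BufferProvider
  {
  public:
    Buffer lockBuffer (Time frameID)    { (void)frameID; return Buffer{nullptr}; } // (1)
    void   releaseBuffer (Time frameID) { (void)frameID; }                         // (3) invoked by the slot
  };

class DataSink
  {
  public:
    void emit (Time frameID) { (void)frameID; }     // (2) hand the filled buffer over
  };

inline void
calculateFrame (Buffer&)                            // stand-in for the actual media calculation
  { }

void
renderJob (BufferProvider& buffs, DataSink& sink, Time frameID)
  {
    Buffer buff = buffs.lockBuffer (frameID);       // (1) within the defined time window
    calculateFrame (buff);                          //     ...fill in the calculated data
    sink.emit (frameID);                            // (2) within the (short) window before deadline
  }                                                 // (3) the slot now owns the buffer and signals
                                                    //     the release back to the BufferProvider
}}}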

!!!lapses
This data exchange protocol operates on a rather low level; there is only limited protection against timing glitches
|  !step|!problem ||!consequences | !protection |
| (1)|out of time window ||interference with previous/later use of the buffer | prevent in scheduler! |
|~|does not happen ||harmless as such | emit ignored |
|~|buffer unavailable ||inhibits further operation | ↯ |
| (2)|out of time window ||harmless as such | emit ignored |
|~|out of order ||allowed, unless out of time | -- |
|~|does not happen ||frame treated as glitch | -- |
|~|buffer unallocated ||frame treated as glitch | emit ignored |
| (3)|emit missing ||frame treated as glitch | -- |
|~|fail to release buffer ||unable to use buffer further | mark unavailable |
|~|buffer unavailable ||serious malfunction of playback | request playback stop |

Thus there are two serious problem situations
* allocating the buffer out of the time window bears the danger of output data corruption; but the general assumption is for the scheduler to ensure each job start time remains within the defined window and all prerequisite jobs have terminated successfully. To handle clean-up, we additionally need special jobs which always run, in order, and are notified of prerequisite failures.
* failure to release a buffer in a timely fashion blocks any further use of that buffer, any further jobs in need of that buffer will die immediately. This situation can only be caused by a serious problem //within the slot, related to the output mechanism.// Thus there should be some kind of trigger (e.g. when this happens 2 times consecutively) to request aborting the playback or render as a whole.
&rarr; SchedulerRequirements
&rarr; OutputSlotDesign
&rarr; OutputSlotImpl
The OutputSlot interface describes a point where generated media data can actually be sent to the external world. It is expected to be implemented by adapters and bridges to existing drivers or external interface libraries, like a viewer widget in the GUI, ALSA or Jack sound output, rendering to file, using an external media format library. The design of this core facility was rather difficult and stretched out over quite some time span -- this page documents the considerations and the decisions taken.

!intention
OutputSlot is a metaphor to unify the organisation of actual (system level) outputs; using this concept allows to separate and abstract the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities (video in GUI window, video fullscreen, sound, Jack, rendering to file) can be added and removed dynamically from various components (backend, GUI), all using the same resolution and mapping mechanisms (&rarr; OutputManagement)

!design possibilities
!!properties as a starting point
* each OutputSlot is a unique and distinguishable entity. It corresponds explicitly to an external output
* an output slot needs to be provided, configured and registered, using an implementation specifically tailored for the kind of media data
* an output slot is always limited to a single kind of media, and to a single connection unit, but this connection may still be comprised of multiple channels.
* in order to be usable as //output sink,// an output slot needs to be //allocated,// i.e. tied to and locked for a specific client.
* this allocation is exclusive: at any time, there may be only a single client using a given output slot.
* the calculating process feeds its results into //sink handles//&nbsp; provided by the allocated output slot.
* allocation of an output slot leads to very specific [[timing expectations|Timings]]
* the client is required to comply with these timings and operate according to a strictly defined protocol.
* timing glitches will be detected due to this protocol; the output slot provides mechanisms for failing gracefully in these cases

!!data exchange models
Data is handed over by the client invoking an {{{emit(time,...)}}} function on the sink handle. Theoretically there are two different models of how this data hand-over might be performed. This corresponds to the fact that in some cases our own code manages the output and the buffers, while in other situations we intend to use existing library solutions or even external server applications to handle output.
;buffer handover model
:the client owns the data buffer and cares for allocation and de-allocation. The {{{emit()}}}-call just propagates a pointer to the buffer holding the data ready for output. The output slot implementation in turn has the liability to copy or otherwise use this data within a given time limit.
;shared buffer model
:here the output mechanism owns the buffer. Within a certain time window prior to the expected time of the {{{emit()}}}-call, the client may obtain this buffer (pointer) to fill in the data. The slot implementation won't touch this buffer until the {{{emit()}}} handover, which in this case just provides the time and signals that the client is done with that buffer. If the data emitting handshake doesn't happen at all, it counts as late and is superseded by the next handshake.
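Put side by side in code, the two models differ mainly in buffer ownership and in the {{{emit()}}} signature (a sketch with assumed names):
{{{
// the two hand-over models side by side; class names and signatures are assumptions
struct Time   { long long micros; };
struct Buffer { void* data; };

// buffer handover model: the client owns the buffer and just passes a pointer;
// the slot must copy or otherwise consume the data within a given time limit
class HandoverSink
  {
  public:
    void emit (Time nominalTime, Buffer const* clientOwnedBuffer);
  };

// shared buffer model: the output mechanism owns the buffer; the client obtains
// it beforehand for filling, and emit() then merely signals "client is done"
class SharedBufferSink
  {
  public:
    Buffer* claimBuffer (Time nominalTime);   // within a time window prior to emit()
    void    emit (Time nominalTime);          // hand-over: just the time, no pointer
  };
}}}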

!!relation to timing
Besides the sink handles, allocation of an output slot defines some timing constraints, which are binding for the client. These include a grid of deadlines for each frame to deliver, plus a fixed //latency.// The output slot implementation itself is bound by external constraints to deliver data at a fixed framerate and aligned to an externally defined timing grid, plus the data needs to be handed over ahead of these time points by a time amount given by the latency. Depending on the data exchange model, there is an additional time window limiting the buffer management.

The assumption is for the client to have elaborate timing capabilities at his disposal. More specifically, the client is assumed to be a job running within the engine scheduler and thus can be configured to run //after// another job has finished, and to run within certain time limits. Thus the client is able to provide a //current nominal time// -- which is suitably close to the actual wall clock time. The output slot implementation can be written such as to work out from this time specification if the call is timely or overdue -- and react accordingly.

{{red{TODO 6/11}}}in this spec, both data exchange models exhibit a weakness regarding the releasing of buffers. At which time is it safe to release a buffer, when the handover didn't happen? Do we need an explicit callback, and how could this callback be triggered? This is similar to the problem of closing a network connection, i.e. the problem is generally unsolvable, but can be handled pragmatically within certain limits.
{{red{WIP 11/11}}}meanwhile I've worked out the BufferProvider interface in detail. There's now a detailed buffer handover protocol defined, which is supported by a little state engine tracking BufferMetadata. This mechanism provides a transition to {{{BLOCKED}}} state when order or timing constraints are being violated, which practically solves this problem. How to detect and resolve such a floundered state from the engine point of view still remains to be addressed.

!!!Lifecycle and storage
The concrete OutputSlot implementation is owned and managed by the facility actually providing the output possibility in question. For example, the GUI provides viewer widgets, while some sound output backend provides sound ports. The associated OutputSlot implementation object is required to stay alive as long as it's registered with some OutputManager. It needs to be de-registered explicitly prior to destruction -- and this deregistration may block until all clients using this slot have terminated. Beyond that, an output slot implementation is expected to handle all kinds of failures gracefully -- preferably just emitting a signal (callback functor).
{{red{TODO 7/11: Deregistration is an unsolved problem....}}}

-----
!Implementation / design problems
How to handle the selection between those two data exchange models!
-- Problem is, typically this choice isn't up to the client; rather, the concrete OutputSlot implementation dictates what model to use. But, as it stands, the client needs to cooperate and behave differently to a certain degree -- unless we manage to factor out a common denominator of both models.

Thus: Client gets an {{{OutputSlot&}}}, without knowing the exact implementation type &rArr; how can the client pick up the right strategy?
Solving this problem through //generic programming// -- i.e. coding both cases effectively different, but similar -- would require providing both implementation options already at //compile time!//

{{red{currently}}} I see two possible, yet quite different approaches...
;generic
:when creating individual jobs, we utilise a //factory obtained from the output slot.//
;unified
:extend and adapt the protocol such to make both models similar; concentrate all differences //within a separate buffer provider.//
!!!discussion
the generic approach looks as if it becomes rather convoluted in practice. We'd need to hand over additional parameters to the factory, which passes them through to the actual job implementation created. And there would be a coupling between slot and job (the slot is aware it's going to be used by a job, and even provides the implementation). Obviously, a benefit is that the actual code path executed within the job is without indirections, and all written down in a single location. Another benefit is the possibility to extend this approach to cover further buffer handling models -- it doesn't pose any requirements on the structure of the buffer handling.
On the other hand, if we accept to retrieve the buffer(s) via an indirection, which we kind of do anyway //within the render node implementation// -- the unified model looks more like a clean solution. It's more like doing away with some local optimisations possible if we handle the models explicitly, so it's not much of a loss, given that the majority of the processing time will be spent within the inner pixel calculation loops for frame processing anyway. When following this approach, the BufferProvider becomes a third, independent partner, and the slot cooperates tightly with this buffer provider, while the client (processing node) still just talks to the slot. Basically, this unified solution works like extending the shared buffer model to both cases.
&rArr; __conclusion__: go for the unified approach!

!!!unified data exchange cycle
The planned delivery time of a frame is used as an ID throughout that cycle
# within a defined time window prior to delivery, the client can ''allocate and retrieve the buffer'' from the BufferProvider.
# the client has to ''emit'' within a (short) time window prior to deadline
# now the slot gets exclusive access to the buffer for output, signalling the buffer release to the buffer provider when done.

&rarr; OutputSlotImpl
OutputSlot is an abstraction, allowing unified treatment of various physical output connections from within the render jobs. The actual output slot is a subclass object, created and managed from the "driver code" for a specific output connection. Moreover, each output slot will be outfitted with a concrete BufferProvider to reflect the actual buffer handling policy applicable for this specific output connection. Some output connections might e.g. require delivery of the media data into a buffer residing on external hardware, while others work just fine when pointed to some arbitrary memory block holding generated data.

!operation steps
[>img[Sequenz of output data exchange steps|uml/fig145157.png]]The OutputSlot class defines some standard operations as protected virtual functions. These represent the actual steps in the data exchange protocol corresponding to this output slot.
;lock()
:claims the specified buffer for exclusive use.
:Client may now dispose output data
;transfer()
:transfers control from the client to the actual output connection for feeding data.
:Triggered by the client invoking {{{emit()}}}
;pushout()
:request the actual implementation to push the data to output.
:Usually invoked from the {{{transfer()}}} implementation, alternatively from a separate thread.
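The resulting shape of the base class might look roughly as follows; only the three protected virtuals and {{{emit()}}} delegating to {{{transfer()}}} are taken from the description, everything else is an assumption:
{{{
// shape of the base class described above -- a sketch; only the three protected
// virtuals and emit() triggering transfer() are taken from the text
struct Time { long long micros; };

class OutputSlot
  {
  protected:
    /** claim the specified buffer for exclusive use; the client may now dispose output data */
    virtual void lock (Time frameID)      =0;

    /** transfer control from the client to the actual output connection for feeding data */
    virtual void transfer (Time frameID)  =0;

    /** request the concrete implementation to push the data to output;
        usually invoked from transfer(), alternatively from a separate thread */
    virtual void pushout (Time frameID)   =0;

  public:
    virtual ~OutputSlot() { }

    /** invoked through the client's sink handle */
    void
    emit (Time frameID)
      {
        transfer (frameID);   // the concrete slot decides when (and from where) to pushout()
      }
  };
}}}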


!buffer states
While the BufferProvider abstracts away the actual access to the output buffers and just hands out a ''buffer handle'', the server side (here the concrete output slot) is allowed to associate and maintain a ''state flag'' with each buffer. The general assumption is that writing this state flag is atomic, and that other facilities will care for the necessary memory barriers (that is: the output slot and the buffer provider will just access this state flag without much ado). The generic part of the output slot implementation utilises this buffer state flag to implement a state machine, which -- together with the timing constraints established with the [[help of the scheduler|SchedulerRequirements]] -- ensures sane access to the buffer without collisions.
|        !state||>| !lock()   |>| !transfer() |>| !pushout() |
|     {{{NIL}}}||↯| ·        |↯|             | ↯ |          |
|    {{{FREE}}}||✔|↷ locked  |✔|· (ignore)  | · | ·         |
|  {{{LOCKED}}}||↯|↷ emitted |✔|↷ emitted   | · |↷ free    |
| {{{EMITTED}}}||↯|↷ blocked |↯|↷ blocked   | · |↷ free    |
| {{{BLOCKED}}}||↯| ✗        |↯| ✗          | ∅ |↷ free    |
where · means no operation, ✔ marks the standard cases (OK response to caller), ↯ will throw and thus kill the calling job, ∅ will treat this frame as //glitch,// ✗ will request playback stop.
The rationale is for all states out-of-order to transition into the {{{BLOCKED}}}-state eventually, which, when hit by the next operation, will request playback stop.
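As an illustration, the {{{lock()}}} column of this table can be written out as a transition on the per-buffer state flag (a sketch; names and the precise error handling are assumptions):
{{{
// the lock() column of the table above, written out as a transition on the
// per-buffer state flag; names and the exact error handling are assumptions
#include <atomic>
#include <stdexcept>

enum BufferState { NIL, FREE, LOCKED, EMITTED, BLOCKED };

void
onLock (std::atomic<BufferState>& flag)
  {
    switch (flag.load())
      {
        case FREE:    flag = LOCKED;                                  // standard case (OK)
                      return;
        case LOCKED:  flag = EMITTED;                                 // throw, killing the calling job
                      throw std::logic_error("lock() on locked buffer");
        case EMITTED: flag = BLOCKED;
                      throw std::logic_error("lock() after emit");
        case BLOCKED: throw std::logic_error("buffer blocked");       // next step requests playback stop
        case NIL:
        default:      throw std::logic_error("no buffer");
      }
  }
}}}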
The Lumiera Processing Layer is comprised of various subsystems and can be separated into a low-level and a high-level part. At the low-level end is the [[Render Engine|OverviewRenderEngine]], which basically is a network of render nodes cooperating closely with the Backend Layer in order to carry out the actual playback and media transforming calculations. On the high-level side, by contrast, we find several different [[Media Objects|MObjects]] that can be placed into the session, edited and manipulated. This is complemented by the [[Asset Management|Asset]], which is the "bookkeeping view" of all the different "things" within each [[Session|SessionOverview]].

There is a rather strong separation between these two levels, and &mdash; correspondingly &mdash; you'll encounter the data held within the Processing Layer organized in two different views, the ''high-level-model'' and the ''low-level-model''
* from users (and GUI) perspective, you'll see a [[Session|SessionOverview]] with a timeline-like structure, where various [[Media Objects|MObjects]] are arranged and [[placed|Placement]]. By looking closer, you'll find that there are data connections and all processing is organized around processing chains or [[pipes|Pipe]], which can be either global (in the Session) or local (in real or [[virtual|VirtualClip]] clips)
* when dealing with the actual calculations in the Engine (&rarr; see OverviewRenderEngine), you won't find any Tracks, Media Objects or Pipes &mdash; rather you'll find a network of interconnected [[render nodes|ProcNode]] forming the low level model. Each structurally constant segment of the timeline corresponds to a separate node network providing an ExitNode corresponding to each of the global pipes; pulling frames from them means running the engine.
* it is the job of the [[Builder]] to create and wire up this render nodes network when provided with a given high-level-model. So, actually the builder (together with the so-called [[Fixture]]) forms an isolation layer in the middle, separating the //editing part&nbsp;// from the //processing part.//
[img[Block Diagram|uml/fig128005.png]]
Render Engine, [[Builder]] and [[Controller]] are closely related Subsystems. Actually, the [[Builder]] //creates// a newly configured Render Engine //for every// RenderProcess. Before doing so, it queries from the Session (or, to be more precise, from the [[Fixture]] within the current Session) all necessary Media Object Placement information. The [[Builder]] then derives from this information the actual assembly of [[Processing Nodes|ProcNode]] comprising the Render Engine. Thus:
* the source of the build process is a sequence of absolute (explicit) [[Placements|Placement]] called the [[Playlist]]
* the [[build process|BuildProcess]] is driven, configured and controlled by the [[Controller]] subsystem component. It encompasses the actual playback configuration and State of the System.
* the resulting Render Engine is a list of [[Processors]], each configured to calculate a segment of the timeline with uniform properties. Each of these Processors in turn is a graph of interconnected [[ProcNode]] elements.

see also: RenderEntities, [[two Examples (Object diagrams)|Examples]] 

{{red{TODO: adjust terminology in this drawing: "Playlist" &rarr; "Fixture" and "Graph" &rarr; "Segment"}}}
[img[Overview: Components of the Renderengine|uml/fig128261.png]]
A ParamProvider is the counterpart for (one or many) [[Parameter]] instances. It implements the value access function made available by the Parameter object to its clients.

To give a concrete example: 
* a Fade Plugin needs the actual fade value for Frame t=xxx
* the Plugin has a Parameter Object (from which we could query the information that this parameter is a continuous float function)
* this Parameter Object provides a getValue() function, which is internally linked (i.e. by configuration) to a //Parameter Provider//
* the actual object implementing the ParamProvider Interface could be an Automation MObject located somewhere in the session and would do bezier interpolation on a given keyframe set.
* Param providers are created on demand; while building the Render Engine configuration actually at work, the Builder would have to set up a link between the Plugin Parameter Object and the ParamProvider; he can do so, because he sees the link between the Automation MObject and the corresponding Effect MObject (see the sketch after this list)
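A sketch of this wiring -- class and method names are assumptions, and plain linear interpolation stands in for the bezier case:
{{{
// sketch of the wiring described in the example above; class and method names
// are assumptions, and plain linear interpolation stands in for the bezier case
#include <map>
#include <memory>
#include <iterator>

struct Time { double secs; };

class ParamProvider                             // interface, see above
  {
  public:
    virtual ~ParamProvider() { }
    virtual double getValue (Time t)  const =0;
  };

/** automation data set (placed somewhere in the session), interpolating a keyframe set */
class AutomationData
  : public ParamProvider
  {
    std::map<double,double> keyframes_;         // time -> value
  public:
    void addKey (double t, double v) { keyframes_[t] = v; }

    double
    getValue (Time t)  const override
      {
        if (keyframes_.empty())       return 0.0;
        auto hi = keyframes_.lower_bound (t.secs);
        if (hi == keyframes_.end())   return keyframes_.rbegin()->second;
        if (hi == keyframes_.begin()) return hi->second;
        auto lo = std::prev (hi);
        double f = (t.secs - lo->first) / (hi->first - lo->first);
        return lo->second + f * (hi->second - lo->second);    // linear stand-in for bezier
      }
  };

/** the Parameter object handed to the fade plugin; the Builder wires in the provider */
class Parameter
  {
    std::shared_ptr<ParamProvider> provider_;   // link set up by the Builder
  public:
    void connect (std::shared_ptr<ParamProvider> p) { provider_ = std::move (p); }
    double getValue (Time t) const { return provider_->getValue (t); }
  };
}}}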

!!ParamProvider ownership and lifecycle
Actually, ParamProvider is just an interface which is implemented either by a constant or an [[Automation]] function. In both cases, access is via direct reference, while the link to the ParamProvider is maintained by a smart-ptr, which &mdash; in the case of automation &mdash; may share ownership with the [[Placement]] of the automation data set.

&rarr; see the class diagram for [[Automation]]
&rarr; see EffectHandling
Parameters are all possibly variable control values used within the Render Engine. Contrast this with configuration values, which are considered to be fixed and need an internal reset of the application (or session) state to take effect.

A ''Parameter Object'' provides a descriptor of the kind of parameter, together with a function used to pull the //actual value// of this parameter. Here, //actual// has a two-fold meaning:
* if called without a time specification, it is either a global (but variable) system or session parameter or a default value for automated Parameters. (the intention is to treat these cases uniformly)
* if called with a time specification, it is the query for an &mdash; probably interpolated &mdash; [[Automation]] value at this absolute time. The corresponding ParamProvider should fall back transparently to a default or session value if no time varying data is available

{{red{TODO: define how Automation works}}}
 w.outputText(w.output,w.matchStart,w.matchStart+w.matchLength);
 }
 }
} )

//============================================================================
// Extend "fetchTiddler" functionality to also recognize "part"s of tiddlers 
// as tiddlers.

var currentParent = null; // used for the "." parent (e.g. in the "tiddler" macro)

// Return the match to the first <part ...> tag of the text that has the
// requested partName.
//
// @return [may be null]
//
var findPartStartTagByName = function(text, partName) {
 var i = 0;
 
 while (true) {
 var tagMatch = findNextPartEndOrStartTagMatch(text, i);
 if (!tagMatch) return null;

 if (tagMatch[2]) {
 // Is start tag
 
 // Check the name
 var arguments = parseStartTagParams(tagMatch[3]);
 if (arguments && arguments.partName == partName) {
 return tagMatch;
 }
 }
 i += tagMatch[0].length;
 }
}

// Return the part "partName" of the given parentTiddler as a "readOnly" Tiddler 
// object, using fullName as the Tiddler's title. 
//
// All remaining properties of the new Tiddler (tags etc.) are inherited from 
// the parentTiddler.
// 
// @return [may be null]
//
var getPart = function(parentTiddler, partName, fullName) {
 var text = parentTiddler.text;
 var startTag = findPartStartTagByName(text, partName);
 if (!startTag) return null;
 
 var endIndexOfStartTag = skipEmptyEndOfLine(text, startTag.index+startTag[0].length);
 var indexOfEndTag = text.indexOf(partEndTagString, endIndexOfStartTag);

 if (indexOfEndTag >= 0) {
 var partTiddlerText = text.substring(endIndexOfStartTag,indexOfEndTag);
 var partTiddler = new Tiddler();
 partTiddler.set(
 fullName,
 partTiddlerText,
 parentTiddler.modifier,
 parentTiddler.modified,
 parentTiddler.tags,
 parentTiddler.created);
 partTiddler.abegoIsPartTiddler = true;
 return partTiddler;
 }
 
 return null;
}

// Hijack the store.fetchTiddler to recognize the "part" addresses.
//

var oldFetchTiddler = store.fetchTiddler ;
store.fetchTiddler = function(title) {
 var result = oldFetchTiddler.apply(this, arguments);
 if (!result && title) {
 var i = title.lastIndexOf('/');
 if (i > 0) {
 var parentName = title.substring(0, i);
 var partName = title.substring(i+1);
 var parent = (parentName == ".") 
 ? currentParent 
 : oldFetchTiddler.apply(this, [parentName]);
 if (parent) {
 return getPart(parent, partName, parent.title+"/"+partName);
 }
 }
 }
 return result; 
};


// The user must not edit a readOnly/partTiddler
//

config.commands.editTiddler.oldIsReadOnlyFunction = Tiddler.prototype.isReadOnly;

Tiddler.prototype.isReadOnly = function() {
 // Tiddler.isReadOnly was introduced with TW 2.0.6.
 // For older version we explicitly check the global readOnly flag
 if (config.commands.editTiddler.oldIsReadOnlyFunction) {
 if (config.commands.editTiddler.oldIsReadOnlyFunction.apply(this, arguments)) return true;
 } else {
 if (readOnly) return true;
 }

 return this.abegoIsPartTiddler;
}

config.commands.editTiddler.handler = function(event,src,title)
{
 var t = store.getTiddler(title);
 // Edit the tiddler if it either is not a tiddler (but a shadowTiddler)
 // or the tiddler is not readOnly
 if(!t || !t.abegoIsPartTiddler)
 {
 clearMessage();
 story.displayTiddler(null,title,DEFAULT_EDIT_TEMPLATE);
 story.focusTiddler(title,"text");
 return false;
 }
}

// To allow the "./partName" syntax in macros we need to hijack 
// the invokeMacro to define the "currentParent" while it is running.
// 
var oldInvokeMacro = window.invokeMacro;
function myInvokeMacro(place,macro,params,wikifier,tiddler) {
 var oldCurrentParent = currentParent;
 if (tiddler) currentParent = tiddler;
 try {
 oldInvokeMacro.apply(this, arguments);
 } finally {
 currentParent = oldCurrentParent;
 }
}
window.invokeMacro = myInvokeMacro;

// Scroll the viewer of the given tiddler so that the anchor anchorName becomes visible.
// When no tiddler is given, the tiddler containing the target of the given event is used.
window.scrollAnchorVisible = function(anchorName, tiddler, evt) {
 var tiddlerElem = null;
 if (tiddler) {
 tiddlerElem = document.getElementById(story.idPrefix + tiddler);
 }
 if (!tiddlerElem && evt) {
 var target = resolveTarget(evt);
 tiddlerElem = story.findContainingTiddler(target);
 }
 if (!tiddlerElem) return;

 var children = tiddlerElem.getElementsByTagName("a");
 for (var i = 0; i < children.length; i++) {
 var child = children[i];
 var name = child.getAttribute("name");
 if (name == anchorName) {
 var y = findPosY(child);
 window.scrollTo(0,y);
 return;
 }
 }
}

} // of "install only once"
//}}}

/***
<html><sub><a href="javascript:;" onclick="scrollAnchorVisible('Top',null, event)">[Top]</sub></a></html>

!Licence and Copyright
Copyright (c) abego Software ~GmbH, 2006 ([[www.abego-software.de|http://www.abego-software.de]])

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or other
materials provided with the distribution.

Neither the name of abego Software nor the names of its contributors may be
used to endorse or promote products derived from this software without specific
prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.

<html><sub><a href="javascript:;" onclick="scrollAnchorVisible('Top',null, event)">[Top]</sub></a></html>
***/
Facility guiding decisions regarding the strategy to employ for rendering or wiring up connections. The PathManager is queried through the OperationPoint when executing the connection steps within the Build process.
Pipes play a central role within the Proc Layer, because for everything placed and handled within the session, the final goal is to get it transformed into data which can be retrieved at some pipe's exit port. Pipes are special facilities, rather like inventory, separate and not treated like all the other objects.
We don't distinguish between "input" and "output" ports &mdash; rather, pipes are thought to be ''hooks for making connections to''. Following this line of thought, each pipe has an input side and an output side and is in itself something like a ''Bus'' or ''processing chain''. Other processing entities like effects and transitions can be placed (attached) at the pipe, causing them to be appended to form this chain. Likewise, we can place [[wiring requests|WiringRequest]] at the pipe, meaning we want it connected so as to send its output to another destination pipe. The [[Builder]] may generate further wiring requests to fulfil the placement of other entities.
Thus //Pipes are the basic building blocks// of the whole render network. We distinguish ''globally available'' Pipes, which are like the sum groups of a mixing console, and the ''local pipes'' or [[source ports|ClipSourcePort]] of the individual clips, which exist only within the duration of the corresponding clip. The design //limits the possible kinds of pipes// to these two types &mdash; thus we can build local processing chains at clips and global processing chains at the global pipes of the session, and that's all we can do. (Because of the flexibility which comes with the concept of [[placements|Placement]], this is no real limitation.)

Pipes are denoted and addressed by [[pipe IDs|PipeID]], implemented as ID of a pipe asset. Besides explicitly named pipes, there are some generic placeholder ~IDs for a standard configured pipe of a given type. This is done to avoid creating a separate ~Pipe-ID for each and every clip to build.

While pipes are rather rigid building blocks -- and especially are limited to a single StreamType without conversions -- the interconnections or ''routing'' links are, to the contrary, more flexible. They are specified and controlled through the use of an OutputDesignation, which, when fully resolved, should again yield a target ~Pipe-ID to connect. The mentioned resolution of an output designation typically involves an OutputMapping.

Similar structures and mechanisms are extended beyond the core model: The GUI can connect the viewer(s) to some pipe (and moreover can use [[probe points|ProbePoint]] placed like effects and connected to some pipe), and likewise, when starting a ''render'', we get the opportunity to specify the ModelPort (exit point of a GlobalPipe) to pull the data from. Pulling data from some pipe is the (only) way to activate the render nodes network reachable from this pipe.

&rarr; [[Handling of Tracks|TrackHandling]]
&rarr; [[Handling of Pipes|PipeHandling]]

!Identification
Pipes are distinct objects and can be identified by their asset ~IDs. Besides, as for all [[structural assets|StructAsset]] there are extended query capabilities, including a symbolic pipe-id and a media (stream) type id. Any pipe can accept and deliver exactly one media stream kind (which may be inherently structured though, e.g. spatial sound systems or stereoscopic video)

!creating pipes
Pipe assets are created automatically by being used and referred to. Each [[Timeline]] holds a collection of global pipes, attached to the BindingMO, which is the representation or attachment point of the Timeline within the HighLevelModel ([[Session]]) ({{red{todo: implementation missing as of 11/09}}}), and further pipes can be created by using a new pipe reference in some placement. Moreover, every clip has an (implicit) [[source port|ClipSourcePort]], which will appear as pipe asset when first used (referred to) while [[building|BuildProcess]]. Note that creating a new pipe implies using a [[processing pattern|ProcPatt]], which will be queried from the [[Defaults Manager|DefaultsManagement]] (resulting in the use of some preconfigured pattern or maybe the creation of a new ProcPatt object if necessary)

!removal
Deleting a Pipe is an advanced operation, because it includes finding and "detaching" all references; otherwise the pipe will leap back into existence immediately. Thus, global pipe entries in the Session and pipe references in [[locating pins|LocatingPin]] within any placement have to be removed, while clips using a given source port will be disabled. {{red{todo: implementation deferred}}}

!using Pipes
There is not much you can do directly with a pipe asset. It is a point of reference, after all. Any connection or routing to a target pipe is done by a placement with a suitable WiringPlug in some part of the timeline (&rarr; OutputDesignation), so it isn't stored with the pipe. You can edit the (user visible) description and you can globally disable a pipe asset. The pipe's ID and media stream type of course are fixed, because any connection and referral (via the asset ID) is based on them. Later on, we should provide a {{{rewire(oldPipe, newPipe)}}} operation to search any ref to the {{{oldPipe}}} and try to rewrite it to use the {{{newPipe}}}, possibly with a new media stream type.
Pipes are integrated with the [[management of defaults|DefaultsManagement]]. For example, any pipe implicitly uses some [[processing pattern|ProcPatt]] &mdash; it may default to the empty pattern. This way, any kind of standard wiring might be applied to the pipes (e.g. a fader for audio, similar to the classic mixing consoles). This //is// a global property of the pipe, but &mdash; contrary to the stream type &mdash; this pattern may be switched {{red{really? -- lots of open questions here as of 11/10}}}
A straight processing chain or ''Pipe'' is a core concept in Lumiera's HighLevelModel. Like most of the other building blocks in that model, pipes are represented in the asset view, which leads to having unique pipe-asset ~IDs for all pipes encountered within the model. Now, because of the special building pattern used within that model, which is comprised of pipes and flexible interconnections, the routing and wiring can be done in terms of pipe-~IDs, using them as a short representation of an OutputDesignation. As a consequence, by these relations, the pipe-~IDs are promoted to a universal key for wiring, routing and output targets, usable from the high-level objects down to the low-level nodes within the engine (see the sketch after the following list).
* pipe-~IDs are used to label the global busses
* pipe-~IDs are used to denote output designations for routing
* pipe-~IDs are attached to the real output possibilities, available when resolving such a designation
* pipe-~IDs serve to denote the ExitNode corresponding to a global pipe within a single segment of the low-level model
* pipe-~IDs consequently can be used to address a ModelPort, corresponding both to such a global bus and the corresponding exit nodes
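The following is a purely illustrative sketch of this idea &mdash; the names {{{PipeID}}}, {{{ExitNode}}} and {{{resolveModelPort}}} are simplified stand-ins, not the actual Lumiera classes: a single pipe-ID value serves as the common key, both to label a bus and to look up the corresponding exit node within one segment of the low-level model.
{{{
// Purely illustrative sketch -- PipeID, ExitNode and resolveModelPort are
// simplified stand-ins, not the actual Lumiera classes.
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>

struct PipeID {                           // stand-in for the pipe asset ID
    uint64_t hash;                        // e.g. derived from a LUID
    bool operator== (PipeID const& o) const { return hash == o.hash; }
};
struct PipeIDHasher {
    std::size_t operator() (PipeID const& id) const { return std::size_t(id.hash); }
};

struct ExitNode { std::string streamType; };   // hypothetical exit point in one segment

// within one segment of the low-level model: exit nodes addressable by pipe-ID
using SegmentExits = std::unordered_map<PipeID, ExitNode, PipeIDHasher>;

// resolving an output designation yields the pipe-ID of the target bus,
// which in turn selects the matching exit node to pull data from
ExitNode const* resolveModelPort (SegmentExits const& segment, PipeID designation)
{
    auto pos = segment.find (designation);
    return pos != segment.end()? &pos->second : nullptr;
}
}}}
The point is merely that one small, copyable ID value is sufficient to tie together output designation, global bus and exit node.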
A Placement represents a //relation:// it is always linked to a //Subject// (this being a [[Media Object|MObject]]) and has the meaning to //place// this Subject in some manner, either relatively to other Media Objects, by some Constraint, or simply absolutely at (time, output). The latter case is especially important for the build process and is thus represented by a special [[Sub-Interface ExplicitPlacement|ExplicitPlacement]]. Besides these simple cases, Placements can also express more specific kinds of "locating" an object, like placing a sound source at a pan position or placing a video clip at a given layer (above or below another video clip)

So basically placements represent a query interface: you can always ask the placement to find out about the position of the related object in terms of (time, output), and &mdash; depending on the specific object and situation &mdash; also about these additional [[placement derived dimensions|PlacementDerivedDimension]] like sound pan or layer order or similar things which also fit into the general concept of "placing" an object.

The fact of being placed in the [[Session|SessionOverview]] is constitutive for all sorts of [[MObject]]s; without a Placement they make no sense. Thus &mdash; technically &mdash; Placements act as ''smart pointers''. Of course, there are several kinds of Placements and they are templated on the type of MObject they are referring to. Placements can be //aggregated// to increasingly constrain the resulting "location" of the referred ~MObject. See &rarr; [[handling of Placements|PlacementHandling]] for more details

!Placements as instance
Effectively, the placement of a given MObject into the Session acts as setting up a concrete instance of this object. This way, placements exhibit a dual nature. When viewed on themselves, like any reference or smart-pointer they behave like values. But, by adding a placement to the session, we again create a unique distinguishable entity with reference semantics: there could be multiple placements of the same object but with varying placement properties. Such a placement-bound-into-the-session is denoted by a generic placement-ID or (as we call it) &rarr; PlacementRef; behind the scenes there is a PlacementIndex keeping track of those "instances" &mdash; allowing us to hand out the PlacementRef (which is just an opaque id) to client code outside the Proc-Layer and generally use it as a shorthand, behaving as if it were an MObject instance
For any [[media object|MObject]] within the session, we can always at least query the time (reference/start) point and the output destination from the [[Placement]] by which the object is being handled. But the simple act of placing an object in some way can &mdash; depending on the context &mdash; create additional degrees of freedom. To list some important examples:
* placing a video clip overlapping with other clips on other tracks creates the possibility for the clip to be above another clip or to be combined in various other ways with the other clips at the same time position
* placing a mono sound object plugged to a stereophonic output destination creates the freedom to define the pan position
The Placement interface allows querying for these additional //parameter values derived from the fact of being placed.//

!defining additional dimensions
probably a LocatingPin but... {{red{TODO any details are yet unknown as of 5/08}}}
!querying additional dimensions
basically you resolve the Placement, yielding an ExplicitPlacement... {{red{TODO but any details of how additional dimensions are resolved is still undefined as of 5/08}}}
[[Placement]]s are at the very core of all [[editing operations|EditingOperations]], because they act as handles (smart pointers) to access the [[media objects|MObject]] to be manipulated. Placements themselves are lightweight and can be handled with //value semantics//. But, when adding a Placement to the [[Session]], it gains a distinguishable identity and should be treated by reference from then on: changing the location properties of this placement has a tangible effect on the way the placed object appears in the context of the session. Besides the direct (language) references, there is a special PlacementRef type which builds on this registration of the placement within the session, can be represented as POD and thus passed over external interfaces. Many editing tasks include finding some Placement in the session and referencing it as a parameter. By acting //on the Placement object,// we can change parameters of the way the media object is placed (e.g. adjust an offset), while by //dereferencing//&nbsp; the Placement object, we access the "real" media object (e.g. for trimming its length). Placements are ''templated'' on the type of the actual ~MObject they refer to, thus defining the interface/methods usable on this object.

Actually, the way each Placement ties and locates its subject is implemented by one or several small LocatingPin objects, where subclasses of LocatingPin implement the various different methods of placing and resolving the final location. Notably, we can give a ~FixedLocation or we can attach to another ~MObject to get a ~RelativeLocation, etc. In the typical use case, these ~LocatingPins are added to the Placement, but never retrieved directly. Rather the Placement acts as a ''query interface'' for determining the location of the related object. Here, "location" can be thought of as encompassing multiple dimensions at the same time. An object can be
* located at a specific point in time
* related to and plugged into a specific output or global bus
* defined to have a position within some [[context-dependant additional dimensions|PlacementDerivedDimension]] like
** the pan position, either on the stereophonic base, or within a fully periphonic (spatial) sound system
** the layer order and overlay mode for video (normal, additive, subtractive, masking)
** the stereoscopic window position (depth parameter) for 3D video
** channel and parameter selection for MIDI data

Placements have //value semantics,// i.e. we don't stress the identity of a placement object (~MObjects on the other hand //do have// a distinguishable identity): initially, you create a Placement parametrised to some specific kind by adding [[LocatingPin]]s (fixed, relative,...) and possibly you use a subclass of {{{Placement<MObject>}}} to encode additional type information, say {{{Placement<Clip>}}}, but later on, you treat the placement polymorphically and don't care about its kind. The sole purpose of the placement's kind is to select some virtual function implementing the desired behaviour. There is no limitation to one single Placement per ~MObject; indeed we can have several different Placements of the same MObject (from a user's point of view, these behave like clones). Besides, we can ''aggregate'' additional [[LocatingPin]]s to one Placement, resulting in their properties and constraints being combined to yield the actual position of the referred ~MObject (see the sketch below).
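A minimal sketch of this value-style usage, with greatly simplified stand-in types (not the real class layout): creating a placement, aggregating locating pins, and copying it around.
{{{
// Greatly simplified stand-in types -- not the real Placement implementation.
#include <memory>
#include <utility>
#include <vector>

struct MObject { virtual ~MObject() = default; };
struct Clip : MObject { };

struct LocatingPin { virtual ~LocatingPin() = default; };
struct FixedLocation : LocatingPin {
    long time; int track;
    FixedLocation (long t, int tr) : time(t), track(tr) { }
};

template<class MO>
struct Placement {
    std::shared_ptr<MO> subject;                           // smart-ptr nature
    std::vector<std::shared_ptr<LocatingPin>> pins;        // aggregated locating pins

    explicit Placement (std::shared_ptr<MO> mo) : subject(std::move(mo)) { }

    template<class PIN, class... ARGS>
    Placement& chain (ARGS&&... args)                      // add a further constraint
    {
        pins.push_back (std::make_shared<PIN> (std::forward<ARGS>(args)...));
        return *this;
    }
};

void example()
{
    Placement<Clip> p {std::make_shared<Clip>()};
    p.chain<FixedLocation> (25L, 1);       // place absolutely at (time, track)
    Placement<Clip> copy{p};               // value semantics: the copy shares the subject
    copy.chain<FixedLocation> (50L, 2);    // ...but further constraints stay local to the copy
}
}}}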

!design decisions
* the actual way of placing is implemented similar to the ''State Pattern'' by small embedded LocatingPin objects.
* these LocatingPin objects form a ''decorator'' like chain
* resolving into an ExplicitPlacement traverses this chain
* //overconstraining// a placement is not an error; we just stop traversing the chain (ignoring the remaining additional Placements) at the moment the position is completely defined.
* placements can be treated like values, but incorporate an identity tag for the purpose of registering with the session.
* we provide subclasses to be able to form collections of e.g. {{{Placement<Effect>}}}, but we don't stress polymorphism here. &rarr; PlacementType
* Why was the question how to access a ~MObject subinterface a Problem?
*# we want the Session/Fixture to be a collection of Placements. This means, either we store pointers, or Placement needs to be //one// unique type!
*# but if Placement is //a single type//, then we can get only MObjects from a Placement.
*# then either we had to do everything by a visitor (which gets the concrete subtype dynamically), or we'd end up switching on type.

An implementation facility used to keep track of individual Placements and their relations.
Especially, the [[Session]] maintains such an index, allowing to use the (opaque) PlacementRef tags for referring to a specific "instance" of an MObject, //placed// in a unique way into the current session. And, moreover, this index allows for one placement referring to another placement, so to implement a //relative// placement mode. Because there is an index behind the scenes, it is possible to actually access such a referral in the reverse direction, which is necessary for implementing the desired placement behaviour (if an object instance used as anchor is moved, all objects placed relatively to it have to move accordingly, which necessitates finding those other objects).

Besides searching, [[placement instances|Placement]] can be added to the index, thereby creating a copy managed by the backing data structure. Thus, the session's PlacementIndex is the backbone of the session data structure, and the session's contents are actually contained within it.

!rationale of the choosen implementation
What implementation approach to take for the index largely depends on the usage pattern. Generally speaking, the Lumiera [[Session]] is a collection of MObjects attached by [[Placement]]; any relations between these objects are established on a logical level, implemented as markers within the respective placements. For [[building the render nodes graph|BuildProcess]], a consolidated view of the session's effective contents is created ("[[Fixture]]"), then to be traversed while emitting the LowLevelModel. The GUI is expected to query the index to discover parts of the structure; the [[object reference tags|MObjectRef]] returned by these queries will be used as arguments to any mutating operation on the objects within the session.

Using a ''flat hashtable'' allows accessing a Placement denoted by ID in O(1). This way we get the Placement, but nothing more. So, additionally we'd have to set up a data record holding additional information:
* the [[scope|PlacementScope]] containing this placement
* allowing to create a path "up" from this scope, which is used for resolving any queries
* (maybe/planned) relations to other placements

Alternatively, we could try to use a ''structure based index'', thereby avoiding the mentioned description record by folding any of the contained information into the surrounding data structure:
* the scope would be obvious from the index, resp. from the path used to resolve this index
* any other information, especially the relations, would be folded into the placement
* this way, the "index" could be reduced to being the session data structure itself.

//does a placement need to know its own ID?//&nbsp; Obviously, there needs to be a way to find out the ID for a given placement, especially in the following situations:
* for most of the operations above, when querying additional information from index for a given placement
* to create a PlacementRef (this is a variant of the "shared ptr from this"-problem)
On closer inspection, this problem turns out to be more involved, because either we have to keep a second index table for the reverse lookup (memory address -> ID), or we have to tie the placement by a back-link when adding it to the index/session data structure, or (alternatively) it forces us to store a copy of the ID //within// the placement itself. The last possibility seems to create the least impact; but implementing it this way effectively gears the implementation towards a hashtable-based approach.


!supported operations
The placement index is utilized by editing operations and by executing the build process. Besides these core operations it allows for resolving PlacementRef objects. This latter functionality is used by all kinds of relative placements and for dealing with them while building, but it is also used to resolve [[object reference tags|MObjectRef]], which possibly may have been handed out via an external API or may have crossed layer boundaries. From these use cases we derive the following main operations to support:
* find the actual [[Placement]] object for a given ID
* find the //scope//&nbsp; a given placement resides in. More specifically, find the [[placement defining this scope|PlacementScope]]
* find (any/all) other placements referring to a given placement ("within this scope")
* add a new placement to a scope given as parameter
* remove a placement from index
* (planned) move a placement to a different scope within the session

!!!Handling of Subtypes
While usually type relations don't carry over to smart-pointer-like types, in case of Placement I used a special definition pattern to artificially create such type relations ({{{Placement<session::Clip>}}} is a subclass of {{{Placement<MObject>}}}). Now, as we're going to copy and maintain Placements within the storage backing the index, the question is: do we actually store subtypes (all having the same size, by the way) or do we use a vtable based mechanism to recover the type information on access?

Actually, the handling of placement types is quite flexible; the actual hierarchy of Placement types can be determined in the //usage context// &mdash; it is not really stored within the placement, and there is no point in storing it within the index. Only the type of the //pointee//&nbsp; can be checked with the help of Placement's vtable.

Thus, things just fall into place here, without the need of any additional implementation logic. The index stores {{{Placement<MObject>}}} instances. The usage context will provide a suitable meaning for more specifically typed placements, and as long as this is in line with the type relations on the pointee(s), as checked by the {{{Placement::isCompatible<TY>()}}} call, the placement relations will just work out right by the cast happening automatically on results retrieval.
&rarr; see PlacementType
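To illustrate this //passive type rediscovery//, here is a simplified sketch: the container stores only generic placements, while a typed usage context filters results by checking the pointee's RTTI &mdash; roughly what the real {{{Placement::isCompatible<TY>()}}} call provides. The types shown ({{{GenericPlacement}}}, {{{filterByType}}}) are illustrative stand-ins, not the actual Lumiera code.
{{{
// Illustrative stand-in types; roughly what Placement::isCompatible<TY>() provides.
#include <memory>
#include <vector>

struct MObject { virtual ~MObject() = default; };
struct Clip   : MObject { };
struct Effect : MObject { };

struct GenericPlacement {                  // stand-in for Placement<MObject>
    std::shared_ptr<MObject> subject;

    template<class TY>
    bool isCompatible() const              // check based on the pointee's RTTI
    {
        return bool(std::dynamic_pointer_cast<TY> (subject));
    }
};

// a typed usage context retrieves only placements whose pointee matches
template<class TY>
std::vector<GenericPlacement> filterByType (std::vector<GenericPlacement> const& stored)
{
    std::vector<GenericPlacement> result;
    for (auto const& p : stored)
        if (p.isCompatible<TY>())
            result.push_back (p);          // the copy still points at the same, fully typed MObject
    return result;
}
}}}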

!implementation
Consequently, we incorporate a random hash (implemented as {{{LUID}}}) into the individual placement, this way creating a distinguishable //placement identity,// which is //not retained on copying.// The actual ID tag is complemented by a compile time type (template parameter), thus allowing to pass on additional context information through API calls. Placements themselves use a vtable (and thus RTTI), allowing to re-discover the exact type at runtime. Any further relation information is contained within the placement's [[locating pins|LocatingPin]]; thus, any further description records can be avoided by storing the placements immediately //within the index.// To summarise, the implementation is comprised of (a condensed sketch follows below the list)
* a main table resolving hash-ID to storage location
* information about the enclosing scope for each placement, stored within the main entry
* a reverse lookup table to find all placements contained within a given scope
* an instance holding and managing facility based on pooled allocation
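As an illustration of these tables, a condensed sketch &mdash; names and types are illustrative only, not the actual implementation:
{{{
// Condensed sketch of the tables described above; names are illustrative only.
#include <cstdint>
#include <unordered_map>
#include <vector>

using PlacementID = uint64_t;              // random hash (LUID) stored within the placement

struct IndexEntry {
    PlacementID scope;                     // placement-ID of the enclosing scope
    void*       storage;                   // location of the managed placement copy
};

struct PlacementIndexSketch {
    std::unordered_map<PlacementID, IndexEntry>               mainTable;
    std::unordered_map<PlacementID, std::vector<PlacementID>> scopeContents;

    void add (PlacementID id, PlacementID scope, void* placementCopy)
    {
        mainTable[id] = IndexEntry{scope, placementCopy};
        scopeContents[scope].push_back (id);            // reverse lookup table
    }
    PlacementID getScope (PlacementID id) const { return mainTable.at(id).scope; }

    std::vector<PlacementID> const& contentsOfScope (PlacementID scope)
    {
        return scopeContents[scope];                    // all placements within a scope
    }
};
}}}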
A generic reference mechanism for Placements, as added to the current session.
While this reference itself is not tied to the actual memory layout (meaning it's //not// a disguised pointer), the implementation relies on a [[placement index facility|PlacementIndex]] for tracking and retrieving the actual Placement implementation object. As a plus, this approach allows creating active interconnections between placements. We utilise this possibility to create a system of [[nested scopes|PlacementScope]]. The index facility allows to //reverse// the relation denoted by such a reference, inasmuch as it is possible to retrieve all other placements referring to a given target placement. But for an (external) user, this link to an index implementation is kept transparent and implicit.

!implementation considerations
From the usage context it is clear that the PlacementRef needs to incorporate a simple ID as the only actual data in memory, so it can be reduced to a POD and passed as such via LayerSeparationInterfaces. And, of course, this ID tag should be the one used by PlacementIndex for organising the Placement index entries, thus enabling the PlacementRef to be passed immediately to the underlying index for resolution. Thus, this design decision is interconnected with the implementation technique used for the index (&rarr; PlacementIndex). From the requirement of the ID tag to be contained in a fixed sized storage, and also from the expected kinds of queries, Ichthyo (5/09) chose to incorporate a {{{LUID}}} as a random hash immediately into the placement and build the ID tag on top of it.

!using placement references
Placement references can be created directly from a given placement, or just from a {{{Placement::ID}}} tag. Creation and dereferencing can fail, because the validity of the reference is checked with the index. This implies accessing the //current session// behind the scenes. Placement references have value semantics. Dereferencing searches the denoted Placement via index, yielding a direct (language) ref.

Placement references mimic the behaviour of a real placement, i.e. they proxy the placement API (actually the passive, query-related part of placement's API functions), while directly forwarding calls to the pointee (~MObject or subclass) when using {{{operator->()}}}. They can be copied and especially allow a lot of assignments (from placement, placement-ID or even plain LUID), even including a dynamic downcast on the pointee. Bottom line is that a placement ref can pretty much be used in place of a language ref to a real placement, which is crucial for implementing MObjectRef (see the sketch below).
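The following sketch condenses the idea with simplified types (the real PlacementRef proxies much more of the placement API): the reference stores only an ID and resolves it through the index on each access.
{{{
// Sketch with simplified types; the real PlacementRef proxies the placement API.
#include <cstdint>
#include <stdexcept>
#include <unordered_map>

struct MObject { virtual ~MObject() = default; };
struct Placement { MObject* subject = nullptr; };

using PlacementID = uint64_t;
std::unordered_map<PlacementID, Placement> sessionIndex;   // stand-in for the PlacementIndex

class PlacementRef {
    PlacementID id_;                        // the only data member: a POD-able ID value
public:
    explicit PlacementRef (PlacementID id) : id_(id) { }

    Placement& operator*() const            // resolve via the index; may fail
    {
        auto pos = sessionIndex.find (id_);
        if (pos == sessionIndex.end())
            throw std::runtime_error ("PlacementRef no longer resolvable");
        return pos->second;
    }
    MObject* operator->() const             // forward calls to the pointee
    {
        return (**this).subject;
    }
};
}}}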

MObjects are attached into the [[Session]] by adding a [[Placement]]. Because this especially includes the possibility of //grouping or container objects,// e.g. [[sequences|Sequence]] or [[tracks|Track]] or [[meta-clips|VirtualClip]], any placement may optionally define and root a scope, and every placement is at least contained in one encompassing scope &mdash; of course with the exception of the absolute top level, which can be thought of as being contained in a scope of handling rules.

Thus, while the [[sequences|Sequence]] act as generic containers holding a pile of placements, actually there is a more fine grained structure based on the nesting of the tracks, which especially in Lumiera's HighLevelModel belong to the sequence (they aren't a property of the top level timeline as one might expect). Building upon these observations, we actually require each addition of a placement to specify a scope. Consequently, for each Placement at hand it is possible to determine a //containing scope,// which in turn is associated with some Placement of a top-level ~MObject for this scope. The latter is called the ''scope top''. An example would be the {{{Placement<Track>}}} acting as scope of all the clips placed onto this track. The //implementation//&nbsp; of this tie-to-scope is provided by the same mechanism as utilised for relative placements, i.e. a directional placement relation. Actually, this relation is implemented by the PlacementIndex within the current [[Session]].


[>img[Structure of Placement Scopes|draw/ScopeStructure1.png]]
!Kinds of scopes
There is only a limited number of situations constituting a scope
* conceptually, the very top level is a scope of general rules.
* the next level is the link of [[binding|BindingMO]] of a [[Sequence]] into either a (top-level) [[Timeline]] or as virtual media into a VirtualClip. It is implemented through a {{{Placement<Binding>}}}.
* each sequence has at least one (mandatory) top-level placement holding its root track
* tracks may contain nested sub tracks.
* clips and (track-level) effects likewise are associated with an enclosing track.
* an important special case of relative placement is when an object is [[attached|AttachedPlacementProblem]] to another leading object, like e.g. an effect modifying a clip
__note__: attaching a Sequence in multiple ways &rarr; [[causes scoping problems|BindingScopeProblem]]

!Purpose of Placement scoping
Similar to the common mechanisms of object visibility in programming languages, placement scopes guide the search for and resolution of placement properties. Any such property //not defined locally// within the placement is queried ascending through the sequence of nested scopes. Thus, global definitions can be shadowed by local ones.
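A small sketch of this resolution strategy, using hypothetical, simplified types (the real lookup works on placements via the QueryFocus): a property query walks outward through the chain of enclosing scopes until a local definition is found.
{{{
// Hypothetical, simplified types -- only meant to show the lookup order.
#include <map>
#include <optional>
#include <string>

struct Scope {
    std::map<std::string,std::string> properties;   // properties defined locally in this scope
    Scope const* parent = nullptr;                   // enclosing scope; nullptr at the top level
};

// query a property, ascending through the nested scopes until a definition is found
std::optional<std::string> resolve (Scope const& start, std::string const& key)
{
    for (Scope const* s = &start; s != nullptr; s = s->parent)
    {
        auto pos = s->properties.find (key);
        if (pos != s->properties.end())
            return pos->second;                      // a local definition shadows outer ones
    }
    return std::nullopt;                             // nothing found: falls back to global rules
}
}}}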
Placement is a smart-ptr. As such, usually smart-pointers are templated on the pointee type, but a type relation between different target types doesn't carry over into a type relation on the corresponding smart-pointers. Now, as a [[Placement]] or a PlacementRef often is used to designate a specific "instance" of an MObject placed into the current session, the type parametrisation plays a crucial role when it comes to processing the objects contained within the session &mdash; especially since the session deliberately has not much additional structure besides the structure created by [[scopes and aggregations|PlacementScope]] within the session's contents.

To this end, we're using a special definition pattern for Placements, so
* a placement can refer to a specific sub-Interface like Track, Clip, Effect
* a specialised placement can stand-in for the more generic type.

!generic handling
Thus, ~MObject and Placement<~MObject> relate to the "everything is an object" view of affairs. More specific placements are registered, searched and retrieved within the session through this generic interface. In a similar vein, ~PlacementRef<~MObject> and MObjectRef are used on LayerSeparationInterfaces. This works, because it is possible to re-discover the more fine-grained target type.
* ''active type rediscovery'' happens when using [[visitors|VisitorUse]], which requires support by the pointee types (~MObject subclasses), so the visitor implementation is able to build a trampoline table to dispatch into a specifically typed context.
* ''passive type rediscovery'' is possible whenever the usage context //is already specifically typed.// Because in this case we can check the type (by RTTI) and filter out any placement not convertible to the type requested within the given context.

!downcasting and slicing
Deliberately, all Placements have the same runtime size. Handling them value-like under certain circumstances is intended and acceptable. Of course then slicing on the level of the Placement will happen. But because the Placement actually is a smart-pointer, the pointee remains unaffected, and can be used later to re-gain the fully typed context.

On the other hand, care has to be taken when ''downcasting'' a placement. When possible, this should be preceded by a {{{Placement::isCompatible<TY>()}}}-call, which checks based on the pointee's RTTI. Client code is encouraged to avoid explicit casting and rather rely on the provided facilities:
* invoking one of the templated access functions of the PlacementIndex
* using the QueryFocus to issue a specifically typed {{{ScopeQuery<TY>}}} (which yields an suitable iterator)
* create an specifically typed MObjectRef and bind it to some reference source (~Placement-ID, LUID, Placement instance within the session)
* implementing a visitor (~BuilderTool)

//This page is a scrapbook for working out the implementation of how to (re)build the [[Fixture]]//
Structurally, (re)building the Fixture rather belongs to the [[Session]], but it is implemented very similarly to the render engine build process: by treating all ~MObjects found in the various [[sequences|Sequence]] with a common [[visiting tool|VisitorUse]], this tool collects a simplified view with everything made explicit, which can be pulled off as the Fixture, i.e. a (special kind of) sequence list, afterwards.
* there is a //gathering phase// and a //solving phase//; the gathering is done by visiting.
* during the gathering phase, there ''needs to be a lock'' preventing any other edit operation.
* the solving is delegated to the individual ~Placements. It is effectively a {{{const}}} operation creating an ExplicitPlacement (copy)
* thus the Fixture contains these newly created ~ExplicitPlacements, referring to ~MObjects shared with the original Placements within the sequences

!!!prerequisites
* Session and sequences exist.
* Pipes exist and are configured

!!!postconditions
* the Fixture contains one sorted timeline of ExplicitPlacement instances
* Anything in this list is actually to be rendered
* {{red{TODO: how to store and group the effects?}}}
* any meta-clips or other funny things have been resolved to normal clips with placement
* any multichannel clip has been broken down into elementary clips {{red{TODO: what is "elementary", e.g. stereo sound streams?}}}
* any globally or otherwise strangely placed effects have been attached either to a clip or to some pipe
* we have one unified list of tracks

<<tasksum start>>
<<taskadder below>>
<<task >> work out how to get the processing of effects chained to some clip right
<<task >> work out how to handle multichannel audio (and stereo video)

!gathering phase
!!preparing
<<task>>what data collections to build?

!!treating a Track
<<task>>work out how to refer to pipes and do other config
<<task>>get some unique identifier and get relevant properties

!!treating a {{{Placement<Clip>}}}
<<task>>check the direct enablement status
<<task>>assess the compound status, maybe process recursively

!!treating a {{{Placement<Effect>}}}
<<task>>find out the application point {{red{really?}}}

!solving phase
<<task>>trigger solving on all placements
<<task>>sort the resulting ~ExplicitPlacements

<<tasksum end>>
//This page is a scrapbook for working out the implementation of the builder//

* NodeCreatorTool is a [[visiting tool|VisitorUse]]
* the render engine to be built is contained as state within this tool object while it is passed around (see the dispatch sketch below)
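A bare-bones sketch of the double dispatch involved &mdash; all names ({{{BuilderTool}}}, {{{NodeCreatorSketch}}}, {{{buildAll}}}) are illustrative; the actual visiting-tool framework is more elaborate:
{{{
// All names are illustrative; the real BuilderTool framework is more elaborate.
#include <iostream>
#include <memory>
#include <vector>

struct Clip; struct Effect;

struct BuilderTool {                                   // the visiting tool interface
    virtual ~BuilderTool() = default;
    virtual void treat (Clip&)   = 0;
    virtual void treat (Effect&) = 0;
};

struct MObject {
    virtual ~MObject() = default;
    virtual void accept (BuilderTool&) = 0;
};
struct Clip   : MObject { void accept (BuilderTool& t) override { t.treat (*this); } };
struct Effect : MObject { void accept (BuilderTool& t) override { t.treat (*this); } };

struct NodeCreatorSketch : BuilderTool {
    int nodesBuilt = 0;                                // engine-under-construction state lives here
    void treat (Clip&)   override { ++nodesBuilt; std::cout << "wire clip source\n"; }
    void treat (Effect&) override { ++nodesBuilt; std::cout << "insert effect node\n"; }
};

void buildAll (std::vector<std::unique_ptr<MObject>>& fixtureContents)
{
    NodeCreatorSketch tool;                            // passed around while visiting
    for (auto& mo : fixtureContents)
        mo->accept (tool);                             // double dispatch on the concrete type
}
}}}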
!!!prerequisites
* Session and sequences exist.
* Pipes exist and are configured
* Fixture contains ExplicitPlacement for every MObject to be rendered, and nothing else

<<tasksum start>>
<<taskadder>>

!!preparing
We need a way of addressing existing [[pipes|Pipe]]. Besides, as the Pipes and Tracks are referred to by the Placements we are processing, they are guaranteed to exist.

!!treating a Pipe
<<task>>get the [[processing pattern|ProcPatt]] of the pipe by accessing the underlying pipe asset.
<<task>>process this ProcPatt recursively

!!treating a processing pattern
<<task>>{{red{finally go ahead and define what a ProcPatt needs to be...}}}

!!treating a {{{Placement<Clip>}}}
<<task>>get the ProcPatt of the underlying media (asset)
<<task>>process the ProcPatt recursively
<<task>>access the ClipSourcePort (which may be created on-the-fly)
<<task>>enqueue a WiringRequest for connecting the source pipeline to the source port
<<task>>process the clip's render pipe recursively (thus adding the camera etc.)
<<task>>enqueue a WiringRequest for any placement to some pipe for this clip.
* __note__: we suppose
** all wiring requests will be done after the processing of entities
** all effects placed to this clip will be processed after this clip (but before the wiring requests)

!!treating a {{{Placement<Effect>}}}
<<task>>{{red{how to assure that effects are processed after clips/pipes??}}}
<<task>>find out the application point
<<task>>build a transforming node for the effect and insert it there

!!postprocessing
<<task>>sort and group the assembled list of [[wiring requests|WiringRequest]] by pipes

<<tasksum end>>
With //play process//&nbsp; we denote an ongoing effort to calculate a stream of frames for playback or rendering.
The play process is a conceptual entity linking together several activities in the [[Backend]] and the RenderEngine. Creating a play process is the central service provided by the [[player subsystem|Player]]: it maintains a registration entry for the process, to keep track of associated entities, allocated resources and the calls [[planned|FrameDispatcher]] and [[invoked|RenderJob]] as a consequence, and it wires and exposes a PlayController to serve as an interface and information hub.

''Note'': the player is in no way engaged in any of the actual calculation and management tasks necessary to make this [[stream of calculations|CalcStream]] happen. The play process code contained within the player subsystem is largely comprised of organisational concerns and not especially performance critical.
* the [[engine backbone|RenderBackbone]] is responsible for [[dispatching|FrameDispatcher]] the [[calculation stream|CalcStream]] and preparing individual calculation jobs
* the [[Scheduler]] at the [[engine core|RenderEngine]] has the ability to trigger and carry out individual [[frame calculation jobs|RenderJob]].
* the OutputSlot exposed by the [[output manager|OutputManagement]] is responsible for accepting timed frame delivery

[>img[Anatomy of a Play Process|uml/fig144005.png]]
!Anatomy of a Play Process
The Controller is exposed to the client and acts as frontend handle, while the play process body groups and manages all the various parts cooperating to generate output. For each of the participating global pipes we get a [[feed|Feed]] to drive that pipeline to deliver media of a specific kind.

Right within the play process, there is a separation into two realms, relying on different programming paradigms. Obviously the play controller is a state machine, and similarly the body object (play process) has a distinct operation state. Moreover, the current collection of individual objects hooked up at any given instant is a stateful variable. By contrast, when we enter the realm of actual processing, operations are carried out in parallel, relying on stateless descriptor objects, wired into individual calculation jobs, to be scheduled as non-blocking units of operation. For each series of consecutive frames to be calculated, there is a descriptor object, the CalcStream, which also links to a specifically tailored dispatcher table, allowing to schedule the individual frame jobs. Whenever the controller determines a change in the playback plan (speed change, skip, scrubbing, looping, ...), a new CalcStream is created, while the existing one is just used to mark any not-yet processed job as superseded.
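A rough sketch of this //superseding// idea, with illustrative types only (not the actual engine interfaces): each planned job carries a link to its CalcStream descriptor and checks a flag before doing any work.
{{{
// Illustrative types only, not the actual engine interfaces.
#include <atomic>
#include <memory>

struct CalcStream {
    std::atomic<bool> superseded {false};
    // ... frame range, timings, link to the tailored dispatcher table ...
};

struct FrameJob {
    std::shared_ptr<CalcStream> stream;
    long frameNr;

    void run()
    {
        if (stream->superseded.load())    // planned under an outdated playback plan?
            return;                       // then skip: a newer CalcStream covers this output
        // ... otherwise invoke the actual frame calculation ...
    }
};

// on speed change / skip / scrubbing / looping: retire the old descriptor, start a new one
std::shared_ptr<CalcStream> replan (std::shared_ptr<CalcStream> current)
{
    current->superseded.store (true);     // pending jobs of the old stream become no-ops
    return std::make_shared<CalcStream>();
}
}}}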
The [[Player]] is an independent [[Subsystem]] within Lumiera, located at Proc-Layer level. A more precise term would be "rendering and playback coordination subsystem". It provides the capability to generate media data, based on a high-level model object, and to send this generated data to an OutputDesignation, creating a continuous and timing-controlled output stream. Clients may utilise this functionality through the ''play service'' interface.

!subject of performance
Every play or render process will perform a part of the session. This part can be specified in various ways, but in the end, every playback or render boils down to //performing some model ports.// While the individual model port as such is just an identifier (actually implemented as ''pipe-ID''), it serves as a common identifier used at various levels and tied into several related contexts. For one, by querying the [[Fixture]], the ModelPort leads to the actual ExitNode -- the stuff actually producing data when being pulled. Besides that, the OutputManager used for establishing the play process is able to resolve onto a real OutputSlot -- which, as a side effect, also yields the final data format and data implementation type to use for rendering or playback.


!provided services
* creating a PlayProcess
* managing existing play processes
* convenience short-cuts for //performing//&nbsp; several kinds of high-level model objects

!creating a play process
This is the core service provided by the player subsystem. The purpose is to create a collaboration between several entities, creating media output
;data producers
:a set of ModelPort elements to ''pull'' for generating output. They can either be handed in directly, or resolved from a set of OutputDesignation elements
;data sinks
:to be able to create output, the PlayProcess needs to cooperate with [[output slots|OutputSlot]]
:physical outputs are never handled directly, rather, the playback or rendering needs an OutputManager to resolve the output designations into output slots
;controller
:when provided with these two prerequisites, the play service is able to build a PlayProcess.
:for clients, this process can be accessed and maintained through a PlayController, which acts as (copyable) handle and front-end.
;engine
:the actual processing is done by the RenderEngine, which in itself is a compound of several services within [[Backend]] and Proc-Layer
:any details of this processing remain opaque for the clients; even the player subsystem just accesses the EngineFaçade
Within Lumiera, &raquo;Player&laquo; is the name for a [[Subsystem]] responsible for organising and tracking //ongoing playback and render processes.// &rarr; [[PlayProcess]]
The player subsystem does not perform or even manage any render operations, nor does it handle the outputs directly.
Yet it addresses some central concerns:

;uniformity
:all playback and render processes are on equal footing, handled in a similar way.
;integration
:the player cares for the necessary integration with the other subsystems
:it consults the OutputManagement, retrieves the necessary information from the [[Session]] and coordinates the forwarding of [[Backend]] calls.
;time quantisation
:the player translates continuous time values into discrete frame counts.
:to perform this [[quantisation|TimeQuant]], the session's help in building a TimeGrid for each output channel is required (see the example below).
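As a worked example of such a quantisation (purely illustrative &mdash; the real code uses the TimeQuant / TimeGrid facilities, and the names {{{TimeGrid}}} and {{{quantise}}} here are simplified stand-ins): a continuous time value, given in microseconds, is translated into a frame number on a per-channel grid.
{{{
// Illustrative only; the real code uses the TimeQuant / TimeGrid facilities.
#include <cassert>
#include <cstdint>

struct TimeGrid {
    int64_t originMicros;      // grid origin in microseconds
    int64_t frameMicros;       // frame duration, e.g. 40000 for 25 fps
};

// translate a continuous time point into a discrete frame count on the given grid
int64_t quantise (int64_t timeMicros, TimeGrid const& grid)
{
    int64_t offset = timeMicros - grid.originMicros;
    if (offset >= 0)
        return offset / grid.frameMicros;                              // truncate within the grid
    return -((-offset + grid.frameMicros - 1) / grid.frameMicros);     // floor for negative offsets
}

int main()
{
    TimeGrid pal25 {0, 40000};                   // 25 fps grid starting at t = 0
    assert (quantise (1000000, pal25) == 25);    // 1 second  ->  frame 25
    assert (quantise (  39999, pal25) ==  0);    // still within frame 0
    assert (quantise ( -40000, pal25) == -1);    // time points before the grid origin
}
}}}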

!{{red{WIP 5/2011}}} under construction
The player subsystem is currently about to be designed and built up; some time ago, __Joel Holdsworth__ and __Ichthyo__ did a design study with a PlayerDummy, which is currently hooked up with the TransportControl in the Lumiera GUI. Starting from these experiences, and the general requirements of an NLE, the [[design of the Player subsystem|DesignPlayerSubsystem]] is being worked out.
__Joelholdsworth__ and __Ichthyo__ created this player mockup in 1/2009 to find out about the implementation details regarding integration and collaboration between the layers. There is no working render engine yet, thus we use a ~DummyImageGenerator for creating faked yuv frames to display. Within the GUI, there is a ~PlaybackController hooked up with the transport controls on the timeline pane.
# first everything was contained within ~PlaybackController, which spawns a thread for periodically creating those dummy frames
# then, a ~PlayerService was factored out, now implemented within ~Proc-Layer (later to delegate to the emerging real render engine implementation).<br/>A new LayerSeparationInterface called ''~DummyPlayer'' was created and set up as a [[Subsystem]] within main().
# the next step was to support multiple playback processes going on in parallel. Now, the ~PlaybackController holds a smart-handle to the ~PlayProcess currently generating output for this viewer, and invokes the transport control functions and the pull frame call on this handle.
# then, also the tick generation (and thus the handling of the thread which pulls the frames) was factored out and pushed down into the mentioned ~PlayProcess. For this to work, the ~PlaybackController now makes a display slot available on the public GUI DisplayFacade interface, so the ~PlayProcessImpl can push up the frames for display within the GUI
[img[Overview to the dummy player operation|draw/PlayerArch1.png]]

!when playing...
As a prerequisite, a viewer has to be prepared within the GUI. An XV video display widget is wired up to a sigc++ signal slot, using the Glib::Dispatcher to forward calls from the play process thread to the GTK main event loop thread. All of this wiring actually is encapsulated as a DisplayerSlot, created and registered with the DisplayService.

When starting playback, the display slot handle created by these preparations is used to create a ~PlayProcess on the ~DummyPlayer interface. Actually
* a ~PlayProcessImpl object is created down within the player implementation
* this uses the provided slot handle to actually //allocate// the display slot via the Display facade. Here, //allocating// means registering and preparing it for output by //one single// ~PlayProcess. For the latter, this allocation yields an actually opened display handle.
* moreover, the ~PlayProcessImpl acquires a TickService instance, which initially stays throttled (not invoking the periodic callback)
* probably, a real player at this point would initiate a rendering process, so it can fetch the actual output frames periodically.
* on the "upper" side of the ~DummyPlayer facade, a lib::Handle object is created to track and manage this ~PlayProcessImpl instance
The mentioned handle is returned to the ~PlaybackController within the GUI, which uses this handle for all further interactions with the Player. The handle is ref counting and has value semantics, so it can be stored away, passed as parameter and so on. All such handles corresponding to one ~PlayProcess form a family; when the last goes out of scope, the ~PlayProcess terminates and deallocates any resources. Conceptually, this corresponds to pushing the "stop" button. Handles can be deliberately disconnected by calling {{{handle.close()}}} &mdash; this has the same effect as deleting a handle (when all are closed or deleted the process ends).

All the other play control operations are simply forwarded via the handle and the ~PlayProcessImpl. For example, "pause" corresponds to setting the tick frequency to 0 (thus temporarily discontinuing the tick callbacks). When allocating the display slot in the course of creating the ~PlayProcessImpl, the latter only accesses the Display facade. It can't access the display or viewer directly, because the GUI lives within a plugin; lower layers aren't allowed to call GUI implementation functions directly. Thus, within the Display facade a functor (proxy) is created to represent the output sink. This (proxy) Displayer can be used within the implementation of the periodic callback function. As usual, the implementation of the (proxy) Displayer can be inlined and doesn't create runtime overhead. Thus, each frame output call has to pass through two indirections: the function pointer in the Display facade interface, and the Glib::Dispatcher.

!rationale
There can be multiple viewer widgets, to be connected dynamically to multiple play-controllers. (the latter are associated with the timeline(s)). Any playback can require multiple playback processes to work in parallel. The playback controller(s) should not be concerned with managing the play processes, which in turn should neither care for the actual rendering, nor manage the display frequency and synchronisation issues. Moreover, the mentioned parts live in different layers and especially the GUI needs to remain separated from the core. And finally, in case of a problem within one play process, it should be able to unwind automatically, without interfering with other ongoing play processes.
Playlist is a sequence of individual Render Engine Processors able to render a segment of the timeline. So, together these Processors are able to render the whole timeline (or part of the timeline if only a part has to be rendered).

//Note, we have yet to specify how exactly the building and rendering will work together with the backend. There are several possibilities how to structure the Playlist//
Open issues, Things to be worked out, Problems still to be solved... 

!!Parameter Handling
The requirements are not quite clear; obviously Parameters are the foundation for getting automation right and for providing effect editing interfaces, so it seems to me we need some sort of introspection, i.e. Parameters need to be discovered, enumerated and described at runtime. (&rarr; see [[tag:automation|automation]])

''Automation Type'': Directly connected is the problem of handling the //type// of parameters sensibly, including the value type of automation data. My first (somewhat naive) approach was to "make everything a double". But this soon leads to many of the same problems haunting the automation solution implemented in the current Cinelerra codebase. What makes the issue difficult is the fact that we need both static diversity and dynamic flexibility. Usually, when combining hierarchies and templates, one has to be very careful; so I just note the problem down for the moment and will revisit it later, when I have a clearer understanding of the demands put onto the [[ProcNode]]s

!!Treatment of Time (points) and Intervals
At the moment we have no clear picture what is needed and what problems we may face in that domain.
From experience, mainly with other applications, we can draw the following conclusions
* drift and rounding errors are dangerous, because time in our context usually is understood as a fixed grid (Frames, samples...)
* fine grained time values easily get very large
* Cinelerra currently uses the approach of simply counting natural values for each media type separately. In an environment mixing several different media types freely, this seems a bit too simplistic (because it actually brings in the danger of rounding errors, just think of drop-frame TC)

!!Organizing of Output Channels
How to handle the simultaneous rendering of several output streams (video, audio channels). Shall we treat the session as one entity containing different output channels, or should it rather be seen as a composite of several sub-sessions, each for only one output channel? This decision will be reflected in the overall structure of the network of render nodes: We could have a list of channel-output generating pipelines in each processor (for every segment), or we could have independently segmented lists of Processors for every output channel/type. The problem is, //it is not clear which approach to prefer at the moment,// because we are just guessing.

!!Tracks, Channels, Layers
Closely related to this is the not-so-obvious problem of how to understand the common global structures found in most audio and video editing applications. Mostly, they stem from imitating hardware recording and editing solutions, thus easing the transition for professionals who grew up with analogue, hardware-based media. But as digital media are the de-facto standard nowadays, we could rethink some of this accidental complexity introduced by sticking to the hardware tool metaphor.
* is it really necessary to have fixed global tracks?
* is it really helpful to feed "source tracks" into global processing busses/channels?
Users accustomed to modern GUI applications typically expect that //everything is an object//  and can be pulled around and manipulated individually. This seems natural at first, but raises the problem of providing an efficient workflow for handling larger projects and editing tasks. So, if we don't have a hard-wired multitrack+bus architecture, we need some sort of templating to get the standard editing use case done efficiently.

!!Compound and Multiplicity
Simple relations can be hard wired. But, on the contrary, it would be as naive to define a Clip as having one video track and two audio tracks, as it would be naive to overlook the problem of holding video and corresponding audio together. And, moreover, the default case has to be processed in a //straightforward// fashion, with as few tests and decisions as possible. So, basically each component participating in getting the core processing done has to mirror the structure pattern of the other parts, so that processing can be done without testing and forking. But this leaves us with the problem of where to put the initial knowledge about the structural pattern used for building up the compound structures and &mdash; especially &mdash; the problem of how to treat different kinds of structural patterns, how to detect the pattern to be applied and how to treat multiple instances of the same structural pattern.

One example of this problem is the [[handling of multichannel media|MultichannelMedia]]. Following the above reasoning, we end up with a [["structural processing pattern"|ProcPatt]], typically one video stream with an MPEG decoder and a pair of audio streams which either need to be routed to some "left" and "right" output pipes, or have to be passed through a panning filter accordingly. Now the problem is: //create a new instance of this structure for each new media, or detect which media to subsume under an existing pattern instance?//

!!Parallelism
We need to work out guidelines for dealing with operations going on simultaneously. Certainly, this will divide the application into several different regions. As always, the primary goal is to avoid multithreading problems altogether. Typically, this can be achieved by making matters explicit: externalise state, make the processing subsystems stateless, queue and schedule tasks, use isolation layers.
* the StateProxy is a key to the individual render process's state, which is managed in separate [[StateFrame]]s in the backend. The [[processing network|ProcNode]] is stateless.
* the [[Fixture]] provides an isolation layer between the renderengine and the Session / high-level model
* all EditingOperations are intentionally not threadsafe, because they are [[scheduled|ProcLayerScheduler]]
The current Lumiera architecture separates functionality into three Layers: __GUI__, __Proc__ and __Backend__.

While the Backend is responsible for data access and management and for carrying out the computation-intensive media operations, the middle layer or ~Proc-Layer contains [[assets|Asset]] and [[Session]], i.e. the user-visible data model, and provides configuration and behaviour for these entities. Besides, it is responsible for [[building and configuring|Builder]] the [[render engine|RenderEngine]] based on the current Session state.
All Assets of kind asset::Proc represent //processing algorithms// in the bookkeeping view. They enable loading, browsing and maybe even parametrizing all the Effects, Plugins and Codecs available for use within the Lumiera Session.

Besides, they provide an __inward interface__ for the [[ProcNode]]s, enabling them to dispatch the actual processing call while rendering. Actually, this interface is always accessed via an ~Effect-MObject; mostly it is investigated and queried in the build process when creating the corresponding processor nodes. &rarr; see EffectHandling for details

{{red{todo: the naming scheme??}}}

[img[Asset Classess|uml/fig131077.png]]
{{red{Note 3/2010}}} it is very unlikely we'll organise the processing nodes as a class hierarchy. Rather it looks like we'll get several submodules/special capabilities configured in by the Builder
The middle layer of our current architecture plan: the layer managing all processing and manipulation, while the actual data handling is done in the Backend and the user interaction belongs to the GUI layer.

&rarr; see the [[Overview]]

The Render Engine is the part of the application doing the actual video calculations. Relying on system level services and retrieving raw audio and video data through [[Lumiera's Backend|Backend]], its operations are guided by the objects and parameters edited by the user in [[the session|Session]]. The middle layer of the Lumiera architecture, known as the Proc-Layer, spans the area between these two extremes, providing the (abstract) edit operations available to the user, the representation of [["editable things"|MObjects]] and the translation of those into structures and facilities allowing to [[drive the rendering|Rendering]].

!About this wiki page
|background-color:#E3F3F1;width:96ex;padding:2ex; This TiddlyWiki is the central location for design, planning and documentation of the Lumiera Proc-Layer. Some parts are used as //extended brain// &mdash; collecting ideas, considerations and conclusions &mdash; while other tiddlers contain the decisions and document the planned or implemented facilities. The intention is to move over the more mature parts into the emerging technical documentation section on the [[Lumiera website|http://www.lumiera.org]] eventually. <br/><br/>Besides cross-references, content is largely organised through [[Tags|TabTags]], most notably <br/><<tag overview>> &middot; <<tag def>> &middot; <<tag decision>> &middot; <<tag Concepts>> <br/> <<tag Model>> &middot; <<tag SessionLogic>> &middot; <<tag GuiIntegration>> &middot; <<tag Builder>> &middot; <<tag Rendering>> &middot; <<tag Player>> &middot; <<tag Rules>> &middot; <<tag Types>> |

!~Proc-Layer Summary
When editing, the user operates several kinds of //things,// organized as [[assets|Asset]] in the AssetManager, like media, clips, effects, codecs, configuration templates. Within the context of the [[Project or Session|Session]], we can use these as &raquo;[[Media Objects|MObjects]]&laquo; &mdash; especially, we can [[place|Placement]] them in various ways within the session and relative to one another.

Now, from any given configuration within the session, we create sort of a frozen and tied-down snapshot, here called &raquo;[[Fixture|Fixture]]&laquo;, containing all currently active ~MObjects, broken down to elementary parts and made explicit where necessary. This Fixture acts as an isolation layer towards the Render Engine. We will hand it over to the [[Builder]], which in turn will transform it into a network of connected [[render nodes|ProcNode]]. This network //implements//&nbsp; the [[Render Engine|OverviewRenderEngine]].

The system is ''open'' inasmuch as every part mirrors the structure of corresponding parts in adjacent subsystems, and the transformation of any given structure from one subsystem (e.g. Asset) to another (e.g. Render Engine) is done with minimal "magic". So the whole system should be able to handle completely new structures mostly by adding new configurations and components, without much need to rewrite basic workings.


!!see also
&rarr; [[Overview]] of Subsystems and Components, and DesignGoals
&rarr; [[An Introduction|WalkThrough]] discussing the central points of this design
&rarr; [[Overview Session (high level model)|SessionOverview]]
&rarr; [[Overview Render Engine (low level model)|OverviewRenderEngine]]
&rarr; BuildProcess and RenderProcess
&rarr; [[Two Examples|Examples]] (Object diagrams) 
&rarr; how [[Automation]] works
&rarr; [[Problems|ProblemsTodo]] to be solved and notable [[design decisions|DesignDecisions]]
&rarr; [[Concepts, Abstractions and Formalities|Concepts]]
&rarr; [[Implementation Details|ImplementationDetails]] {{red{WIP}}}
A data processing node within the Render Engine. Its key feature is the possibility to pull from it one (freely addressable) [[Frame]] of calculated data. Further, each ~ProcNode can be wired up with other nodes and with [[Parameter Providers|ParamProvider]]

!! {{red{open questions}}}
* how to address a node
* how to type them
* how to discover the number and type of the ports
* how to discover the possible parameter ports
* how to define and query for additional capabilities

&rarr; see also the [[open design process draft|http://www.pipapo.org/pipawiki/Lumiera/DesignProcess/DesignRenderNodesInterface]]
&rarr; see [[mem management|ManagementRenderNodes]]
&rarr; see RenderProcess
This special type of [[structural Asset|StructAsset]] represents information on how to build some part of the render engine's processing nodes network. Processing patterns can be thought of as a blueprint or micro program for construction. Most notably, they are used for creating nodes reading, decoding and delivering source media material to the render network, and they are used for building the output connection via faders, summation or overlay nodes to the global pipes (busses). Each [[media Asset|MediaAsset]] has associated processing patterns describing the codecs and other transformations needed to get at the media data of this asset. (and because media assets are typically compound objects, the referred ~ProcPatt will be compound too). Similarly, for each stream kind, we can retrieve a processing pattern for making output connections. Obviously, the possibilities opened by using processing patterns go far beyond.

Technically, a processing pattern is a list of building instructions, which will be //executed// by the [[Builder]] on the render node network under construction. This implies the possibility to define further instruction kinds when needed in future; at the moment the relevant sorts of instructions are
* attach the given sequence of nodes to the specified point
* recursively execute a nested ~ProcPatt
More specifically, a sequence of nodes is given by a sequence of prototypical effect and codec assets (and from each of them we can create the corresponding render node). And the point to attach these nodes is given by an identifier &mdash; in most cases just "{{{current}}}", denoting the point the builder is currently working at, when treating some placed ~MObject which in turn yielded this processing pattern in question.
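To illustrate the »micro program« idea, here is a strongly simplified, self-contained sketch. All names and shapes ({{{BuildInstruction}}}, {{{Mould}}}, {{{execute}}}) are assumptions made for this example only and do not reflect the actual asset::ProcPatt or Builder interfaces.
{{{
// Strongly simplified sketch of a processing pattern as a "micro program"
// of build instructions; all names are assumptions for this illustration.
#include <string>
#include <vector>
#include <iostream>

struct Mould                                // passive holder, armed up by the Builder
  {
    void attach (std::string const& node, std::string const& locationID)
      {
        std::cout << "attach " << node << " at '" << locationID << "'\n";
      }
  };

struct ProcPatt;

struct BuildInstruction
  {
    enum Kind { ATTACH, RECURSE } kind;
    std::vector<std::string> nodes;         // prototypical effect/codec assets (ATTACH)
    std::string locationID;                 // attachment point, usually "current"
    const ProcPatt* nested = nullptr;       // nested pattern (RECURSE)
  };

struct ProcPatt                             // a list of building instructions
  {
    std::vector<BuildInstruction> instructions;

    void execute (Mould& mould) const
      {
        for (auto const& i : instructions)
          {
            if (i.kind == BuildInstruction::ATTACH)
                for (auto const& n : i.nodes)
                    mould.attach (n, i.locationID);
            else if (i.nested)
                i.nested->execute (mould);  // recursively execute a nested pattern
          }
      }
  };

int main()
  {
    ProcPatt decodeMpeg;                    // pattern for reading MPEG source media
    decodeMpeg.instructions.push_back ({BuildInstruction::ATTACH, {"demux", "mpeg-decoder"}, "current"});

    ProcPatt sourceClip;                    // pattern for a clip: decode, then fade
    sourceClip.instructions.push_back ({BuildInstruction::RECURSE, {}, "", &decodeMpeg});
    sourceClip.instructions.push_back ({BuildInstruction::ATTACH, {"fade"}, "current"});

    Mould mould;
    sourceClip.execute (mould);
  }
}}}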

Like all [[structural assets|StructAsset]], ~ProcPatt employs a special naming scheme within the asset name field, which directly mirrors its purpose and allows to bind to existing processing pattern instances when needed. {{red{TODO: that's just the general idea, but really it will rather use some sort of tags. Yet undefined as of 5/08}}} The idea is letting all assets in need of a similar processing pattern refer to one shared ~ProcPatt instance. For example, within an MPEG video media asset, at some point there will be a ~ProcPatt labeled "{{{stream(mpeg)}}}". In consequence, all MPEG video will use the same pattern of node wiring. And, of course, this pattern could be changed, either globally, or by binding a single clip to some other processing pattern (to make a punctual exception from the general rule)

!!defining Processing Patterns
The basic working set of processing patterns can be expected to be just there (hard wired or default configuration, with a mechanism for creating a sensible fallback). Besides, the idea is that new processing patterns can be added via rules in the session and then referred to by other rules controlling the build process. Any processing pattern is assembled by adding individual build instructions, or by including another (nested) processing pattern.

!!retrieving a suitable Processing Pattern
For a given situation, the necessary ProcPatt can be retrieved by issuing a [[configuration query|ConfigQuery]]. This query should include the needed capabilities in predicate form (technically this query is a Prolog goal), but it can leave out some pieces of information by just requesting "the default" &rarr; see DefaultsManagement

!!how does this actually work?
Any processing pattern needs the help of a passive holder tool suited for a specific [[building situation|BuilderPrimitives]]; we call these holder tools [[building moulds|BuilderMould]]. Depending on the situation, the mould has been armed up by the builder with the involved objects to be connected and extended. So, just by issuing the //location ID// defined within the individual build instruction (in most cases simply {{{"current"}}}), the processing pattern can retrieve the actual render object to use for building from the mould it is executed in.

!!errors and misconfiguration
Viewed as a micro program, the processing patterns are ''weakly typed'' &mdash; thus providing the necessary flexibility within an otherwise strongly typed system. Consequently, the builder assumes they are configured //the right way// &mdash; and will just bail out when this isn't the case, marking the related part of the high-level model as erroneous.
&rarr; see BuilderErrorHandling for details

A given Render Engine configuration is a list of Processors. Each Processor in turn contains a graph of [[ProcNode]]s to do the actual data processing. In order to carry out any calculations, the Processor needs to be called with a StateProxy containing the state information for this RenderProcess
The Quantiser implementation works by determining the grid interval containing a given raw time.
These grid intervals are denoted by ordinal numbers (frame numbers), with interval #0 starting at the grid's origin and negative ordinals allowed.

!frame quantisation convention
Within Lumiera, there is a fixed convention how these frame intervals are to be defined (&rArr; [[time handling RfC|http://lumiera.org/documentation/devel/rfc/TimeHandling.html]])
[img[Lumiera's frame quantisation convention|draw/FramePositions1.png]]
Especially, this convention is agnostic of the actual zero-point of the scale and allows direct length calculations and seamless sequences of intervals.
The //nominal coordinate// of an interval is also its starting point -- for automation key frames we'll utilise special provisions.
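The following small sketch (not the actual Quantiser code) shows what this convention boils down to computationally: a floor-division of the raw time offset by the frame duration, so that interval #0 starts at the origin, every interval owns its starting point, and negative ordinals fall below the origin.
{{{
// Sketch of the frame quantisation convention (not the actual Quantiser code):
// raw time maps to the ordinal of the grid interval containing it, interval #0
// starts at the origin, and negative ordinals lie below the origin.
#include <cstdint>
#include <cassert>

int64_t
quantiseToFrame (int64_t rawTime, int64_t origin, int64_t frameDuration)
  {
    int64_t offset = rawTime - origin;
    int64_t frame  = offset / frameDuration;      // integer division truncates towards zero...
    if (offset < 0 && offset % frameDuration != 0)
        --frame;                                  // ...so adjust to floor for negative offsets
    return frame;     // interval #frame covers [origin + frame*dur, origin + (frame+1)*dur)
  }

int main()
  {
    const int64_t dur = 40;                       // e.g. 40 time units per frame (arbitrary)
    assert (quantiseToFrame ( 0, 0, dur) ==  0);  // the origin itself belongs to interval #0
    assert (quantiseToFrame (39, 0, dur) ==  0);
    assert (quantiseToFrame (40, 0, dur) ==  1);  // the nominal coordinate of #1 is its starting point
    assert (quantiseToFrame (-1, 0, dur) == -1);  // negative ordinals below the origin
  }
}}}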

!range limitation problems
Because times are represented as 64bit integers, the time points addressable within a given scale grid can be limited, compared with the time points addressable through raw (internal) time values. As an extreme example, consider a time scale with origin at {{{Time::MAX}}} -- such a scale is unable to represent any of the original scale's values above zero, because the resulting coordinates would exceed the range of the 64bit integer. Did I mention that 64bit micro ticks can represent about 300000 years?

Now the actual problem is that using 64bit integers already means pushing to the limit. There is no easy escape hatch, like using a larger scale data type for intermediaries -- it //is// the largest built-in type. Basically we're touching the topic of ''safe integer arithmetics'' here, which is frequently discussed as a security concern. The situation is as follows:
* every programmer promises "I'll do the checks when necessary" -- only to forget doing so in practice.
* for C programming, the situation is hopeless. Calling functions for simple arithmetics is outright impractical -- that won't happen voluntarily
* it is possible to build a ~SafeInt datatype in C++ though. While theoretically fine, in practice this also creates a host of problems:
** actually detecting all cases and coding the checks is surprisingly hard and intricate.
** the attempt to create a smooth integration with the built-in data types drives us right into one of the most problematic areas of C++
** the performance hit is considerable (factor 2 - 4)  -- again luring people into "being clever".

There is an existing [[SafeInt class by David LeBlanc|http://safeint.codeplex.com/]], provided by Microsoft through the ~CodePlex platform. It is part of the libraries shipped with ~VisualStudio and extensively used in Office 2010 and Windows, but also provided under a somewhat liberal license (you may use it but any derived work has to reproduce copyright and usage terms). Likely this situation also hindered the further [[development|http://thread.gmane.org/gmane.comp.lib.boost.devel/191010]] of a comparable library in boost (&rarr; [[vault|https://svn.boost.org/trac/boost/wiki/LibrariesUnderConstruction#Boost.SafeInt]]).

!!!solution possibilities
;abort operation
:an incriminating calculation needs to be detected somehow (see below).
:the whole usage context gets aborted by exception in case of an alarm, similar to an out-of-memory condition...
:thus immediate corruption is avoided, but the user has to realise and avoid the general situation.
;limit values
:provide a limiter to kick in after detecting an alarm (see below).
:time values will just stick to the maximum/minimum boundary value....
:the rest of the application needs to be prepared for timing calculations to return degenerate results
;~SafeInt
:base ~TimeValue on a ~SafeInt<gavl_time_t>
:this way we could easily guarantee to detect any //problematic situation.//
:of course the price is the performance hit on all timing calculations.
:Moreover -- if we want to limit values instead of raising an exception, we'd need to write our own ~SafeInt.
;check some operations
:time values are mostly immutable, thus it would likely be sufficient only to check some strategically important points
:#parsing or de-serialising
:#calculations in ~TimeVar
:#quantisation
:#build Time from framecount
;limit time range
:because the available time range is so huge, it wouldn't hurt to reduce it by one or two orders of magnitude
:this way both the implementation of the checks could be simplified and the probability of an overflow reduced
;ignore the problem
:again, because of the huge time range, the problem could generally be deemed irrelevant
:we could apply a limiter to enforce a reduced time range just in the quantisation and evaluation of timecodes

A combination of the last approaches seems to be most appropriate here:
Limit the officially allowed time range and perform simple checks when quantising a time value and when applying a scale factor.
We limit {{{Time::MAX}}} by a factor of 1/30 and define the minimum //symmetrically to this.// This still leaves us with an ''allowed time range of ±9700 years''.
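A sketch of this combined approach, using hypothetical names and plain {{{int64_t}}} instead of the actual ~TimeValue types: the officially allowed range is a symmetric fraction of the technically representable range, and values are clamped at the few strategic points (like quantisation) where an overflow could otherwise occur.
{{{
// Sketch of the chosen approach (hypothetical names, plain int64_t instead of
// the actual TimeValue types): a reduced official time range, plus a limiter
// applied at the few strategic points where an overflow could occur.
#include <cstdint>
#include <limits>
#include <algorithm>

const int64_t TIME_MAX = std::numeric_limits<int64_t>::max() / 30;  // reduced allowed range
const int64_t TIME_MIN = -TIME_MAX;                                 // defined symmetrically

int64_t
limited (int64_t rawTime)                   // stick to the boundary instead of overflowing
  {
    return std::max (TIME_MIN, std::min (TIME_MAX, rawTime));
  }

int64_t
quantise (int64_t rawTime, int64_t origin, int64_t frameDuration)
  {
    int64_t offset = limited(rawTime) - limited(origin);  // both operands within ±TIME_MAX,
    return offset / frameDuration;                         // so the subtraction cannot overflow
  }                                                        // (floor handling omitted, see the sketch above)

int main()
  {
    return quantise (TIME_MAX, 0, 40) > 0 ? 0 : 1;         // stays in range instead of overflowing
  }
}}}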
{{red{WIP as of 10/09}}}...//brainstorming about the first ideas towards a query subsystem//

!use case: discovering the contents of a container in the HighLevelModel
In the course of shaping the session API, __joel__ and __ichthyo__ realised that we're moving towards some sort of discovery or introspection. This gives rise to the quest for a //generic// pattern for how to issue and run these discovery operations. The idea is to understand such a discovery as running a query &mdash; using this specific problem to shape the foundation of a query subsystem to come.
* a ''query'' is a polymorphic, noncopyable, non-singleton type; a query instance corresponds to one distinctly issued query
* issuing a query yields a result set, which is hidden within the concrete query implementation.
* the transactional behaviour needs still to be defined: how to deal with concurrent modifications? COW?
* the query instance remains property of the entity exposing the query capability.
* client code gets a result iterator, which can be explored //only once until exhaustion.//
* the handed out result iterator is used to manage the allocation of the query result set by side-effect (smart handle); see the sketch following this list. &rarr; Ticket #353
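The following minimal sketch illustrates this intended usage pattern with hypothetical names (not the actual {{{Query<TY>}}} / QueryResolver interfaces): the client never sees the result set directly, only an iterator which keeps it alive by side-effect and can be drained once until exhaustion.
{{{
// Minimal sketch of the intended usage pattern (hypothetical names, not the
// actual Query / QueryResolver interfaces): the client only ever sees an
// iterator, which keeps the hidden result set alive and can be drained once.
#include <memory>
#include <vector>
#include <string>
#include <iostream>

struct ResultSet                            // hidden within the concrete query implementation
  {
    std::vector<std::string> hits;
  };

class ResultIterator                        // smart handle: owns the result set by side-effect
  {
    std::shared_ptr<ResultSet> results_;
    size_t pos_ = 0;
  public:
    explicit ResultIterator (std::shared_ptr<ResultSet> r) : results_{std::move(r)} { }
    bool exhausted() const { return pos_ >= results_->hits.size(); }
    std::string next()     { return results_->hits[pos_++]; }    // each result retrieved at most once
  };

class Query                                 // polymorphic, noncopyable, non-singleton
  {
  public:
    Query() = default;
    Query (Query const&) = delete;
    virtual ~Query() = default;
    virtual ResultIterator resolve() const = 0;
  };

class FindTracksQuery : public Query        // one concrete query kind, owned by the issuing facility
  {
  public:
    ResultIterator resolve() const override
      {
        auto results = std::make_shared<ResultSet>();
        results->hits = {"track-1", "track-2"};
        return ResultIterator{results};
      }
  };

int main()
  {
    FindTracksQuery query;
    ResultIterator it = query.resolve();    // the query could be discarded now; the iterator keeps the results
    while (!it.exhausted())
        std::cout << it.next() << "\n";
  }
}}}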

For decoupling the query invocation from the facility actually processing the query, we need to come up with a common pattern. In 10/09, there is an immediate demand for such a solution pattern for implementing the QueryFocus and PlacementScope framework, which is crucial for contents discovery in general on the session interface. &rarr; QueryResolver was shaped to deal with this situation, but has the potential to evolve into a general solution for issuing queries.

!use case: retrieving object to fulfil a condition
The requirement is to retrieve one (or multiple) objects of a specific kind, located within a scope, and fulfilling some additional condition. For example: find a sub-track with {{{id(blubb)}}}.
This is a special case of the general discovery (described in the previous use case), but it is also a common situation in a general ConfigQuery (&rArr; an object of a specific type and with additional capabilities...). The tricky question seems to be how to specify and resolve these additional conditions or capabilities.
* in a generic query handled by a resolution engine, we might represent these capabilities as a nested //goal.//
* if we approach the problem as a filter pipeline, then the condition becomes a functor (or closure). &rarr; QueryResolver
On second consideration, these two approaches don't contradict each other, because they live in different contexts and levels of abstraction. Performance-wise, both are bad and degenerate on large models, because both effectively cause a full table scan. Only specialised search functions for hardcoded individual properties could improve that situation, by backing them with an additional sub-index.
__Conclusion__: no objection against providing the functor/filter solution right now, even on the QueryFocus API &mdash;
notwithstanding the fact we need a better solution later.

----
See also the notes on
&nbsp; &rarr; QueryImplProlog
&nbsp; &rarr; QueryRegistration
While there are various //specialised queries// to be issued and resolved efficiently, as a common denominator we use a common ''syntactic representation'' of queries based on predicate logic. This allows for standardised processing when applicable, since a //generic query// can be used as a substitute for any given specialised kind of query. Moreover, using the predicate logic format as common definition format allows for programmatic extension, reshaping, combining and generic meta processing of queries within the system.

!the textual syntactic form
{{red{WIP 12/2012}}} for the moment we don't use any kind of  "real" resolution engine, thus the syntactic representation is kind of a design draft.
The plan is to use a syntax which can be fed directly to a Prolog interpreter or similar existing rules based systems.

!the necessity of using an internal representation
Since the intention is to use queries pervasively as a means of orchestrating the interplay of the primary controlling facilities, the internal representation of this syntactic exchange format might become a performance bottleneck. The typical usage pattern is just to create and issue a query to retrieve some result value -- thus, in practice, we rarely need the syntactic representation, while being bound to create this representation for sake of consistency.

!textual vs parsed-AST representation
For the initial version of the implementation, just storing a string in Prolog syntax is enough to get us going. But on the long run, for the real system, a pre-parsed representation in the style of an __A__bstract __S__yntax __T__ree seems more appropriate: through the use of some kind of symbol table, actual queries can be tagged with the common syntactic representation just by attaching some symbol numbers, without the overhead of creating, attaching and parsing a string representation in most cases
There is a common denominator amongst all queries: they describe a pattern of relations, which can be //satisfied// by providing a //solution.// But queries can be used in a wide variety of situations, each binding them to a more specific meaning and each opening the possibility to use a specialised resolution mechanism. For example, a query //might// actually mean to fetch and filter the contents of some sub-section of the session model. Or it might cause the fabrication and registration of some new model content object. In order to tie those disjoint kinds of queries together, we need a mechanism to exchange one kind of query with another one, or to derive one from another one. This ''exchange of queries'' actually is a way of ''remoulding'' and rebuilding a query -- which is based on a [[common syntactic representation|QueryDefinition]] of all queries, cast into terms of predicate logic. Besides, there is a common, unspecific, generic resolution mechanism ({{red{planned 11/12}}}), which can delegate to more specialised resolvers when applicable.

!handling of queries
Queries are always exposed or revealed at some point or facility allowing to pose queries. This facility remains the owner of the query instances and knows their concrete flavour (type). But queries can be referred to and exchanged through the {{{Query<TY>}}}-abstraction.
When querying contents of the session or sub-containers within the session, the QueryFocus follows the current point-of-query. As such queries can be issued to explore the content of container-like objects holding other MObjects, the focus is always attached to a container, which also acts as [[scope|PlacementScope]] for the contained objects. QueryFocus is an implicit state (the current point of interest). This state especially remembers the path down from the root of the HighLevelModel, which was used to access the current scope. Because this path constitutes a hierarchy of scopes, it can be relevant for querying and resolving placement properties. (&rarr; SessionStructureQuery)

!provided operations
* attach to a given scope-like object. Causes the current focus to //navigate//
* open a new focus, thereby pushing the existing focus onto a [[focus stack|QueryFocusStack]]
* return (pop) to the previous focus
* get the current scope, represented by the "top" Placement of this scope
* get the current ScopePath from root (session globals) down to the current scope
* (typed) content discovery query on the current scope
[>img[Scope Locating|uml/fig136325.png]]
&rarr; [[more|SessionContentsQuery]] regarding generic scope queries
!!!relation to Scope
There is a tight integration with PlacementScope through the ScopeLocator, which establishes the //current focus.// But while the [[scope|PlacementScope]] just decorates the placement defining a scope (called //&raquo;scope top&laquo;//), QueryFocus is more of a //binding// &mdash; it links or focusses the current state into a specific scope with a ScopePath in turn depending on this current state. Thus, while Scope is just a passive container allowing to locate and navigate, QueryFocus by virtue of this binding allows to [[Query]] at this current location.

!implementation notes
We provide a static access API, meaning that there is a singleton (the ScopeLocator) behind the scenes, which holds the mentioned scope stack. The current focus stack top, i.e. the current ScopePath, is managed through a ref-counting handle embedded into each QueryFocus instance. Thus, effectively QueryFocus is a frontend object for accessing this state. Moreover, embedded into ScopeLocator, there is a link to the current session. But this link is kept opaque; it works by the current session exposing a [[query service|QueryResolver]], while QueryFocus doesn't rely on knowledge about the session, allowing the focus to be unit tested.

The stack of scopes must not be confused with the ScopePath. Each single frame on the stack can be seen and accessed as a QueryFocus and as such relates to a current ScopePath. The purpose of the stack is to make the scope handling mostly transparent; especially this stack allows to write dedicated query functions directed at a given object: they work by pushing and then navigating to the object to use as starting point for the query, i.e. the //current scope.//

!!!simplifications
The full implementation of this scope navigation is tricky, especially when it comes to determining the relation of two positions. It should be ''postponed'' and replaced by a ''dummy'' (no-op) implementation for the first integration round.
The ScopeLocator uses a special stack of ScopePath &raquo;frames&laquo; to maintain the //current focus.//
What is the ''current'' QueryFocus and why is it necessary? There is a state-dependent part involved, inasmuch as the effective ScopePath depends on how the invoking client has navigated the //current location// down into the HighLevelModel structures. Especially when a VirtualClip is involved, there can be discrepancies between the resulting paths, depending on the route taken while descending. (See &rarr; BindingScopeProblem).

Thus, doing something with the current location, and especially descending or querying adjacent scopes, can modify this current path state. We therefore need a means of invoking a query in a way that does not interfere with the current path state; otherwise we wouldn't be able to provide side-effect free query operations accessible on individual objects within the model.

!maintaining the current QueryFocus
As long as client code just wants to use the current query location, we can provide a handle referring to it. But when a query needs to be run without side effect on the current location, we //push//&nbsp; it aside and start using a new QueryFocus on top, which starts out at a new initial location. Client code again gets a handle (smart-ptr) to this location, and additionally may access the new //current location.// When all references are out of scope and gone, we'll drop back to the focus put aside previously.

!implementation of ref-counting and clean-up
Actually, client code should use QueryFocus instances as frontend to access this &raquo;current focus&laquo;. Each ~QueryFocus instance incorporates a smart-ptr. But as in this case we're not managing objects allocated somewhere, we use a {{{boost::intrusive_ptr}}} and maintain the ref-count immediately within the target objects to be managed. These target objects are ScopePath instances and are living within the QueryFocusStack, which in turn is managed by the ScopeLocator singleton (see the UML diagram &rarr;[[here|QueryFocus]]). We use a (hand-written) stack implementation to ensure the memory locations of these ScopePath &raquo;frames&laquo; remain valid (and also to help with providing strong exception guarantees). The stack is aware of this ref-count and takes it into account when performing the {{{pop_unused()}}} operation: any unused frame on top will be evicted, stopping at the first frame still in use (which may be even just the old top). This cleanup also happens automatically when accessing the current top, re-initialising a potentially empty stack with a default-constructed new frame if necessary. This way, just accessing the stack top always yields the ''current focus location'', which thereby is //defined as the most recently used focus location still referred.//
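A simplified, self-contained sketch of this ref-counting scheme (the frame type and stack are stand-ins, not the actual ScopePath / QueryFocusStack classes; notably, a {{{std::deque}}} is used here for brevity, whereas the real code relies on a hand-written stack):
{{{
// Simplified sketch of the ref-counting scheme (hypothetical frame type; a
// std::deque stands in for the hand-written stack of the real implementation).
#include <boost/intrusive_ptr.hpp>
#include <deque>
#include <cassert>

struct PathFrame                            // stand-in for a ScopePath »frame«
  {
    long refCount = 0;                      // ref-count maintained within the target object
  };

inline void intrusive_ptr_add_ref (PathFrame* f) { ++f->refCount; }
inline void intrusive_ptr_release (PathFrame* f) { --f->refCount; }  // frames are owned by the stack, never deleted here

class FocusStack
  {
    std::deque<PathFrame> frames_;
  public:
    boost::intrusive_ptr<PathFrame>
    push()                                  // open a new focus location on top
      {
        frames_.emplace_back();
        return boost::intrusive_ptr<PathFrame>{&frames_.back()};
      }

    void
    pop_unused()                            // evict unused top frames, stop at the first one still referred
      {
        while (!frames_.empty() && frames_.back().refCount == 0)
            frames_.pop_back();
      }

    size_t size() const { return frames_.size(); }
  };

int main()
  {
    FocusStack stack;
    auto f1 = stack.push();
    {
      auto f2 = stack.push();               // push a new focus; f1 remains below
      assert (stack.size() == 2);
    }                                       // f2 handle gone --> top frame now unused
    stack.pop_unused();
    assert (stack.size() == 1);             // current focus == most recently used location still referred
  }
}}}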

!concurrency
This concept deliberately ignores parallelism. But, as the current path state is already encapsulated (and ref-counting is in place), the only central access point is reaching the current scope. Instead of using a plain flat singleton here, this access can easily be routed through thread-local storage.
{{red{As of 10/09 it is not clear if there will be any concurrent access to this discovery API}}} &mdash; but it seems not unlikely to happen...
//obviously, getting this one to work requires quite a lot of technical details to be planned and implemented.// This said...
The intention is to get a much more readable ("declarative") and changeable configuration than by programming the decision logic literally within the implementation of some object.

!Draft
As an example, specifying how a Track can be configured for connecting automatically to some "mpeg" bus (=pipe)
{{{
resolve(O, Cap) :- find(O), capabilities(Cap).
resolve(O, Cap) :- make(O), capabilities(Cap).
capabilities(Q) :- call(Q).

stream(T, mpeg) :- type(T, track), type(P, pipe), resolve(P, stream(P,mpeg)), placed_to(P, T).
}}}

Then, running the goal {{{:-resolve(T, stream(T,mpeg)).}}} would search a Track object, try to retrieve a pipe object with stream-type=mpeg and associate the track with this pipe. This relies on a predicate "stream(P,mpeg)" implemented (natively) for the pipe object. So, "Cap" is the query issued from calling code &mdash; here {{{stream(T,mpeg)}}}, the type guard {{{type(T, track)}}} will probably be handled or inserted automatically, while the predicate implementations for find/1, make/1, stream/2, and placed_to/2 are to be provided by the target types.
* __The supporting system__ has to combine several code snippets into one rule system to be used for running queries, with some global base rules, rules injected by each individual participating object kind and finally user-provided rules added by the current session. The actual query is bound to "Cap" (and consequently run as a goal by {{{call(Q)}}}). The implementation needs to provide a symbol table associating variable terms (like "T" or "P") with C/C++ object types, enabling the participating object kinds to register their specific predicate implementations. This is crucial, because there can be no general scheme of object-provided predicates (for each object kind different predicates make sense, e.g. [[pipes|PipeHandling]] have other possibilities than [[wiring requests|WiringRequest]]). Basically, a query issues a Prolog goal, which in turn evaluates domain specific predicates provided by the participating objects and thus calls back into C/C++ code. The supporting system maintains the internal connection (via the "type" predicate) such that from the Prolog viewpoint it looks as if we were binding variables directly to object instances. (there are some nasty technical details, because of the backtracking nature of Prolog evaluations, which need to be hidden away)
* Any __participating object kind__ needs a way to declare domain specific predicates, thus triggering the registration of the necessary hooks within the supporting system. Moreover, it should be able to inject further Prolog code (as shown in the example above with the {{{stream(T, mpeg)}}} predicate). For each of these new domain specific predicates, there needs to be a functor which can be invoked when the C implementation of the predicate is called from Prolog (in some cases even later, when the final solution is "executed", e.g. a new instance has been created and now some properties need to be set).

!!a note on Plugins
In the design of the Lumiera Proc Layer done thus far, we provide //no possibility to introduce a new object kind// into the system via plugin interface. The system uses a fixed collection of classes intended to cover all needs (Clip, Effect, Track, Pipe, Label, Automation, ~Macro-Clips). Thus, plugins will only be able to provide new parametrisations of existing classes. This should not be any real limitation, because the whole system is designed to achieve most of its functionality by freely combining rather basic object kinds. As a plus, it plays nicely with any plain-C based plugin interface. For example, we will have C++ adapter classes for the most common sorts of effect plugin (pull system and synchronous frame-by-frame push with buffering) with a thin C adaptation layer for the specific external plugin systems used. Everything beyond this point can be considered "configuration data" (including the actual plugin implementation to be loaded)
//Querying for some suitable element,// instead of relying on hard wired dependencies, is considered a core pattern within Lumiera.
But we certainly can't expect to subsume every query situation under one single umbrella interface, so we need some kind of ''query dispatch'' and a registration mechanism to support this indirection. This could lead to a situation where the term &raquo;query&laquo; is just a vaguely defined umbrella, holding together several disjoint subsystems. Yet, while in fact this describes the situation in the code base as of this writing, {{red{11/2012}}}, actually all kinds of queries are intended to share a //common semantic space.// So, in the end, we need a mechanism to [[exchange queries|QueryExchange]] with one another, and this mechanism ought to be based on the common //syntactic representation through terms of predicate logic.//
Common definition format &rarr; QueryDefinition
Closely related is the &rarr; TypedQueryProblem

!Plans and preliminary implementation
As of 6/10, the intention is to be able just to //pose queries eventually.// Behind the scenes, a suitable QueryResolver should then be picked to process the query and yield a resultset. Thus the {{{Goal}}} and {{{Query<TY>}}} interfaces are to become the access point to a generic dispatching service and a bundle of specialised resolution mechanisms.

But implementing this gets a bit involved, for several reasons
* we don't know the kinds of queries, their frequency and performance requirements
* we don't know the exact usage pattern with respect to memory management of the resultsets.
* we can't assess the relevance of //lock contention,// created by using a central dispatcher facility.
We might end up with a full blown subsystem, and possibly with a hierarchy of dispatchers.

But for now the decision is to proceed with isolated and specialised QueryResolver subclasses, and to pass a basically suitable resolver to the query explicitly when it comes to retrieving results. This resolver should be obtained from some system service suitable for the concrete usage situation, like e.g. the ~SessionServiceExploreScope, which exposes a resolver to query session contents through the PlacementIndex. Nonetheless, the actual dispatch mechanism is already implemented (using a ~MultiFact instance), and each concrete resolution mechanism is required to register within its ctor, by calling the inherited {{{QueryResolver::installResolutionCase(..)}}}.
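The following sketch shows the general shape of such a registration and dispatch mechanism. It is a hypothetical illustration only: the map-of-functors stands in for the ~MultiFact instance, and the names do not correspond to the actual QueryResolver API.
{{{
// Hypothetical sketch (not the actual QueryResolver API): concrete resolvers
// register a resolution function for the kind of goal they handle; issuing a
// query then dispatches through this table (a stand-in for the MultiFact).
#include <functional>
#include <map>
#include <string>
#include <vector>
#include <iostream>

using Results  = std::vector<std::string>;
using Resolver = std::function<Results(std::string const&)>;

class QueryDispatcher
  {
    std::map<std::string, Resolver> table_;
  public:
    void installResolutionCase (std::string const& kind, Resolver r) { table_[kind] = std::move(r); }

    Results
    resolve (std::string const& kind, std::string const& goal)
      {
        auto entry = table_.find (kind);
        return entry != table_.end() ? entry->second (goal) : Results{};
      }
  };

class ScopeContentsResolver                 // one specialised resolution mechanism
  {
  public:
    explicit ScopeContentsResolver (QueryDispatcher& dispatcher)
      {
        dispatcher.installResolutionCase ("scopeContents",          // registration happens in the ctor
                                          [](std::string const& goal)
                                            {
                                              return Results{"clip(" + goal + ")"};
                                            });
      }
  };

int main()
  {
    QueryDispatcher dispatcher;
    ScopeContentsResolver sessionResolver{dispatcher};
    for (auto const& hit : dispatcher.resolve ("scopeContents", "timeline-1"))
        std::cout << hit << "\n";
  }
}}}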
Within the Lumiera Proc-Layer, there is a general preference for issuing [[queries|Query]] over hard wired configuration (or even mere table based configuration). This leads to the demand of exposing a //possibility to issue queries// &mdash; without actually disclosing many details of the facility implementing this service. For example, for shaping the general session interface (in 10/09), we need a means of exposing a hook to discover HighLevelModel contents, without disclosing how the model is actually organised internally (namely by using a PlacementIndex).

!Analysis of the problem
The situation can be decomposed as follows.[>img[QueryResolver|uml/fig137733.png]]
* first off, we need a way to state //what kind of query we want to run.// This includes stipulations on the type of the expected result set contents
* as the requirement is to keep the facility actually implementing the query service hidden behind an interface, we're forced to erase specific type information and pass on an encapsulated version of the query
* providing an iterator for exploring the results poses the additional constraint of having a fairly generic iterator type, while still being able to communicate with the actual query implementation behind the interface.


!!!Difficulties
*the usage pattern is not clear &mdash; mostly it's just //planned//
*# client might create a specific {{{Query<TY>}}} and demand resolution
*# client might create just a goal, which is then translated into a specific query mechanism behind the invocation interface
*# client issues a query and expects it to just be handled by //some//&nbsp; suitable resolver
* thus it's difficult to determine //what// part of the issued query needs automatic management. More specifically, is it possible for the client to dispose of the query after issuing it, while keeping and exploring the iterator obtained as result of the query?
* and then there is the notorious problem of re-gaining the specifically typed context //behind//&nbsp; the invocation interface. Especially, the facility processing the query needs to know both the expected result type and details about the concrete query and its parametrisation. <br/>&rarr; TypedQueryProblem

!!!Entities and Operations
The //client// &nbsp;(code using query-resolver.hpp) either wants a ''goal'' or a ''query'' to be resolved; the former is just implicitly typed and usually given in predicate logic form ({{red{planned as of 11/09}}}), while the latter may be a specialised subclass templated to yield objects of a specific type as results. A ''query resolver'' is an (abstracted) entity capable of //resolving//&nbsp; such a goal. Actually, behind the scenes there is somehow a registration of the concrete resolving facilities, which are assumed to decide about their ability to handle a given goal. Issuing a goal or query yields a ''resolution'' &mdash; practically speaking, a set of individual solutions. These individual solution ''results'' can be explored by ''iteration'', thereby moving an embedded ''cursor'' through the ''result set''. Any result can be retrieved at most once &mdash; after that, the resolution is ''exhausted'' and will be released automatically when the exploration iterator goes out of scope.

!!!Decisions
* while, in the use case currently at hand, the query instance is created by the client on the stack, the possibility of managing the queries internally is deliberately kept open. Because otherwise, we would have to commit to a specific way of obtaining results, for example by assuming always to use an embedded STL iterator.
* we accept that utmost performance is less important than clean separation and extensibility. Thus we accept accessing the current position pointer through a reference, and we use a ref-counting mechanism alongside the iterator to be handed out to the client
* the result set is not tied to the query &mdash; at least not by design. The query can be discarded while further exploring the result set.
* for dealing with the TypedQueryProblem, we require the concrete resolving facilities to register with a system startup hook, to build a dispatcher table on the implementation side. This allows us to downcast to the concrete Cursor type on iteration and results retrieval.
* the intention is to employ a mix of generic processing (through a common generic [[syntactic query representation|QueryDefinition]]) and optimised processing of specialised queries relying on concrete query subtypes. The key to achieving this goal is the registration mechanism, which could evolve into a generic query dispatch system -- but right now the exact balance of these two approaches remains a matter of speculation...

What is the Role of the asset::Clip and how exactly are Assets and (Clip)-MObjects related?

First of all: ~MObjects are the dynamic/editing/manipulation view, while Assets are the static/bookkeeping/searching/information view of the same entities. Thus, the asset::Clip contains the general configuration, the ref to the media and descriptive properties, while all parameters being "manipulated" belong to the session::Clip (MObject). Besides that, the practical purpose of asset::Clip is that you can save and remember some selection as a Clip (Asset), maybe even attach some information or markup to it, and later be able to (re)create an editable representation in the Session (the GUI could implement this by allowing to drag from the asset::Clip GUI representation to the timeline window)

!!dependencies
The session::Clip (frequently called "clip-MO", i.e. the MObject) //depends on the Asset.// It can't exist without the Asset, because the Asset is needed for rendering. The other direction is different: the asset::Clip knows that there is a dependent clip-MO; there could be //at most one// such clip-MO depending on the Asset, but the Asset can exist without the clip-MO (it provides the possibility to re-create the clip-MO).

!!deletions
When the Asset or the corresponding asset::Media is deleted, the dependent clip-MO has to disappear. And the opposite direction?
* Model-1: asset::Clip has a weak ref to the clip-MO. Consequently, the clip-MO can go out of scope and disappear, so the asset::Clip has to maintain the information about the clip's dimensions (source position and length) somewhere. Because of MultichannelMedia, this is not as simple as it may look at first sight.
* Model-2: asset::Clip holds a smart ptr to the clip-MO, thus effectively keeping it alive. __obviously the better choice__
In either case, we have to solve the ''problem of clip asset proliferation''

!!multiplicity and const-ness
The link between ~MObject and Asset should be {{{const}}}, so the clip can't change the media parameters. Because of separation of concerns, it would be desirable that the Asset can't //edit// the clip either (meaning {{{const}}} in the opposite direction as well). But unfortunately the asset::Clip has the power to delete the clip-MO and, moreover, hands out a smart ptr ([[Placement]]) referring to the clip-MO, which can (and should) be used to place the clip-MO within the session and consequently to manipulate it...
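A strongly simplified sketch of »Model-2« and the const-ness direction discussed above (the class shapes and lifetime handling are assumptions for illustration, not the actual asset::Clip / session::Clip definitions):
{{{
// Strongly simplified sketch of "Model-2" (hypothetical shapes, lifetime
// handling reduced to the bare minimum; not the actual class definitions).
#include <memory>
#include <string>

namespace asset { class Clip; }

namespace session {
    class Clip                                          // the clip-MO: editing/manipulation view
      {
        const asset::Clip& asset_;                      // const link: the MObject can't change media parameters
      public:
        explicit Clip (const asset::Clip& a) : asset_{a} { }
        const asset::Clip& getAsset() const { return asset_; }
      };
}

namespace asset {
    class Clip                                          // bookkeeping / information view
      {
        std::string name_;
        std::shared_ptr<session::Clip> clipMO_;         // Model-2: keeps the (at most one) dependent clip-MO alive
      public:
        explicit Clip (std::string name) : name_{std::move(name)} { }

        std::shared_ptr<session::Clip>
        getClipMO()                                     // (re)create the editable representation on demand
          {
            if (!clipMO_)
                clipMO_ = std::make_shared<session::Clip> (*this);
            return clipMO_;                             // stands in for handing out a Placement
          }
      };
}

int main()
  {
    auto clipAsset = std::make_shared<asset::Clip> ("interview take 3");
    auto clipMO    = clipAsset->getClipMO();            // e.g. dragging the asset into the timeline
    (void)clipMO;
  }
}}}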
[>img[Outline of the Build Process|uml/fig131333.png]]
At first sight the link between asset and clip-MO is a simple logical relation between entities, but it is not strictly 1:1 because typical media are [[multichannel|MultichannelMedia]]. Even if the media is compound, there is //only one asset::Clip//, because in the logical view we have only one "clip-thing". On the other hand, in the session, we have a compound clip ~MObject comprised of several elementary clip objects, each of which will refer to its own sub-media (channel) within the compound media (and don't forget, this structure can be tree-like)
{{red{open question:}}} do the clip-MOs of the individual channels refer directly to asset::Media? Does this mean the relation is different from the top level, where we have a relation to an asset::Clip?
Conceptually, the Render Engine is the core of the application. But &mdash; surprisingly &mdash; we don't even have a distinct »~RenderEngine« component in our design. Rather, the engine is formed by the cooperation of several components spread out over two layers (Backend and Proc-Layer): The [[Builder]] creates a network of [[render nodes|ProcNode]], the [[Scheduler]] triggers individual [[calculation jobs|RenderJob]], which in turn pull data from the render nodes, thereby relying on the [[Backend's services|Backend]] for data access and using plug-ins for the actual media calculations.
&rarr; OverviewRenderEngine
&rarr; EngineFaçade 
The [[Render Engine|Rendering]] only carries out the low-level and performance critical tasks. All configuration and decision concerns are to be handled by [[Builder]] and [[Controller]]. While the actual connection of the Render Nodes can be highly complex, basically each Segment of the Timeline with uniform characteristics is handled by one Processor, which is a graph of [[Processing Nodes|ProcNode]] discharging into an ExitNode. The Render Engine components as such are //stateless// themselves; for the actual calculations they are combined with a StateProxy object, generated by and connected internally to the [[Controller]], while at the same time holding the Data Buffers (Frames) for the actual calculations.

{{red{Warning: obsolete as of 9/11}}}
Currently the Render/Playback is being targeted for implementation; almost everything in this diagram will be implemented in a slightly different way....
[img[Entities comprising the Render Engine|uml/fig128389.png]]
Below are some notes regarding details of the actual implementation of the render process and processing node operation. In the description of the [[render node operation protocol|NodeOperationProtocol]] and the [[mechanics of the render process|RenderMechanics]], these details were left out deliberately.
{{red{WIP as of 9/11 -- need to mention the planning phase more explicitly}}}

!Layered structure of State
State can be seen as structured like an onion. All the [[StateAdapter]]s in one call stack are supposed to be within one layer: they all know of a "current state", which in turn is a StateProxy (and thus may yet refer to another state, maybe across the network or in the backend or whatever). The actual {{{process()}}} function "within" the individual nodes just sees a single StateAdapter and thus can be thought of as a layer below.

!Buffer identification
For the purpose of node operation, buffers are identified by a [[buffer-handle|BuffHandle]], which contains both the actual buffer pointer and an internal index and classification of the source providing the buffer; the latter information is used for deallocation. Especially for calling the {{{process()}}} function (which is supposed to be plain C) the node invocation needs to prepare and provide an array containing just the output and input buffer pointers. Typically, this //frame pointer array//&nbsp; is allocated on the call stack.
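To make this more concrete, here is a minimal sketch (hypothetical, simplified names, not the actual Lumiera types) of how such a handle might bundle the raw pointer with its provenance, and how the node invocation could assemble the flat frame pointer array handed to a plain-C {{{process()}}} function:
{{{
#include <cassert>
#include <cstddef>

// hypothetical sketch, names simplified; not the actual Lumiera types
enum BuffSource { SOURCE_CACHE, SOURCE_POOL, SOURCE_INPLACE };   // used for deallocation

struct BuffHandle
  {
    void*      buffer;    // the actual buffer pointer handed to process()
    size_t     index;     // internal index within the providing facility
    BuffSource source;    // which facility provided (and later releases) this buffer
  };

typedef void (ProcessFunction) (void** frames);      // plain-C style processing function

const size_t MAX_FRAMES = 16;                        // assumed upper limit for this sketch

// the invocation builds the frame pointer array on the call stack:
// outputs first, inputs following (just one possible layout)
void
invokeProcess (ProcessFunction* process,
               BuffHandle* outputs, size_t nrOut,
               BuffHandle* inputs,  size_t nrIn)
  {
    assert (nrOut + nrIn <= MAX_FRAMES);
    void* frames[MAX_FRAMES];                        // stack allocated frame pointer array
    for (size_t o = 0; o < nrOut; ++o) frames[o]       = outputs[o].buffer;
    for (size_t i = 0; i < nrIn;  ++i) frames[nrOut+i] = inputs[i].buffer;
    process (frames);                                // the algorithm sees only raw pointers
  }
}}}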

!Problem of multi-channel nodes
Some data processors simply need to work on multiple channels simultaneously, while others work just on a single channel and will be replicated by the builder for each channel involved. Thus, we are stuck with the nasty situation that the node graph may go through some nodes spanning the chains of several channels. Now the decision is //not to care for this complexity within a single chain calculating a single channel.// We rely solely on the cache to avoid duplicated calculations. When a given node happens to produce multiple output buffers, we are bound to allocate them for the purpose of this node's {{{process()}}} call, but afterwards we're just "letting go", releasing the buffers not needed immediately for the channel actually to be processed. For this to work, it is supposed that the builder has wired in a caching, and that the cache will hit when we touch the same node again for the other channels.

Closely related to this is the problem how to number and identify nodes and thus to be able to find calculated frames in cache (&rarr; [[here|NodeFrameNumbering]])

!Configuration of the processing nodes
[>img[uml/fig132357.png]]
Every node is actually decomposed into three parts
* an interface container of a ProcNode subclass
* a {{{const}}} WiringDescriptor, which is actually parametrised to a subtype encoding details of how to carry out the intended operation
* the Invocation state created on the stack for each {{{pull()}}} call. It is comprised of references to a StateAdapter object and the current overall process state, the WiringDescriptor, and finally a table of suitable buffer handles
Thus, the outer container can be changed polymorphically to support the different kinds of nodes (large-scale view). The actual wiring of the nodes is contained in the WiringDescriptor, including the {{{process()}}} function pointer. Additionally, this WiringDescriptor knows the actual type of the operation Strategy, and this actual type has been chosen by the builder such as to select details of the desired operation of this node, for example caching / no caching or maybe ~OpenGL rendering or the special case of a node pulling directly from a source reader. Most of this configuration is done by selecting the right template specialisation within the builder; thus in the critical path most of the calls can be inlined
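As a rough structural sketch (hypothetical names, heavily simplified, not the actual class layout): the {{{const}}} WiringDescriptor holds the process function pointer, while the Invocation record lives on the stack just for the duration of a single {{{pull()}}} call:
{{{
// hypothetical sketch of the three-part node decomposition (not the actual class layout)
struct StateAdapter;                           // per-call buffer/cache mediator
struct BuffHandle { void* buffer; };

typedef void (ProcessFunction) (void** frames);

struct WiringDescriptor                        // const data, configured once by the Builder
  {
    ProcessFunction* process;                  // the actual calculation function
    unsigned         nrIn, nrOut;              // number of input / output buffers
    // a concrete subtype additionally encodes the operation Strategy chosen by the
    // builder (caching / in-place / source reading), as a template specialisation
  };

class ProcNode                                 // outer interface container (polymorphic)
  {
  protected:
    WiringDescriptor const& wiring_;
  public:
    ProcNode (WiringDescriptor const& w) : wiring_(w) { }
    virtual ~ProcNode() { }
    virtual BuffHandle pull (StateAdapter& state, unsigned requiredOutput)  =0;
  };

struct Invocation                              // created on the stack for each pull() call
  {
    StateAdapter&           state;             // current overall process state
    WiringDescriptor const& wiring;
    BuffHandle*             buffTab;           // table of buffer handles for this call
  };
}}}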

!!!! composing the actual operation Strategy
As shown in the class diagram to the right, the actual implementation is assembled by chaining together the various policy classes governing parts of the node operation, like Caching, in-Place calculation capability, etc. (&rarr; see [[here|WiringDescriptor]] for details). The rationale is that the variable part of the Invocation data is allocated at runtime directly on the stack, while a precisely tailored call sequence for "calculating the predecessor nodes" can be defined out of a bunch of simple building blocks. This helps avoiding "spaghetti code", which would be especially dangerous and difficult to get right because of the large number of different execution paths. Additionally, a nice side effect of this implementation technique is that a good deal of the implementation is eligible to inlining.
We //do employ//&nbsp; some virtual calls for the buffer management in order to avoid coupling the policy classes to the actual number of in/out buffers. (As of 6/2008, this is mainly a precaution to be able to control the number of generated template instances. If we ever get in the region of several hundred individual specialisations, we'd need to separate out further variable parts to be invoked through virtual functions.)

!Rules for buffer allocation and freeing
* only output buffers are allocated. It is //never necessary//&nbsp; to allocate input buffers!
* buffers are to be allocated as late as possible, typically just before invoking {{{process()}}}
* buffers are always allocated by activating a [[buffer handle|BuffHandle]], preconfigured already during the planning phase
* {{{pull()}}} returns a handle at least for the single output requested by this call, allowing the caller to retrieve the result data
* any other buffers filled with results during the same {{{process()}}} call can be released immediately before returning from {{{pull()}}}
* similarly, any input buffers are to be released immediately after the {{{process()}}} call, but before returning from this {{{pull()}}}
* while any handle contains the necessary information for releasing or "committing" this buffer, this has to be triggered explicitly (see the sketch following this list).
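The following toy sketch (hypothetical names, single output, no caching) illustrates how a {{{pull()}}} implementation might observe these rules: all inputs are pulled first, the output buffer is allocated only immediately before {{{process()}}}, and the inputs are released before returning:
{{{
#include <cstddef>
#include <vector>

// toy sketch only; the real implementation works with BuffHandles and the StateProxy
struct BuffHandle { void* buffer; int slot; };

struct State                                   // stand-in for the process state / buffer provider
  {
    std::vector<std::vector<char>> pool;
    BuffHandle allocate (size_t bytes)
      {
        pool.emplace_back (bytes);
        return BuffHandle{ pool.back().data(), int(pool.size())-1 };
      }
    void release (BuffHandle const&) { /* unlock cache entry or mark slot free */ }
  };

struct Node                                    // stand-in for a processing node
  {
    std::vector<Node*> predecessors;
    size_t             outputSize = 0;
    void             (*process) (void* out, void* in) = nullptr;

    BuffHandle pull (State& state)
      {
        std::vector<BuffHandle> inputs;
        for (Node* pred : predecessors)              // inputs are never allocated here:
            inputs.push_back (pred->pull (state));   // they are the predecessors' outputs

        BuffHandle out = state.allocate (outputSize);// output allocated just before process()
        if (process)
            process (out.buffer, inputs.empty()? nullptr : inputs[0].buffer);

        for (BuffHandle& h : inputs)                 // inputs released right after process(),
            state.release (h);                       // but before returning from this pull()
        return out;                                  // caller retrieves the result via the handle
      }
  };
}}}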

@@clear(right):display(block):@@
A unit of operation, to be [[scheduled|Scheduler]] for calculating media frame data just in time.
Within each CalcStream, render jobs are produced by the associated FrameDispatcher, based on the corresponding JobTicket used as blue print (execution plan).

!Anatomy of a render job
Basically, each render job is a //closure// -- hiding all the prepared, extended execution context and allowing the scheduler to trigger the job as a simple function.
When activated, by virtue of this closure, the concrete ''node invocation'' is constructed, which is a private and safe execution environment for the actual frame data calculations. This (mostly stack based) environment embodies a StateProxy, acting as communication hub for accessing anything possibly stateful within the larger scope of the currently ongoing render process. The node invocation sequence is what actually implements the ''pulling of data'': on exit, all calculated data is expected to be available in the output buffers. Typically (but not necessarily) each node embodies a ''calculation function'', holding the actual data processing algorithm.
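One possible shape of such a job, sketched with hypothetical names: the scheduler only sees a parameterless function object, while the genuinely variable parameters accompany the closure over the quasi-static segment context:
{{{
#include <cstdint>
#include <functional>
#include <queue>
#include <utility>

// hypothetical sketch; the real RenderJob / Scheduler interfaces differ
using Time         = int64_t;                  // effective nominal time
using InvocationID = uint64_t;                 // invocation instance ID

struct RenderJob
  {
    Time                  nominalTime;         // the genuinely variable parameters...
    InvocationID          invocationKey;       // ...everything else lives in the closure
    std::function<void()> run;                 // closure over the segment's static context
  };

struct ToyScheduler                            // just fires jobs in FIFO order; the real
  {                                            // Scheduler adds deadlines and dependencies
    std::queue<RenderJob> queue;

    void schedule (RenderJob job) { queue.push (std::move (job)); }

    void runAll()
      {
        while (!queue.empty())
          {
            queue.front().run();               // trigger the job as a simple function call
            queue.pop();
          }
      }
  };
}}}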

!{{red{open questions 2/12}}}
* what are the job's actual parameters?
* how is prerequisite data passed?  &rarr; maybe by an //invocation key?//

!Input and closure
Each job gets only the bare minimum of information required to trigger the execution: the really variable part of the node's invocation. The job uses these pieces of information to re-activate a pre-calculated closure, representing a wider scope of environment information. Yet the key point is that this wider scope information is //quasi static.// It is shared by a whole [[segment|Segmentation]] of the timeline in question, and it will be used and re-used, possibly concurrently. From the render job's point of view, the engine framework just ensures the availability and accessibility of all this wider scope information.

Prerequisite data for the media calculations can be considered just part of that static environment, as far as the node is concerned. Actually, this prerequisite data is dropped off by other nodes, and the engine framework and the builder ensure the availability of this data just in time.

!observations
* the job's scope represents a strictly local view
* the job doesn't need to know about its output
* the job doesn't need to know anything about the frame grid or frame number
* all it needs to know is the ''effective nominal time'' and an ''invocation instance ID''
While the render process, with respect to the dependencies, the builder and the processing function, is sufficiently characterized by referring to the ''pull principle'' and by defining a [[protocol|NodeOperationProtocol]] each node has to adhere to &mdash; to actually get it coded we have to care about some important details, especially //how to manage the buffers.// It may well be that the length of the code path necessary to invoke the individual processing functions is finally not so important, compared with the time spent in the inner pixel loop within these functions. But my guess is (as of 5/08) that the overall number of data moving and copying operations //will be//&nbsp; of importance.
{{red{WIP as of 9/11 -- need to mention the planning phase more explicitly}}}

!requirements
* operations should be "in place" as much as possible
* because caching necessitates a copy, the points where this happens should be controllable.
* buffers should accommodate automatically to provide the necessary space without clipping the image.
* the type of the media data can change while passing through the network, and so does the type of the buffers.
On the other hand, the processing function within the individual node needs to be shielded from these complexities. It can expect to get just //N~~I~~// input buffers and //N~~O~~// output buffers of required type. And, moreover, as the decision how to organize the buffers certainly depends on non-local circumstances, it should be preconfigured while building.

!data flow
[>img[uml/fig131973.png]]
Not everything can be preconfigured though. The pull principle opens the possibility for the node to decide on a per call base what predecessor(s) to pull (if any). This decision may rely on automation parameters, which thus need to be accessible prior to requesting the buffer(s). Additionally, in a later version we plan to have the node network calculate some control values for adjusting the cache and backend timings &mdash; and of course at some point we'll want to utilize the GPU, resulting in the need to feed data from our processing buffers into some texture representation.

!buffer management
{{red{NOTE 9/11: the following is partially obsolete and needs to be rewritten}}} &rarr; see the BufferTable for details regarding new buffer management...

Besides the StateProxy representing the actual render process and holding a couple of buffer (refs), we employ a lightweight adapter object in between. It is used //for a single {{{pull()}}}-call// &mdash; mapping the actual buffers to the input and output port numbers of the processing node and for dealing with the cache calls. While the StateProxy manages a pool of frame buffers, this interspersed adapter allows us to either use a buffer retrieved from the cache as an input, possibly use a new buffer located within the cache as output, or (in case no caching happens) to just use the same buffer as input and output for "in-place"-processing. The idea is that most of the configuration of this adapter object is prepared in the wiring step while building the node network.

The usage pattern of the buffers can be stack-like when processing nodes require multiple input buffers. In the standard case, which also is the simplest case, a pair of buffers (or a single buffer for "in-place" capable nodes) suffices to calculate a whole chain of nodes. But &mdash; as the recursive descent means depth-first processing &mdash; in case multiple input buffers are needed, we may encounter a situation where some of these input buffers already contain processed data, while we have to descend into yet another predecessor node chain to pull the data for the remaining buffers. Care has to be taken //to allocate the buffers as late as possible,// otherwise we could end up holding onto a buffer for almost every node in the network. Effectively this translates into the rule to allocate output buffers only after all input buffers are ready and filled with data; thus we shouldn't allocate buffers when //entering// the recursive call to the predecessor(s), rather we have to wait until we are about to return from the downcall chain.
Besides, these considerations also show we need a means of passing on the current buffer usage pattern while calling down. This usage pattern not only includes a record of what buffers are occupied, but also the intended use of these occupied buffers, especially if they can be modified in-place, and at which point they may be released and reused.
__note__: the process outlined here and below is still a simplification. The actual implementation has some additional [[details to care for|RenderImplDetails]]

!!Example: calculating a 3 node chain
# Caller invokes calculation by pulling from exit node, providing the top-level StateProxy
# node1 (exit node) builds StateAdapter and calls retrieve() on it to get the desired output result
# this StateAdapter (ad1) knows it could get the result from the Cache, so it tries, but it's a miss
# thus it pulls from the predecessor node2 according to the [[input descriptor|ProcNodeInputDescriptor]] of node1
# node2 builds its StateAdapter and calls retrieve()
# but because StateAdapter (ad2) is configured to directly forward the call down (no caching), it pulls from node3
# node3 builds its StateAdapter and calls retrieve()
# this StateAdapter (ad3) is configured to look into the Cache...
# this time producing a Cache hit
# now StateAdapter ad2 has input data, but needs an output buffer location, which it requests from its //parent state// (ad1)
# and, because ad1 is configured for Caching and is "in-place" capable, it's clear that this output buffer will be located within the cache
# thus the allocation request is forwarded to the cache, which provides a new "slot"
# now node2 has both a valid input and a usable output buffer, thus the process function can be invoked
# and after the result has been rendered into the output buffer, the input is no longer needed
# and can be "unlocked" in the Cache
# now the input data for node1 is available, and as node1 is in-place-capable, no further buffer allocation is necessary prior to calculating
# the finished result is now in the buffer (which happens to be also the input buffer and is actually located within the Cache)
# thus it can be marked as ready for the Cache, which may now provide it to other processes (but isn't allowed to overwrite it)
# finally, when the caller is done with the data, it signals this to the top-level State object
# which forwards this information to the cache, which in turn may now do with the released buffer as it sees fit.
[img[uml/fig132229.png]]

@@clear(right):display(block):@@
__see also__
&rarr; the [[Entities involved in Rendering|RenderEntities]]
&rarr; additional [[implementation details|RenderImplDetails]]
&rarr; [[Memory management for render nodes|ManagementRenderNodes]]
&rarr; the protocol [[how to operate the nodes|NodeOperationProtocol]]
For each segment (of the effective timeline), there is a Processor holding the exit node(s) of a processing network, which is a "Directed Acyclic Graph" of small, preconfigured, stateless [[processing nodes|ProcNode]]. This network is operated according to the ''pull principle'', meaning that the rendering is just initiated by "pulling" output from the exit node, causing a cascade of recursive downcalls or prerequisite calculations to be scheduled as individual [[jobs|RenderJob]]. Each node knows its predecessor(s), thus the necessary input can be pulled from there. Consequently, there is no centralized "engine object" which may invoke nodes iteratively or table driven &mdash; rather, the rendering can be seen as a passive service provided for the backend, which may pull from the exit nodes at any time, in any order (?), and possibly multithreaded.
All State necessary for a given calculation process is encapsulated and accessible by a StateProxy object, which can be seen as the representation of "the process". At the same time, this proxy provides the buffers holding data to be processed and acts as a gateway to the backend to handle the communication with the Cache. In addition to this //top-level State,// each calculation step includes a small [[state adapter object|StateAdapter]] (stack allocated), which is pre-configured by the builder and serves to isolate the processing function from the details of buffer management.


__see also__
&rarr; the [[Entities involved in Rendering|RenderEntities]]
&rarr; the [[mechanics of rendering and buffer management|RenderMechanics]]
&rarr; the protocol [[how to operate the nodes|NodeOperationProtocol]]
The rendering of input sources to the desired output ports happens within the &raquo;''Render Engine''&laquo;, which can be seen as a collaboration of Proc-Layer and Backend, together with external/library code for the actual data manipulation. In preparation of the RenderProcess, the [[Builder]] has wired up a network of [[processing nodes|ProcNode]] called the ''low-level model'' (in contrast to the high-level model of objects placed within the session). Generally, this network is a "Directed Acyclic Graph" starting at the //exit nodes// (output ports) and pointing down to the //source readers.// In Lumiera, rendering is organized according to the ''pull principle'': when a specific frame of rendered data is requested from an exit node, a recursive calldown happens, as each node asks its predecessor(s) for the necessary input frame(s). This may include pulling frames from various input sources and for several time points, thus pull rendering is more powerful (but also more difficult to understand) than push rendering, where the process would start out with a given source frame.

Rendering can be seen as a passive service available to the Backend, which remains in charge of what to render and when. Render processes may be running in parallel without any limitations. All of the storage and data management falls into the realm of the Backend. The render nodes themselves are ''completely stateless'' &mdash; if some state is necessary for carrying out the calculations, the backend will provide a //state frame// in addition to the data frames.
a special kind of [[render job|RenderJob]], used to retrieve input data relying on external IO.
Since a primary goal for the design of Lumiera's render engine is to use the available resources effectively, we try to avoid the situation where a working thread gets blocked waiting  for external IO to deliver data. Thus we create special marker jobs to keep track of any prerequisites and start the actual render calculations only if all these prerequisites are already fulfilled and all the required input data is available in RAM.
A distinct property of the Lumiera application is to rely on a rules based approach rather than on hard wired logic. When it comes to deciding and branching, a [[Query]] is issued, resulting either immediately in a {{{bool}}} result, or creating a //binding// for the variables used within the query. Commonly, there is more than one solution for a given query, allowing the result set to be enumerated.




!current state {{red{WIP as of 10/09}}}
We are still fighting to get the outline of the application settled down.
For now, the above remains in the status of a general concept and typical solution pattern: ''create query points instead of hard wiring things''.
Later on we expect a distinct __query subsystem__ to emerge, presumably embedding a YAP Prolog interpreter.
A facility allowing the Proc-Layer to work with abstracted [[media stream types|StreamType]], linking (abstract or opaque) [[type tags|StreamTypeDescriptor]] to a [[library|MediaImplLib]] which provides functionality for actually dealing with data of this media stream type. Thus, the stream type manager is a kind of registry of all the external libraries which can be bridged and accessed by Lumiera (for working with media data, that is). The most basic set of libraries is installed here automatically at application start, most notably the [[GAVL]] library for working with uncompressed video and audio data. //Later on, when plugins will introduce further external libraries, these need to be registered here too.//
A scale grid controls the way of measuring and aligning a quantity the application has to deal with. The most prominent example is the way to handle time in fixed atomic chunks (''frames'') addressed through a fixed format (''timecode''): while internally the application uses time values of sufficiently fine grained resolution, the actually visible timing coordinates of objects within the session are ''quantised'' to some predefined and fixed time grid.

&rarr; QuantiserImpl
The [[Scheduler]] is responsible for getting the individual [[render jobs|RenderJob]] to run. The basic idea is that individual render jobs //should never block// -- and thus the calculation of a single frame might be split into several jobs, including resource fetching. This, together with the data exchange protocol defined for the OutputSlot, and the requirements of storage management (especially releasing of superseded render nodes &rarr; FixtureStorage), leads to certain requirements to be ensured by the scheduler (a sketch follows the list below):
;ordering of jobs
:the scheduler has to ensure all prerequisites of a given job are met
;job time window
:when it's not possible to run a job within the defined target time window, it must be marked as failure
;failure propagation
:when a job fails, either due to a job-internal error or to a timing glitch, any dependent jobs need to receive that failure state
;guaranteed execution
:some jobs are marked as "ensure run". These need to run reliably, even when prerequisite jobs fail -- and this failure state needs to be propagated
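A toy sketch (hypothetical names and data layout) of the dependency and failure propagation bookkeeping these requirements imply; the real scheduler additionally has to handle time windows and concurrency:
{{{
#include <cstdint>
#include <functional>
#include <map>
#include <vector>

using JobID = uint64_t;
enum class JobState { PLANNED, DONE, FAILED };

struct JobRecord
  {
    std::function<bool()> run;                 // returns false on error or timing glitch
    std::vector<JobID>    dependents;          // jobs requiring this one as prerequisite
    bool                  ensureRun = false;   // "guaranteed execution" marker
    JobState              state     = JobState::PLANNED;
  };

struct ToyScheduler
  {
    std::map<JobID,JobRecord> jobs;

    void propagateFailure (JobID failed)
      {
        for (JobID dep : jobs[failed].dependents)
          {
            JobRecord& d = jobs[dep];
            if (d.state != JobState::PLANNED) continue;
            if (d.ensureRun)
                d.run();                       // guaranteed jobs are still invoked reliably,
            d.state = JobState::FAILED;        // but the failure state is passed down
            propagateFailure (dep);
          }
      }

    void execute (JobID id)
      {
        JobRecord& job = jobs[id];
        if (job.state != JobState::PLANNED) return;
        job.state = job.run()? JobState::DONE : JobState::FAILED;
        if (job.state == JobState::FAILED)
            propagateFailure (id);
      }
  };
}}}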

!detecting termination
The way other parts of the system are built requires us to obtain guaranteed knowledge of a job's termination. It is possible to obtain that knowledge with some limited delay, but it needs to be absolutely reliable (violations leading to segfault). The requirements stated above assume this can be achieved through //jobs with guaranteed execution.// Alternatively we could consider installing specific callbacks -- in this case the scheduler itself has to guarantee the invocation of these callbacks, even if the corresponding job fails or is never invoked. It doesn't seem there is any other option.
A link to relate a compound of [[nested placement scopes|PlacementScope]] to the //current// session and the //current//&nbsp; [[focus for querying|QueryFocus]] and exploring the structure. ScopeLocator is a singleton service, allowing to ''explore'' a [[Placement]] as a scope, i.e. discover any other placements within this scope, and allowing to locate the position of this scope by navigating up the ScopePath finally to reach the root scope of the HighLevelModel.

In the general case, this user visible high-level-model of the [[objects|MObject]] within the session allows for more than tree-like associations, as a given [[Sequence]] might be bound into multiple [[timelines|Timeline]]. Effectively, this makes the ScopePath context dependent. The ScopeLocator is the point where the strictly tree-like hierarchy of placements is connected to this more elaborate scope and path structure. To this end, ScopeLocator maintains a QueryFocusStack, to keep track of the current location in focus, in cooperation with the QueryFocus objects used by client code.
&rarr; see BindingScopeProblem
&rarr; see TimelineSequences

!!a note about concurrency
While there //is// a "current state" involved, the effect of concurrent access deliberately remains unspecified, because access is expected to be serialised on a higher level. If this assumption were to break, then probably the ScopeLocator would involve some thread local state.
The sequence of nested [[placement scopes|PlacementScope]] leading from the root (global) scope down to a specific [[Placement]] is called ''scope path''. Ascending this path yields all the scopes to search or query in proper order to be used when resolving some attribute of placement. Placements use visibility rules comparable to visibility of scoped definitions in common programming languages or in cascading style sheets, where a local definition can shadow a global one. In a similar way, properties not defined locally may be resolved by querying up the sequence of nested scopes.

A scope path is a sequence of scopes, where each scope is implemented by a PlacementRef pointing to the &raquo;scope top&laquo;, i.e. the placement in the session //constituting this scope.// Each Placement is registered with the session as belonging to a scope, and each placement can contain other placements and thus form a scope. Thus, the ''leaf'' of this path can be considered the current scope. In addition to some search and query functions, a scope path has the ability to ''navigate'' to a given target scope, which must be reachable by ascending and descending into the branches of the overall tree or DAG. Navigating changes the current path. ({{red{WIP 11/09}}} navigation to scopes outside the current path and the immediate children of the current leaf is left out for now. We'll need it later, when actually implementing meta-clips. &rarr; see BindingScopeProblem)

!access path and session structure
ScopePath represents an ''effective scoping location'' within the model &mdash; it is not necessarily identical to the storage structure (&rarr; PlacementIndex) used to organise the session. While similar in most cases, binding a sequence into multiple timelines or meta-clips will cause the internal and the logical (effective) structure to digress (&rarr; BindingScopeProblem). An internal singleton service, the ScopeLocator is used to constitute the logical (effective) position for a given placement (which in itself defines a position in the session datastructure). This translation involves a [[current focus|QueryFocus]] remembering the access path used to reach the placement in question. Actually, this translation is built on top of the //navigation//-Operation of ScopePath, which thus forms the foundation to provide such a logical view on the "current" location.

!Operations
* the default scope path contains just the root (of the implicit PlacementIndex, i.e. usually the root of the model in the session)
* a scope path can be created starting from a given scope. This convenience shortcut uses the ScopeLocator to establish the position of the given start scope. This way, effectively the PlacementIndex within the current session is queried for parentship relations until reaching the root of the HighLevelModel.
* paths are ''copyable value objects'' without identity on their own
* there is a special //invalid//&nbsp; path token, {{{ScopePath::INVALID}}}
* length, validity and empty check
* paths are equality comparable
* relations
** if a scope in question is contained in the path
** if a scope in question is at the leaf position of the path
** if a path in question is prefix (contained) in the given path
** if two paths share a common prefix
** if two paths are disjoint (only connected at root)
* navigation
** move up one step
** move up to the root
** navigate to a given scope
** clear a path (reset to default)
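A rough sketch of such a copyable ScopePath value object, with some of the relations and navigation operations listed above (hypothetical and simplified; a real scope element would be a PlacementRef, not a plain ID, and navigating to an arbitrary scope would consult the index):
{{{
#include <algorithm>
#include <cstddef>
#include <vector>

using ScopeID = unsigned;                      // stand-in for a PlacementRef to the scope top

struct ScopePath                               // copyable value object, root first, leaf last
  {
    std::vector<ScopeID> path_;

    bool    empty()  const { return path_.empty(); }
    size_t  length() const { return path_.size(); }
    ScopeID leaf()   const { return path_.back(); }

    bool contains (ScopeID scope) const
      {
        return std::find (path_.begin(), path_.end(), scope) != path_.end();
      }
    bool isPrefixOf (ScopePath const& other) const
      {
        return length() <= other.length()
           &&  std::equal (path_.begin(), path_.end(), other.path_.begin());
      }
    bool disjoint (ScopePath const& other) const      // only connected at the root
      {
        return length() > 1 && other.length() > 1
           &&  path_[1] != other.path_[1];
      }
    void moveUp() { if (!empty()) path_.pop_back(); } // navigation: one step towards root
    void goRoot() { if (!empty()) path_.resize (1); }
    void clear()  { path_.clear(); }                  // reset to default
    // navigating to an arbitrary target scope would additionally consult the PlacementIndex
  };

inline bool operator== (ScopePath const& a, ScopePath const& b) { return a.path_ == b.path_; }
}}}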
An implementation mechanism used within the PlacementIndex to detect some specific kinds of object connections (&raquo;MagicAttachment&laquo;), which then need to trigger special handling.

{{red{planned feature as of 6/2010}}}

//It is not clear yet, if that's just an implementation facility, or something which is exposed through the ConfigRules.//

!preliminary requirements
We need to detect attaching and detaching of
* root &harr; BindingMO
* root &harr; [[Track]]
//Segmentation of timeline// denotes a data structure and a step in the BuildProcess.
When [[building the fixture|BuildFixture]], ~MObjects -- as handled by their Placements -- are grouped below each timeline using them; Placements are then to be resolved into [[explicit Placements|ExplicitPlacement]], resulting in a single well defined time interval for each object. This allows cutting this effective timeline into slices of constant wiring structure, which are represented through the ''Segmentation Datastructure'': a time axis with segments holding object placements and [[exit nodes|ExitNode]] (a sketch of this structure follows the list below). &nbsp;&rarr; see [[structure of the Fixture|Fixture]]
* for each Timeline we get a Segmentation
** which in turn is a list of non-overlapping segments
*** each holding
**** an ExplicitPlacement for each object touching that time interval
**** an ExitNode for each ModelPort of the corresponding timeline
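A structural sketch of this Segmentation backbone (hypothetical names; the real structure additionally hooks an AllocationCluster and further wiring data behind each segment):
{{{
#include <cassert>
#include <map>
#include <vector>

using Time = long long;                        // stand-in for the internal time value type
struct ExplicitPlacement { /* resolved object with a fixed time interval */ };
struct ExitNode          { /* entry point into the render node graph    */ };

struct Segment                                 // constant wiring structure within [start,end)
  {
    Time start, end;
    std::vector<ExplicitPlacement> placements; // every object touching this time interval
    std::map<unsigned, ExitNode>   exitNodes;  // one exit node per ModelPort of the timeline
  };

struct Segmentation
  {
    std::map<Time, Segment> segments;          // ordered by segment start time, non-overlapping

    // after the build, the usage pattern is a lookup of segment by time
    Segment const& lookup (Time t) const
      {
        auto pos = segments.upper_bound (t);   // first segment starting strictly after t
        assert (pos != segments.begin());      // assumes the axis is covered from the start
        return (--pos)->second;
      }
  };
}}}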

!Usage pattern
;(1) build process
:&rarr; a tree walk yields the placements per timeline, which then get //resolved//
:&rarr; after //sorting,// the segmentation can be established, thereby copying placements spanning multiple segments
:&rarr; only //after running the complete build process for each segment,// the list of model ports and exit nodes can be established
;(2) commit stage
: -- after the build process(es) are completed, the new fixture gets ''committed'', thus becoming the officially valid state to be rendered. As render processes might be going on in parallel, some kind of locking or barrier is required. It seems advisable to make the change into a single atomic hot-swap. Meaning we'd get a single access point to be protected. But there is another twist: We need to find out which render processes to cancel and restart, to pick up the changes introduced by this build process -- which might include adding and deleting of timelines as a whole, and any conceivable change to the segmentation grid. Because of the highly dynamic nature of the placements, on the other hand it isn't viable to expect the high-level model to provide this information. Thus we need to find out about a ''change coverage'' at this point. We might expand on that idea to //prune any new segments which aren't changed.// This way, only a write barrier would be necessary on switching the actually changed segments, and any render processes touching these would be //tainted.// Old allocations could be released after all tainted processes are known to be terminated.
;(3) rendering use
:Each play/render process employs a ''frame dispatch step'' to get the right exit node for pulling a given frame (&rarr; [[Dispatcher|FrameDispatcher]]). From there on, the process proceeds into the [[processing nodes|ProcNode]], interleaved with backend/scheduler actions due to splitting into individually scheduled jobs. The storage of these processing nodes and accompanying wiring descriptors is hooked up behind the individual segments, by sharing a common {{{AllocationCluster}}}. Yet the calculation of individual frames also depends on ''parameters'' and especially ''automation'' linked with objects in the high-level model. It is likely that there might be some sharing or some kind of additional communication interface, as the intention was to allow ''live changes'' to automated values. <br/>{{red{WIP 12/2010}}} details need to be worked out. &rarr; [[parameter wiring concept|Wiring]]
!!!observations
* Storage and initialisation for explicit placements is an issue. We should strive at making that inline as much as possible.
* the overall segmentation emerges from a sorting of time points, which are start points of explicit placements
* after the segmentation has been built, the usage pattern changes entirely into a lookup of segment by time
* the individual segments act as umbrella for a lot of further objects hooked up behind.
* we need the ability to exchange or swap-in whole segments
* each segment controls an AllocationCluster
* we need to track processes for tainting
* access happens per ModelPort

!!!conclusions
The Fixture is mostly comprised of the Segmentation datastructure, but some other facilities are involved too
# at top level, access is structured by groups of model ports, actually grouped by timeline. This first access level is handled by the Fixture
# during the build process, there is a collecting and ordering of placements; these intermediaries as well as the initial collection can be discarded afterwards
# the backbone of the segmentation is closely linked to an ordering by time. Initially it should support sorting, access by time interval search later on.
# discarding a segment (or failing to do so) has a high impact on the whole application. We should employ a reliable mechanism for that.
# the frame dispatch and the tracking of processes can be combined; data duplication is a virtue when it comes to parallel processes
# the process of comparing and tainting is broken out into a separate data structure to be used just once

Largely the storage of the render nodes network is hooked up behind the Fixture &rarr; [[storage considerations|FixtureStorage]]
A sequence is a collection of media objects, arranged onto a track tree. Sequences are the building blocks within the session. To be visible and editable, a sequence needs to be bound into a top-level [[Timeline]]. Alternatively, it may be used as a VirtualClip nested within another sequence.

The sequences within the session establish a //logical grouping//, allowing for lots of flexibility. Actually, we can have several sequences within one session, and these sequences can be linked together or not, they may be arranged in temporal order or may constitute a logical grouping of clips used simultaneously in compositional work etc. The data structure comprising a sequence is always a sub-tree of tracks, attached always directly below root (sequences at sub-nodes are deliberately disallowed). Through the sequence as frontend, this track tree might be used at various places in the model simultaneously. Tracks in turn are only an organisational (grouping) device, like folders &mdash; so this structure of sequences and the track trees referred through them allows to use the contents of such a track or folder at various places within the model. But at any time, we have exactly one [[Fixture]], derived automatically from all sequences and containing the content actually to be rendered.
&rarr; see considerations about [[the role of Tracks and Pipes in conjunction with the sequences|TrackPipeSequence]]

!!Implementation and lifecycle
Actually, sequences are façade objects to quite some extent, delegating the implementation of the exposed functionality to the relevant placements and ~MObjects within the model. But they're not completely shallow; each sequence has a distinguishable identity and may hold additional meta-data. Thus, stressing this static aspect, sequences are implemented as StructAsset, attached to the [[model root|ModelRootMO]] through the AssetManager, simultaneously registered with the session, then accessed and owned by ref-counting smart-ptr.

A sequence is always tied to a root-placed track; it can't exist without such. When moving this track by changing its [[Placement]], thus disconnecting it from the root scope, the corresponding sequence will be automatically removed from the session and discarded. On the other hand, sequences aren't required to be //bound// &mdash; a sequence might just exist in the model (attached by its track placed into root scope) and thereby remain passive and invisible. Such an unbound sequence can't be edited, displayed in the GUI or rendered; it is only accessible as asset. Of course it can be re-activated by linking it to a new or existing timeline or VirtualClip.
&rarr; see detailed [[discussion of dependent objects' behaviour|ModelDependencies]]
The Session contains all information, state and objects to be edited by the User. From a user's view, the Session is synonymous with the //current Project//. It can be [[saved and loaded|SessionLifecycle]]. The individual Objects within the Session, i.e. Clips, Media, Effects, are contained in one (or several) collections within the Session, which we call [[sequences|Sequence]].
&rarr; [[Session design overview|SessionOverview]]

!Session structure
The Session object is a singleton &mdash; actually it is a »~PImpl«-Facade object (because the actual implementation object can be swapped for (re)loading Sessions).<br/>The Session is the access point to the HighLevelModel; it is comprised of
* a number of (independent) top-level [[time-lines|Timeline]]
* some [[sequences|Sequence]] to be used within these timelines
* a [[scope structure|PlacementScope]] backed by an index, and a current QueryFocus
* a set of ConfigRules to guide default behaviour {{red{planned as of 10/09}}}
* the ''Fixture'' with a possibility to [[(re)build it|PlanningBuildFixture]] {{red{just partially designed as of 01/09}}}
* the [[Asset subsystem|AssetManager]] is tightly integrated; besides, there are some SessionServices for internal use

&rarr; see [[relation of timeline, sequences and objects|TimelineSequences]]
While the core of the persistent session state corresponds just to the HighLevelModel, there is additionally attached state, annotations and specific bindings, which allow to connect the session model to the local application configuration on each system. A typical example would be the actual output channels, connections and drivers to use on a specific system. In a studio setup, this setup and wiring might be quite complex, it may be specific to just a single project, and the user might want to work on the same project on different systems. This explains why we can't just embed this configuration information right into the actual model.
Querying and retrieving objects within the session model is always bound to a [[scope|PlacementScope]]. When using the //dedicated API,// this scope is immediately defined by the object used to issue the query, like e.g. when searching the contents of a track. But when using the //generic API,// this scope is rather implicit, because in this case a (stateful) QueryFocus object is used to invoke the queries. Somewhat in-between, the top-level session API itself exposes dedicated query functions working on the whole-session scope (model root).
Based on the PlacementIndex, the treelike scope structure can be explored efficiently; each Placement attached to the session knows its parent scope. But any additional filtering currently is implemented on top of this basic scope exploration, which obviously may degenerate when searching large scopes and models. Filtering may happen implicitly; all scope queries are parametrised to a specific kind of MObject, while the PlacementIndex deals with all kinds of {{{Placement<MObject>}}} uniformly. Thus, more specifically typed queries automatically have to apply a type filter based on the RTTI of the discovered placements. The plan is later to add specialised sub-indices and corresponding specific query functions to speed up the most frequently used kinds of queries.

!Ways to query
;contents query
:running a ~ContentsQuery<TY> means to traverse the given start scope depth-first, the order of the retrieved elements is implementation defined (hashtable).
;exploration
:this special parametrisation of the ~ContentsQuery is confined to retrieving the immediate children of the start scope, and intended for exploring and processing the model contents in specific ways. The implementation is based on the bucket iterators of the reverse index (aka scope table).
;parent path
:this query proceeds in the other direction, ascending from the given start scope up to the model root
;special contents
:a ~ContentsQuery<TY> may be combined with a predicate supplied by the client code; this predicate is applied after the general type filter, allowing to use the special interface of the subtype. In a large model this may degenerate considerably.
;picking
:to pick by query means to retrieve the first object returned by the underlying query. As the order of retrieval is implementation defined, the results may be unpredictable. Typically, this is used under the assumption that the given query can only yield a single result. Beware, the {{{pick(predicate)}}}-functions exposed on the QueryFocus and top-level session API moreover assume there will be a result, and throw when this assumption is broken. The type filter in this case is derived automatically (at compile time) from the provided predicate.
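To illustrate the RTTI based type filter mentioned above, a toy sketch (hypothetical names; the real queries are iterator based and run against the PlacementIndex, not a plain vector):
{{{
#include <memory>
#include <vector>

struct MObject          { virtual ~MObject() { } };
struct Clip   : MObject { };
struct Effect : MObject { };

using PlacementPtr = std::shared_ptr<MObject>;         // stand-in for Placement<MObject>

// the index stores Placement<MObject> uniformly; a typed query filters by dynamic type
template<class TY>
std::vector<std::shared_ptr<TY>>
queryContents (std::vector<PlacementPtr> const& scopeContents)
  {
    std::vector<std::shared_ptr<TY>> results;
    for (PlacementPtr const& p : scopeContents)
        if (std::shared_ptr<TY> typed = std::dynamic_pointer_cast<TY> (p))
            results.push_back (typed);
    return results;
  }

// usage:  auto clips = queryContents<Clip> (contentsOfSomeScope);
// an additional client supplied predicate would be applied after this type filter
}}}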
The [[Session]] is interconnected with the GUI, the SessionStorage, [[Builder]] and the CommandHandling. The HighLevelModel is a conceptual view of the session. All these dependencies are isolated from the actual data layout in memory, but the latter is shaped by the intended interactions.

{{red{WIP...}}}Currently as of 3/10, this is an ongoing implementation and planning effort

!Objects, Placements, References
Media objects are attached to the session by [[placements|Placement]]. A Placement within the session gets a distinguishable identity (&rarr; ModelObjectIdentity) and behaves like being an instance of the attached object. Client code usually interacts with the compound of placement + ~MObject. In order to decouple this interaction from the actual implementation within the session, client code rather deals with //references.// These are implemented like a smart-ptr, but based on an opaque hash value, which is equivalent to the //object instance identity.//
&rarr; MObjectRef
&rarr; PlacementRef
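As an illustration (hypothetical, simplified names): the reference token is essentially an opaque hash, resolved through the index on access, while the full MObjectRef additionally holds a smart-ptr to keep the object itself alive:
{{{
#include <cstdint>
#include <memory>
#include <unordered_map>

struct MObject { virtual ~MObject() { } };
using  PlacementID = uint64_t;                 // opaque hash == object instance identity

struct PlacementIndex                          // stand-in for the session's placement index
  {
    std::unordered_map<PlacementID, std::shared_ptr<MObject>> table;

    std::shared_ptr<MObject> resolve (PlacementID id) const
      {
        auto pos = table.find (id);
        return pos == table.end()? nullptr : pos->second;
      }
  };

struct PlacementRef                            // lightweight token, guarantees nothing
  {
    PlacementID id;
    std::shared_ptr<MObject> deref (PlacementIndex const& idx) const { return idx.resolve (id); }
  };

struct MObjectRef                              // reference token plus shared ownership:
  {                                            // the object stays alive while this ref exists,
    PlacementRef             ref;              // even if the placement itself gets removed
    std::shared_ptr<MObject> pin;              // from the session
  };
}}}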

!Index of placements attached to the session
For implementation, the HighLevelModel can be reduced to a compound of interconnected placements. These placement instances are owned and managed by the session; attaching a placement means copying it into the session, thereby creating a new placement-ID. A lookup mechanism for placements and placement relations (PlacementIndex) thus is the actual core of the session data structure; the actual object instances are maintained by a pooling custom allocator ({{red{planned as of 1/10}}}).

!Lifecycle
MObject lifetime is managed by reference counting; all placements and client side references to an MObject share ownership. The placement instances attached to the session are maintained by the index; thus, as long as a placement exists, the corresponding object automatically stays alive. Similarly, assets, as managed by shared-ptrs, stay alive when directly referenced, even after being dropped from the AssetManager. A bare PlacementRef on the other hand doesn't guarantee anything about the referred placement; when dereferencing this reference token, the index is accessed to re-establish a connection to the object, if possible. The full-fledged MObjectRef is built on top of such a reference token and additionally incorporates a smart-ptr. For the client code this means that holding a ref ensures existence of the object, while the //placement// of this object still can get removed from the session.

!Updates and dependent objects
The session and the models rely on dependent objects being kept updated and consistent. This problem can't be solved in a generic fashion &mdash; at least not without using a far more elaborate scheme (e.g. a transaction manager), which is beyond the scope here. Thus, care has to be taken on the implementation level, especially in conjunction with error handling and threading considerations.
&rarr; see [[details here...|ModelDependencies]]
"Session Interface", when used in a more general sense, denotes a compound of several interfaces and facilities, together forming the primary access point to the user visible contents and state of the editing project.
* the API of the session class
* the accompanying management interface (SessionManager API)
* LayerSeparationInterfaces allowing to access these interfaces from outside the Proc-Layer
* the primary public ~APIs exposed on the objects to be [[queried and retrieved|SessionStructureQuery]] via the session class API
** Timeline
** Sequence
** Placement
** Clip
** Track
** Effect
** Automation
* the [[command|CommandHandling]] interface, including the [[UNDO|UndoManager]] facility

!generic and explicit API
The HighLevelModel exposes two kinds of interfaces (which are interconnected and rely on each other): A generic, but somewhat low-level API, which is good for processing &mdash; like e.g. for the builder or de-serialiser &mdash; and a more explicit API providing access to some meaningful entities within the model. Indeed, the latter (explicit top level entities) can be seen as a ''façade interface'' to the generic structures:
* the [[Session]] object itself corresponds to the ModelRootMO
* the one (or multiple) [[Timeline]] objects correspond to the BindingMO instances attached immediately below the model root
* the [[sequences|Sequence]] bound into these timelines (by the ~BindingMOs) correspond to the top level [[Track]]-~MObjects within each of these sequences.
[<img[Object relations on the session façade|draw/SessionFacade1.png]]

Thus, there is a convenient and meaningful access path through these façade objects, which of course actually is implemented by forwarding to the actual model elements (root, bindings, tracks)

Following this access path down from the session means using the ''dedicated'' API on the objects retrieved.
To the contrary, the ''generic'' API is related to a //current location (state),// the QueryFocus.



!purpose of these ~APIs
* to discover
** by ID
** by type (filter)
** all contained
* to add
* to destroy

!!exploring session contents
Typically, the list of timelines serves as starting point for exploring the model. Basically, any kind of object could be attached everywhere, but both the GUI and the Builder rely on assumptions regarding the [[overall model structure|HighLevelModel]] &mdash; silently ignoring content not in line. This corresponds to the //dedicated API functions// on specific kinds of objects, which allow to retrieve content according to this assumed structure conveniently and with strong typing. From the timeline and the associated sequence you'll get the root track, and from there the sub-tracks and the clips located on them, which in turn may have attachments (effects, transitions, labels).
On the other hand, arbitrary structures can be retrieved using the //generic API:// Contents can be discovered on the QueryFocus, which automatically follows the //point of mutation,// but can be moved to any point using the {{{QueryFocus::attach(obj)}}} function.

!!queries and defaults
Queries can retrieve the immediate children, or the complete contents of a scope (depth-first). The results can be filtered by type. Additionally, the results of such a scoped query can be filtered by a client-provided predicate, which allows to pick objects with specific properties. The intention is to extend this later to arbitrary logical [[queries|Query]], using some kind of resolution engine. Besides these queries, [[default configured objects|DefaultsManagement]] can be retrieved or defined through the defaults manager, which is accessible as a self-contained component on the public session API. Defaults can be used to establish a standard way of doing things on a per-project base.
&rarr; SessionContentsQuery

{{red{WIP ... just emerging}}}
!!discovery and mutations
The discovery functions available on these ~APIs are wired such as to always return suitably typed MObjectRef instances. These are small value objects and can be used to invoke operations (both query and mutating) on the underlying object within the session. Raw placement (language) references aren't exposed on these outward interfaces.

While this protects against accessing dangling references, it can't prevent clients from invoking any mutating operation directly on these references. It would be conceivable, by using proxies, to create and invoke commands automatically. But we rather don't want to go this route, because
* Lumiera is an application focussed on very specific usage, not a general purpose library or framework
* regarding CommandHandling, the //design decision was to require a dedicated (and hand written) undo functor.//

!!!!protection against accidental mutation
{{red{WIP}}}As of 2/10, I am considering adding protection against accidentally invoking a raw mutation operation, and especially against bypassing the command frontend and the ProcDispatcher. This would not only be annoying (no UNDO), but potentially dangerous, because all of the session internals are not threadsafe by design.
The considered solution would be to treat this situation as if an authorisation is required; this authorisation for mutation could be checked by a &raquo;wormhole&laquo;-like context access (&rarr; aspect oriented programming). Of course, in our case we're not dealing with real access restrictions, just a safeguard: while command execution creates such an authorisation token automatically, a client actually wanting to invoke a mutation operation bypassing the command frontend would need to set up such a token explicitly and manually.
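A minimal sketch of what such a safeguard could look like (hypothetical names, not nesting-aware): the command dispatch sets a thread local token, and mutation-enabled references check for its presence:
{{{
#include <stdexcept>

thread_local bool mutationPermitted = false;   // per-thread "authorisation" flag

struct MutationToken                           // RAII guard, created by the command dispatch
  {                                            // (or explicitly, e.g. within a test fixture)
    MutationToken()  { mutationPermitted = true;  }
    ~MutationToken() { mutationPermitted = false; }
  };

inline void
ensureMutationAllowed()                        // called by mutation-enabled object refs
  {
    if (!mutationPermitted)
        throw std::logic_error ("mutation operation invoked outside of command dispatch");
  }

// roughly, within the ProcDispatcher executing a command:
//   MutationToken token;      // grants mutation for this thread and scope
//   command.execute();        // may now invoke mutating operations on MObjectRefs
}}}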

!!adding and destroying
Objects can be added and destroyed directly on the top level session API. The actual memory management of the object instances works automatically, based on reference counts. (Even after purging an object from the session, it may still be indirectly in use by an ongoing render process).
When adding an object, a [[scope|PlacementScope]] needs to be specified. Thus it makes sense to provide {{{add()}}}-operations on the dedicated API of individual session parts, while the generic {{{attach()}}}-call on the top-level session API relies on the current QueryFocus to determine the location where to add an object. Besides, as we're always adding the [[Placement]] of an object into the session, this placement may specify an additional constraint or tie to a specific scope; resolving the placement thus may cause the object to move to another location


!!!{{red{Questions}}}
* what exactly is the relation of discovery and [[mutations|SessionMutation]]?
* what's the point of locating them on the same conceptual level?
* how to observe the requirement of ''dispatching'' mutations ([[Command]])?
* how to integrate the two possible search depths (children and all)?
* how is all of this related to the LayerSeparationInterfaces, here SessionFacade and EditFacade?

<<<
__preliminary notes__: {{red{3/2010}}} Discovery functions accessible from the session API are always written such as to return ~MObjectRefs. These expose generic functions for modifying the structure: {{{attach(MObjectRef)}}} and {{{purge()}}}. The session API exposes variations of these functions. Actually, all these functions dispatch the respective commands automatically. To the contrary, the raw functions for adding and removing placements are located on the PlacementIndex; they are accessible as SessionServices &mdash; which are intended solely for Proc-Layer's internal use. This separation isn't meant to be airtight, just a reminder for proper use.

Currently, I'm planning to modify MObjectRef to return only a const ref to the underlying facilities by default. Then, there would be a subclass which is //mutation enabled.// But this subclass will check for the presence of a mutation-permission token &mdash; which is exposed via thread local storage, but //only within a command dispatch.// Again, no attempt is made to make this barrier airtight. Indeed, for tests, the mutation-permission token can just be created in the local scope. After all, this is not conceived as an authorisation scheme, rather as an automatic sanity check. It's the liability of the client code to ensure any mutation is dispatched.
<<<
The current [[Session]] is the root of any state found within Proc-Layer. Thus, events defining the session's lifecycle influence and synchronise the cooperative behaviour of the entities within the model, the ProcDispatcher, [[Fixture]] and any facility below.
* when ''starting'', on first access an empty session is created, which puts any related facility into a defined initial state.
* when ''closing'' the session, any dependent facilities are disabled, disconnected, halted or closed
* ''loading'' an existing session &mdash; after closing the previous session &mdash; sets up an empty (default) session and populates it with de-serialised content.
* when encountering a ''mutation point'', [[command processing|ProcDispatcher]] is temporarily halted to trigger off a BuildProcess.

!Role of the session manager
The SessionManager is responsible for conducting the session lifecycle. Accessible through the static interface {{{Session::current}}}, it exposes the actual session as a ~PImpl. Both session manager and session are indeed interfaces, backed by implementation classes belonging to ~Proc-Layer's internals. Loading, saving, resetting and closing are the primary public operations of the session manager, each causing the respective lifecycle event.

Beyond that, client code usually doesn't interact much with the lifecycle, which mostly is a pattern of events to happen in a well-defined sequence. So the //implementation// of the session management operations has to comply to this lifecycle, and does so by relying on a self-contained implementation service, the LifecycleAdvisor. But (contrary to an application framework) the lifecycle of the Lumiera session is rather fixed, the only possibility for configuration or extension being the [[lifecycle hooks|LifecycleEvent]], where other parts of the system (and even plug-ins) may install some callback methods.

!Synchronising access to session's implementation facilities
Some other parts and subsystems within the ~Proc-Layer need specialised access to implementation facilities within the session. Information about some conditions and configurations might be retrieved through [[querying the session|Query]], and especially default configurations for many objects are [[bound to the session|DefaultsImplementation]]. The [[discovery of session contents|SessionStructureQuery]] relies on an [[index facility|PlacementIndex]] embedded within the session implementation. Moreover, some "properties" of the [[media objects|MObject]] are actually due to the respective object being [[placed|Placement]] in some way into the session; consequently, there might be a dependency on the actual [[location as visible to the placement|PlacementScope]], which in turn is constituted by [[querying the index|QueryFocus]].

Each of these facilities relies on a separate access point to session services, corresponding to distinct service interfaces. But &mdash; on the implementation side &mdash; all these services are provided by a (compound) SessionServices implementation object. This approach allows to switch the actual implementation of all these services simply by swapping the ~PImpl maintained by the session manager. A new implementation level service can thus be added to the ~SessionImpl just by hooking it into the ~SessionServices compound object. But note, this mechanism as such is ''not thread safe'', unless the //implementation// of the invoked functions is synchronised in some way to prevent switching to a new session implementation while another thread is still executing session implementation code.

Currently, the understanding is for some global mechanism to hold any command execution, script running and further object access by the GUI //prior//&nbsp; to invoking any of the session management operations (loading, closing, resetting). An alternative would be to change the top-level access to the session ~PImpl to go through an accessor value object, so as to acquire some lock automatically before any access can happen. C++ ensures the lifespan of any temporaries to surpass evaluation of the enclosing expression, which would be sufficient to prevent another thread from pulling away the session during that timespan. Of course, any value returned from such a session API call isn't covered by this protection. Usually, objects are handed out as MObjectRef, which in turn means resolving them (automatically) on dereferencing through another session API access. But while it seems we could get locking to work automatically this way, still such a technique seems risky and involved; a plain flat lock at top level seems to be more appropriate.
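For comparison, the accessor value object alternative sketched above could look roughly like this (hypothetical names): the temporary accessor holds the lock, and since temporaries live until the end of the full expression, the ~PImpl can't be swapped away mid-access:
{{{
#include <mutex>

struct SessionImpl { void doSomething() { } }; // stand-in for the actual implementation object

class SessionAccessor                          // temporary handed out per access
  {
    std::unique_lock<std::mutex> guard_;       // holds the lock for the accessor's lifetime
    SessionImpl&                 impl_;
  public:
    SessionAccessor (std::mutex& m, SessionImpl& impl) : guard_(m), impl_(impl) { }
    SessionImpl* operator->() { return &impl_; }
  };

struct SessionFrontend                         // stand-in for the Session::current access point
  {
    std::mutex  accessLock;
    SessionImpl impl;

    SessionAccessor operator->() { return SessionAccessor (accessLock, impl); }
  };

// usage:   SessionFrontend session;
//          session->doSomething();            // lock held for the whole expression
// ...but any value returned out of such an expression is no longer covered,
// which is one reason why a plain flat lock at top level may be more appropriate.
}}}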

!Interface and lifecycle hooks
As detailed above, {{{Session::current}}} exposes the management / lifecycle API, and at the same time can be dereferenced into the primary [[session API|SessionInterface]]. A default-configured ~SessionImpl instance will be built automatically in case no session implementation instance exists when this dereferencing happens. At various stages within the lifecycle, specific LifecycleEvent hooks are activated; they serve as an extension point for other parts of the system to install callback functions, executing additional behaviour at the right moment.

!!!building (or loading) a session
# as a preparation step, a new implementation instance is created, alongside any supporting facilities (especially the PlacementIndex)
# the basic default configuration is loaded into this new session instance
# when the new session is (technically) complete and usable, the switch on the ~PImpl happens
# the {{{ON_SESSION_START}}} LifecycleEvent is emitted
# content is loaded into the session, including hard wired content and any de-serialised data from persistent storage
# the {{{ON_SESSION_INIT}}} event is emitted
# additional initialisation, wiring and registration takes place; basically anything to make the session fully functional
# the session LayerSeparationInterface is opened and any further top-level blocking is released
# the {{{ON_SESSION_READY}}} event is emitted

!!!closing the session
# top-level facilities accessing the session (GUI, command processing, scripts) are blocked and the LayerSeparationInterface is closed
# any render processes are ensured to be terminated (or //disconnected// &mdash; so they can't do any harm)
# the {{{ON_SESSION_END}}} event is emitted
# the command processing log is tagged
# the command queue(s) are emptied, discarding any commands not yet executed
# the PlacementIndex is cleared, effectively releasing any object "instances"
# the [[asset registry|AssetManager]] is cleared, thereby releasing any remaining external resource references
# destruction of session implementation instances
{{red{none of the above is implemented as of 11/09}}}
The Session contains all information, state and objects to be edited by the User (&rarr;[[def|Session]]).
As such, the SessionInterface is the main entry point to Proc-Layer functionality, both for the primary EditingOperations and for playback/rendering processes. Proc-Layer state is rooted within the session and guided by the [[session's lifecycle events|SessionLifecycle]].
Implementation facilities within the Proc-Layer may access a somewhat richer [[session service API|SessionServices]].

Currently (as of 3/10), Ichthyo is working on getting a preliminary implementation of the [[Session in Memory|SessionDataMem]] settled.

!Session, Model and Engine
The session is a SubSystem and acts as a frontend to most of the Proc-Layer. But it doesn't contain much operational logic; its primary content is the [[model|Model]], which is closely [[interconnected with the assets|AssetModelConnection]].

!Design and handling of Objects within the Session
Objects are attached and manipulated by [[placements|Placement]]; thus the organisation of these placements is part of the session data layout. Effectively, such a placement within the session behaves like an //instance// of a given object, and at the same time it defines the "non-substantial" properties of the object, e.g. its position and relations. [[References|MObjectRef]] to these placement entries are handed out as parameters, both down to the [[Builder]] and from there to the render processes within the engine, but also to external parts within the GUI and in plugins. The actual implementation of these object references is built on top of the PlacementRef tags, thus relying on the PlacementIndex the session maintains to keep track of all placements and their relations. While &mdash; using these references &mdash; an external client can access the objects and structures within the session, any actual ''mutations'' should be done based on the CommandHandling: a single operation or a sequence of operations is defined as a [[Command]], to be [[dispatched|ProcDispatcher]] as [[mutation operation|SessionMutation]]. Following this policy ensures integration with the&nbsp;SessionStorage and provides (unlimited) [[UNDO|UndoManager]].

On the implementation level, there are some interdependencies to consider between the [[data layout|SessionDataMem]], keeping ModelDependencies updated and integrating with the BuildProcess. While the internals of the session are deliberately kept single-threaded, we can't make many assumptions regarding the ongoing render processes.
The session manager is responsible for maintaining session state as a whole and for conducting the session [[lifecycle|SessionLifecycle]]. The session manager API allows for saving, loading, closing and resetting the session. Accessible through the static interface {{{Session::current}}}, it exposes the actual session as a ~PImpl. Actually, both session and session manager are interfaces.
//Any modification of the session will pass through the [[command system|CommandHandling]].//
Thus any possible mutation comes in two flavours: a raw operation invoked directly on an object instance attached to the model, and a command taking an MObjectRef as parameter. The latter approach &mdash; invoking any mutation through a command &mdash; will pass the mutations through the ProcDispatcher to ensure they're logged for [[UNDO|UndoManager]] and executed sequentially, which is important, because the session's internals are //not threadsafe by design.// Thus we're kind of enforcing the use of Commands: mutating operations include a check for a &raquo;permission to mutate&laquo;, which is automatically available within a command execution {{red{TODO as of 2/10}}}. Moreover, the session API and the corresponding LayerSeparationInterfaces expose MObjectRef instances, not raw (language) refs.
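The following self-contained sketch illustrates the pattern of invoking a mutation //by name// together with UNDO state capture; it is deliberately simplified and does //not// reflect the actual Lumiera command framework (class and function names are made up):
{{{
// Sketch: a named command bundles the mutation with the state capture needed for UNDO
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>

struct Clip { long start = 0; };                             // toy stand-in for an object reached via MObjectRef

struct Command
  {
    std::function<void(Clip&)> execute;
    std::function<void(Clip&)> undo;
  };

std::map<std::string, std::function<Command(long)>> commandRegistry;

int main()
  {
    // definition phase: "clip_relocate" captures the old position before mutating
    commandRegistry["clip_relocate"] = [](long newStart)
      {
        auto oldStart = std::make_shared<long>(0);
        return Command{ [=](Clip& c){ *oldStart = c.start; c.start = newStart; }
                      , [=](Clip& c){ c.start = *oldStart; } };
      };
    
    Clip clip;
    Command cmd = commandRegistry["clip_relocate"](5000);    // invocation by name, parameter bound
    cmd.execute (clip);                                      // in Lumiera this would go through the ProcDispatcher
    cmd.undo (clip);                                         // ...which also logs the command for UNDO
    std::cout << clip.start << '\n';                         // prints 0 again
  }
}}}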

!!Questions to solve
* how to get from the raw mutation to the command?
* how to organise the naming and parametrisation of commands?
* should we provide the additional flexibility of a binding here?
* how to keep [[dependencies within the model|ModelDependencies]] up to date?

!!who defines commands?
The answer is simple: the one who needs to know about their existence. Because basically commands are invoked //by-name// -- someone needs to be aware of that name and its meaning. Thus, for any given mutation, there is a place where it's meaningful to specify the details //and//&nbsp; to subsume them under a meaningful term. An entity responsible for this place could then act as the provider of the command in question.

Interestingly, there seems to be an alternative answer to this question. We could locate the setup and definition of all commands into a central administrative facility. Everyone in need of a command then ought to know the name and retrieve this command. Sounds like bureaucracy.
<<<
{{red{WARNING: Naming was discussed (11/08) and decided to be changed....}}}
* the term [[EDL]] was phased out in favour of ''Sequence''
* [[Session]] is largely synonymous to ''Project''
* there seems to be a new entity called [[Timeline]] which holds the global Pipes
<<<
The [[Session]] (sometimes also called //Project// ) contains all information and objects to be edited by the User. Any state within the Proc-Layer is directly or indirectly rooted in the session. It can be saved and loaded. The individual Objects within the Session, i.e. Clips, Media, Effects, are contained in one or multiple collections within the Session, which we call [[sequence(s)|Sequence]]. Moreover, the session contains references to all the Media files used, and it contains various default or user-defined configurations, all being represented as [[Asset]]. At any given time, there is //only one current session// opened within the application. The [[lifecycle events|SessionLifecycle]] of the session define the lifecycle of ~Proc-Layer as a whole.

The Session is close to what is visible in the GUI. From a user's perspective, you'll find a [[Timeline]]-like structure, containing a [[Sequence]], where various Media Objects are arranged and placed. The available building blocks and the rules for how they can be combined form Lumiera's [[high-level data model|HighLevelModel]]. Basically, besides the [[media objects|MObjects]] there are data connections, and all processing is organized around processing chains or [[pipes|Pipe]], which can be either global (in the Session) or local (in real or virtual clips).

!!!larger projects
For larger editing projects the simple structure of a session containing "the" timeline is not sufficient. Rather
* we may have several [[sequences|Sequence]], e.g. one for each scene. These sequences can even be layered or nested (compositional work).
* within one project, there may be multiple, //independent Timelines// &mdash; each of which may have an associated Viewer or Monitor
Usually, when working with this structure, you'll drill down starting from a timeline, through a (top-level) sequence, down into a track, a clip, maybe even an embedded Sequence (VirtualClip), and from there even further down into a single attached effect. This constitutes a set of [[nested scopes|PlacementScope]]. Operations are to be [[dispatched|ProcDispatcher]] through a [[command system|CommandHandling]], including the target object [[by reference|MObjectRef]]. [[Timelines|Timeline]], on the other hand, are always top-level objects and can't be combined further. You can render a single given timeline to output.
&rarr; see [[Relation of Project, Timelines and Sequences|TimelineSequences]]

!!!the definitive state
With all the structural complexities possible within such a session, we need an isolation layer to provide __one__ definitive state where all configuration has been made explicit. Thus the session manages a special consolidated view (object list), called [[the Fixture|Fixture]], which can be seen as all currently active objects placed onto a single timeline.

!!!organisational devices
The possibility of having multiple Sequences helps organizing larger projects. Each [[Sequence]] is just a logical grouping: all effective properties of any MObject within a sequence are defined by the ~MObject itself and the [[Placement]], by which the object is anchored to some time point or track, can be connected to some pipe, or linked to another object. In a similar manner, [[Tracks|Track]] are just another organisational aid for grouping objects, disabling them and defining common output pipes.

!!!global pipes
[>img[draw/Proc.builder1.png]] Any session should contain a number of global [[(destination) pipes|Pipe]], typically video out and audio out. The goal is to get any content-producing or transforming object connected in some way to one of these outputs, either //by [[placing|Placement]] it directly// to some pipe, or by //placing it to a track// and having the track refer to some pipe. Besides the global destination pipes, we can use internal pipes to form busses or subgroups, either on a global (session) level, or by using the processing pipe within a [[virtual clip|VirtualClip]], which can be placed freely within the sequence(s). Normally, pipes just gather and mix data, but of course any pipe can have an attached effect chain.
&rarr; [[more on Tracks and Pipes within the Sequence|TrackPipeSequence]]

!!!default configuration
While all these possibilities may seem daunting, there is a simple default configuration loaded into any pristine new session:
It will contain a global video and audio out pipe, just one timeline holding a single sequence with a single track; this track will be configured with a fading device, to send any video and audio data encountered on enclosed objects to the global (master) pipes. So, by adding a clip with a simple absolute placement to this track and to some time position, the clip gets connected and rendered, after [[(re)building|PlanningBuildFixture]] the [[Fixture]] and passing the result to the [[Builder]] &mdash; and using the resulting render nodes network (Render Engine).

&rarr; [[anatomy of the high-level model|HighLevelModel]]
&rarr; considerations regarding [[Tracks and Pipes within the session|TrackPipeSequence]]
&rarr; see [[Relation of Project, Timelines and Sequences|TimelineSequences]]
Within Lumiera's Proc-Layer, there are some implementation facilities and subsystems needing more specialised access to implementation services provided by the session. Thus, besides the public SessionInterface and the [[lifecycle and state management API|SessionManager]], there are some additional service interfaces exposed by the session through a special access mechanism. This mechanism needs to be special in order to assure clean transactional behaviour when the session is opened, closed, cleared or loaded. Of course, there is the additional requirement to avoid direct dependencies of the mentioned Proc internals on session implementation details.

!Accessing session services
For each of these services, there is an access interface, usually through a class with only static methods. Basically this means access //by name.//
On the //implementation side//&nbsp; of this access interface class (i.e. within a {{{*.cpp}}} file separate from the client code), there is a (down-casting) access through the top-level session-~PImpl pointer, allowing functions to be invoked on the ~SessionServices instance. Actually, this ~SessionServices instance is configured (statically) to stack up implementations for all the exposed service interfaces on top of the basic ~SessionImpl class. Thus, each of the individual service implementations is able to use the basic ~SessionImpl (because it inherits from it), and the implementation of the access functions (to the session service we're discussing here) can use this forwarding mechanism to reach the actual implementation basically through one-liners. The upside of this (admittedly convoluted) technique is that at runtime we get only a single indirection, which moreover goes through the top-level session-~PImpl. The downside is that, due to the separation into {{{*.h}}} and {{{*.cpp}}} files, we can't use any specifically typed generic operations, which forces us to use type erasure in case we need such (an example being the content discovery queries utilised by all high-level model objects).
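Read as code, the stacking could look roughly like the following sketch (hypothetical class names, heavily simplified compared to the real ~SessionServices):
{{{
// Sketch: service implementations stacked as template layers on top of the basic session implementation
class SessionImplBase                             // stand-in for the basic SessionImpl
  {
  public:
    int countContents()  { return 42; }           // some basic facility used by the services
  };

template<class BASE>
class ServiceDefaults : public BASE               // implements the "defaults" service interface
  {
  public:
    int defaultFrameRate()  { return 25; }
  };

template<class BASE>
class ServiceContentQuery : public BASE           // implements the content discovery service
  {
  public:
    int discover()  { return BASE::countContents(); }
  };

// the compound sitting behind the session PImpl; adding a service means adding one more layer
using SessionServices = ServiceContentQuery<ServiceDefaults<SessionImplBase>>;

// the access function within the *.cpp file of a service facade down-casts the PImpl to
// SessionServices and forwards the call -- a one-liner, and a single indirection at runtime.
}}}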
The frontside interface of the session allows querying for contained objects; it is used to discover the structure and contents of the currently opened session/project. The access point is the public API of the Session class, which, besides exposing those queries, also provides functionality for adding and removing session contents.

!discovering structure
The session can be seen as an agglomeration of nested and typed containers.
Thus, at any point, we can explore the structure by asking for //contained objects of a specific type.// For example, at top level, it may be of interest to enumerate the [[timelines within this session|Timeline]] and the [[sequences|Sequence]]. And in turn, on a given Sequence, it would be of interest to explore the tracks, and maybe also to iterate over all clips within this sequence.
So, clearly, there are two flavours of such a contents exploration query: it could either be issued as a dedicated member function on the public API of the respective container object, e.g. {{{Track::getClips()}}} &mdash; or it could be exposed as a generic query function, relying rather on implicit knowledge of the //current location.//

!problem of context and access path
The (planned) session structure of Lumiera allows for quite some flexibility, which, of course, comes at a price. Especially, there can be multiple independent top-level timelines, a given sequence can be used simultaneously within multiple timelines and even as virtual media within a [[meta-clip|VirtualClip]], and moreover, properties of any Placement are queried and discovered within the PlacementScope of this object &mdash; consequently the discovered values may depend on //how you look at this object.// More specifically, it depends on the ''access path'' used to discover this object, because this path constitutes the actual scope visible to this object.

To give an example, let's assume a clip within a sequence, where this sequence is both linked into the top-level timeline and used within a meta-clip. (see the drawing &rarr; [[here|PlacementScope]])
In this case, the sequence has a 1:n [[binding|BindingMO]]. A binding is (by definition) also a PlacementScope, and, incidentally, in the case of binding the sequence into a timeline, the binding also translates //logical// output designations into global pipes found within the timeline, while otherwise they get mapped onto "channels" of the virtual media used by the virtual clip. Thus, the absolute time position as well as the output connection of a given clip within this sequence //depends on how we look at this clip.// Does this clip appear as part of the global timeline, or did we discover it as contained within the meta-clip? &rarr; see [[discussion of this scope-binding problem|BindingScopeProblem]]

!!solution requirements
The baseline of any solution to this problem is clear: at the point where the query is issued, context information is necessary; this context yields an access path from top level down to the object to be queried, and this access path constitutes the effective scope this object can utilise for resolving the query.

!!introducing a QueryFocus
A secondary goal of the design here is to ease the use of the session query interface. Thus the proposal is to treat this context and access path as part of the current state. To do so, we can introduce a QueryFocus following the queries and remembering the access path; this focus should be maintained mostly automatically. It is organised stack-like, so sub-queries can run without affecting the current focus; such a temporary sub-focus is managed automatically by a scoped local (RAII) object. The focus follows the issued queries and re-binds as necessary.

!!using QueryFocus as generic query interface
Solving the problem this way has the nice side effect that we get a quite natural location for an unspecific query interface: just let the current QueryFocus expose the basic set of query API functions. Such a generic query interface can be seen as a complement to the query functions exposed on specific objects ("get me the clips within this track") according to the structure. A generic interface especially allows writing simple diagnostics and discovery code with only a weak link to the actual session structure.

!Implementation strategy
The solution is being built starting from the generic part, as the actual session structure isn't hard coded into the session implementation, but rather created by convention.
The query API on specific objects (i.e. Session, Timeline, Sequence, Track and Clip) is then easily coded up on top, mostly with inlined one-liners of the kind
{{{
ITER getClips() { return QueryFocus::push(this).query<Clip>(); } 
}}}
To make this work, QueryFocus exposes a static API (and especially the focus stack is a singleton). The //current// QueryFocus object can easily be re-bound to another point of interest (and adjusts the contained path info automatically), and the {{{push()}}}-function creates a local scoped object (which pops automatically).
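The RAII behaviour of {{{push()}}} can be illustrated by the following self-contained sketch; it only demonstrates the stack discipline and is //not// the actual QueryFocus implementation:
{{{
// Sketch: pushing yields a scoped object; leaving the scope restores the previous focus automatically
#include <cassert>
#include <stack>
#include <string>
#include <utility>

class FocusStack
  {
    static std::stack<std::string>& stack()  { static std::stack<std::string> s; return s; }
    
  public:
    struct Scoped                                 // returned by push(), pops in its destructor
      {
        Scoped (std::string scope)  { stack().push (std::move (scope)); }
       ~Scoped ()                   { stack().pop(); }
      };
    
    static Scoped push (std::string scope)  { return Scoped{std::move (scope)}; }
    static std::string current()            { return stack().empty()? "" : stack().top(); }
  };

int main()
  {
    auto top = FocusStack::push ("timeline-1");
    {
      auto sub = FocusStack::push ("clip-42");    // temporary sub-focus for a nested query
      assert (FocusStack::current() == "clip-42");
    }                                             // sub-focus popped automatically here
    assert (FocusStack::current() == "timeline-1");
  }
}}}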

And last but not least: the difficult part of this whole concept is encapsulated and ''can be left out for now''. Because, according to Lumiera's [[roadmap|http://issues.lumiera.org/roadmap]], meta-clips are to be postponed until well into beta! Thus, we can start with a trivial (no-op) implementation, but immediately gain the benefit of making the relevant parts of the session implementation aware of the problem.

{{red{WIP ... draft}}}
A small (in terms of storage) and specifically configured StateProxy object which is created on the stack {{red{Really on the stack? 9/11}}} for each individual {{{pull()}}} call. It is part of the invocation state of such a call and participates in the buffer management. Thus, in a calldown sequence of {{{pull()}}} calls we get a corresponding sequence of "parent" states. At each level, the &rarr; WiringDescriptor of the respective node defines a Strategy how the call is passed on.
An Object representing a //Render Process// and containing associated state information.
* it is created in the Player subsystem while initiating the RenderProcess
* it is passed on to the generated Render Engine, which in turn passes it down to the individual Processors
* moreover, it contains methods to communicate with other state relevant parts of the system, thereby shielding the rendering code from any complexities of state synchronisation and management if necessary. (thus the name Proxy)
* in a future version, it may also encapsulate the communication in a distributed render farm
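As a purely hypothetical illustration of the per-call invocation state and the resulting chain of "parent" states described above (the real invocation state is considerably richer and also handles buffer management):
{{{
// Sketch: each pull() creates a small per-call state object on the stack, linked to the caller's state
struct ProcessState { /* overall render process information, provided by the player */ };

struct InvocationState
  {
    ProcessState*    process;                     // the render process this call belongs to
    InvocationState* parent;                      // state of the calling node, nullptr at top level
  };

struct Node
  {
    Node* predecessor = nullptr;
    
    int pull (InvocationState& caller)
      {
        InvocationState local{caller.process, &caller};       // lives on the stack for this call only
        int input = predecessor? predecessor->pull (local) : 0;
        return input + 1;                                     // stand-in for the actual processing step
      }
  };
}}}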
Conversion of a media stream into a stream of another type is done by a processor module (plugin). The problem of finding such a module is closely related to the StreamType and especially to [[problems of querying|StreamTypeQuery]] for such. (The builder uses a special Facade, the ConManager, to access this functionality.) There can be different kinds of conversions, and the existence or non-existence of such a conversion can influence the stream type classification.
* different //kinds of media// can be ''transformed'' into each other
* stream types //subsumed// by a given prototype should be ''losslessly convertible'' and thus can be considered //equivalent.//
* besides, between different stream //implementation types,// there can be a ''rendering'' (lossy conversion) &mdash; or no conversion at all.
The stream Prototype is part of the specification of a media stream's type. It is a semantic (or problem-domain oriented) concept and should be distinguished from the actual implementation type of the media stream. The latter is provided by a [[library implementation|StreamTypeImplFacade]]. While there are some common predefined prototypes, mostly they are defined within the concrete [[Session]] according to the user's needs.

Prototypes form an open (extensible) collection, though each prototype belongs to a specific media kind ({{{VIDEO, IMAGE, AUDIO, MIDI,...}}}).
The ''distinguishing property'' of a stream prototype is that any [[Pipe]] can process //streams of a specific prototype only.// Thus, two streams with different prototypes can be considered "something quite different" from the user's point of view, while two streams belonging to the same prototype can be considered equivalent (and will be converted automatically when their implementation types differ). Note this definition is //deliberately fuzzy,// because it depends on the actual situation of the project in question.

Consequently, as we can't get away with a fixed Enum of all stream prototypes, the implementation must rely on a query interface. The intention is to provide a basic set of rules for deciding queries about the most common stream prototypes; besides, a specific session may inject additional rules or utilise a completely different knowledge base. Thus, for a given StreamTypeDescriptor specifying a prototype (see the interface sketch following this list)
* we can get a [[default|DefaultsManagement]] implementation type
* we can get a default prototype to a given implementation type by a similar query
* we can query whether an implementation type in question can be //subsumed// by this prototype
* we can determine if another prototype is //convertible//
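Viewed as an interface, these operations might be sketched as follows; the signatures are illustrative assumptions only, since the real behaviour is delegated to the query/rules system:
{{{
// Sketch of the query operations a stream prototype needs to support
class ImplType;                                            // opaque stream implementation type

class Prototype
  {
  public:
    virtual ~Prototype()  = default;
    
    virtual ImplType const& defaultImplType()                      const = 0;  // resolved via the defaults system
    virtual bool            canSubsume    (ImplType const& type)   const = 0;  // does this impl type fall under the prototype?
    virtual bool            convertibleTo (Prototype const& other) const = 0;  // lossy rendering possible?
  };

// the inverse direction -- finding a default prototype for a given implementation type --
// is issued as a query against the rules system rather than as a member function here.
}}}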

!!Examples
In practice, several things might be considered "quite different" and thus be distinguished by prototype: NTSC and PAL video, video versus digitized film, HD video versus SD video, 3D versus flat video, cinemascope versus 4:3, stereophonic versus monaural, periphonic versus panoramic sound, Ambisonics versus 5.1, data-reduced ~MP3 versus full-quality linear PCM...
//how to classify and describe media streams//
Media data is supposed to appear structured as stream(s) over time. While there may be an inherent internal structuring, from a given perspective ''any stream is a unit and homogeneous''. In the context of digital media data processing, streams are always ''quantized'', which means they appear as a temporal sequence of data chunks called ''frames''.

! Terminology
* __Media__ is comprised of a set of streams or channels
* __Stream__ denotes a homogeneous flow of media data of a single kind
* __Channel__ denotes an elementary stream, which can't be further separated in the given context
* all of these are delivered and processed in a smallest unit called __Frame__. Each frame corresponds to a //time interval.//
* a __Buffer__ is a data structure capable of holding a Frame of media data.
* the __~Stream-Type__ describes the kind of media data contained in the stream

! Problem of Stream Type Description
Media types vary widely and exhibit a large number of different properties, which can't be subsumed under a single classification scheme. On the other hand, we want to deal with media objects in a uniform and generic manner, because generally all kinds of media behave somewhat similarly. But the twist is that these similarities disappear when describing media with logical precision. Thus we are forced into specialised handling and operations for each kind of media, while we want to implement a generic handling concept.

! Lumiera Stream Type handling
!! Identification
A stream type is denoted by a StreamTypeID, which is an identifier acting as a unique key for accessing information related to the stream type. It corresponds to a StreamTypeDescriptor record, containing a &mdash; //not necessarily complete// &mdash; specification of the stream type, according to the classification detailed below.

!! Classification
Within the Proc-Layer, media streams are treated largely in a similar manner. But, looking closer, not everything can be connected together, while on the other hand there may be some classes of media streams which can be considered //equivalent// in most respects. Thus separating the distinction between various media streams into several levels seems reasonable...
* Each media belongs to a fundamental ''kind'' of media, examples being __Video__, __Image__, __Audio__, __MIDI__, __Text__,... <br/>Media streams of different kind can be considered somewhat "completely separate" &mdash; just the handling of each of those media kinds follows a common //generic pattern// augmented with specialisations. Basically, it is //impossible to connect// media streams of different kind. Under some circumstances there may be the possibility of a //transformation// though. For example, a still image can be incorporated into video, sound may be visualized, MIDI may control a sound synthesizer.
* Below the level of distinct kinds of media streams, within every kind we have an open-ended collection of ''prototypes'', which, when compared directly, may each be quite distinct and different, but which may be //rendered//&nbsp; into each other. For example, we have stereoscopic (3D) video and we have the common flat video lacking depth information, we have several spatial audio systems (Ambisonics, Wave Field Synthesis), we have panorama simulating sound systems (5.1, 7.1,...), we have common stereophonic and monaural audio. It is considered important to retain some openness and configurability within this level of distinction, which means this classification should better be done by rules rather than by setting up a fixed property table. For example, it may be desirable for some production to distinguish between digitized film and video NTSC and PAL, while in another production everything is just "video" and can be converted automatically. The most noticeable consequence of such a distinction is that any Bus or [[Pipe]] is always limited to a media stream of a single prototype. (&rarr; [[more|StreamPrototype]])
* Besides the distinction by prototypes, there are the various media ''implementation types''. This classification is not necessarily hierarchically related to the prototype classification, while in practice commonly there will be some sort of dependency. For example, both stereophonic and monaural audio may be implemented as 96kHz 24bit PCM with just a different number of channel streams, but we may as well get a dedicated stereo audio stream with two channels multiplexed into a single stream. For dealing with media streams of various implementation types, we need //library// routines, which also yield a //type classification system.// Most notably, for raw sound and video data we use the [[GAVL]] library, which defines a classification system for buffers and streams.
* Besides the type classification detailed thus far, we introduce an ''intention tag''. This is a synthetic classification owned by Lumiera and used for internal wiring decisions. Currently (8/08), we recognize the following intention tags: __Source__, __Raw__, __Intermediary__ and __Target__. Only media streams tagged as __Raw__ can be processed.

!! Media handling requirements involving stream type classification
* set up a buffer and be able to create/retrieve frames of media data.
* determine if a given media data source and sink can be connected, and how.
* determine and enumerate the internal structure of a stream.
* discover processing facilities
&rarr; see StreamTypeUse
&rarr; [[querying types|StreamTypeQuery]]
A description and classification record usable to find out about the properties of a media stream. The stream type descriptor can be accessed using a unique StreamTypeID. The information contained in this descriptor record can intentionally be //incomplete,// in which case the descriptor captures a class of matching media stream types. The following information is maintained (see also the sketch after this list):
* fundamental ''kind'' of media: {{{VIDEO, IMAGE, AUDIO, MIDI,...}}}
* stream ''prototype'': this is the abstract high level media type, like NTSC, PAL, Film, 3D, Ambisonics, 5.1, monaural,...
* stream ''implementation type'', accessible by virtue of a StreamTypeImplFacade
* the ''intended usage category'' of this stream: {{{SOURCE, RAW, INTERMEDIARY, TARGET}}}.
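A rough sketch of such a record (illustrative only &mdash; field types and names are assumptions, the real record is keyed by a StreamTypeID):
{{{
enum class MediaKind { VIDEO, IMAGE, AUDIO, MIDI };
enum class Usage     { SOURCE, RAW, INTERMEDIARY, TARGET };

class Prototype;             // semantic classification, e.g. "stereophonic"
class ImplFacade;            // access to the library-level implementation type

struct StreamTypeDescriptor
  {
    MediaKind         kind;
    Prototype const*  prototype;      // stands for a class of equivalent implementation types
    ImplFacade const* implType;       // may be missing or only a constraint -> partial specification
    Usage             intention;
  };
}}}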
&rarr; see &raquo;[[Stream Type|StreamType]]&laquo; detailed specification
&rarr; notes about [[using stream types|StreamTypeUse]]
&rarr; more [[about prototypes|StreamPrototype]]
This ID is a symbolic key linked to a StreamTypeDescriptor. The predicate {{{stream(ID)}}} specifies a media stream with the StreamType as detailed by the corresponding descriptor (which may contain complete or partial data defining the type).
A special kind of media stream [[implementation type|StreamTypeImplFacade]] which is not fully specified. The assumption is that there //actually is// a concrete implementation type, while we only care for some part or detail of this implementation to exhibit a specific property. For example, using a type constraint we can express the requirement that the actual implementation of a video stream be based on ~RGB-float, or enforce a fixed frame size in pixels.

An implementation constraint can //stand in// for a completely specified implementation type (meaning it's a sub-interface of the latter). But actually using it in this way may cause a call to the [[defaults manager|DefaultsImplementation]] to fill in any missing information. An example would be to call {{{createFrame()}}} on the type constraint object, which means being able to allocate memory to hold a data frame with properties in compliance with the given type constraint. Of course, then we need to know all the properties of this stream type, which is where the defaults manager is queried. This allows session customisation to kick in, but may fail under certain circumstances.
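A toy sketch of this stand-in behaviour, under the assumption that open properties are filled in from a session default (all types here are made up for illustration):
{{{
#include <cstddef>
#include <optional>
#include <vector>

struct VideoFormat { int width = 0, height = 0; bool rgbFloat = false; };

struct VideoConstraint                            // partial specification: only some properties are pinned down
  {
    std::optional<int>  width, height;
    std::optional<bool> rgbFloat;
    
    VideoFormat resolve (VideoFormat const& dflt) const       // defaults fill in whatever was left open
      {
        return VideoFormat{ width.value_or (dflt.width)
                          , height.value_or (dflt.height)
                          , rgbFloat.value_or (dflt.rgbFloat) };
      }
  };

std::vector<unsigned char> createFrame (VideoConstraint const& c, VideoFormat const& sessionDefault)
  {
    VideoFormat f = c.resolve (sessionDefault);               // would fail if no suitable default is configured
    std::size_t bytesPerPixel = f.rgbFloat? 12 : 3;           // 3 channels, float vs. 8 bit
    return std::vector<unsigned char> (std::size_t(f.width) * f.height * bytesPerPixel);
  }
}}}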
Common interface for dealing with the implementation of media stream data. From a high-level perspective, the various kinds of media ({{{VIDEO, IMAGE, AUDIO, MIDI,...}}}) exhibit similar behaviour, while on the implementation level not even the common classification can be settled into a completely general and useful scheme. Thus, we need separate library implementations for dealing with the various sorts of media data, all providing at least a set of basic operations (roughly sketched below):
* set up a buffer
* create or accept a frame
* get a tag describing the precise implementation type
* ...?
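Sketched as an interface, these operations might look like this (hypothetical signatures; the actual facade wraps a concrete library such as [[GAVL]]):
{{{
// Sketch of the common operations a stream implementation facade needs to expose
#include <cstddef>
#include <string>

class StreamImplFacade
  {
  public:
    virtual ~StreamImplFacade()  = default;
    
    virtual std::size_t frameSize()              const = 0;   // needed to set up a buffer
    virtual void        initFrame (void* buffer) const = 0;   // create / accept a frame within that buffer
    virtual std::string implTypeTag()            const = 0;   // precise implementation type, e.g. a GAVL format tag
  };
}}}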

&rarr; see also &raquo;[[Stream Type|StreamType]]&laquo;
//Note:// there is a sort-of "degraded" variant just requiring some &rarr; [[implementation constraint|StreamTypeImplConstraint]] to hold
Querying for media stream type information comes in various flavours
* you may want to find a structural object (pipe, output, processing pattern) associated with / able to deal with a certain stream type
* you may need a StreamTypeDescriptor for an existing stream given as implementation data
* you may want to build or complete type information from partial specification.
Mostly, those queries involve the ConfigRules system in some way or the other. The [[prototype-|StreamPrototype]] and [[implementation type|StreamTypeImplFacade]]-interfaces themselves are mostly a facade for issuing appropriate queries. Some objects (especially [[pipes|Pipe]]) are tied to a certain stream type and thus store a direct link to type information. Others are just associated with a type by virtue of the DefaultsManagement.

The //problem// with this pivotal role of the config rules is that &mdash; from a design perspective &mdash; not much can be said specifically, besides //"you may be able to find out...", "...depends on the defaults and the session configuration".// This way, a good deal of crucial behaviour is pushed out of the core implementation (and this is quite intentional). What can be done regarding the design of the core is mostly to set up a framework for the rules and determine possible ''query situations''.

!the kind of media
The information about the fundamental media kind (video, audio, text, MIDI,...) is associated with the prototype, for technical reasons. Prototype information is mandatory for each StreamType, and the impl facade provides a query function (because some implementation libraries, e.g. [[GAVL]], support multiple kinds of media).

!query for a prototype
__Situation__: given an implementation type, find a prototype to subsume it.
Required only for building a complete ~StreamType which isn't known at this point.
The general case of this query is //quite hairy,// because the solution is not necessarily clear and unique. And, worse still, it is related to the semantics, requiring semantic information and tagging to be maintained somewhere. For example, while the computer can't "know" what stereophonic audio is (only a human can, by listening to a stereophonic playback and deciding if it actually does convey a spatial sound image), in most cases we can overcome this problem by using the //heuristic rule// of assuming the prototype "stereophonic" when given two identically typed audio channels. This example also shows the necessity of ordering heuristic rules to be able to pick a best fit.

We can inject two different kinds of fallback solutions for this kind of query:
* we can always build a "catch-all" prototype just based on the kind of media (e.g. {{{prototype(video)}}}). This should match with lowest priority
* we can search for existing ~StreamTypes with the same impl type, or an impl type which is //equivalent convertible// (see &rarr; StreamConversion). 
The latter case can yield multiple solutions, which isn't a problem, because the match is limited to classes of equivalent stream implementations, which would be subsumed under the same prototype anyway. Even if the registry holds different prototypes linked to the same implementation type, they would be convertible and thus could //stand in// for one another. Together this results in the following implementation (sketched in code after this list):
# try to get a direct match to an existing impl type which has an associated (complete) ~StreamType, thus bypassing the ConfigRules system altogether
# run a {{{Query<Prototype>}}} for the given implementation type
# do the search within equivalence class as described above
# fall back to the media kind.
{{red{TODO: how to deal with the problem of hijacking a prototype?}}} &rarr; see [[here|StreamTypeUse]]
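The resulting fallback cascade can be sketched as follows; the helper functions are hypothetical and only the //order of attempts// is the point:
{{{
// Sketch: resolution order when looking up a Prototype for a given implementation type
#include <optional>

struct ImplType {};
struct Prototype {};

std::optional<Prototype> lookupRegisteredType (ImplType const&);   // 1. direct match, bypassing ConfigRules
std::optional<Prototype> runRulesQuery        (ImplType const&);   // 2. Query<Prototype> against the rules
std::optional<Prototype> searchEquivalents    (ImplType const&);   // 3. search within the convertibility class
Prototype                catchAllForMediaKind (ImplType const&);   // 4. prototype(video), prototype(audio), ...

Prototype resolvePrototype (ImplType const& type)
  {
    if (auto p = lookupRegisteredType (type)) return *p;
    if (auto p = runRulesQuery (type))        return *p;
    if (auto p = searchEquivalents (type))    return *p;
    return catchAllForMediaKind (type);
  }
}}}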

!query for an implementation
__Situation 1__: given a partially specified ~StreamType (just a [[constraint|StreamTypeImplConstraint]])
__Situation 2__: find an implementation for a given prototype (without any further impl type guidelines)
Both cases have to go through the [[defaults manager|DefaultsManagement]] in some way, in order to give any default configuration a chance to kick in. This is //one of the most important use cases// of the defaults system: the ability to configure a default format for all streams with a certain semantic classification. {{{prototype(video)}}} by default is RGBA 24bit non-interlaced, for example.
But after having queried the defaults system, there remains the problem of building a new solution (which will then automatically become the default for this case). To be more precise: invoking the defaults system (as implemented in Lumiera) means first searching through existing objects encountered as defaults, and then issuing a general query with the capabilities in question. This general query in turn is conducted by the query type handler and usually consists of first searching existing objects and then creating a new object to match the capabilities. But, as said, the details depend on the type (and are defined by the query handler installed for this type). Translated to the problem in question here, this means //we have to define the basic operations from which a type query handler can be built.// Thus, to start with, it's completely sufficient to wire a call to the DefaultsManagement and assume the current session configuration contains some rules to cover it &mdash; plus being prepared for the query to fail (throw, that is).

Later on this could be augmented by providing some search mechanisms:
* search through existing stream type implementations (or a suitable pre filtered selection) and narrow down the possible result(s) by using the constraint as a filter. Obviously this requires support by the MediaImplLib facade for the implementation in question. (This covers __Situation 1__)
* relate a prototype in question to the other existing prototypes and use the convertibility / subsumption as a filter mechanism. Finally pick an existing impl type which is linked to one of the prototypes found thus far.
Essentially, we need a search mechanism for impl types and prototypes. This search mechanism is best defined by rules itself, but needs some primitive operations on types, like enumerating all registered types, filtering those selections and matching against a constraint.

!query for an (complete) StreamType
All situations discussed thus far can also occur wrapped into and triggered by a query for a complete type. Depending on what part is known, the missing bits will be queried. 
Independent from these is __another Situation__ where we query for a type ''by ID''.
* a simple symbolic ID can be found by searching through all existing stream types (Operation supported by the type registry within STypeManager)
* a special ''classifying'' ID can be parsed into its components (media kind, prototype, impl type), resulting in sub-searches for these.
{{red{not sure if we want to support queries by symbolic ID}}} ...the problem is the impl type, because the library would probably need to support describing any implementation type by a string. Seemingly GAVL does, but can we require that of every lib?
Questions regarding the use of StreamType within the Proc-Layer.
* what is the relation between Buffer and Frame?
* how to get the required size of a Buffer?
* who does buffer allocations and how?

Mostly, stream types are used for querying, either to decide if they can be connected, or to find usable processing modules.
Even building a stream type from partial information involves some sort of query.
&rarr; more on [[media stream type queries|StreamTypeQuery]]

!creating stream types
Seemingly, stream types are created based on an already existing media stream (or a Frame of media data?). {{red{really?}}}
The other use case seems to be that of an //incomplete// stream type based on a [[Prototype|StreamPrototype]]

!Prototype
According to my current understanding, a prototype is merely a classification entity. But then &mdash; how to bootstrap a Prototype?
And how to do the classification of an existing implementation type.

Besides, there is the problem of //hijacking a prototype:// when a specific implementation type gets tied to a rather generic prototype, like {{{prototype(video)}}}, how can we comply with the rule of prototypes subsuming a class of equivalent implementations?

!Defaults and partial specification
A StreamType need not be defined completely. It is sufficient to specify the media kind and the Prototype. The implementation type may be just given as a constraint, thus defining some properties and leaving out others. When creating a frame buffer based upon such an //incomplete type,// [[defaults|DefaultsManagement]] are queried to fill in the missing parts.
Constraints are objects provided by the Lumiera core, but specialized to the internals of the actual implementation library.
For example there might be a constraint implementation to force a specific {{{gavl_pixelformat_t}}}.

!the ID problem
Basically I'd prefer the ~IDs to be real identifiers, so they can be used directly within rules. At least the Prototypes //can// have such a textual identifier. But the implementation type is problematic, and consequently the ID of the StreamType as well, because the actual implementation should not be nailed down to a fixed set of possibilities. And, generally, we can't expect an implementation library to yield textual identifiers for each implementation type. //Is this really a problem? {{red{what are the use cases?}}}//
As far as I can see, in most cases this is no problem, as the type can be retrieved or derived from an existing media object. Thus, the only problematic case is when we need to persist the type information without being able to guarantee the persistence of the media object this type was derived from. For example this might be a problem when working with proxy media. But at least we should be able to create a constraint (partial type specification) to cover the important part of the type information, i.e. the part which is needed to re-create the model even when the original media isn't there any longer.
Thus, //constraints may be viewed as type constructing functors.//

--------------
!use cases
* pulling data from a media file
* connecting pipes and similar wiring problems
* describing the properties of a processor plugin

!! pulling data from a media file
To open the file, we need //type discovery code,// resulting in a handle to some library module for accessing the contents, which is in compliance with the Lumiera application. Thus, we can determine the possible return values of this type discovery code and provide code which wires up a corresponding StreamTypeImplFacade. Further, the {{{control::STypeManager}}} has the ability to build a complete or partial StreamType from
* an ~ImplFacade
* a Prototype
* maybe even from some generic textual ~IDs?
Together this allows associating a StreamType with each media source, and thus deriving the Prototype governing the immediately connected [[Pipe]].
A pipe can, by design, handle data of only one Prototype.

!! wiring problems
When deciding if a connection can be made, we can build up the type information starting out from the source (this requires some work, but it's //possible,// generally speaking). Thus, we can always get an ~ImplType for the "lower end" of the connection, and at least a Prototype for the "output side" &mdash; which should be enough to use the query functions provided by the stream type interfaces.

!! describing properties
{{red{currently difficult to define}}} as of 9/2008, because the property description of plugins is not planned yet.
My idea was to use [[type implementation constraints|StreamTypeImplConstraint]] for this; they are a special kind of ~ImplType.
This design places great emphasis on separating all those components and subsystems which are considered not to have a //natural link// between their underlying concepts. This often means putting some additional constraints on the implementation, so basically we need to rely on the actual implementation to live up to this goal. In many cases it may seem more natural to "just access the necessary information". But in the long run this coupling of not directly related components makes the whole codebase monolithic and introduces lots of //accidental complexity.//

Instead, we should try to connect the various subsystems only via Interfaces and &mdash; instead of just using some information &mdash; rather use some service located on an Interface to query other components for this information. The best approach, of course, is always to avoid the dependency altogether.

!Examples
* There is a separation between the __high level [[Session]] view__ and the [[Fixture]]: the latter only accesses the MObjects and the Placement Interfaces.
* the same holds true for the Builder: it just uses the same Interfaces. The actual coupling is done rather //by type//, i.e. the Builder relies on several types of MObjects to exist and treats them via overloaded methods. It doesn't rely on an actual object structure layout in the session, besides the requirement of having a [[Playlist]]
* the Builder itself is a separation layer. Neither do the Objects in the session directly access [[Render Nodes|ProcNode]], nor do the latter call back into the session. Both connections seem to be necessary at first sight, but both can be avoided by using the Builder Pattern
* another separation exists between the Render Engine and the individual Nodes: The Render Engine doesn't need to know the details of the data types processed by the Nodes. It relies on the Builder having done the correct connections and just pulls out the calculated results. If there needs to be additional control information to be passed, then I would prefer to do a direct wiring of separate control connections to specialized components, which in turn could instruct the controller to change the rendering process.
* to shield the rendering code of all complexities of thread communication and synchronization, we use the StateProxy
Structural Assets are intended mainly for internal use, but the user should be able to see and query them. They are not "loaded" or "created" directly; rather, they //leap into existence//&nbsp; by creating or extending some other structures in the session, hence the name. Some of the structural Asset parametrisation can be modified to exert control on some aspects of the Proc-Layer's (default) behaviour.
* [[Processing Patterns|ProcPatt]] encode information how to set up some parts of the render network to be created automatically: for example, when building a clip, we use the processing pattern how to decode and pre-process the actual media data.
* [[Tracks|Track]] are one of the dimensions used for organizing the session data. They serve as an Anchor to attach parametrisation of output pipe, overlay mode etc. By [[placing|Placement]] to a track, a media object inherits placement properties from this track.
* [[Pipes|Pipe]] form &mdash; at least as visible to the user &mdash; the basic building block of the render network, because the latter appears to be a collection of interconnected processing pipelines. (this is the //outward view; // in fact the render network consists of [[nodes|ProcNode]] and is [[built|Builder]] from the Pipes, clips, effects...)[>img[Asset Classess|uml/fig131205.png]]
* [[Sequence]] assets act as a façade to the fundamental compound building blocks within the model, a sequence being a collection of clips placed onto a tree of tracks. Sequences, as well as the top-level tracks enclosed will be created automatically on demand. Of course you may create them deliberately. Without binding it to a timeline or meta-clip, a sequence remains invisible.
* [[Timeline]] assets are the top-level structures to access the model; similar to the sequences, they act as a façade to relevant parts of the model (BindingMO) and will be created on demand, alongside a new session if necessary, bound to the new timeline. Likewise, they can be referred to by their name-ID
* [[Viewer|ViewerAsset]] assets correspond to the available viewer elements in the GUI. By [[connecting to a viewer|ViewConnection]], session elements derive a concrete (physical) output and gain the ability to be [[played|PlayService]].





!querying
Structural assets can be queried by specifying the specific type (Pipe, Track, ProcPatt) and a query goal. For example, you can {{{Query<Pipe> ("stream(mpeg)")}}}, yielding the first pipe found which declares to have stream type "mpeg". The access point for this querying facility is on the ~StructFactory, which (as usual within Lumiera) can be invoked as static member {{{Struct::retrieve(Query<TY> ...}}}. Given such a query, first an attempt is made to satisfy it by retrieving an existing object of type TY (which might bind variables as a side effect). On failure, a new structural asset of the requested type will be created to satisfy the given goal. In case you want to bypass the resolution step and create a new asset right away, use {{{Struct::retrieve.newInstance<TY>}}}

{{red{Note:}}} in the current implementation no real resolution engine is used (as of 2/2010); rather, we're just able to retrieve a hard-wired answer or create a new asset, simply pattern matching on parts of the query.
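Taken together, typical usage might read like the following fragment; this is only a sketch based on the description above &mdash; the exact return type and signatures are not verified here:
{{{
// retrieve an existing pipe declared for "mpeg" streams, or have one created to satisfy the query
auto mpegBus = Struct::retrieve (Query<Pipe> ("stream(mpeg)"));

// bypass the resolution step and create a fresh structural asset right away
auto newBus  = Struct::retrieve.newInstance<Pipe> ("stream(mpeg)");
}}}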

&rarr; [[Assets in general|Asset]]
Viewers for displaying output are connected to the timelines or other elements to be displayed through a ViewConnection. This creates an additional mapping step (&rarr; BindingMO), similar to the attachment of a sequence or virtual clip to the [[global busses|GlobalBus]]. Yet the viewers don't provide as elaborate a mixing-desk-like view as the timelines do &mdash; rather there is only a simplified version of the global busses, exposed to the user as the ''switch board''.

The corresponding GUI control will allow choosing among possible data sources (esp. ProbePoint) for any given StreamType (more precisely, for every //prototype,// e.g. image, sound). Moreover, there are some (limited) overlay capabilities (split image, mix sound). By default, this additional control will likely be hidden, as there is a default 1:1 association between the master busses of the timeline and the usable output destinations.

!Model and Interfaces
Regarding the internal wiring of components, the Viewer with a switch board behaves similarly to a timeline: At the output side, there are a small number of standard OutputDesignation elements for the selection of primary kinds of media handled within this session (typically Video and Audio). But contrary to a timeline, a Viewer element also exposes an OutputManager interface, backed by a specific OutputMapping element that internally connects to the main OutputManager for the whole application. This way, a Viewer has the capability to get a PlayController.
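A minimal structural sketch may help to visualise this wiring. Everything below is hypothetical shorthand: only OutputDesignation, OutputMapping, OutputManager and PlayController are taken from the description above, while the member names and signatures are assumptions, not the actual Lumiera interfaces.
{{{
// hypothetical sketch -- not the real Lumiera API
#include <vector>

struct PlayController { /* transport state: play, pause, scrub ... */ };

struct OutputDesignation {            // one standard output slot
    const char* streamPrototype;      // e.g. "image", "sound"
};

struct OutputManager {                // facet exposed by the Viewer and by the application
    virtual ~OutputManager() = default;
    virtual PlayController& getController() = 0;
};

struct OutputMapping {                // the "switch board": resolves to the app-wide manager
    OutputManager* applicationOutput;
    PlayController& resolve() { return applicationOutput->getController(); }
};

struct Viewer : OutputManager {
    std::vector<OutputDesignation> designations { {"image"}, {"sound"} };
    OutputMapping switchBoard;

    explicit Viewer(OutputManager& app) : switchBoard{&app} {}

    PlayController& getController() override   // the Viewer can thus obtain a PlayController
    {
        return switchBoard.resolve();
    }
};
}}}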
<<timeline better:true maxDays:55 maxEntries:45>>
/***
|Name|TaskMacroPlugin|
|Author|<<extension TaskMacroPlugin author>>|
|Location|<<extension TaskMacroPlugin source>>|
|License|<<extension TaskMacroPlugin license>>|
|Version|<<extension TaskMacroPlugin versionAndDate>>|
!Description
A set of macros to help you keep track of time estimates for tasks.

Macros defined:
* {{{task}}}: Displays a task description and makes it easy to estimate and track the time spent on the task.
* {{{taskadder}}}: Displays text entry field to simplify the adding of tasks.
* {{{tasksum}}}: Displays a summary of tasks sandwiched between two calls to this macro.
* {{{extension}}}: A simple little macro that displays information about a TiddlyWiki plugin, and that will hopefully someday migrate to the TW core in some form.
Core overrides:
* {{{wikify}}}: when wikifying a tiddler's complete text, adds refresh information so the tiddler will be refreshed when it changes
* {{{config.refreshers}}}: have the built-in refreshers return true; also, add a new refresher ("fullContent") that redisplays a full tiddler whenever it or any nested tiddlers it shows are changed
* {{{refreshElements}}}: now checks the return value from the refresher and only short-circuits the recursion if the refresher returns true
!Plugin Information
***/
//{{{
version.extensions.TaskMacroPlugin = {
	major: 1, minor: 1, revision: 0,
	date: new Date(2006,5-1,13),
	author: "LukeBlanshard",
	source: "http://labwiki.sourceforge.net/#TaskMacroPlugin",
	license: "http://labwiki.sourceforge.net/#CopyrightAndLicense"
}
//}}}
/***
A little macro for pulling out extension info.  Use like {{{<<extension PluginName datum>>}}}, where {{{PluginName}}} is the name you used for {{{version.extensions}}} and {{{datum}}} is either {{{versionAndDate}}} or a property of the extension description object, such as {{{source}}}.
***/
//{{{
config.macros.extension = {
	handler: function( place, macroName, params, wikifier, paramString, tiddler ) {
		var info  = version.extensions[params[0]]
		var datum = params[1]
		switch (params[1]) {
		case 'versionAndDate':
			createTiddlyElement( place, "span", null, null,
				info.major+'.'+info.minor+'.'+info.revision+', '+info.date.formatString('DD MMM YYYY') )
			break;
		default:
			wikify( info[datum], place )
			break;
		}
	}
}
//}}}
/***
!Core Overrides
***/
//{{{
window.wikify_orig_TaskMacroPlugin = window.wikify
window.wikify = function(source,output,highlightRegExp,tiddler)
{
	if ( tiddler && tiddler.text === source )
		addDisplayDependency( output, tiddler.title )
	wikify_orig_TaskMacroPlugin.apply( this, arguments )
}
config.refreshers_orig_TaskMacroPlugin = config.refreshers
config.refreshers = {
	link: function() {
		config.refreshers_orig_TaskMacroPlugin.link.apply( this, arguments )
		return true
	},
	content: function() {
		config.refreshers_orig_TaskMacroPlugin.content.apply( this, arguments )
		return true
	},
	fullContent: function( e, changeList ) {
		var tiddlers = e.refreshTiddlers
		if ( changeList == null || tiddlers == null )
			return false
		for ( var i=0; i < tiddlers.length; ++i )
			if ( changeList.find(tiddlers[i]) != null ) {
				var title = tiddlers[0]
				story.refreshTiddler( title, null, true )
				return true
			}
		return false
	}
}
function refreshElements(root,changeList)
{
	var nodes = root.childNodes;
	for(var c=0; c<nodes.length; c++)
		{
		var e = nodes[c],type;
		if(e.getAttribute)
			type = e.getAttribute("refresh");
		else
			type = null;
		var refresher = config.refreshers[type];
		if ( ! refresher || ! refresher(e, changeList) )
			{
			if(e.hasChildNodes())
				refreshElements(e,changeList);
			}
		}
}
//}}}
/***
!Global Functions
***/
//{{{
// Add the tiddler whose title is given to the list of tiddlers whose
// changing will cause a refresh of the tiddler containing the given element.
function addDisplayDependency( element, title ) {
	while ( element && element.getAttribute ) {
		var idAttr = element.getAttribute("id"), tiddlerAttr = element.getAttribute("tiddler")
		if ( idAttr && tiddlerAttr && idAttr == story.idPrefix+tiddlerAttr ) {
			var list = element.refreshTiddlers
			if ( list == null ) {
				list = [tiddlerAttr]
				element.refreshTiddlers = list
				element.setAttribute( "refresh", "fullContent" )
			}
			list.pushUnique( title )
			return
		}
		element = element.parentNode
	}
}

// Lifted from Story.prototype.focusTiddler: just return the field instead of focusing it.
Story.prototype.findEditField = function( title, field )
{
	var tiddler = document.getElementById(this.idPrefix + title);
	if(tiddler != null)
		{
		var children = tiddler.getElementsByTagName("*")
		var e = null;
		for (var t=0; t<children.length; t++)
			{
			var c = children[t];
			if(c.tagName.toLowerCase() == "input" || c.tagName.toLowerCase() == "textarea")
				{
				if(!e)
					e = c;
				if(c.getAttribute("edit") == field)
					e = c;
				}
			}
		return e
		}
}

// Wraps the given event function in another function that handles the
// event in a standard way.
function wrapEventHandler( otherHandler ) {
	return function(e) {
		if (!e) var e = window.event
		e.cancelBubble = true
		if (e.stopPropagation) e.stopPropagation()
		return otherHandler( e )
	}
}
//}}}
/***
!Task Macro
Usage:
> {{{<<task orig cur spent>>description}}}
All of orig, cur, and spent are optional numbers of days.  The description goes through the end of the line, and is wikified.
***/
//{{{
config.macros.task = {
	NASCENT:	0, // Task not yet estimated
	LIVE:		1, // Estimated but with time remaining
	DONE:		2, // Completed: no time remaining
	bullets:	["\u25cb", // nascent (open circle)
			 "\u25ba", // live (right arrow)
			 "\u25a0"],// done (black square)
	styles:		["nascent", "live", "done"],

	// Translatable text:
	lingo: {
		spentTooBig:	"Spent time %0 can't exceed current estimate %1",
		noNegative:	"Times may not be negative numbers",
		statusTips:	["Not yet estimated", "To do", "Done"], // Array indexed by state (NASCENT/LIVE/DONE)
		descClickTip:	" -- Double-click to edit task description",
		statusClickTip:	" -- Double-click to mark task complete",
		statusDoneTip:	" -- Double-click to adjust the time spent, to revive the task",
		origTip:	"Original estimate in days",
		curTip:		"Current estimate in days",
		curTip2:	"Estimate in days", // For when orig == cur
		clickTip:	" -- Click to adjust",
		spentTip:	"Days spent on this task",
		remTip:		"Days remaining",
		curPrompt:	"Estimate this task in days, or adjust the current estimate by starting with + or -.\n\nYou may optionally also set or adjust the time spent by putting a second number after the first.",
		spentPrompt:	"Enter the number of days you've spent on this task, or adjust the current number by starting with + or -.\n\nYou may optionally also set or adjust the time remaining by putting a second number after the first.",
		remPrompt:	"Enter the number of days it will take to finish this task, or adjust the current estimate by starting with + or -.\n\nYou may optionally also set or adjust the time spent by putting a second number after the first.",
		numbersOnly:	"Enter numbers only, please",
		notCurrent:	"The tiddler has been modified since it was displayed, please redisplay it before doing this."
	},

	// The macro handler
	handler: function( place, macroName, params, wikifier, paramString, tiddler )
	{
		var start = wikifier.matchStart, end = wikifier.nextMatch

		var origStr	= params.length > 0? params.shift() : "?"
		var orig	= +origStr // as a number
		var cur		= params.length > 1? +params.shift() : orig
		var spent	= params.length > 0? +params.shift() : 0
		if ( spent > cur )
			throw Error( this.lingo.spentTooBig.format([spent, cur]) )
		if ( orig < 0 || cur < 0 || spent < 0 )
			throw Error( this.lingo.noNegative )
		var rem		= cur - spent
		var state	= isNaN(orig+rem)? this.NASCENT : rem > 0? this.LIVE : this.DONE
		var table	= createTiddlyElement( place, "table", null, "task "+this.styles[state] )
		var tbody	= createTiddlyElement( table, "tbody" )
		var row		= createTiddlyElement( tbody, "tr" )
		var statusCell	= createTiddlyElement( row,   "td", null, "status", this.bullets[state] )
		var descCell	= createTiddlyElement( row,   "td", null, "description" )

		var origCell	= state==this.NASCENT || orig==cur? null
				: createTiddlyElement( row, "td", null, "numeric original" )
		var curCell	= createTiddlyElement( row, "td", null, "numeric current" )
		var spentCell	= createTiddlyElement( row, "td", null, "numeric spent" )
		var remCell	= createTiddlyElement( row, "td", null, "numeric remaining" )

		var sums = config.macros.tasksum.tasksums
		if ( sums && sums.length ) {
			var summary = [(state == this.NASCENT? NaN : orig), cur, spent]
			summary.owner = tiddler
			sums[0].push( summary )
		}

		// The description goes to the end of the line
		wikifier.subWikify( descCell, "$\\n?" )
		var descEnd = wikifier.nextMatch

		statusCell.setAttribute( "title", this.lingo.statusTips[state] )
		descCell.setAttribute(   "title", this.lingo.statusTips[state]+this.lingo.descClickTip )
		if (origCell) {
			createTiddlyElement( origCell, "div", null, null, orig )
			origCell.setAttribute( "title", this.lingo.origTip )
			curCell.setAttribute( "title", this.lingo.curTip )
		}
		else {
			curCell.setAttribute( "title", this.lingo.curTip2 )
		}
		var curDivContents = (state==this.NASCENT)? "?" : cur
		var curDiv = createTiddlyElement( curCell, "div", null, null, curDivContents )
		spentCell.setAttribute( "title", this.lingo.spentTip )
		var spentDiv = createTiddlyElement( spentCell, "div", null, null, spent )
		remCell.setAttribute( "title", this.lingo.remTip )
		var remDiv = createTiddlyElement( remCell, "div", null, null, rem )

		// Handle double-click on the description by going
		// into edit mode and selecting the description
		descCell.ondblclick = this.editDescription( tiddler, end, descEnd )

		function appTitle( el, suffix ) {
			el.setAttribute( "title", el.getAttribute("title")+suffix )
		}

		// For incomplete tasks, handle double-click on the bullet by marking the task complete
		if ( state != this.DONE ) {
			appTitle( statusCell, this.lingo.statusClickTip )
			statusCell.ondblclick = this.markTaskComplete( tiddler, start, end, macroName, orig, cur, state )
		}
		// For complete ones, handle double-click on the bullet by letting you adjust the time spent
		else {
			appTitle( statusCell, this.lingo.statusDoneTip )
			statusCell.ondblclick = this.adjustTimeSpent( tiddler, start, end, macroName, orig, cur, spent )
		}

		// Add click handlers for the numeric cells.
		if ( state != this.DONE ) {
			appTitle( curCell, this.lingo.clickTip )
			curDiv.className = "adjustable"
			curDiv.onclick = this.adjustCurrentEstimate( tiddler, start, end, macroName,
				orig, cur, spent, curDivContents )
		}
		appTitle( spentCell, this.lingo.clickTip )
		spentDiv.className = "adjustable"
		spentDiv.onclick = this.adjustTimeSpent( tiddler, start, end, macroName, orig, cur, spent )
		if ( state == this.LIVE ) {
			appTitle( remCell, this.lingo.clickTip )
			remDiv.className = "adjustable"
			remDiv.onclick = this.adjustTimeRemaining( tiddler, start, end, macroName, orig, cur, spent )
		}
	},

	// Puts the tiddler into edit mode, and selects the range of characters
	// defined by start and end.  Separated for leak prevention in IE.
	editDescription: function( tiddler, start, end ) {
		return wrapEventHandler( function(e) {
			story.displayTiddler( null, tiddler.title, DEFAULT_EDIT_TEMPLATE )
			var tiddlerElement = document.getElementById( story.idPrefix + tiddler.title )
			window.scrollTo( 0, ensureVisible(tiddlerElement) )
			var element = story.findEditField( tiddler.title, "text" )
			if ( element && element.tagName.toLowerCase() == "textarea" ) {
				// Back up one char if the last char's a newline
				if ( tiddler.text[end-1] == '\n' )
					--end
				element.focus()
				if ( element.setSelectionRange != undefined ) { // Mozilla
					element.setSelectionRange( start, end )
					// Damn mozilla doesn't scroll to visible.  Approximate.
					var max = 0.0 + element.scrollHeight
					var len = element.textLength
					var top = max*start/len, bot = max*end/len
					element.scrollTop = Math.min( top, (bot+top-element.clientHeight)/2 )
				}
				else if ( element.createTextRange != undefined ) { // IE
					var range = element.createTextRange()
					range.collapse()
					range.moveEnd("character", end)
					range.moveStart("character", start)
					range.select()
				}
				else // Other? Too bad, just select the whole thing.
					element.select()
				return false
			}
			else
				return true
		} )
	},

	// Modifies a task macro call such that the task appears complete.
	markTaskComplete: function( tiddler, start, end, macroName, orig, cur, state ) {
		var macro = this, text = tiddler.text
		return wrapEventHandler( function(e) {
			if ( text !== tiddler.text ) {
				alert( macro.lingo.notCurrent )
				return false
			}
			if ( state == macro.NASCENT )
				orig = cur = 0
			// The second "cur" in the call below bumps up the time spent
			// to match the current estimate.
			macro.replaceMacroCall( tiddler, start, end, macroName, orig, cur, cur )
			return false
		} )
	},

	// Asks the user for an adjustment to the current estimate, modifies the macro call accordingly.
	adjustCurrentEstimate: function( tiddler, start, end, macroName, orig, cur, spent, curDivContents ) {
		var macro = this, text = tiddler.text
		return wrapEventHandler( function(e) {
			if ( text !== tiddler.text ) {
				alert( macro.lingo.notCurrent )
				return false
			}
			var txt = prompt( macro.lingo.curPrompt, curDivContents )
			if ( txt != null ) {
				var a = macro.breakInput( txt )
				cur = macro.offset( cur, a[0] )
				if ( a.length > 1 )
					spent = macro.offset( spent, a[1] )
				macro.replaceMacroCall( tiddler, start, end, macroName, orig, cur, spent )
			}
			return false
		} )
	},

	// Asks the user for an adjustment to the time spent, modifies the macro call accordingly.
	adjustTimeSpent: function( tiddler, start, end, macroName, orig, cur, spent ) {
		var macro = this, text = tiddler.text
		return wrapEventHandler( function(e) {
			if ( text !== tiddler.text ) {
				alert( macro.lingo.notCurrent )
				return false
			}
			var txt = prompt( macro.lingo.spentPrompt, spent )
			if ( txt != null ) {
				var a = macro.breakInput( txt )
				spent = macro.offset( spent, a[0] )
				var rem = cur - spent
				if ( a.length > 1 ) {
					rem = macro.offset( rem, a[1] )
					cur = spent + rem
				}
				macro.replaceMacroCall( tiddler, start, end, macroName, orig, cur, spent )
			}
			return false
		} )
	},

	// Asks the user for an adjustment to the time remaining, modifies the macro call accordingly.
	adjustTimeRemaining: function( tiddler, start, end, macroName, orig, cur, spent ) {
		var macro = this
		var text  = tiddler.text
		var rem   = cur - spent
		return wrapEventHandler( function(e) {
			if ( text !== tiddler.text ) {
				alert( macro.lingo.notCurrent )
				return false
			}
			var txt = prompt( macro.lingo.remPrompt, rem )
			if ( txt != null ) {
				var a = macro.breakInput( txt )
				var newRem = macro.offset( rem, a[0] )
				if ( newRem > rem || a.length > 1 )
					cur += (newRem - rem)
				else
					spent += (rem - newRem)
				if ( a.length > 1 )
					spent = macro.offset( spent, a[1] )
				macro.replaceMacroCall( tiddler, start, end, macroName, orig, cur, spent )
			}
			return false
		} )
	},

	// Breaks input at spaces & commas, returns array
	breakInput: function( txt ) {
		var a = txt.trim().split( /[\s,]+/ )
		if ( a.length == 0 )
			a = [NaN]
		return a
	},

	// Adds to, subtracts from, or replaces a numeric value
	offset: function( num, txt ) {
		if ( txt == "" || typeof(txt) != "string" )
			return NaN
		if ( txt.match(/^[+-]/) )
			return num + (+txt)
		return +txt
	},

	// Does some error checking, then replaces the indicated macro
	// call within the text of the given tiddler.
	replaceMacroCall: function( tiddler, start, end, macroName, orig, cur, spent )
	{
		if ( isNaN(cur+spent) ) {
			alert( this.lingo.numbersOnly )
			return
		}
		if ( spent < 0 || cur < 0 ) {
			alert( this.lingo.noNegative )
			return
		}
		if ( isNaN(orig) )
			orig = cur
		if ( spent > cur )
			cur = spent
		var text = tiddler.text.substring(0,start) + "<<" + macroName + " " +
			orig + " " + cur + " " + spent + ">>" + tiddler.text.substring(end)
		var title = tiddler.title
		store.saveTiddler( title, title, text, config.options.txtUserName, new Date(), undefined )
		//story.refreshTiddler( title, null, true )
		if ( config.options.chkAutoSave )
			saveChanges()
	}
}
//}}}
/***
!Tasksum Macro
Usage:
> {{{<<tasksum "start" ["here" [intro]]>>}}}
or:
> {{{<<tasksum "end" [intro]>>}}}
Put one of the {{{<<tasksum start>>}}} lines before the tasks you want to summarize, and an {{{end}}} line after them.  By default, the summary goes at the end; if you include {{{here}}} in the start line, the summary will go at the top.  The intro argument, if supplied, replaces the default text introducing the summary.
***/
//{{{
config.macros.tasksum = {

	// Translatable text:
	lingo: {
		unrecVerb:	"<<%0>> requires 'start' or 'end' as its first argument",
		mustMatch:	"<<%0 end>> must match a preceding <<%0 start>>",
		defIntro:	"Task summary:",
		nascentSum:	"''%0 not estimated''",
		doneSum:	"%0 complete (in %1 days)",
		liveSum:	"%0 ongoing (%1 days so far, ''%2 days remaining'')",
		overSum:	"Total overestimate: %0%.",
		underSum:	"Total underestimate: %0%.",
		descPattern:	"%0 %1. %2",
		origTip:	"Total original estimates in days",
		curTip:		"Total current estimates in days",
		spentTip:	"Total days spent on tasks",
		remTip:		"Total days remaining"
	},

	// The macro handler
	handler: function( place, macroName, params, wikifier, paramString, tiddler )
	{
		var sums = this.tasksums
		if ( params[0] == "start" ) {
			sums.unshift([])
			if ( params[1] == "here" ) {
				sums[0].intro = params[2] || this.lingo.defIntro
				sums[0].place = place
				sums[0].placement = place.childNodes.length
			}
		}
		else if ( params[0] == "end" ) {
			if ( ! sums.length )
				throw Error( this.lingo.mustMatch.format([macroName]) )
			var list = sums.shift()
			var intro = list.intro || params[1] || this.lingo.defIntro
			var nNascent=0, nLive=0, nDone=0, nMine=0
			var totLiveSpent=0, totDoneSpent=0
			var totOrig=0, totCur=0, totSpent=0
			for ( var i=0; i < list.length; ++i ) {
				var a = list[i]
				if ( a.length > 3 ) {
					nNascent 	+= a[0]
					nLive 		+= a[1]
					nDone 		+= a[2]
					totLiveSpent 	+= a[3]
					totDoneSpent 	+= a[4]
					totOrig 	+= a[5]
					totCur 		+= a[6]
					totSpent 	+= a[7]
					if ( a.owner == tiddler )
						nMine	+= a[8]
				}
				else {
					if ( a.owner == tiddler )
						++nMine
					if ( isNaN(a[0]) ) {
						++nNascent
					}
					else {
						if ( a[1] > a[2] ) {
							++nLive
							totLiveSpent += a[2]
						}
						else {
							++nDone
							totDoneSpent += a[2]
						}
						totOrig  += a[0]
						totCur   += a[1]
						totSpent += a[2]
					}
				}
			}

			// If we're nested, push a summary outward
			if ( sums.length ) {
				var summary = [nNascent, nLive, nDone, totLiveSpent, totDoneSpent,
						totOrig, totCur, totSpent, nMine]
				summary.owner = tiddler
				sums[0].push( summary )
			}

			var descs = [], styles = []
			if ( nNascent > 0 ) {
				descs.push( this.lingo.nascentSum.format([nNascent]) )
				styles.push( "nascent" )
			}
			if ( nDone > 0 )
				descs.push( this.lingo.doneSum.format([nDone, totDoneSpent]) )
			if ( nLive > 0 ) {
				descs.push( this.lingo.liveSum.format([nLive, totLiveSpent, totCur-totSpent]) )
				styles.push( "live" )
			}
			else
				styles.push( "done" )
			var off = ""
			if ( totOrig > totCur )
				off = this.lingo.overSum.format( [Math.round(100.0*(totOrig-totCur)/totCur)] )
			else if ( totCur > totOrig )
				off = this.lingo.underSum.format( [Math.round(100.0*(totCur-totOrig)/totOrig)] )

			var top		= (list.intro != undefined)
			var table	= createTiddlyElement( null, "table", null, "tasksum "+(top?"top":"bottom") )
			var tbody	= createTiddlyElement( table, "tbody" )
			var row		= createTiddlyElement( tbody, "tr", null, styles.join(" ") )
			var descCell	= createTiddlyElement( row,   "td", null, "description" )

			var description = this.lingo.descPattern.format( [intro, descs.join(", "), off] )
			wikify( description, descCell, null, tiddler )

			var origCell	= totOrig == totCur? null
					: createTiddlyElement( row, "td", null, "numeric original", totOrig )
			var curCell	= createTiddlyElement( row, "td", null, "numeric current", totCur )
			var spentCell	= createTiddlyElement( row, "td", null, "numeric spent", totSpent )
			var remCell	= createTiddlyElement( row, "td", null, "numeric remaining", totCur-totSpent )

			if ( origCell )
				origCell.setAttribute( "title", this.lingo.origTip )
			curCell  .setAttribute( "title", this.lingo.curTip )
			spentCell.setAttribute( "title", this.lingo.spentTip )
			remCell  .setAttribute( "title", this.lingo.remTip )

			// Discard the table if there are no tasks
			if ( list.length > 0 ) {
				var place = top? list.place : place
				var placement = top? list.placement : place.childNodes.length
				if ( placement >= place.childNodes.length )
					place.appendChild( table )
				else
					place.insertBefore( table, place.childNodes[placement] )
			}
		}
		else
			throw Error( this.lingo.unrecVerb.format([macroName]) )

		// If we're wikifying, and are followed by end-of-line, swallow the newline.
		if ( wikifier && wikifier.source.charAt(wikifier.nextMatch) == "\n" )
			++wikifier.nextMatch
	},

	// This is the stack of pending summaries
	tasksums: []
}
//}}}
/***
!Taskadder Macro
Usage:
> {{{<<taskadder ["above"|"below"|"focus"|"nofocus"]...>>}}}
Creates a line with text entry fields for a description and an estimate.  By default, puts focus in the description field and adds tasks above the entry fields.  Use {{{nofocus}}} to not put focus in the description field.  Use {{{below}}} to add tasks below the entry fields.
***/
//{{{
config.macros.taskadder = {

	// Translatable text:
	lingo: {
		unrecParam:	"<<%0>> doesn't recognize '%1' as a parameter",
		descTip:	"Describe a new task",
		curTip:		"Estimate how many days the task will take",
		buttonText:	"add task",
		buttonTip:	"Add a new task with the description and estimate as entered",
		notCurrent:	"The tiddler has been modified since it was displayed, please redisplay it before adding a task this way.",

		eol:		"eol"
	},

	// The macro handler
	handler: function( place, macroName, params, wikifier, paramString, tiddler )
	{
		var above = true
		var focus = false

		while ( params.length > 0 ) {
			var p = params.shift()
			switch (p) {
			case "above": 	above = true;  break
			case "below": 	above = false; break
			case "focus": 	focus = true;  break
			case "nofocus":	focus = false; break
			default:	throw Error( this.lingo.unrecParam.format([macroName, p]) )
			}
		}

		// If we're followed by end-of-line, swallow the newline.
		if ( wikifier.source.charAt(wikifier.nextMatch) == "\n" )
			++wikifier.nextMatch

		var where	= above? wikifier.matchStart : wikifier.nextMatch

		var table	= createTiddlyElement( place, "table", null, "task" )
		var tbody	= createTiddlyElement( table, "tbody" )
		var row		= createTiddlyElement( tbody, "tr" )
		var statusCell	= createTiddlyElement( row,   "td", null, "status" )
		var descCell	= createTiddlyElement( row,   "td", null, "description" )
		var curCell	= createTiddlyElement( row,   "td", null, "numeric" )
		var addCell	= createTiddlyElement( row,   "td", null, "addtask" )

		var descId	= this.generateId()
		var curId	= this.generateId()
		var descInput	= createTiddlyElement( descCell, "input", descId )
		var curInput	= createTiddlyElement( curCell,  "input", curId  )

		descInput.setAttribute( "type", "text" )
		curInput .setAttribute( "type", "text" )
		descInput.setAttribute( "size", "40")
		curInput .setAttribute( "size", "6" )
		descInput.setAttribute( "autocomplete", "off" );
		curInput .setAttribute( "autocomplete", "off" );
		descInput.setAttribute( "title", this.lingo.descTip );
		curInput .setAttribute( "title", this.lingo.curTip  );

		var addAction	= this.addTask( tiddler, where, descId, curId, above )
		var addButton	= createTiddlyButton( addCell, this.lingo.buttonText, this.lingo.buttonTip, addAction )

		descInput.onkeypress = this.handleEnter(addAction)
		curInput .onkeypress = descInput.onkeypress
		addButton.onkeypress = this.handleSpace(addAction)
		if ( focus || tiddler.taskadderLocation == where ) {
			descInput.focus()
			descInput.select()
		}
	},

	// Returns a function that inserts a new task macro into the tiddler.
	addTask: function( tiddler, where, descId, curId, above ) {
		var macro = this, oldText = tiddler.text
		return wrapEventHandler( function(e) {
			if ( oldText !== tiddler.text ) {
				alert( macro.lingo.notCurrent )
				return false
			}
			var desc	= document.getElementById(descId).value
			var cur		= document.getElementById(curId) .value
			var init	= tiddler.text.substring(0,where) + "<<task " + cur + ">> " + desc + "\n"
			var text	= init + tiddler.text.substring(where)
			var title	= tiddler.title
			tiddler.taskadderLocation = (above? init.length : where)
			try {
				store.saveTiddler( title, title, text, config.options.txtUserName, new Date(), undefined )
				//story.refreshTiddler( title, null, true )
			}
			finally {
				delete tiddler.taskadderLocation
			}
			if ( config.options.chkAutoSave )
				saveChanges()
		} )
	},

	// Returns an event handler that delegates to two other functions: "matches" to decide
	// whether to consume the event, and "addTask" to actually perform the work.
	handleGeneric: function( addTask, matches ) {
		return function(e) {
			if (!e) var e = window.event
			var consume = false
			if ( matches(e) ) {
				consume = true
				addTask( e )
			}
			e.cancelBubble = consume;
			if ( consume && e.stopPropagation ) e.stopPropagation();
			return !consume;
		}
	},

	// Returns an event handler that handles enter keys by calling another event handler
	handleEnter: function( addTask ) {
		return this.handleGeneric( addTask, function(e){return e.keyCode == 13 || e.keyCode == 10} ) // Different codes for Enter
	},

	// Returns an event handler that handles the space key by calling another event handler
	handleSpace: function( addTask ) {
		return this.handleGeneric( addTask, function(e){return (e.charCode||e.keyCode) == 32} )
	},

	counter: 0,
	generateId: function() {
		return "taskadder:" + String(this.counter++)
	}
}
//}}}
/***
!Stylesheet
***/
//{{{
var stylesheet = '\
.viewer table.task, table.tasksum {\
	width: 100%;\
	padding: 0;\
	border-collapse: collapse;\
}\
.viewer table.task {\
	border: none;\
	margin: 0;\
}\
table.tasksum, .viewer table.tasksum {\
	border: solid 2px #999;\
	margin: 3px 0;\
}\
table.tasksum td {\
	text-align: center;\
	border: 1px solid #ddd;\
	background-color: #ffc;\
	vertical-align: middle;\
	margin: 0;\
	padding: 0;\
}\
.viewer table.task tr {\
	border: none;\
}\
.viewer table.task td {\
	text-align: center;\
	vertical-align: baseline;\
	border: 1px solid #fff;\
	background-color: inherit;\
	margin: 0;\
	padding: 0;\
}\
td.numeric {\
	width: 3em;\
}\
table.task td.numeric div {\
	border: 1px solid #ddd;\
	background-color: #ffc;\
	margin: 1px 0;\
	padding: 0;\
}\
table.task td.original div {\
	background-color: #fdd;\
}\
table.tasksum td.original {\
	background-color: #fdd;\
}\
table.tasksum td.description {\
	background-color: #e8e8e8;\
}\
table.task td.status {\
	width: 1.5em;\
	cursor: default;\
}\
table.task td.description, table.tasksum td.description {\
	width: auto;\
	text-align: left;\
	padding: 0 3px;\
}\
table.task.done td.status,table.task.done td.description {\
	color: #ccc;\
}\
table.task.done td.current, table.task.done td.remaining {\
	visibility: hidden;\
}\
table.task.done td.spent div, table.tasksum tr.done td.current,\
table.tasksum tr.done td.spent, table.tasksum tr.done td.remaining {\
	background-color: #eee;\
	color: #aaa;\
}\
table.task.nascent td.description {\
	color: #844;\
}\
table.task.nascent td.current div, table.tasksum tr.nascent td.numeric.current {\
	font-weight: bold;\
	color: #c00;\
	background-color: #def;\
}\
table.task.nascent td.spent, table.task.nascent td.remaining {\
	visibility: hidden;\
}\
td.remaining {\
	font-weight: bold;\
}\
.adjustable {\
	cursor: pointer; \
}\
table.task input {\
	display: block;\
	width: 100%;\
	font: inherit;\
	margin: 2px 0;\
	padding: 0;\
	border: 1px inset #999;\
}\
table.task td.numeric input {\
	background-color: #ffc;\
	text-align: center;\
}\
table.task td.addtask {\
	width: 6em;\
	border-left: 2px solid white;\
	vertical-align: middle;\
}\
'
setStylesheet( stylesheet, "TaskMacroPluginStylesheet" )
//}}}
/***
''TextAreaPlugin for TiddlyWiki version 2.0''
^^author: Eric Shulman - ELS Design Studios
source: http://www.elsdesign.com/tiddlywiki/#TextAreaPlugin
license: [[Creative Commons Attribution-ShareAlike 2.5 License|http://creativecommons.org/licenses/by-sa/2.5/]]^^

This plugin 'hijacks' the TW core function, ''Story.prototype.focusTiddler()'', so it can add special 'keyDown' handlers to adjust several behaviors associated with the textarea control used in the tiddler editor.  Specifically, it:
* Adds text search INSIDE of edit fields.^^
Use ~CTRL-F for "Find" (prompts for search text), and ~CTRL-G for "Find Next" (uses previous search text)^^
* Enables TAB characters to be entered into field content^^
(instead of moving to next field)^^
* Option to set cursor at top of edit field instead of auto-selecting contents^^
(see configuration section for checkbox)^^
!!!!!Configuration
<<<
<<option chkDisableAutoSelect>> place cursor at start of textarea instead of pre-selecting content
<<option chkTextAreaExtensions>> add control-f (find), control-g (find again) and allow TABs as input in textarea
<<<
!!!!!Installation
<<<
Import (or copy/paste) the following tiddlers into your document:
''TextAreaPlugin'' (tagged with <<tag systemConfig>>)
<<<
!!!!!Revision History
<<<
''2006.01.22 [1.0.1]''
only add extra key processing for TEXTAREA elements (not other edit fields).
added option to enable/disable textarea keydown extensions (default is "standard keys" only)
''2006.01.22 [1.0.0]''
Moved from temporary "System Tweaks" tiddler into 'real' TextAreaPlugin tiddler.
<<<
!!!!!Code
***/
//{{{
version.extensions.textAreaPlugin= {major: 1, minor: 0, revision: 1, date: new Date(2006,1,23)};
//}}}

//{{{
if (!config.options.chkDisableAutoSelect) config.options.chkDisableAutoSelect=false; // default to standard action
if (!config.options.chkTextAreaExtensions) config.options.chkTextAreaExtensions=false; // default to standard action

// Focus a specified tiddler. Attempts to focus the specified field, otherwise the first edit field it finds
Story.prototype.focusTiddler = function(title,field)
{
	var tiddler = document.getElementById(this.idPrefix + title);
	if(tiddler != null)
		{
		var children = tiddler.getElementsByTagName("*")
		var e = null;
		for (var t=0; t<children.length; t++)
			{
			var c = children[t];
			if(c.tagName.toLowerCase() == "input" || c.tagName.toLowerCase() == "textarea")
				{
				if(!e)
					e = c;
				if(c.getAttribute("edit") == field)
					e = c;
				}
			}
		if(e)
			{
			e.focus();
			e.select(); // select entire contents

			// TWEAK: add TAB and "find" key handlers
			if (config.options.chkTextAreaExtensions) // add extra key handlers
				addKeyDownHandlers(e);

			// TWEAK: option to NOT autoselect contents
			if (config.options.chkDisableAutoSelect) // set cursor to start of field content
				if (e.setSelectionRange) e.setSelectionRange(0,0); // for FF
				else if (e.createTextRange) { var r=e.createTextRange(); r.collapse(true); r.select(); } // for IE

			}
		}
}
//}}}

//{{{
function addKeyDownHandlers(e)
{
	// exit if not textarea or element doesn't allow selections
	if (e.tagName.toLowerCase()!="textarea" || !e.setSelectionRange) return;

	// utility function: exits keydown handler and prevents browser from processing the keystroke
	var processed=function(ev) { ev.cancelBubble=true; if (ev.stopPropagation) ev.stopPropagation(); return false; }

	// capture keypress in edit field
	e.onkeydown = function(ev) { if (!ev) var ev=window.event;

		// process TAB
		if (!ev.shiftKey && ev.keyCode==9) { 
			// replace current selection with a TAB character
			var start=e.selectionStart; var end=e.selectionEnd;
			e.value=e.value.substr(0,start)+String.fromCharCode(9)+e.value.substr(end);
			// update insertion point, scroll it into view
			e.setSelectionRange(start+1,start+1);
			var linecount=e.value.split('\n').length;
			var thisline=e.value.substr(0,e.selectionStart).split('\n').length-1;
			e.scrollTop=Math.floor((thisline-e.rows/2)*e.scrollHeight/linecount);
			return processed(ev);
		}

		// process CTRL-F (find matching text) or CTRL-G (find next match)
		if (ev.ctrlKey && (ev.keyCode==70||ev.keyCode==71)) {
			// if ctrl-f or no previous search, prompt for search text (default to previous text or current selection)... if no search text, exit
			if (ev.keyCode==70||!e.find||!e.find.length)
				{ var f=prompt("find:",e.find?e.find:e.value.substring(e.selectionStart,e.selectionEnd)); e.focus(); e.find=f?f:e.find; }
			if (!e.find||!e.find.length) return processed(ev);
			// do case-insensitive match with 'wraparound'...  if not found, alert and exit 
			var newstart=e.value.toLowerCase().indexOf(e.find.toLowerCase(),e.selectionStart+1);
			if (newstart==-1) newstart=e.value.toLowerCase().indexOf(e.find.toLowerCase());
			if (newstart==-1) { alert("'"+e.find+"' not found"); e.focus(); return processed(ev); }
			// set new selection, scroll it into view, and report line position in status bar
			e.setSelectionRange(newstart,newstart+e.find.length);
			var linecount=e.value.split('\n').length;
			var thisline=e.value.substr(0,e.selectionStart).split('\n').length;
			e.scrollTop=Math.floor((thisline-1-e.rows/2)*e.scrollHeight/linecount);
			window.status="line: "+thisline+"/"+linecount;
			return processed(ev);
		}
	}
}
//}}}
A service generating //periodic ticks// &mdash; repeatedly invoking a callback with a given frequency.
* defined 1/2009 as part of the PlayerDummy (design sketch)
* probably to be implemented later on based on Posix timers (see the sketch below)
* will probably later be integrated into a synchronisation framework (display sync, audio sync, MTC master/slave...)
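As an illustration of that direction, a minimal sketch of such a tick service on top of POSIX interval timers ({{{timer_create}}} with {{{SIGEV_THREAD}}} notification) could look as follows; the class and member names are made up for the example and do not refer to existing Lumiera code.
{{{
// sketch only: periodic callback driven by a POSIX interval timer
#include <signal.h>
#include <time.h>
#include <functional>
#include <utility>
#include <iostream>

class TickService {
    timer_t timer_{};
    std::function<void()> callback_;

    static void trampoline(union sigval sv)        // runs on a helper thread per tick
    {
        static_cast<TickService*>(sv.sival_ptr)->callback_();
    }
public:
    TickService(double frequencyHz, std::function<void()> fun)
        : callback_(std::move(fun))
    {
        sigevent sev{};
        sev.sigev_notify          = SIGEV_THREAD;
        sev.sigev_notify_function = &trampoline;
        sev.sigev_value.sival_ptr = this;
        timer_create(CLOCK_MONOTONIC, &sev, &timer_);

        long period = static_cast<long>(1e9 / frequencyHz);   // nanoseconds per tick
        itimerspec spec{};
        spec.it_interval.tv_sec  = period / 1000000000L;
        spec.it_interval.tv_nsec = period % 1000000000L;
        spec.it_value = spec.it_interval;                     // first tick after one period
        timer_settime(timer_, 0, &spec, nullptr);
    }
   ~TickService() { timer_delete(timer_); }
};

int main()
{
    TickService ticks{25.0, []{ std::cout << "tick\n"; }};    // e.g. 25 ticks per second
    timespec pause{2, 0};
    nanosleep(&pause, nullptr);                               // let it run for two seconds
}
}}}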

The Name of the Software driving this Wiki. It is written completely in ~JavaScript and contained in one single HTML page.
Thus no server and no network connection is needed. Simply open the file in your browser and save changes locally. As the [[Proc-Layer wiki|ProcLayer and Engine]] HTML is located in the Lumiera source tree, all changes will be managed and distributed via GIT. While doing so, you sometimes will have to merge conflicting changes manually in the HTML source.
 * see GettingStarted
 * see [[Homepage|http://tiddlywiki.com]], [[Wiki-Markup|http://tiddlywiki.org/wiki/TiddlyWiki_Markup]]
Simple time points are just like values and thus easy to change. The difficulties arise when time values are to be //quantised to an existing time grid.// The first relevant point to note is that for quantised time values, the effect of a change can be disproportional to the cause. Small changes below the threshold might be accumulated, and a tiny change might trigger a jump to the next grid point. While this might be annoying, yet more complex questions arise when we acknowledge that the amount of the change itself might be related to a time grid.

The problem with modification of quantised values highlights an inner contradiction or conflicting goals within the design
* the whole system should fit in naturally and just feel like using raw time values
* quantisation should be added dynamically and //late// -- like a view
* there should be guidance towards the intended proper use

!!!general assumptions in Lumiera
At this point, we should recall some general assumptions within Lumiera
* there are no hard-coded defaults regarding the specific nature of the manipulated objects (format, number of channels)
* there is no single "the" timeline -- rather, we have multiple timelines
* sequences might be nested and thus be brought into a different context
* framerate is a property of the stream type of a channel; it depends on the output. There is no global format or framerate.

!Usage situations
The complexity arises from the mixture of several concerns, which often blend into a single usage situation.
;Time span is an object prototype
:Within the context of an NLE, a time interval located at a given time point can be considered as the least common denominator of all entities to be manipulated by the user. //Manipulating// such objects is the whole point of using such an application.
:* an object or boundary point can be //dragged//
:* we want to //nudge// by defined amounts
:* we want to put it "somewhere"
;dragging
:as far as the GUI is concerned, dragging some element creates an //offset in display coordinates.//
:Now the task is to transform this into a change on the object, which typically is grid-aligned
;nudging
:the primary interaction in this case just sends a numeric offset of ±N steps
:but the further processing might involve a global or maybe even local //nudge amount.//
:the receiving object is typically grid-aligned, thus nudge-grid and object-grid need to cascade, i.e. be applied in sequence
;placing
:an object or boundary point gets relocated to some predefined time position, which in turn can be quantised (grid aligned).
:now either we treat this target position as a value, to be applied, possibly with re-quantisation
:or we might treat it as a //source for time values// -- in which case the object just gets attached and consumes time coordinates produced elsewhere.

Considering the analysis thus far, there seems to be a common denominator in all these operations: chaining of multiple grids.
A mutation or an increment gets applied to a grid-aligned variable, which then (maybe after transformation into a different scale) in turn is used again as a mutation to be applied to another grid-aligned variable. Now, because at this point we don't need to solve this whole set of wide-ranging design problems, it is sufficient to extract the following operations
::value &rArr; value
::value &rArr; quantised
::increment &rArr; quantised

This results in the following combinations to be implemented:
|  !mutation|!receiver ||!result |
| ~TimeValue|Time || -- |
|~|Duration || ↯ |
|~|~TimeSpan ||set start |
|~|quantised ||set raw value |
| Duration|~TimeSpan ||set length |
|~|Duration  ||set length |
|~|quantised || ↯ |
| Offset|Time || -- |
|~|Duration ||adjust length |
|~|~TimeSpan ||adjust start |
|~|quantised ||adjust raw value |
| ~QuTime|any raw ||materialise, then set raw |
|~|~QuTime ||same &rarr; chained |
| increment|~QuTime ||snap to grid point |
As rationale, consider the following
* immutable values are bliss. 
* by materialising a quantised change prior to applying it, we get the "chaining effect"

----------
this leads to the following solution approach
* time values are immutable (as far as possible)
* for simple time calculations the ~TimeVar is sufficient
* but we allow feeding in an abstract //modification object.//
* ''nudge by unit(s)'' is subsumed here as a special case

As the actual change logic to apply depends both on the target value and the kind of the change, we end up with //double dispatch.//

 -- can we avoid accepting mutations for Time (and thus only mutate ~TimeSpan)?
Rationale: allowing mutations for Time bears the danger of making ~TimeVar obsolete
* //4/11 yes, implementing it now this way....//
* we have ~TimeVar, so the only thing that //needs// to be mutable is a whole object &rArr; ~TimeSpan
* err, because MObject will include a Duration separate from the start time in the Placement, Duration needs to be mutable too

!!!usage considerations
{{red{Question 5/11}}} When moving a clip taken from 50fps media, the new position might be quantised to the 50fps grid established by the media, while the target timeline runs with 25fps, allowing for finer adjustments based on the intermediate frames present in the source material.

!!!design draft
The special focus of this problem seems to lend itself to a __visitor pattern__ based implementation, because the basic hierarchy of applicable types is fixed, but the behaviour is open ended (and not yet fully determined). A conventional implementation would scatter this behaviour over all the time entities, thus making it hard to understand and reason about. The classical ~GoF cyclic visitor solution, to the contrary, allows us to arrange closely related behaviour into thematically grouped visitor classes. As a plus, the concrete action can be bound dynamically, allowing for more flexibility when it comes to dealing with the intricate situations when a quantised time span (= a clip) receives a quantised mutation (= is re-aligned to a possibly different frame grid).

Thus the possibly mutable time entities get an {{{accept(time::Mutation&)}}} function, which is defined to reflect back into the (virtual) visitation functions of the Mutation Interface. Additionally, we provide some static convenience shortcuts right in the Mutation interface, performing the trivial mutations.
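The following simplified sketch (with made-up member names, not the real class definitions) illustrates this double dispatch: the target entity's {{{accept()}}} reflects back into the virtual {{{change()}}} overloads of the Mutation interface, and a static convenience shortcut performs a trivial mutation directly.
{{{
#include <cstdint>

namespace sketch {

using Raw = int64_t;                          // internal micro ticks

class Duration;
class TimeSpan;

class Mutation {                              // the visitor interface
public:
    virtual ~Mutation() = default;
    virtual void change(Duration&) const = 0;
    virtual void change(TimeSpan&) const = 0;

    static void setLength(Duration& target, Raw newLength);   // convenience shortcut
};

class Duration {
    Raw length_;
public:
    explicit Duration(Raw l) : length_(l) {}
    void accept(Mutation const& m) { m.change(*this); }       // reflect back into the visitor
    void setLength(Raw l)          { length_ = l; }
    Raw  length() const            { return length_; }
};

class TimeSpan {
    Raw      start_;
    Duration length_;
public:
    TimeSpan(Raw s, Raw l) : start_(s), length_(l) {}
    void accept(Mutation const& m) { m.change(*this); }
    void moveStart(Raw delta)      { start_ += delta; }
};

// one concrete Mutation: applying an offset means different things per target
class ApplyOffset : public Mutation {
    Raw delta_;
public:
    explicit ApplyOffset(Raw d) : delta_(d) {}
    void change(Duration& dur)  const override { dur.setLength(dur.length() + delta_); }
    void change(TimeSpan& span) const override { span.moveStart(delta_); }
};

inline void Mutation::setLength(Duration& target, Raw newLength)
{
    target.setLength(newLength);
}
} // namespace sketch
}}}
A call like {{{span.accept(ApplyOffset{delta})}}} then dispatches both on the kind of mutation and on the kind of target, matching the combination table above.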

!!!live value changes and monitoring
Based on this time::Mutation design, we provide a specialised element for dealing with //running time values:// When attached to a target time entity, a "live" connection is established. From then on, continuous changes and mutations can be fed to the target by invoking a functor interface. Besides, a change notification signal (callback) can be installed, which will be invoked on each change. This {{{time::Control}}} element is the foundation for implementing all kinds of running time display widgets, spin buttons, timeline selections, playheads, loop playback and similar.
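A stripped-down sketch of this idea (generic, with assumed names -- not the actual {{{time::Control}}} implementation): the element is connected to a target, changes are fed through a functor-style call, and a change signal fires on every update.
{{{
// sketch of a "live" connection element, simplified
#include <functional>
#include <utility>
#include <iostream>

template<typename TI>
class Control {
    TI* target_ = nullptr;
    std::function<void(TI const&)> signal_;
public:
    void connect(TI& target)                           { target_ = &target; }
    void onChange(std::function<void(TI const&)> fun)  { signal_ = std::move(fun); }

    void operator()(TI const& newValue)     // feed a change to the attached target
    {
        if (!target_) return;
        *target_ = newValue;
        if (signal_) signal_(*target_);     // notify observers (GUI widgets etc.)
    }
};

int main()
{
    long playheadPosition = 0;              // stand-in for a time entity
    Control<long> control;
    control.connect(playheadPosition);
    control.onChange([](long t){ std::cout << "position now " << t << "\n"; });

    control(25);                            // e.g. driven by playback progress
    control(50);
}
}}}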
The term &raquo;Time&laquo; spans a variety of vastly different entities. Within an NLE we get to deal with various //flavours of time values.//
;continuous time
:without any additional assumptions, ''points in time'' can be specified with arbitrary precision.
:the time values are just numbers; the point of reference and the meaning is implicit.
:within Lumiera, time is encoded as integral number of //micro ticks,// practically continuous
;time distance
:a range of time, a ''distance'' on the time axis, measured with the same arbitrary precision as time points.
:distances can be determined by //subtracting// two time points, consequently they are //signed numbers.//
;offset
:a distance can be used to adjust (offset) a time or a duration: this means applying a relative change.
:the //target// of an offset operation is a time or duration, while its //argument// is a distance (synonymous to offset).
;duration
:the length of a time range yields a ''time metric''.
:the duration can be defined as the //absolute value//&nbsp; of the offset between start and endpoint of the time range.
:a duration always abstracts from the time //when// this duration or distance happens; the relation to any time scale remains implicit
;time span
:contrary to a mere duration, a ''time interval'' or time span is actually //anchored// at a specific point in time.
:it can be seen as a //special kind of duration,// which explicitly states the information //when// this time span takes place.
;changing time
:Time values are //immutable,// like numbers. Only a ''time variable'' can be changed.
:yet some of the special time entities can receive [[mutation messages|TimeMutation]], allowing e.g. for adjustments to a time interval selection from the GUI

;internal time
:While the basic continuous time values don't imply any commitment regarding the time scale and origin to be used, actually, within the implementation of the application, the meaning of time values is uniform and free of contradictions. Thus effectively there is an ''implementation time scale'' -- but its scope of validity is //strictly limited to the implementation level of a single application instance.// It is never exposed and never persisted. It might not be reproducible over multiple instantiations of the application. The implementation reserves the right to recalibrate this internal scale. Later, when Lumiera gains the capability to run within a network of render nodes, these instance connections will include a negotiation about the internal time scale, which remains completely opaque to the outer world. This explains why {{{lumiera::Time}}} instances lack the ability to show their time value beyond debugging purposes. This is to avoid confusion and to stress their opaque nature.
;wall clock and system time
:The core property of any external real world time is that it is //running// -- we have to synchronise to an external time source.
:This implies the presence of a //running synchronisation process,// with the authority to adjust the time base;
:contrast this to the internal time, which is static and unconnected -- 
;quantised time
:The ''act of quantisation'' transforms a continuous property into a ''discrete'' structure. Prominent examples can be found in the domain of micro physics and with digital information processing. In a broader sense, any measurement or //quantification// also encompasses a quantisation. Regarding time and time measurement, quantisation means alignment to a predefined ''time grid''. Quantisation necessarily is an //irreversible process// -- possible additional information is discarded.
:Note that quantisation introduces a ''time origin'' and a ''reference scale''
;frame count
:within the context of film and media editing, the specification of a ''frame number'' is an especially important instance of quantisation.
:all the properties of quantisation apply indeed to this special case: it is a time measurement or specification, where the values are aligned to a grid, and there is a reference time point where the counting starts (origin) and a reference scale (frames per second). Handling of quantised time values in Lumiera is defined such as to ensure the presence of all those bits of information. Without such precautions, operating with bare frame numbers easily leads to all kinds of confusion, mismatches, quantisation errors and unnecessary limitations of functionality.
;timecode
:Quantisation also is the foundation of all kinds of formalised time specifications
:actually even a frame count is some kind of (informal) timecode -- other timecodes employ a standardised format.
://every// presentation of time values and every persistent storage and exchange of such values is based on time codes.
:yet quantisation and time code aren't identical: a given quantised time value typically can be cast into multiple timecode formats.
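For illustration, these flavours can be pictured as a small family of distinct value types. This is a condensed sketch with assumed definitions, not the actual classes.
{{{
#include <cstdint>

using MicroTicks = int64_t;                  // "practically continuous" internal scale

struct Offset   { MicroTicks delta;  };      // signed distance between two time points
struct Duration { MicroTicks length; };      // length of a range, no anchor
struct Time {                                // immutable point in time
    const MicroTicks ticks;
    Offset operator-(Time other) const { return Offset{ticks - other.ticks}; }
};
struct TimeSpan {                            // a duration anchored at a start point
    Time     start;
    Duration length;
};

inline Duration abs(Offset o) { return Duration{ o.delta < 0 ? -o.delta : o.delta }; }

int main()
{
    Time a{1000}, b{250};
    Duration d = abs(a - b);                 // distance made into a metric
    TimeSpan clip{ b, d };                   // states both "when" and "how long"
    (void) clip;
}
}}}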

!Patterns for handling quantised time
When it comes to actually handling quantised time values, several patterns are conceivable for dealing with the quantisation operation and representing quantised data. As guideline for judging these patterns, the general properties of time quantisation, as detailed above, should be taken into account. Quantising a time value means //discarding information,// while at the same time //adding explicit information// pertaining to the assumptions of the context.

__casual handling__: this is rather a frequently encountered ''anti pattern''. When reading such code, most striking is the sense of general unawareness of the problem, which is then "discovered" on a per-case basis, leading to numerous repetitions of the same basic undertaking, each with individual treatment (not so much copy-n-paste). Typical code smells:
* the rounding, modulo and subtract-base operations pertinent to scale handling are seemingly inserted as bugfixes
* local code path forks to circumvent or compensate for otherwise hard wired calculations based on specific ways to invoke a function
* playing strikingly clever tricks or employing heuristics to "figure out" the missing scale information from accessible context after the fact
* advertising support for some of the conceivable cases as special feature, or adding it as plugin or extension module with limited scope
* linking parts of the necessary additional information to completely unrelated other structures, thus causing code tangling and scattering
* result or behaviour of calculations depends on the way things are set up in a seemingly contingent way, forcing users to stick to very specific procedures and ordered steps.
[>img[Time and Time Quantisation in Lumiera|uml/fig142725.png]]
__static typing__: an analysis of the cases to be expected establishes common patterns and some base cases, which are then represented by distinct types with well established conversions. This can be combined with generic programming for the common parts. Close to the data input, a factory establishes these statically typed values.

__tagged values__: quantised values are explicitly created out of continuous values by a quantiser entity. These quantised data values contain a copy of the original data, adjusted to be exactly aligned with respect to the underlying time grid. In addition, they carry a tag or ID to denote the respective scale, grid or timecode system. This tag can be used later on to assess compatibility or to recast values into another timecode system.

__delayed quantisation__: with this approach, the information loss is delayed as long as possible. Quantised time values are rather treated as a promise for quantisation, while the actual time data remains unaltered. Additionally, they carry a tag, or even a direct link to the responsible quantiser instance. Effectively, these are specialised time values, instances of a sub-concept, able to stand in for general time values, but exposing additional accessors to get a quantised value.

!!!discussion
For Lumiera, the static typing approach is of limited value -- it excels when values belonging to different scales are actually treated differently. There are such cases, but rather on the data handling level, e.g. sound samples are always handled block wise. But regarding time values, the unifying aspect is more important, which leads to preferring a dynamic (run-time typed) approach, while //erasing// the special differences most of the time. Yet the dynamic and open nature of the Lumiera high-level model favours the delayed quantisation pattern; the same values may require different quantisation depending on the larger model context an object is encountered in. This solution might be too general and heavyweight at times though. Thus, for important special cases, the accessors should return tagged values, preferably even with differing static type. Time codes can be integrated this way, but most notably the ''frame numbers'' used for addressing throughout the backend can be implemented as such specifically typed tagged values; the tag here denotes the quantiser and thus the underlying grid -- it should be implemented as hash-ID for smooth integration with code written in plain C.
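A possible shape of such a tagged value is sketched below; the struct layout, the hash-ID field and the grid class are illustrative assumptions, not the actual implementation.
{{{
#include <cstdint>
#include <stdexcept>

using MicroTicks = int64_t;

struct FrameNr {                    // tagged quantised value; trivially shared with plain C
    int64_t  count;
    uint64_t gridID;                // hash-ID denoting the quantiser / time grid
};

class FixedFrameGrid {              // simple regular grid: origin plus constant frame duration
    MicroTicks origin_, spacing_;
    uint64_t   id_;
public:
    FixedFrameGrid(MicroTicks origin, MicroTicks spacing, uint64_t id)
        : origin_(origin), spacing_(spacing), id_(id) {}

    FrameNr quantise(MicroTicks raw) const      // (rounding of negative values glossed over)
    {
        return FrameNr{ (raw - origin_) / spacing_, id_ };
    }
    MicroTicks rawTime(FrameNr frame) const     // back onto the internal scale
    {
        if (frame.gridID != id_)
            throw std::invalid_argument("frame number belongs to a different grid");
        return origin_ + frame.count * spacing_;
    }
};

int main()
{
    FixedFrameGrid pal25{0, 40'000, 0xA5A5u};   // 25 fps grid in microsecond ticks
    FrameNr f = pal25.quantise(101'234);        // frame #2, tagged with this grid's ID
    MicroTicks aligned = pal25.rawTime(f);      // 80'000 -- the off-grid rest is gone
    return aligned == 80'000 ? 0 : 1;
}
}}}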

At the level of individual timecode formats, we're lacking a common denominator; thus it is preferable to work with different concrete timecode classes through //generic programming.// This way, each timecode format can expose operations specific only to the given format. Especially, different timecode formats expose different //component fields,// modelled by the generic ''Digxel'' concept. There is a common baseclass ~TCode though, which can be used as marker or for //type erasure.//
&rarr; more on [[usage situations|TimeUsage]]
&rarr; Timecode [[format and quantisation|TimecodeFormat]]
&rarr; Quantiser [[implementation details|QuantiserImpl]]
The following collection of usage situations helps to shape the details of the time values and time quantisation design. &rarr; see also [[time quantisation|TimeQuant]]

;time position of an object
:indeed the term "time position" encompasses two quite different questions
:* a time or timing specification within the object
:* determining the time point in reference to an existing scale
;time and length of an object
:basically the same situation, but length could be treated in two ways for quantisation
:* having a precise specification and then quantising start and endpoint
:* quantising the start position and then establishing an (independently quantised) length
;moving and resizing an object
:this can in itself be done in two different ways, and each of them can be applied in a quantised flavour
:which sums up to 8 possible combinations, considering that position and length are 2 degrees of freedom.
:* a variable can be //changed// by an offset
:* a variable can be //defined// to a new value
:another (hidden) degree of freedom lies in how to apply a quantised offset to an unquantised value (and vice versa)
:because this operation might be done both in the quantised or non-quantised domain, and also the result might be (un)quantised
;updating the playback position
:this can be seen as a practical application of the above; basically we can choose to show the wall clock time or we can advance the playback position in frame increments, thus denoting the frame currently in display. For video, these distinctions may look moot, but they are indeed relevant for precise audio editing, especially when combined with loop playback (recall that audio is processed block wise, but the individual sample frames and thus the possible loop positions are way finer than the processing block size)
;dispatching individual frames for calculation
:when a [[render or playback process|PlayProcess]] is created, at some point we need to translate this logical unit ("calculation stream") into individual frame job entries. This requires breaking continuous time into individual frames, and then enumerating these frames.
;displaying time intervals
:for display, time intervals get //re-quantised// into display array coordinates.
:While evidently the display coordinates are themselves quantised and we obviously don't want to cancel out the effect of a quantisation of the values or intervals to be displayed (which means we get two quantisations chained up after each other), there remains the question whether the display array coordinates should be aligned to the grid of the //elements to be displayed,// and especially whether the allowed zoom factors should be limited. This decision isn't an easy one, as it has an immediate and tangible effect on what can be shown, how reversible and reproducible a view is and (especially note this!) on the actual values which can be set and changed through the GUI.
;time value arithmetic
:Client code as well as the internal implementation of time handling needs to do arithmetic operations with time values. Time values are additive and totally ordered. Distance, as calculated by subtraction, can be made into a metric. Another and quite different question is to what extent a quantised variant of this arithmetic is required.
;relative placement
:because of the divergence between quantised and unquantised values, the question arises whether placement relative to another object refers to the raw position or the already quantised position. Basically all the variations discussed for //time and length of an object// also do apply here.

!notable issues
''Direct quantisation of length is not possible''. This is due to the non-linear nature of all but the most trivial time grids: Already something as simple as adding a start offset destroys linearity, and this is all the more true within a compound grid where the grid spacing changes at some point. Thus, the length has to be re-established at the target position of a time interval after each change involving quantisation. Regarding the //strategy// to apply when re-establishing the length, it seems more appropriate to treat the object as an entity which is moved, which means to do quantisation in two steps, first the position, then the endpoint (the second option in the description above). But it seems advisable not to hard wire that strategy -- better put it into the quantiser.
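A tiny numeric illustration (assumed grid: origin 10, spacing 40, floor quantisation) shows how quantising the length directly disagrees with re-deriving it from the quantised start and end points:
{{{
#include <iostream>

int main()
{
    const long origin = 10, spacing = 40;                       // assumed grid
    auto quant = [&](long t){ return origin + (t - origin) / spacing * spacing; };

    long start = 45, length = 50, end = start + length;         // some raw time span

    long directLength  = length / spacing * spacing;            // naive: 40
    long derivedLength = quant(end) - quant(start);             // via grid points: 80

    std::cout << directLength << " vs " << derivedLength << "\n";
}
}}}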

We should note that the problems regarding quantised durations also carry over to //offsets:// it is difficult to ''define the semantics of a quantised offset''. Seemingly the only viable approach is to have an //intended offset,// and then to apply a re-quantisation to the target after applying the (raw) offset.

''When to materialise a quantisation''. Because of the basic intention to retain information, we delay actually applying the quantisation to the stored values as much as possible. But not materialising immediately at quantisation has the downside of possibly accumulating off-grid values without that being evident. Most notably, if we apply the raw offsets produced by GUI interactions, the object's positions and lengths are bound to accumulate spurious information never intended by the user.

Thus, an especially important instance of that problem is ''how to deal with updates in a quantised environment''. If we handle quantisation strictly as a view employed on output, we run into the problems with accumulating spurious information. On the other hand, allowing for quantised changes inevitably pulls in all the complexity of mixing quantised and non-quantised values. It would be desirable somehow to move these distinctions out of the scope of this design and offload them onto the client (code using these time classes).

Another closely related problem is ''when to allow [[mutations|TimeMutation]]'', if at all. We can't completely do away with mutations, simply because we don't have a pure functional language at our disposal. The whole concept of //reference semantics// doesn't play so well with immutable objects. The Lumiera high-level (session) model certainly relies on objects intended to be //manipulated.// Thus we need a re-settable length field in {{{MObject}}} and we need a time variable for position calculations. Yet we could make any //derived objects// into immutable descriptor records, which certainly helps with parallelism.

The ''problem with playback position'' is -- that indeed it's an attempt to conceptualise a non-existing entity. There is no such thing as "the" playback position. Yet most applications I'm aware of //do// employ this concept. Likely they got trapped by the metaphor of the tape head, again. We should do away with that. On playback, we should show a //projection of wall-clock time onto the expected playback range// -- not more, not less. It should be acknowledged that there is //no direct link to the ongoing playback processes,// besides the fact that they're assumed to sync to wall-clock time as well. Recall, typically there are multiple playback processes going on in compound, and each might run on a different update rate. If we really want a //visual out-of-sync indicator,// we should treat that as a separate reporting facility and display it apart from the playback cursor.

An interesting point to note for the ''frame dispatch step'' is the fact that in this case quantised values and quantisation are approached in the reverse direction, compared with the other uses. Here, after establishing a start point on the time scale, we proceed with enumerating distinct frames and later on need to access the corresponding raw time, especially to find out about the [[timeline segment|Segmentation]] to address, or for retrieving parameter automation. &rarr; FrameDispatcher.

Note that the ''display window might be treated as just an independent instance of quantisation''. This is similar to the approach taken above for modifying quantised time span values. We should provide a special kind of time grid, the display coordinates. The origin of these is always defined to the left (lower) side of the interval to be displayed, and they are gauged in screen units (pixels or similar, as used by the GUI toolkit). The rest is handled by the general quantisation mechanisms. The problem of aligning the display should be transformed into a general facility to align grids, and solved for the general case. Doing so solves the remaining problems with quantised value changes and with ''specifying relative placements'' as well: if we choose to represent them as quantised values, we might (or might not) also choose to apply this //grid-alignment function.//

!the complete time value usage cycle
The way time value and quantisation handling is designed in Lumiera creates a typical usage path, which actually is a one-way route. We might start out with a textual representation according to a specific ''timecode'' format. Assuming we know the implicit underlying ''time grid'' (coordinate system, framerate), this timecode string may be parsed. This brings us (back) to the very origin, which is a raw ~TimeValue (''internal time'' value). Now, this time value might be manipulated, compared to other values, combined into a ''time span'' (time point and duration -- the most basic notion of an //object// to be manipulated in the Session). Anyway, at some point these time values need to be related to some ''time scale'' again, leading to ''quantised'' time values, which -- finally -- can be cast into a timecode format for external representation again, thus closing the circle.
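To make this cycle tangible, a toy walk-through in plain C++ follows; all helpers here are hypothetical simplifications of the classes named above (internal time is just a microsecond count), not the actual Lumiera time library.
{{{
// Toy walk-through of the usage cycle: timecode -> internal time -> span -> grid -> timecode.
#include <cstdio>

const long long SEC = 1000000;                    // micro ticks per second

long long parseTimecode (int h, int m, int s, int f, int fps)    // timecode fields -> internal time
{
    return ((h*3600LL + m*60 + s) * SEC) + (f * SEC / fps);
}

long long quantise (long long raw, int fps)       // relate to a frame grid again (round down)
{
    long long frame = raw * fps / SEC;
    return frame * SEC / fps;
}

int main()
{
    int fps = 25;
    long long origin = parseTimecode (0,10,0,0, fps);     // 1) parse a timecode value
    long long start  = origin + 3*SEC + 7;                // 2) raw internal time, arithmetic...
    long long dur    = 2*SEC;                             //    ...combined into a time span
    long long qStart = quantise (start, fps);             // 3) quantised onto the grid
    long long frameNr = qStart * fps / SEC;               // 4) cast into an external representation
    std::printf ("frame %lld, duration %lld us\n", frameNr, dur);
}
}}}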

!substantial problems to be solved
* how to [[align multiple grids|TimeGridAlignment]]
* how to integrate [[modifications of quantised values|TimeMutation]].
* how to isolate the Time/Quantisation part from the grid MetaAsset in the session &rarr; we use the [[Advice]] system
* how to design the relation of Timecode, Timecode formatting and Quantisation &rarr; [[more here|TimecodeFormat]]
The handling of [[Timecode]] is closely related to [[time representation and quantisation|TimeQuant]]. In fact, these two topics blend into one another. Time will be quantised into a //grid,// but this grid only makes sense when linked to an externally relevant meaning and representation, which is the Timecode. But a timecode value also denotes a specific point in time -- performing operations on a timecode is equivalent to manipulating a quantised time value.

!Problem of dependencies
The general design of time and time quantisation in Lumiera requires us to do the actual quantisation as late as possible. The relation of the time-like entities creates a //one way route:// starting from the ''internal time'', which is contiguous and definite, but completely opaque for the client, the usage proceeds to a ''quantised time value'', requiring the specification of a concrete ''time grid''. Now, the possible ''timecode formats'' can be determined, and committing to one specific timecode format finally allows getting an explicit, number-like value, e.g. the frame count, or the seconds part of a SMPTE Timecode.

In this handling sequence, the ~QuTime with the embedded link to a ''quantiser'' plays the role of a coordinating hub.
* a quantised time element depends on a concrete quantiser, which might be fetched by symbolic ID.
* the quantiser draws upon a configured collection of concrete timecode ''formats''.
* only the concrete format knows how to build the components of a value representation in that format, but in doing so, has to rely on the primitives exposed by the quantiser.
* the concrete individual ''timecode value'' is comprised of several value-like components called ''Digxel'' (generalised digits)
* the timecode value can be //(re)-built,// assuming there is a concrete quantised time, a format and a quantiser

!Usage and operations
Timecode //is a value.// It has an explicit and distinct structure, and the component parts can be accessed as plain numeric values, or in a formatted string representation. To this end, these values need to be (re)-built. This operation -- building the value(s) -- is what effectively //materialises// the quantisation: At this point, the actual rounding and truncating operation is performed.

But timecode values can also be //mutated.// In this respect they differ from a plain ~TimeValue, which is immutable like a number.
* the individual components (digxels) can be assigned, causing the rebuilding of the timecode
* the value as a whole can be incremented, decremented or be offset.

And last but not least, it is possible to get a new ~TimeValue, reflecting the current (quantised) time of this timecode. This closes the circle of time handling and quantisation; actually this is the usual way to enter a new time value into the system, by starting out from an explicitly specified grid aligned timecode value.
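A hypothetical miniature of such a mutable timecode value may illustrate the point; the real Digxel and Timecode classes in Lumiera are considerably more elaborate, this is merely a self-contained sketch.
{{{
// Miniature mutable timecode: assigning a component or offsetting the whole value
// rebuilds the underlying (quantised) time, which can be read back as a plain value.
#include <cstdio>

struct MiniTimecode
{
    int  fps;
    long frames;                                  // quantised time, counted in frames

    int  getSecs() const        { return int(frames / fps) % 60; }
    void setSecs (int s)                          // assign one "digxel": rebuild the value
    {
        frames += (s - getSecs()) * long(fps);
    }
    void offset (long frameOffset)                // mutate the value as a whole
    {
        frames += frameOffset;
    }
    long asFrameCount() const   { return frames; }   // back to a plain time value
};

int main()
{
    MiniTimecode tc{25, 25*61};        // 0:01:01:00 at 25fps
    tc.setSecs (5);                    // -> 0:01:05:00
    tc.offset (-3);                    // step back three frames
    std::printf ("%ld frames\n", tc.asFrameCount());    // prints 1622
}
}}}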

!!{{red{WIP 1/11}}}design tasks
* @@color:green;✔@@ find out about the connection to the MetaAsset &rarr; [[Advice]]
* @@color:green;✔@@ determine the primitives which need to be on the //effective quantiser API.//
* @@color:green;✔@@ find out how a format can address the individual components it's comprised of &rarr; direct access by concrete type
* @@color:green;✔@@ decide how a concrete TC value can refer to its quantiser &rarr; always using smart-ptr
* @@color:green;✔@@ decide on how to handle wrap-around and negative values &rarr; by strategy

!negative values and wrap-around
Many timecode formats were defined in the era of analogue media -- typically the key point of a timecode format was the way it can be encoded into some nits and bits within the available channel bandwidth. Often this causes the use of fixed field length representations, imposing hard limits on the number range. Adhering strictly to such ancient specifications in the context of a general purpose computer program usually results in lots of additional complexity without a fundamental reason.

When in doubt, for the design of Lumiera we tend to err towards the basic mathematical definition. For time values this means to use a practically unlimited time axis with an arbitrary time origin. Under these assumptions, negative time values are just natural and will be handled without case distinctions. This way, at least the core is kept simple and straightforward. Which leaves us with some of the aforementioned additional complexities showing up when it comes to interfacing with an existing timecode format. Especially, the following points need to be clarified:
* wrap-around: when a timecode component (e.g. the seconds) exceeds the defined range, this creates a propagation (e.g. to the minutes) and a remainder (wrapped seconds value).
* range limitation: what happens when an internal time value exceeds the range of possible timecode values? (e.g. what is 23:59:59 + 70 frames?)
* how to extend the definition to negative values, if applicable.

Especially ''SMPTE Timecode'' is limited to the range from 0:0:0:0 to 23:59:59:##.
But because for Lumiera the SMPTE format is just a presentation rule applied to a more orthogonal internal representation, we could think of extending the allowed values...
# by just letting the hours increase arbitrarily
# by letting the timecode wrap from 23:59:59:## to 0:0:0:0 and treat this junction as continuous (effectively a "modulo 1 day").
# by making the hours-field signed, but otherwise retaining the same wrapping scheme (0:0:0:0 - 1sec = -1:59:59:0)
# by a representation symmetrical to the zero point (0:0:0:0 - 1sec = -0:0:1:0) -- but beware: //the underlying frame quantisation won't flip//
Because this choice is considered arbitrary and any reasoning will be context dependent, the decision is to provide a hook for a //strategy// --
Currently (1/11), the strategy is implemented according to (1) and (4) above, leaving the actual configuration of a strategy for later.
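To illustrate the arithmetic behind options (1) and (4), here is a small self-contained sketch; it is not the actual strategy implementation within Lumiera, just the underlying modular decomposition.
{{{
// Decompose a (possibly negative) frame count into SMPTE-like components.
#include <cstdio>
#include <cstdlib>

struct Smpte { bool negative; long hours; int mins, secs, frames; };

Smpte decompose (long frameCount, int fps)
{
    Smpte tc;
    tc.negative = frameCount < 0;
    long f = std::labs (frameCount);      // option (4): mirror the representation at zero
    tc.frames = int(f % fps);  f /= fps;  //             (note: the raw frame quantisation
    tc.secs   = int(f % 60);   f /= 60;   //              itself does *not* flip direction)
    tc.mins   = int(f % 60);   f /= 60;
    tc.hours  = f;                        // option (1): hours simply keep growing
    return tc;
}

int main()
{
    Smpte tc = decompose (-25, 25);       // one second before the origin, at 25fps
    std::printf ("%s%ld:%02d:%02d:%02d\n",
                 tc.negative? "-":"", tc.hours, tc.mins, tc.secs, tc.frames);   // -0:00:01:00
}
}}}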
!!{{red{WIP 1/11}}}BUG
Implementation of this strategy is still broken: it doesn't work properly when the change passing over the zero point actually happens by propagation from lower digits. Because then -- given the way the mutators are implemented -- the //new value of the wrapping digit hasn't been stored.// It seems the only sensible solution is to change the definition of the functors, so that any value will be changed by side-effect. {{red{Question 4/11 -- isn't this done and fixed by now??}}}
Timeline is the top level element within the [[Session (Project)|Session]]. It is visible within a //timeline view// in the GUI and represents the effective (resulting) arrangement of media objects, to be rendered for output or viewed in a Monitor (viewer window). A timeline is comprised of:
* a time axis in absolute time ({{red{WIP 1/10}}}: not clear if this is an entity or just a conceptual definition)
* a list of [[global Pipes|GlobalPipe]] representing the possible outputs (master busses)
* //exactly one// top-level [[Sequence]], which in turn may contain further nested Sequences.
* when used for Playback, a ViewConnection is necessary, which allows getting or connecting to a PlayController

Please note especially that following this design //a timeline doesn't define tracks.// [[Tracks form a Tree|Track]] and are part of the individual sequences, together with the media objects placed to these tracks. Thus sequences are independent entities which may exist stand-alone within the model, while a timeline is //always bound to hold a sequence.// &rarr; see ModelDependencies
[>img[Fundamental object relations used in the session|uml/fig136453.png]]

Within the Project, there may be ''multiple timelines'', to be viewed and rendered independently. But, being the top-level entities, multiple timelines may not be combined further. You can always just render (or view) one specific timeline. A given sequence may be referred to directly or indirectly from multiple timelines though.

''Note'': in early drafts of the design (2007) there was an entity called "Timeline" within the [[Fixture]]. This entity seems superfluous and has been dropped. It never got any relevance in existing code and at most was used in some code comments.

!Façade and implementation
Actually, Timeline is both an interface and acts as façade. It's an interface, because we'll need "timeline views" ({{red{really? is that a reason to create a hierarchy right here, or shouldn't that be rather conceptual?}}}). It is a façade to the raw structures in the model, in this case a {{{Placement<BindingMO>}}} attached immediately below the [[root scope|ModelRootMO]]. The implementation of the timeline(s) is maintained as StructAsset within the AssetManager, managed by shared-ptr. It always depends on a [[Sequence]], which might be created automatically as an empty container when referring to a timeline.

Besides building on the asset management, implementing Timeline (and Sequence) as StructAsset yields another benefit: ~StructAssets can be retrieved by query, allowing to specify more details of the configuration immediately on creation. //But on the short term, this approach causes problems:// there is no real inference engine integrated into Lumiera yet (as of 2/2010 the plan is to get an early alpha working end to end first). For now we're bound to use the {{{fake-configrules}}} and to rely on a hard wired simulation of the intended behaviour of a real query resolution. Just some special magic queries will work for now, but that's enough to get ahead.
There is a three-level hierarchy: [[Project|Session]], [[Timeline]], [[Sequence]]. Each project can contain ''multiple timelines'', to be viewed and rendered independently. But, being the top-level entities, these timelines may not be combined further. You can always just render (or view) one specific timeline. Each of those timelines refers to a Sequence, which is a bunch of [[media objects|MObject]] placed to a tree of [[tracks|Track]]. Of course it is possible to use ~sub-sequences within the top-level sequence within a timeline to organize a movie into several scenes or chapters. 

[>img[Relation of Timelines, Sequences and MObjects within the Project|uml/fig132741.png]]
As stated in the [[definition|Timeline]], a timeline refers to exactly one sequence, and the latter defines a tree of [[tracks|Track]] and a bunch of media objects placed to these tracks. A Sequence may optionally also contain nested sequences as [[meta-clips|VirtualClip]]. Moreover, obviously several timelines (top-level entities) may refer to the same Sequence without problems.
This is because the top-level entities (Timelines) are not permitted to be combined further. You may play or render a given timeline, you may even play several timelines simultaneously in different monitor windows, and these different timelines may incorporate the same sequence in a different way. The Sequence just defines the relations between some objects and may be placed relative to another object (clip, label,...) or similar reference point, or even anchored at an absolute time if desired. In a similar open fashion, within the track-tree of a sequence, we may define a specific signal routing, or we may just fall back to automatic output wiring.

!Attaching output
The Timeline owns a list of global [[pipes (busses)|Pipe]] which are used to collect output. If the track tree of a sequence doesn't contain specific routing advice, then connections will be done directly to these global pipes in order and by matching StreamType (i.e. typically video to video master, audio to stereo audio master). When a monitor (viewer window) is attached to this timeline, similar output connections are made from those global pipes, i.e. the video display will take the contents of the first video (master) bus, and the first stereo audio pipe will be pulled and sent to system audio out. The timeline owns a ''play control'' shared by all attached viewers and coordinating the rendering-for-viewing. Similarly, a render task may be attached to the timeline to pull the pipes needed for a given kind of generated output. The actual implementation of the play controller and the coordination of render tasks is located in the Backend, which uses the service of the Proc-Layer to pull the respective exit nodes of the render engine network.

!Timeline versus Timeline View
Actually, what the GUI creates and uses is the //view// of a given timeline. This makes no difference to start with, as the view is modelled to be a sub-concept of "timeline" and thus can stand in. All different views of the //same// timeline also share one single play control instance, i.e. they all have one single playhead position. Doing it this way should be the default, because it's the least confusing. Anyway, it's also possible to create multiple //independent timelines// &mdash; in an extreme case even when referring to the same top-level sequence. This configuration gives the ability to play the same arrangement in parallel with multiple independent play controllers (and thus independent playhead positions)

To complement these possibilities, I'd propose giving the //timeline view// the ability to be re-linked to a sub-sequence. This way, it would stay connected to the main play control, but at the same time show a sub-sequence //in the way it will be treated as embedded// within the top-level sequence. This would be the default operation mode when a meta-clip is opened (and shown in a separate tab with such a linked timeline view). The reason for this proposed handling is again to give the user the least surprising behaviour. Because, when &mdash; on the contrary &mdash; the sub-sequence were opened as a //separate timeline,// a different absolute time position and a different signal routing may result; doing so should be reserved for advanced use, e.g. when multiple editors cooperate on a single project and a sequence has to be prepared in isolation prior to being integrated in the global sequence (featuring the whole movie).
//Timing constraints of a render or playback process.//

!constituents
The Timings record is used at various places at the engine interface level, since it links together several pieces of information crucial for playback control.
* a reference to the underlying frame grid.
** the frame grid defines time origin and framerate
** additionally the grid translates nominal time into frame numbers
* the Timings record provides information about latency -- both output latency and engine latency are exposed here
* indication of the service class or playback urgency is included, e.g. {{{ASAP, NICE, TIMEBOUND}}}
* from the usage view the Timings record represents a //constraint// -- with the ability to specify additional constraining conditions
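As a rough orientation, such a timing constraint record might bundle something like the following; all field names here are hypothetical and the actual Timings type in Lumiera differs in detail.
{{{
// Hedged sketch of the information a timing constraint record bundles.
#include <memory>

enum class ServiceClass { ASAP, NICE, TIMEBOUND };            // playback urgency, as listed above

struct FrameGrid { long long origin; int fps; };              // stand-in: origin and framerate;
                                                              // also maps nominal time to frame numbers
struct Timings
{
    std::shared_ptr<const FrameGrid> grid;                    // underlying frame grid
    long outputLatency = 0;                                   // output latency  (unit assumed: usec)
    long engineLatency = 0;                                   // engine latency
    ServiceClass urgency = ServiceClass::TIMEBOUND;           // service class / urgency
};
}}}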

!!!!role of the ~TimeAnchor
While the timings define the nature of a specific playback feed, they omit the actual link to wall clock time. The latter is established and maintained during the planning process, which proceeds in chunks of evaluation: each planning chunk is pinned to a specific real time deadline through the ''time anchor'' used as closure to define this planning chunk. Whenever a chunk of frames has been planned, a new time anchor is established and scheduled as a follow-up job to pick up the frame planning work.

note: still {{red{WIP as of 1/2013}}}

!where are the timings defined?
Timing constraint records are used at various places within the engine. Basically we need such a timings record for starting any kind of playback or render.
The effective timings are established when //allocating an OutputSlot// -- based on the timings defined for the ModelPort  to be //performed,// i.e. the global bus to render.
Tracks are just a structure used to organize the Media Objects within the Sequence. Tracks are always associated with a specific Sequence and the Tracks of a Sequence form a //tree.// They can be considered to be an organizing grid, and besides that, they have no special meaning. They are grouping devices, not first-class entities. A track doesn't "have" a port or pipe or "is" a video track and the like; it can be configured to behave in such a manner by using placements.

The ~Track-IDs are assets on their own, but they can be found within a given sequence. So, several sequences can share a single track or each sequence can hold tracks with their own, separate identity. (the latter is the default)
* Like most ~MObjects, tracks have an asset view: you can find a track asset (a track ID) in the asset manager.
* and they have an object view: there is a track MObject which can be [[placed|Placement]], thus defining properties of this track within one sequence, e.g. the starting point in time
Of course, we can place other ~MObjects relative to some track (that's the main reason why we want to have tracks). In this sense, the [[handling of Tracks|TrackHandling]] is somewhat special: the placements forming the tree of tracks can be accessed directly through the sequence, and a track acts as container, forming a scope to encompass all the objects "on" this track. Thus, the placement of a track defines properties of the track, which will be inherited (if necessary) by all ~MObjects placed to this track. For example, if placing (=plugging) a track to some global [[Pipe]], and if placing a clip to this track, without placing the clip directly to another pipe, the associated-to-pipe information of the track will be fetched by the builder when needed to make the output connection of the clip.
&rarr; [[Handling of Tracks|TrackHandling]]
&rarr; [[Handling of Pipes|PipeHandling]]

&rarr; [[Anatomy of the high-level model|HighLevelModel]]
What //exactly&nbsp;// is denoted by &raquo;Track&laquo; &mdash; //basically&nbsp;// a working area to group media objects placed at this track at various time positions &mdash; varies depending on context:
* viewed as [[structural asset|StructAsset]], tracks are nothing but global identifiers (possibly with attached tags and description)
* regarding the structure //within each [[Sequence]],// tracks form a tree-like grid, the individual track being attached to this tree by a [[Placement]], thus setting up properties of placement (time reference origin, output connection, layer, pan) which will be inherited down to any objects located on this track and on child tracks, if not overridden more locally.
* with respect to //object identity,// a given track-ID can have an incarnation or manifestation as real track-object within several [[sequences|Sequence]] (meaning you could select, disable or choose for rendering all objects in any sequence placed onto this track). Moreover, the track-//object// and the //placement&nbsp;// of this track within the tree of tracks of a given sequence are two distinguishable entities (meaning a given track &mdash; with a couple of objects located on it &mdash; could be placed differently several times within the same sequence {{red{really??}}}, for example with different start offset or with different layering, output mode or pan position)
{{red{3/2010: no!}}} &mdash; it doesn't work this way. Relative placements attach to other placements, not ~MObjects. Thus the clips in this example are within the scope of //one// placement. We could create //another// placement of the same track and attach it to a different sequence, but this other placement wouldn't "inherit" the clips attached to the first one!

!Identification
Tracks thus represent a blend of several concepts, but depending on the context it is always clear which aspect is meant. Seen as [[assets|Asset]], tracks are known by a unique track-ID, which can be either [[queried|ConfigQuery]], or directly referred to by use of the asset-ID (which is a globally known hash). Usually, all referrals are via track-ID, including when you [[place|Placement]] an object onto a track.
Under some circumstances though, especially from within the [[Builder]], we rather refer to a {{{Placement<Track>}}}, denoting a specific instantiation located at a distinct node within the tree of tracks of a given sequence. These latter referrals are always done by direct object reference, e.g. while traversing the track tree (generally there is no way to refer to a placement by name {{red{Note 3/2010 meanwhile we have placementIDs and we have ~MObjectRef. TODO better wording}}}).

!creating tracks
Similar to [[pipes|Pipe]] and [[processing patterns|ProcPatt]], track-assets need not be created, but rather leap into existence on first referral. On the contrary, you need to explicitly create the {{{Placement<Track>}}} for attaching it to some node within the tree of tracks of a sequence. The public access point for creating such a placement is {{{MObject::create(trackID)}}} (i.e. the ~MObjectFactory). Here, the {{{trackID}}} denotes the track-asset. This placement, as returned from the ~MObjectFactory, isn't attached to the session yet; {{{Session::current->attach(trackPlacement)}}} performs this step by creating a copy managed by the PlacementIndex and attaching it at the current QueryFocus, which was assumed to point to a track previously. Usually, client code would use one of the provided convenience shortcuts for this rather involved call sequence:
* the interface of the track-~MObjects exposes a function for adding new child tracks.
* the session API contains a function to attach child tracks. In both cases, existing tracks can be referred to by plain textual ID.
* any MObjectRef to an object within the session allows to attach another placement or ~MObjectRef

!removal
Deleting a Track is an operation with drastic consequences, as it will cause the removal of all child tracks and the deletion of //all object placements to this track,// which could cause the respective objects to go out of scope (being deleted automatically by the placements or other smart pointer classes in charge of them). If the removed track was the root track of a sequence, this sequence and any timeline or VirtualClip binding to it will be killed as well. Deletion of objects can be achieved by {{{MObjectRef::purge()}}} or {{{Session::purge(MObjectRef)}}}

!using Tracks
The ''Track Asset'' is a rather static object with limited capabilities. Its main purpose is to be a point of referral. Track assets have a description field and you may assign a list of [[tags|Tag]] to them (which could be used for binding ConfigRules). Note that track assets are globally known within the session, they can't be limited to just one [[Sequence]] (but you are always free not to refer to some track from a given sequence). By virtue of this global nature, you can utilize the track assets to enable/disable a bunch of objects irrespective of what sequence they are located in, and probably it's a good idea to allow the selection of specific tracks for rendering.
Matters are quite different for the placement of a Track within the tree of tracks of a given sequence, and for placing some media object onto a given track. The track placement defines properties which will be inherited by all objects on this track and on all child tracks and thus plays a key role for wiring the objects up to some output pipe. Typically, the top level track of each sequence has a placement-to "the" video and "the" audio master pipe.

!!!!details to note
* Tracks are global, but the placement of a track is local within one sequence
* when objects are placed onto a track, this is done by referral to the global track asset ID. But because this placement of some media object is always inherently contained within one sequence, the //meaning&nbsp;// of such a placement is to connect to the properties of any track-placement of this given track //within this sequence.//
* thus tracks-as-ID appear as something global, but tracks-as-property-carrier appear to the user as something local and object-like.
* in an extreme case, you'll add two different placements of a track at different points within the track tree of a sequence. And because the objects placed onto a track refer to the global track-ID, every object "on" this track //within this sequence&nbsp;// will show up two times independently and possibly with different inherited properties (output pipe, layering mode, pan, temporal position)
* an interesting configuration results from the fact that you can use a sequence as a [["meta clip" or "virtual clip"|VirtualClip]] nested within another sequence. In this case, you'll probably configure the tracks of the "inner" sequence such as to send their output not to a global pipe but rather to the [[source ports|ClipSourcePort]] of the virtual clip (which are effectively local pipes). Thus, within the "outer" sequence, you could attach effects to the virtual clip, combine it with transitions and place it onto another track, and any missing properties of this latter placement are to be resolved within the "outer" sequence <br/>(it would be perfectly legal to construct a contrived example using the same track-ID within the "inner" and the "outer" sequence. Because the Placement of this track will probably be different in both sequences, the behaviour of this placement could be quite different in the "inner" and the "outer" sequence. All of this may seem weird when discussed here in a textual and logical manner, but when viewed within the context and meaning of the various entities of the application, it's rather the way you'd expect it to be: you work locally and things behave as defined locally)
* note further, the root of the tree of tracks within each sequence //is itself again a// {{{Placement<Track>}}}. There is no necessity for doing it this way, but it seemed more straightforward and logical to Ichthyo, as it allows for an easy way of configuring some things (like output connections) as a default within one sequence. As every track can have a list of child tracks, you'll get the "list of tracks" you'd expect.
* a nice consequence of the latter is: if you create a new sequence, it automatically gets one top-level track to start with, and this track will get a default configured placement (according to what is defined as [[default|DefaultsManagement]] within the current ConfigRules) &mdash; typically starting at t=0 and being plugged into the master video and master audio pipe
* nothing prevents us from putting several objects at the same temporal location within one track. If the builder can't derive any additional layering information (which could be provided by some other configuration rules), then //there is no layering precedence// &mdash; simply the object encountered first (or last) wins.
* obviously, one wants the __edit function__ used to create such an overlapping placement&nbsp; also to create a [[transition|TransitionsHandling]] between the overlapping objects. Meaning this edit function will automatically create a transition processor object and provide it with a placement such as to attach it to the region of overlap.
''towards a definition of »Track«''. We don't want to tie ourselves to some naive and overly simplistic definition, just because it is convenient. For classical (analogue) media, tracks are physical entities dictated by the nature of the process by which the media works. Especially, Tape machines have read/writing heads, which creates fixed tracks to which to route the signals. This is a practical geometric necessity. For digital media, there is no such necessity. We are bound primarily by the editor's habits of working.

!!!Assessment of Properties
Media are used as Clips (contiguous chunks), they are a compound of several elementary streams, and they have a temporal extension (length). Indeed, the temporal dimension is the only fundamental property that can't be changed. Orthogonal to this dimension, we find one or more organisational dimensions forming a grid:
* a media stream may be sent to one of several possible output destinations (image or sound, subgroup busses, MIDI instruments)
* for any given output destination there may be variations in the //way of connecting// (overlay mode and layer, pan position, MIDI channel)
* besides, we often just want to stash away some clip without using it, e.g. as an alternative or for later referral
This is to say we have //several degrees of freedom// within this organisational grid. Just because some sound is located on this track doesn't mean it will be sent to a given output, rather the clip is located on this track //and// is connected to that output //and// &mdash; supposing we have full-periphonic surround sound &mdash; it is located 60° to the right and with 30° elevation. Combined with the (always contiguous) temporal dimension, this discrete grid is thus extruded to form something like discrete Tracks.

!!!do we really need Tracks?
Starting with the assumption "everything is just connected processing nodes", Tracks may seem superfluous. The problem with this approach is: it doesn't scale well. While it is fine to be able to connect clips and effects as we see fit (indeed, we want to build such a system), it is clearly not feasible to wire every clip manually to the output ports or add a panner effect to each and every audio sample. Because while editing, most of the time things are done in a fairly regular and systematic manner. Preferably we use the tracks as //preconfigured group setup// and just //place media onto them;// any such [[Placement]] can do the necessary wiring semi-automatic (rule-based).

!!!the constant part
There seems to be some non-time-varying part in each sequence that doesn't fit well with the simple model "objects on a timeline". Tracks seen as an organisational grid fall into this category: they are a global property of the given sequence. They could be associated with the Session as a whole, but effectively this would subvert the concept of having [[several sequences|SessionOverview]]. On the other hand, 
[[pipes|Pipe]] for Video and Sound output are obviously a global property of the Session. There can be several global pipes forming a matrix of subgroup busses. We could add ports or pipes to tracks by default as well, but we don't do this, because, again, this would run counter to our attempt of treating tracks as merely organisational entities. We have special [[source ports|ClipSourcePort]] on individual clips though, and we will have ports on [[virtual clips|VirtualClip]] too.

!Design
[[Tracks|Track]] are just a structure used to organize the Media Objects within the session. They form a grid, and besides that, they have no special meaning. It seems convenient to make the tracks not just a list, but allow grouping (tree structure) right from start. __~MObjects__ are ''placed'' rather than wired. The wiring is derived from the __Placement__. Placing can happen in several dimensions:
* placing in time will define when to activate and show the object.
* placing onto a track associates the ~MObject with this track; the GUI will show it on this track and the track may be used to resolve other properties of the object.
* placing to a __Pipe__ brings the object in conjunction with this pipe for the build process. It will be considered when building the render network for this pipe. Source-like objects (clips and exit nodes of effect chains) will be connected to the pipe, while transforming objects (effects) are inserted at the pipe. (you may read "placed to pipe X" as "plug into pipe X")
* depending on the nature of the pipe and the source, placing to some pipe may create additional degrees of freedom, demanding the object to be placed in these new, additional dimensions: connecting to video out e.g. creates an overlay mode and a layer position which need to be specified, while connecting to a spatial sound system creates the necessity of a pan position. On the other hand, placing a mono clip onto a mono Pipe creates no additional degrees of freedom.
Placements are __resolved__ resulting in an ExplicitPlacement. In most cases this is just a gathering of properties, but as Placements can be incomplete and relative, there is room for real solving. The resolving mechanism tries to __derive missing properties__ from the __context__: when a clip isn't placed to some pipe but to a Track, then the Track and its parents will be inspected. If one of them has been placed to a pipe, the object will be connected to this pipe. Similarly for layers and pan position. This is done by [[Placement]] and LocatingPin; as the [[Builder]] uses ~ExplicitPlacements, it isn't concerned with this resolving and just uses the data they deliver to drive the [[basic building operations|BasicBuildingOperations]]
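The following toy model sketches this "derive missing properties from the context" idea; the types and the walk-up-the-scopes loop are hypothetical simplifications, not the actual Placement/LocatingPin implementation.
{{{
// If the clip's own placement names no pipe, walk up the enclosing tracks
// until one of them supplies one; otherwise leave the connection unresolved.
#include <cstdio>
#include <optional>
#include <string>

struct Scope
{
    std::string name;
    std::optional<std::string> pipe;     // a scope may or may not be placed to a pipe
    const Scope* parent = nullptr;
};

std::optional<std::string> resolvePipe (const Scope* s)
{
    for ( ; s; s = s->parent)
        if (s->pipe) return s->pipe;     // first enclosing scope with a pipe wins
    return std::nullopt;                 // nothing found: leave unresolved
}

int main()
{
    Scope root {"root", "video-master"};
    Scope track{"track-1", std::nullopt, &root};
    Scope clip {"clip-A",  std::nullopt, &track};
    if (auto pipe = resolvePipe (&clip))
        std::printf ("clip-A connects to %s\n", pipe->c_str());   // -> video-master
}
}}}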
&rarr; [[Definition|Track]] and [[handling of Tracks|TrackHandling]]
&rarr; [[Definition|Pipe]] and [[handling of Pipes|PipeHandling]]
Transitions combine the data from at least two processing chains and do this combining in a time varying fashion. So, any transition has
* N input connections
* either one or N output connections
* temporal coordinates (time, length)
* some control data connection to a ParamProvider, because in the most general case the controlling curves are treated similarly to [[automation data|AutomationData]]

!!!how many output ports?
The standard case of a transition is sort of mixing together two input streams, like e.g. a simple dissolve. For this to be of any use, these input streams need to be connected to the same output destination before and after the transition (with regard to the timeline), i.e. the inputs and the transition share placement to the same output pipe. In this case, when the transition starts, the direct connections can be suspended and the transition will switch in seamlessly.

Using transitions is a very basic task and thus needs viable support by the GUI. Handling of transitions needs to be very convenient, because it is so important. Because of this, it is compelling to subsume a more complicated situation and treat this more complicated case similarly. This is the case when two (or N) elements have to be combined in the way of a transition, but their further processing in the processing chain //after// the transition needs to be different; maybe they belong to different subgroups, have to appear on different layers or with different pan positions. Of course, the workaround is simple, at least "simple" from the programmer's point of view. It is //not// simple from the editor's point of view the moment the editor has to do this simple thing (changing the wiring and adding manually synced fade curves to the individual parts) a dozen or even a hundred times in some larger project.

Because of this experience, ichthyo wants to support a more general case of transitions, which have N output connections, behave similarly to their "simple" counterpart, but leave out the mixing step. As a plus, such transitions can be inserted at the source ports of N clips or between any intermediary or final output pipes as well. Any transition processor capable of handling this situation should provide some flag, in order to decide whether it can be placed in such a manner. (Within the builder, encountering an inconsistently placed transition is just a [[building error|BuildingError]])
//drafted service as of 4/10 &mdash; &rarr;[[implementation plans|TypedLookup]]//
A registration service to associate object identities, symbolic identifiers and types.

!Motivation
For maintaining persistent objects, generally a unique object ID is desirable. Within Lumiera, we employ 128-bit hash-~IDs (&raquo;{{{LUID}}}&laquo;). But hash-~IDs are difficult to handle for testing and configuration, as they aren't self-explanatory for human readers. They're best used in a way avoiding textual representation.

Formal symmetry might be another motivation: we separate into objects and assets, where the latter represent the //bookkeeping view.// But there remain some cases where the asset side, the bookkeeping is void of any substantial functionality. Just for the sake of orthogonality it //would be preferable//&nbsp; to maintain a registration (for scripted use, for overview and organisational activities by the user, for diagnostics and repair work in the saved session state). A mere data record suffices to fulfil these requirements &mdash; while there are other kinds of asset exposing a substantial amount of functionality.

Both these motivations highlight a tension between having a single global namespace of unique ~IDs, and having type-bound sub namespaces of limited scope, as the latter allows for using human readable symbolic ~IDs. Usually, when it comes to writing rules and configuration or creating instances explicitly from code, the specific context of the situation allows or even requires focusing immediately on a single, distinct kind of objects &mdash; global uniqueness is not much of a concern here.

!desired functionality
A registration service backed by an index table can be used to //translate//&nbsp; between these two seemingly contradictory usage view angles. 
* re-accessing the specific type to deal with data known just by unique object ID.
* retrieving the unique implementation instance, given just a symbolic ID and the type to use.
* ensuring uniqueness of symbolic ~IDs //within one kind of entities.//
* enumerating all instances of a specific kind
* ability to handle instance registration and de-registration automatically

!!!usage scenarios
* automatically maintained lists of all clips, labels, tracks and sequences in the &raquo;Asset&laquo; section of the application
* addressing objects in script driven operations, just based on symbolic names, allowing for additional conventions
* implementing a predicate {{{isTypeXXX(Element)}}}, which is crucial for [[rules based configuration|ConfigQuery]].

!!!{{red{WIP}}}Analysis and discussion
Still some contradictions &mdash; the facility rather seems helpful, not strictly necessary.
We //already have a registration service,// both for Assets (AssetManager) and for Placements (PlacementIndex). These facilities maintain not only a raw ID &harr; object association, but also structuring information, albeit bound to more specific circumstances (the system of placement scopes, and the asset category). The lookup uniqueID &rArr; object could be implemented by sequentially querying this small number of central registration facilities. Thus, still lacking is a ''system of sub index tables''.

As mentioned above, an ID &harr; type association plays a crucial role when it comes to implementing any kind of rules based configuration. It would allow bridging from our session objects to rules and resolution working entirely symbolically. (&rarr; [[more|ConfigQueryIntegration]]). But, as of 3/2010 this is a //planned feature and not required to get the initial pipeline working.// Thus, according to the YAGNI principle, we shouldn't engage in any implementation details right now and should just create the extension points.

The immediate need prompting development of this facility is how to get sub-selections from the objects in the session and for certain kinds of asset &mdash; especially how to deal with retrieving the referred track for the &rarr; [[sequence and timeline handling|ModelDependencies]].
<<<
So the plan is to use ''per type'' mapping tables for an association: ''symbolic-ID &rarr; unique-ID''
There should be a configurable slot to ''attach an object reference'' &mdash; extensions to be defined later
Just a ''registration scheme'' should be implemented right now, working completely automatically
<<<
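A minimal sketch of this per-type table idea follows; all names here are hypothetical and merely illustrate why symbolic IDs need to be unique only within one kind of entity, while the LUID stays globally unique.
{{{
// Each participating type gets a table of its own: symbolic-ID -> unique-ID.
#include <array>
#include <map>
#include <string>

using LUID = std::array<unsigned char,16>;      // 128-bit unique ID (cf. Lumiera's LUID)

template<class TY>
class TypedRegistry
{
    std::map<std::string, LUID> table_;         // symbolic-ID -> unique-ID, per type TY

public:
    bool enrol (std::string const& sym, LUID const& id)
    {                                           // reject duplicates within this type
        return table_.emplace (sym, id).second;
    }
    LUID const* lookup (std::string const& sym) const
    {
        auto pos = table_.find (sym);
        return pos == table_.end()? nullptr : &pos->second;
    }
};

struct Track; struct Sequence;                  // distinct types get distinct tables:
// TypedRegistry<Track> and TypedRegistry<Sequence> may both hold an entry "default"
}}}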

!!!!notes
* //code bloat// &mdash; need a frontend and an untyped backend for the raw storage
* one table or many tables?
** obviously, one table is more space efficient
** but it would be redundant (both PlacementIndex and AssetManager //are// implemented with a large hashtable)
** moreover, enumeration of all elements of a specific kind was one of the reasons to build this facility
* supporting 1:n ? (e.g. one track-ID for many track objects?). Rather not directly, because this defeats lookup by ID
see [[implementation planning|TypedLookup]]
TypedID is a registration service to associate object identities, symbolic identifiers and types. It acts as frontend to the TypedLookup service within Proc-Layer, at the implementation level. While TypedID works within a strictly typed context, this type information is translated into an internal index on passing over to the implementation, which manages a set of tables containing base entries with a combined symbolic+hash ID, plus an opaque buffer. Thus, the strictly typed context is required to re-access the stored data. But the type information wasn't erased entirely, so this typed context can be re-gained with the help of an internal type index. All of this is considered implementation detail and may be subject to change without further notice; any access is assumed to happen through the TypedID frontend. Besides, there are two more specialised frontends.

!Front-ends
* TypedID uses static but templated access functions, plus a singleton instance to manage a ~PImpl pointing to the ~TypedLookup table
* the individual //registration groups// (see below) are automatically exposed as MetaAsset.
* TypeHandler is heavily used by the ConfigRules {{red{planned}}}; each participating primary type provides an implementation

!Tables and index
The Table consists of several registration groups, each of which contains a hashtable and deals with one specific type. Groups are created on demand, but there is initially one group holding the internal type index (translation table). There may even be sub-groups, usable to create clusterings within one group.

__Individual entries__ are comprised of an EntryID as key (actually a ~BareEntryID, without the typing) and a payload, which //doesn't store data,// but the information necessary to ''look up and access'' the registered object. Obviously, this information is type specific, and thus the ~TypedLookup implementation can't know how to deal with it directly. Rather, we store a ''functor'' in the payload of the type index group. This functor is directly linked to the TypeHandler, i.e. any type wanting to be a primary type within Lumiera, so as to be directly usable within the ConfigRules, needs to provide a suitable functor implementation through its ~TypeHandler. These functors are then invoked by the ~TypedID frontend, when it comes to re-accessing a registered entity by ID

!link for automatic registration
An entity can be linked with the TypedLookup system to be registered and deregistered automatically. This is achieved by mixing in the {{{TypedID::Link}}}. On creation, this will set up an EntryID for this individual instance and cause creation of an empty entry within the suitable registration group. As a side-effect, uniqueness of any symbolic-ID within one group (type) is enforced. Obviously, the dtor of this registration Link takes care of de-registration automatically. Be forewarned though: by creating a unique identity, this mechanism will interfere with copying and cloning of the registered entity.
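The following sketch illustrates the RAII shape of such a registration link; all names are hypothetical and the real {{{TypedID::Link}}} differs in detail, but the construction/destruction symmetry and the ban on copying are the essential points.
{{{
// Construction registers the instance and enforces per-type uniqueness of the
// symbolic ID; destruction de-registers it; copying is disabled, because every
// instance owns a distinct identity.
#include <set>
#include <stdexcept>
#include <string>

template<class TY>
class RegistrationLink
{
    std::string symID_;

    static std::set<std::string>& registry()    // one registry per type TY
    {
        static std::set<std::string> reg;
        return reg;
    }

public:
    explicit RegistrationLink (std::string sym)
        : symID_(std::move (sym))
    {
        if (!registry().insert (symID_).second)            // enforce uniqueness per type
            throw std::invalid_argument ("duplicate ID: " + symID_);
    }
   ~RegistrationLink()
    {
        registry().erase (symID_);                         // automatic de-registration
    }
    RegistrationLink (RegistrationLink const&)            = delete;
    RegistrationLink& operator= (RegistrationLink const&) = delete;
};

// usage idea:  class Track : RegistrationLink<Track> { ... };
}}}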

In most cases, the //actually usable instance// of an entity isn't identical to an implementation class (language) instance. Typically, there is a frontend, a smart-ptr, reference or similar, which in turn might link to another registration mechanism. This ''actual instance tag'' needs to be attached deliberately through the {{{TypedID::Link}}} to be stored into the payload of the registration group table entry. This involves invocation of the functor provided by the TypeHandler.

!basic usage patterns
* ''Assets'' are maintained by the AssetManager, which always holds a smart-ptr to the managed asset. Assets include explicit links to dependent assets. Thus, there is no point in interfering with lifecycle management, so we store just a ''weak reference'' here, which the access functor turns back into a smart-ptr, sharing ownership.
* Plain ''~MObjects'' are somewhat similar, but there is no active lifecycle management &mdash; they are always either tied to a placement or linked from within the assets or render processes. When done, they just go out of scope. Thus we use a ''weak reference'' here too, thereby expecting the respective entity to mix in {{{TypedID::Link}}}
* Active ''Placements'' of an MObject behave like //object instances// within the model/session. They live within the PlacementIndex and carry a unique {{{LUID}}}-ID. Thus, it's sufficient to store this ''~Placement-ID'', which can be used by the access functor to fetch the corresponding Placement from the session.
Obviously, the ~TypedLookup system is open for addition of completely separate and different types.
[>img[TypedLookup implementation sketch|uml/fig140293.png]]
|>| !Entity |!pattern |
|1:1| Track|Placement|
|~| Label|Placement|
|~| Sequence| Asset|
|~| StreamType| Asset|
|1:n| Tag| Asset|
|~| Clip| ~MObject|
|~| Effect| ~MObject|
|~| Transition| Asset(?)|
//the problem of processing specifically typed queries without tying query and QueryResolver.//<br/>This problem can be seen as an instance of the problematic situation encountered with most visitation schemes: we want entities and tools (visitors) to work together, without being forced to commit to a pre-defined table of operations. In the situation at hand here, we want //some entities// &mdash; which we don't know &mdash; to issue //some queries// &mdash; which we don't know either. Obviously, such a problem can be solved only by controlling the scope of this incomplete knowledge, and the sequence in time of building it.
Thus, to re-state the problem more specifically, we want the //definition//&nbsp; of the entities to be completely separate from those definitions concerning the details of the query resolution mechanism, so as to be able to design, reason, verify and extend each one without being forced into concern about the details of the respective other side. So, the buildup of the combined structure has to be postponed &mdash; assuring it //has//&nbsp; happened at the point its operations are required.

!Solution Stages
* in ''static scope'' (while compiling), just a {{{Query<TY>}}} gets mentioned.<br/>As this is the only chance for driving a specifically typed context, we need to prepare a registration mechanism, to allow for //remembering//&nbsp; this context later.
* at the same time, but otherwise independently, compilation needs to drive the generation of possible query resolution mechanisms. As C++ only allows very limited influence on the compilation process, which is always based on a common visibility scope, and on the other hand doesn't allow re-accessing the static level after the fact (no language-rooted introspection), we're forced to use an external tool, or to rely on manual specification. As we're building a rather specialised facility, not a general purpose library, the latter seems adequate, if done close to where the actual query resolution mechanism is implemented.
* during ''initialisation'', a complete list of all specifically typed scopes needs to be collected. This might be a flat list, but may well become structured in multiple dimensions later, when using multiple //kinds of query.// It is important to notice that this buildup of a complete list won't happen, unless triggered explicitly, so we need registration into a system lifecycle hook.
* on ''first use'' (which we must ensure to be //after//&nbsp; initialisation) the list-of-typed queries will be hooked into the query resolving facility to build a dispatcher table there.
* the ''actual query'' just picks the respective type-ID for accessing the dispatcher table, making for a total of //two//&nbsp; function-pointer indirections.
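The dispatch step just described boils down to a table lookup; a generic sketch (not the actual Lumiera code, all names hypothetical) might look like this:
{{{
// A numeric type-ID picks the slot in a dispatcher table, and the stored functor
// invokes the concrete resolver -- two function-pointer indirections in total.
#include <cstddef>
#include <functional>
#include <vector>

struct Goal       { /* query data */ };
struct Resolution { /* result set  */ };

class Dispatcher
{
    std::vector<std::function<Resolution(Goal const&)>> table_;

public:
    void install (std::size_t typeID, std::function<Resolution(Goal const&)> resolver)
    {
        if (typeID >= table_.size()) table_.resize (typeID + 1);
        table_[typeID] = std::move (resolver);
    }
    Resolution resolve (std::size_t typeID, Goal const& query) const
    {
        return table_.at (typeID) (query);        // throws if no resolver was registered
    }
};
}}}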

!{{red{Preliminary solution}}}
The above outlines the planned full version of this service.
For now, as of 6/10, we use specialised QueryResolver instances explicitly and don't need the lifecycle hook registration yet. But a dispatcher table is in place already
&rarr; QueryRegistration
For any kind of playback to happen, timeline elements (or similar model objects) need to be attached to a Viewer element through a special kind of [[binding|BindingMO]], called a ''view connection''. In the most general case, this creates an additional OutputMapping (and in the typical standard case, this boils down to a 1:1 association, sending the master bus of each media kind to the standard OutputDesignation for that kind).

Establishing a ~ViewConnection is prerequisite for creating or attaching a PlayController through the PlayService. Multiple "play control" GUI elements can be associated with such a play controller, causing them to work as being linked together: if you e.g. push "play" on one of them, the button states of all linked GUI controls will reflect the state change of the underlying play controller.

View connections are part of the model and thus persistent. They can be created explicitly, or just derived by //allocating a viewer.// And a new view connection can push aside (and thus "break") an existing one from another timeline or model element. When a view connection is //broken,// any associated PlayProcess needs to be terminated (this is a blocking operation). Thus, at any time, there can be only one active view connection to a given viewer; here "active" means that a PlayController has been hooked up, and the connection is ready for playback or rendering. But on the other hand, nothing prevents a timeline (or similar model object) from maintaining multiple view connections -- consequently the actual playback position behaves as if associated with the view connection.
A [[structural Asset|StructAsset]] corresponding to a Viewer element in the GUI.

These Viewer (or Monitor) elements play an enabling role for any output generation. In order to [[initiate playback|PlayService]], we need a fully resolved OutputDesignation, which typically can be achieved by creating a ViewConnection, i.e. connecting a timeline or similarly suitable model element to such a viewer. Actually, this connection is modelled by attaching to a BindingMO which is linked to a ViewerAsset. This way, the model tracks and persists the available viewer windows and the current connection situation.

When the GUI is outfitted, based on the current Session or HighLevelModel, it is expected to retrieve the viewer assets and for each of them, after installing the necessary widgets, register an OutputSlot with the global OutputManager.
For showing output, three entities are involved:
* the [[Timeline]] holds the relevant part of the model, which gets rendered for playback
* by connecting to a viewer component (&rarr; ViewConnection), an actual output target is established
* the playback process itself is coordinated by a PlayController, which in turn creates a PlayProcess

!the viewer connection
A viewer element gets connected to a given timeline either by directly attaching it, or by //allocating an available free viewer.// Anyway, as a model element, the viewer is just like another set of global pipes chained up after the global pipes present in the timeline. Connecting a timeline to a viewer creates a ViewConnection, which is a special [[binding|BindingMO]]. The number and kind of pipes provided is a configurable property of the viewer element &mdash; more specifically: the viewer's SwitchBoard. Thus, connecting a viewer activates the same internal logic employed when connecting a sequence into a timeline or meta-clip: a default channel association is established, which can be overridden persistently (&rarr; OutputMapping). Each of the viewer's pipes in turn gets connected to a system output through an OutputSlot registered with the OutputManager &mdash; again an output mapping step.
A ''~Meta-Clip'' or ''Virtual Clip'' (both terms are synonymous) denotes a clip which doesn't just pull media streams out of a source media asset, but rather provides the results of rendering a complete sub-network. In all other respects it behaves exactly like a "real" clip, i.e. it has [[source ports|ClipSourcePort]], can have attached effects (thus forming a local render pipe) and can be placed and combined with other clips. Depending on what is wired to the source ports, we get two flavours:
* a __placeholder clip__ has no "embedded" content. Rather, by virtue of placements and wiring requests, the output of some other pipe somewhere in the session will be wired to the clip's source ports. Thus, pulling data from this clip will effectively pull from these source pipes wired to it.
* a __nested sequence__ is like the other sequences in the Session, except that in this case any missing placement properties will be derived from the Virtual Clip, which is thought of as "containing" the objects of the nested sequence. Typically, this also configures the tracks of the "inner" sequence such as to [[connect any output|OutputMapping]] to the source ports of the Virtual Clip.

Like any "real" clip, Virtual Clips have a start offset and a length, which will simply translate into an offset of the frame number pulled from the Virtual Clip's source connection or embedded sequence, making it possible to cut, splice, trim and roll them as usual. This of course implies we can have several instances of the same virtual clip with different start offset and length placed differently. The only limitation is that we can't handle cyclic dependencies for pulling data (which has to be detected and flagged as an error by the builder)

!Implementation
The implementation is based on the observation that a virtual clip is like a real clip, just referring to a ''virtual media''.
Thus we'll employ a special kind of media asset, which actually links to a [[binding|BindingMO]], which in turn connects to a [[Sequence]]. It's the binding's job to help translate the sequence's contents into a media-like entity, with just //content// appearing in several channels.
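As a very rough sketch of that idea -- all names here are hypothetical stand-ins, since the actual design is still open (see the note below) -- such a "virtual media" asset could simply delegate its channel structure to the binding:
{{{
#include <iostream>
#include <string>
#include <vector>

// hypothetical, heavily simplified stand-ins for the session model entities
struct Sequence                      // some sequence within the session
{
    std::vector<std::string> channels() const { return {"video", "audio-L", "audio-R"}; }
};

struct Binding                       // translates the sequence's contents into a media-like view
{
    const Sequence& seq;
    std::vector<std::string> channels() const { return seq.channels(); }
};

struct MediaAsset                    // common media interface: content appearing in several channels
{
    virtual ~MediaAsset() = default;
    virtual std::vector<std::string> channels() const = 0;
};

struct VirtualMedia : MediaAsset     // the "virtual media": a media asset backed by a binding
{
    Binding binding;
    explicit VirtualMedia (const Sequence& s) : binding{s} {}
    std::vector<std::string> channels() const override { return binding.channels(); }
};

int main()
{
    Sequence seq;
    VirtualMedia vm (seq);
    std::cout << vm.channels().size() << " channels\n";
}
}}}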
{{red{WIP as of 11/2010 -- most details yet to be defined}}} 
The ''Visitor Pattern'' is a special form of //double dispatch// &mdash; selecting the function actually to be executed at runtime based both on the concrete type of some tool object //and// the target this tool is applied to. The rationale is to separate some specific implementation details from the basic infrastructure and the global interfaces, which can be limited to describing the fundamental properties and operations, while all details relevant only for some specific sub-problem can be kept together, encapsulated in a tool implementation class. Typically, there is some iteration mechanism, allowing these tools to be applied to all objects in a given container, collection or object graph, without knowing the exact type of the target and tool objects. See the [[Visitor Pattern design discussion|VisitorUse]]

!Problems with Visitor
The visitor pattern is not very popular, because any implementation is tricky, difficult to understand and often puts quite some burden on the user code. Even Erich Gamma says that on his list of bottom-ten patterns, Visitor is at the very bottom. This may be due to the fact that the original visitor implementation (often called the ''~GoF visitor'') causes a cyclic dependency between the target objects and the visiting tool objects, and includes some repetitive code (which results in silent failure if forgotten). In 1996, Robert Martin invented an interesting variation (commonly labelled ''acyclic visitor''). By using a marker base class for each concrete target type to be treated by the visitor, and by applying a dynamic cast, we can get rid of the cyclic dependencies. The top level "Visitor" is reduced to a mere empty marker interface in this design, while the actual visiting capabilities for a concrete target object are discovered at application time by the aforementioned dynamic cast. Besides the runtime cost of such a cast, the catch is that the user code now has even more responsibilities, because of the need to consistently maintain two parallel object hierarchies.
For a long time there seemed to be little room for improvement, at least before the advent of generic programming and the discovery of template metaprogramming. ''Loki'' (Alexandrescu, 2000) showed us how to write a library implementation which hides away the technicalities of the visitor pattern and automatically generates most of the repetitive code.

!Requirements
* cyclic dependencies should be avoided or at least restricted to some central, library related place.
* The responsibilities placed on user code should be as small as possible. In particular, we should minimise the need for corresponding adjustments at separate code locations, e.g. the necessity to maintain parallel hierarchies.
* Visitor is about //double dispatch,// thus we can't avoid using some table lookup implementation &mdash; more specifically, we can't avoid using the cooperating classes' vtables. We can expect at least two table lookups for each call dispatch. Besides that, the implementation should not be too wasteful...
* individual "visiting tool" implementation classes should be able to opt in or opt out on implementing functions treating some of the visitable subclasses.
* there should be a safe fallback mechanism backed by the visitable object's hierarchy relations. If some concrete visiting tool class decides not to implement a {{{treat(..)}}}-function for some concrete target type, the call should fall back to the next best match according to the target object's hierarchy, i.e. the next best {{{treat(..)}}}-function should be used.
The last requirement practically rules out the Loki acyclic visitor, because this implementation calls a general fallback function when an exact match based on the target object's type is not possible. Considering our use of the visitor pattern within the render engine builder, such a solution would not be of much use: some specific builder tool may implement a {{{treat(CompoundClip&)}}}-function, while most of the other builder tools just implement a {{{treat(Clip&)}}}-function, thus handling any multichannel clip via the general clip interface. This is exactly the reason why we want to use a visitor in the first place: isolate specific treatment, implement against interfaces.
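To illustrate just this required fallback semantics (not the library's dispatcher mechanism &mdash; see the implementation notes below), here is a self-contained sketch using plain virtual functions; {{{BuilderTool}}}, {{{SimpleTool}}} and {{{CompoundAwareTool}}} are made-up names:
{{{
#include <iostream>

struct Clip         { virtual ~Clip() = default; };
struct CompoundClip : Clip { };                      // multichannel clip, part of the Clip hierarchy

// schematic tool interface: the default treat(CompoundClip&) falls back to treat(Clip&)
struct BuilderTool
{
    virtual ~BuilderTool() = default;
    virtual void treat (Clip&)           { std::cout << "generic clip handling\n"; }
    virtual void treat (CompoundClip& c) { treat (static_cast<Clip&> (c)); }       // fallback
};

// most tools only care about the general Clip interface...
struct SimpleTool : BuilderTool
{
    using BuilderTool::treat;              // keep the inherited fallback overload visible
    void treat (Clip&) override { std::cout << "SimpleTool: any clip\n"; }
};

// ...while one specific tool opts in to handle CompoundClip specially
struct CompoundAwareTool : BuilderTool
{
    using BuilderTool::treat;
    void treat (CompoundClip&) override { std::cout << "CompoundAwareTool: special handling\n"; }
};

int main()
{
    CompoundClip clip;
    SimpleTool s;         s.treat (clip);   // falls back to the generic treat(Clip&)
    CompoundAwareTool t;  t.treat (clip);   // uses the specialised treatment
}
}}}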

!Implementation Notes
A good starting point for understanding our library implementation of the visitor pattern is {{{tests/components/common/visitingtoolconcept.cpp}}}, which contains an all-in-one-file proof-of-concept implementation, on which the real implementation ({{{"common/visitor.hpp"}}}) was based.
* similar to Loki, we use a {{{Visitable}}} base class and a {{{Tool}}} base class (we prefer the name "Tool" over "Visitor", because it makes the intended use more clear).
* the objects in the {{{Visitable}}} hierarchy implement an {{{apply(Tool&)}}}-function. This function needs to be implemented in a very specific manner, thus the {{{DEFINE_PROCESSABLE_BY}}} macro should be used when possible to insert the definition into a concrete {{{Visitable}}} class.
* similar to the acyclic visitor, the concrete visiting tool classes inherit from {{{Applicable<TARGET, ...>}}} marker base classes, where the template parameter {{{TARGET}}} is the concrete Visitable type this tool wants to treat, either directly by defining a {{{treat(ConcreteVisitable&)}}}, or by falling back to some more general {{{treat(...)}}} function.
* consequently our implementation is //rather not acyclic// &mdash; the concrete tool implementation depends on the full definition (header) of all concrete Visitables, but we avoid cyclic dependencies on the interface level. By using a typelist technique inspired by Loki, we can concentrate these dependencies in one single header file, which keeps things maintainable.
* we use the {{{Applicable<TARGET, ...>}}} marker base classes to drive the generation of Dispatcher classes, each of which holds a table of trampoline functions for carrying out the actual double dispatch at call time. Each concrete Visitable using the {{{DEFINE_PROCESSABLE_BY}}}-macro generates a separate Dispatcher table containing slots for each concrete tool implementation class. To store and access the index position for these "call slots", we use a tag associated with the concrete visiting tool class, which can be retrieved by going through the tool's vtable
* __runtime cost__: the concrete tool's ctor stores the trampoline pointers (this could be optimized to be a "once per class" initialisation). Then, for each call, we have 2 virtual function calls and a lookup and call of the trampoline function, typically resulting in another virtual function call for resolving the {{{treat(..)}}} function on the concrete tool class
* __extension possibilities__: while this system may seem complicated, you should note that it was designed with special focus on extension and implementing against interfaces:
** not every Visitable subclass needs a separate Dispatcher. As a rule of thumb, only when some type needs to be treated separately within some concrete visiting tool (i.e. when there is the need of a {{{treat(MySpecialVisitable&)}}}) should this class use the {{{DEFINE_PROCESSABLE_BY}}}-macro and thus define its own {{{apply()}}}-function and Dispatcher. In all other cases, it is sufficient just to extend some existing Visitable, which thus will act as an interface with regards to the visiting tools.
** because of the possibility of utilising virtual {{{treat(...)}}} functions, not every concrete visiting tool class needs to define a set of {{{Applicable<...>}}} base classes (and thus get a separate dispatcher slot). We need such only for each //unique set// of Applicables. All other concrete tools can extend existing tool implementations, sharing and partially extending the same set of virtual {{{treat()}}}-functions.
** when adding a new "first class" Visitable, i.e. a concrete target class that needs to be treated separately in some visiting tool, the user needs to include the {{{DEFINE_PROCESSABLE_BY}}} macro and needs to make sure that all existing "first class" tool implementation classes include the Applicable base class for this new type. In this respect, our implementation is clearly "cyclic". (Generally speaking, the visitor pattern should not be used when the hierarchy of target objects is frequently extended and remoulded). But, when using the typelist facillity to define the Applicable base classes, we'll have one header file defining these collection of Applicables and thus we just need to add our new concrete Visitable to this header and recompile all tool implementation classes.
** when creating a new "~Visitable-and-Tool" hierarchy, the user should derive (or typedef) and parametrize the {{{Visitable}}}, {{{Tool}}} and {{{Applicable}}} templates, typically into a new namespace. An example can be seen in {{{proc/mobject/builder/buildertool.hpp}}}; a schematic sketch of the resulting shape follows below.
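The following self-contained sketch only mirrors the //usage shape// of such a hierarchy, using plain virtual functions. In the real library, the hand-written {{{apply()}}}/{{{treat()}}} boilerplate is generated via the {{{DEFINE_PROCESSABLE_BY}}} macro and the {{{Applicable<...>}}} typelists; the names {{{Clip}}}, {{{Effect}}} and {{{SegmentationTool}}} are purely illustrative:
{{{
#include <iostream>

namespace builder {                          // a new "Visitable-and-Tool" hierarchy, in its own namespace

    struct Tool;                             // the visiting tool base (named "Tool" rather than "Visitor")

    struct Visitable                         // base of the target object hierarchy
    {
        virtual ~Visitable() = default;
        virtual void apply (Tool&) = 0;      // in the real library generated by DEFINE_PROCESSABLE_BY
    };

    struct Clip;                             // concrete "first class" Visitables (illustrative names)
    struct Effect;

    struct Tool                              // in the real library declared via Applicable<Clip, Effect, ...>
    {
        virtual ~Tool() = default;
        virtual void treat (Clip&)   = 0;
        virtual void treat (Effect&) = 0;
    };

    struct Clip   : Visitable { void apply (Tool& t) override { t.treat (*this); } };
    struct Effect : Visitable { void apply (Tool& t) override { t.treat (*this); } };

    struct SegmentationTool : Tool           // one concrete visiting tool implementation
    {
        void treat (Clip&)   override { std::cout << "treating a clip\n"; }
        void treat (Effect&) override { std::cout << "treating an effect\n"; }
    };
}

int main()
{
    builder::Clip clip;
    builder::Effect effect;
    builder::SegmentationTool tool;

    builder::Visitable* objects[] = {&clip, &effect};
    for (builder::Visitable* obj : objects)
        obj->apply (tool);                   // double dispatch: concrete target type x concrete tool
}
}}}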
Using the ''Visitor Design Pattern'' is somewhat controversial, mainly because this pattern is rather complicated, requires certain circumstances to be useful, and &mdash; especially when used in the original form described by Gamma et al &mdash; puts quite some burden on the implementor. Contrary to some patterns commonly used today (like e.g. singleton or factory), visitor is by no means a popular pattern. Ichthyo thinks that its potential strengths remain to be discovered, due to obvious and well known weaknesses. These problems can be ameliorated a good deal by using template based library implementation techniques along the lines of the [[Loki library|http://loki-lib.sourceforge.net/]].
&rarr; [[implementation details|VisitingToolImpl]]

!why bothering with visitor?
In the Lumiera Proc layer, Ichthyo uses the visitor pattern to overcome another notorious problem when dealing with more complex class hierarchies: either the //interface// (root class) is so unspecific as to be almost useless, or, in spite of having a useful contract, this contract will effectively be broken by some subclasses (the "problem of elliptical circles"). Initially, when designing the classes, the problems aren't there (obviously, because they would be taken as design flaws). But then, under the pressure of real features, new types are added later on, which //need to be in this hierarchy// and at the same time //need to have this and that special behaviour// &mdash; and here we go ...
Visitor helps us to circumvent this trap: the basic operations can be written against the top level interface, such as to include visiting some object collection internally. Then, on a case-by-case basis, local operations can utilise a more specific sub-interface or the given concrete type's public interface. So visitor helps to encapsulate specific technical details of cooperating objects within the concrete visiting tool implementation, while still forcing them to be implemented against some interface or sub-interface of the target objects.

!!well suited for using visitors
Generally speaking, visitors are preferable when the underlying element type hierarchy is rather stable, but new operations are to be added frequently.
* Implementation of the builder. All sorts of special treatments of some MObject kinds can be added later
* Operating on the Object collection within the session. Effectively, this decouples the invoking interface on the session completely from the special operations to be carried out on individual objects.

To see a simple example of our "visiting tool", have a look at {{{tests/components/common/visitingtooltest.cpp}}}
The intention of this text is to help you understand the design and to show some notable details.

!!!!Starting Point
Design is an experiment to find out how things are related. We can't //plan// a design top down; rather, we have to start at some point with some hypothesis and see how it works out. The point of origin for Ichthyo's design is the observation that the Render Engine needs some Separation of Concerns to get the complexity down. And especially, this design ''uses three different Levels'' or Layers within the Render Engine and Session.
* the __high level__ within the session uses uniformly treated MObjects which are assembled/glued together by a network of [[Placements|Placement]].<br> It is supposed that the GUI will present this and //only this view// to the user, giving them the ability to work with the objects
* the __builder level__ works on a stripped-down subset of this ~MObject network: it uses the //same Object instances// but only assembled by [[Explicit Placements|ExplicitPlacement]] which locate the objects //on a simple (track, time) grid.// It's the builder's job to create, out of this simplified network, the configuration of [[Render Nodes|ProcNode]] needed to do the actual rendering
* the __engine level__ uses solely [[Render Pipeline Nodes (ProcNode)|ProcNode]], i.e. a graph of interconnected processing nodes. The important constraint here is that //any decisions are ruled out//: the core Render Engine with all its nodes lacks the ability to do any tests and checks and has no way to branch or reconfigure anything. (this is an especially important lesson I draw from studying the current Cinelerra source code)

!!!!Performance Considerations
* within the Engine, the Render Nodes contain the ''inner loop'', whose contents are executed hundreds of thousands to millions of times per frame. Every dispensable concern not strictly necessary to get the job done is worth the effort of factoring out here.
* performance pressure at the builder level is far lower, albeit still existent. Compared to the effort of calculating a single processing step, looping even over some hundred nodes and executing quite some logic is negligible. The danger lies rather in creating memory pressure or causing awkward execution patterns (in the backend). So the main concern should be the ability to reconfigure different aspects separately without much effort. If, for example, a given render strategy turns out to create lots of deadlocks and wait states in the backend, the design should account for the possibility to exchange it with another strategy without having to modify the inner workings of the build process.<br>On the other hand, I wouldn't be overly concerned about triggering the build process yet another time to get some specific problem solved. However, the possibility to share one render configuration for, say, 20 sec of video, instead of triggering the build process 500 times for every frame in this timespan, would surely be worth considering if it's not overly complicated to achieve.
* contrary to this, the session level is harmless with respect to performance. Getting acceptable responsiveness on user interactions is sufficient. We could consider using high level languages here, for it is much more important to be able to express and handle complicated object relationships with relative ease. The only (indirect) concern is to avoid generating memory pressure inadvertently. Edit actions generating memory peaks could interfere with an ongoing background render process. If we decide to use lots of relation objects or transient objects, we should use an object pool or, better still, a garbage collector.

!!!!Concepts and Interfaces
This design strives to build each level and subsystem around some central concepts, which are directly expressed as Interfaces. Commonly used Interfaces clamp the different layers.
* MObject gives a uniform view on all the various entities to be arranged in the session.
* all the arranging and relating of ~MObjects is abstracted as [[Placement]]. The contract of a Placement is that it always has a related Subject, that we can change the //way of placement&nbsp;// by adding and removing [["locating pins"|LocatingPin]], call some test methods on it (still to be defined), and, finally, that we can get an ExplicitPlacement from it.
* albeit being a special form of a Placement, the ExplicitPlacement is treated as a separate concept. With respect to edit operations within the session, it can stand for any sort of Placement. On the other hand the Builder takes a list of ~ExplicitPlacements as input for building up the Render Engine(s). This corresponds to the fact that the render process needs to organize the things to be done on a simple two dimensional grid of (output channel / time). The (extended) contract of an ~ExplicitPlacement provides us with this (output,time).
* on the lower end of the builder, everything is organized around the concept of a ProcNode, which enables us to //pull// one (freely addressable) Frame of calculated data. Further, the ProcNode can be wired to other nodes and [[Parameter Providers|ParamProvider]] (see the sketch after this list)
* the various types of data to be processed are abstracted away under the notion of a [[Frame]]. Basically, a Frame is a buffer containing an array of raw data, and it can be located by some generic scheme, including (at least) the absolute starting time (and probably some type or channel id).
* All sorts of (target domain) [[parameters|Parameter]] are treated uniformly. There is a distinction between Parameters (which //could// be variable) and Configuration (which is considered to be fixed). In this context, [[Automation]] just appears as a special kind of ParamProvider.
* and finally, the calculation //process// together with its current state is represented by a StateProxy. I call this a "proxy", because it should encapsulate and hide all tedious details of communication, be it even asynchronous communication with some Controller or Dispatcher running in another Thread. In order to maintain a view on the current state of the render process, it may eventually be necessary to register as an observer somewhere or to send notifications to other parts of the system.
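To make these concepts more tangible, here is a schematic interface sketch; the names and signatures are illustrative simplifications, not the actual ~Proc-Layer declarations:
{{{
#include <cstdint>
#include <vector>

// schematic sketch of the central concepts -- illustrative only

struct Frame                          // a buffer of raw data, locatable by a generic scheme
{
    int64_t time;                     // absolute starting time (placeholder for the real Time type)
    int     channel;                  // type or channel id
    std::vector<uint8_t> data;        // the raw data array
};

struct StateProxy                     // represents one render process and its current state
{
    std::vector<Frame> buffers;

    Frame& allocateBuffer (int channel)        // buffer management lives with the process state
    {
        buffers.push_back (Frame{0, channel, {}});
        return buffers.back();
    }
    // communication with a controller/dispatcher (possibly in another thread) is hidden in here
};

struct ParamProvider                  // yields parameter values; Automation is just one implementation
{
    virtual ~ParamProvider() = default;
    virtual double getValue (int64_t time) const = 0;
};

struct ProcNode                       // render node: pull one freely addressable frame of data
{
    virtual ~ProcNode() = default;
    virtual Frame pull (int64_t time, StateProxy& state) = 0;

    std::vector<ProcNode*>      predecessors;   // wiring to other nodes
    std::vector<ParamProvider*> parameters;     // wiring to parameter providers
};
}}}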

!!!!Handling Diversity
An important goal of this approach is to be able to push down the treatment of variations and special cases. We don't need to know what kind of Placement links one MObject to another, because it is sufficient for us to get an ExplicitPlacement. The Render Engine doesn't need to know if it is pulling audio Frames or video Frames or GOPs or OpenGL textures. It simply relies on the Builder wiring together the correct node types. And the Builder in turn does so by using some overloaded function of an iterator or visitor. At many places, instead of making decisions in code or using hard wired defaults, a system of [[configuration rules|ConfigRules]] is invoked to get a suitable default as a solution (and, as a plus, this provides points of customisation for advanced users). At engine level, there is no need for the video processing node to test for the colormodel on every screen line, because the Builder has already wired up the fitting implementation routine. All of this helps to reduce complexity, and quite a few misconceptions can already be detected by the compiler.

!!!!Explicit structural differences
In case it's not already clear: we don't have "the" Render Engine; rather, we construct a Render Engine for each structurally differing part of the timeline. (please relate this to the current Cinelerra code base, which constructs and builds up the render pipeline for each frame separately). No need to call back from within the pipeline to find out if a given plugin is enabled or to see if there are any automation keyframes. We don't need to pose any constraints on the structuring of the objects in the session, besides the requirement to get an ExplicitPlacement for each. We could even loosen the use of the common metaphor of placing media sequences on fixed tracks, if we want to get a more advanced GUI at some point in the future.

!!!!Stateless Subsystems
The &raquo;current setup&laquo; of the objects in the session is sort of a global state. The same holds true for the Controller, as the Engine can be at playback, run a background render or scrub single frames. But the whole complicated subsystem of the Builder and one given Render Engine configuration can be made ''stateless''. As a benefit of this, we can run these subsystems multi-threaded without the need for any precautions (locking, synchronising), because all state information is just passed in as function parameters and lives in local variables on the stack, or is contained in the StateProxy which represents the given render //process// and is passed down as a function parameter as well. (note: I use the term "stateless" in the usual, slightly relaxed manner; of course there are some configuration values contained in instance variables of the objects carrying out the calculations, but these values are considered constant over the course of the objects' usage)
Within Lumiera's ~Proc-Layer, on the conceptual level, there are two kinds of connections: data streams and control connections. The wiring deals with how to define and control these connections, how they are to be detected by the builder and finally implemented by links in the render engine.
&rarr; see OutputManagement
&rarr; see OutputDesignation
&rarr; see OperationPoint
&rarr; see StreamType


!Stream connections
A stream connection should result in media data travelling from a source to a sink. Here we have to distinguish between the high-level view and the situation in the render engine. At the session level and thus for the user, a //stream// is the elementary unit of "media content" flowing through the system. It can be described unambiguously by having a uniform StreamType &mdash; this doesn't exclude the stream from being inherently structured, like containing several channels. The HighLevelModel can be interpreted as //creating a system of stream connections,// which, more specifically, can be categorised into two kinds of connections -- the rigid and the flexible parts:
* the [[Pipes|Pipe]] are rather rigid ''processing chains'', limited to using one specific StreamType. <br/>Pipes are built by attaching processing descriptors (Effect ~MObjects), where the order of attachment is given by the placement.
* there are flexible interconnections or ''routing links'', including the ability to sum or overlay media streams, and the possibility of stream type conversions. They are controlled by the OutputDesignation ("wiring plug"), to be queried from the placement at the source side of the interconnection (i.e. at the exit point of a pipe)
Please note, the ''High-level model'' comprises a blueprint for constructing the render engine. There is no real data "flowing" through this model, thus any "wiring" may be considered conceptual. Within this context, any wiring specifications just express //the intention of getting things connected in a specific way.// Consider the example of a clip, which might find out (by querying its placement) that it is intended to produce output for some destination, bus or subgroup called "Ambiance sound".

The ''Builder'', to the contrary, considers matters locally. It is confined to a given set of objects handed in for processing, and during that processing it will successively collect all the [[plugs|WiringPlug]] and [[claims|WiringClaim]] encountered. While the former denote the desired //destination//&nbsp; of data emerging from a pipe, the wiring claim expresses the fact that a given object //claims to be some output destination.// Typically, each [[global pipe or bus|GlobalPipe]] raises such a claim. Both plugs and claims are processed on a "just in case they exist" basis: when encountering a plug and //resolving//&nbsp; the corresponding claim, the builder drops off a WiringRequest accordingly. Note the additional resolution, which might be necessary due to virtual media and nested sequences (read: the output designation might need to be translated into another designation, using the media/channel mapping created by the virtual entity &rarr; see [[mapping of output designations|OutputMapping]])

The processing of such a wiring request drives the actual connection step. It is conducted by the OperationPoint, provided by and executing within a BuilderMould and controlled by a [[processing pattern|ProcPatt]]. This rather flexible setup allows for wiring summation lines, including faders, scale and offset changes and various overlay modes. Thus the specific form of the connection wired in here depends on all the local circumstances visible at that point of operation:
* the mould is picked from a small number of alternatives, based on the general wiring situation.
* the processing pattern is queried to fit the mould, the stream type and additional placement specifications ({{red{TODO 11/10: work out details}}})
* the stream type system itself contributes in determining possible connections and conversions, introducing further processing patterns

The final result, within the ''Render Engine'', is a network of processing nodes. Each of these nodes holds a WiringDescriptor, created as a result of the wiring operation detailed above. This descriptor lists the predecessors, and (in somewhat encoded form) the other details necessary for the processing node to respond properly to the engine's calculation requests (read: those details are implementation bound and can be expected to be made to fit)

On a more global level, this LowLevelModel within the engine exposes a number of [[exit nodes|ExitNode]], each corresponding to a ModelPort, thus being a possible source to be handled by the OutputManager, which is responsible for mapping and connecting nominal outputs (the model ports) to actual output sinks (external connections and viewer windows). A model port isn't necessarily an absolute endpoint of connected processing nodes &mdash; it may as well reside in the middle of the network, e.g. as a ProbePoint. Besides the core engine network, there is also an [[output network|OutputNetwork]], built and extended on demand to prepare generated data for the purpose of presentation. This ViewerPlayConnection might necessitate scaling or interpolating video for a viewer, adding overlays with control information produced by plugins, or rendering and downmixing multichannel sound. By employing this output network, the same techniques used to control the wiring of the main path can be extended to control this output preparation step. ({{red{WIP 11/10}}} some important details are to be settled here, like how to control semi-automatic adaptation steps. But that is partially true also for the main network: for example, we don't know where to locate and control the faders generated as a consequence of building a summation line)

!!!Participants and Requirements
* the ~Pipe-ID is a universal key to denote connections, outputs and ports.
* output designation is just a ~Pipe-ID, actually a thin wrapper to make the intention explicit (see the sketch after this list)
* claims and plugs are to be implemented as LocatingPin
* as decided elsewhere, we get a [[Bus-MObject|BusMO]] as an attachment point
* we need OutputMapping as a new generic interface, allowing us to define ~APIs in terms of mappings
* builder moulds, operation point and processing patterns basically are in place, but need to be shaped
* the actual implementation in the engine will evolve as we go; necessary adaptations are limited to the processing patterns (which is intentional)
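Just to fix the idea of "a thin wrapper around a ~Pipe-ID" and of OutputMapping as a generic mapping interface, a minimal sketch; the concrete types and member names are assumptions, not the actual interfaces:
{{{
#include <string>
#include <unordered_map>
#include <utility>

// minimal illustration only -- concrete types and member names are assumptions

using PipeID = std::string;                     // stand-in: the real Pipe-ID is an asset-based identity

struct OutputDesignation                        // "wiring plug": names the intended output destination
{
    PipeID target;
    explicit OutputDesignation (PipeID p) : target{std::move (p)} {}
};

struct OutputMapping                            // generic mapping from nominal to actual destinations
{
    std::unordered_map<PipeID,PipeID> map;

    PipeID resolve (OutputDesignation const& plug) const
    {
        auto pos = map.find (plug.target);
        return pos != map.end()? pos->second : plug.target;    // default: identity mapping
    }
};
}}}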


!Control connections
Each [[processing node|ProcNode]] contains a stateless ({{{const}}}) descriptor detailing the inputs, outputs and predecessors. Moreover, this descriptor contains the configuration of the call sequence yielding the &raquo;data pulled from predecessor(s)&laquo;. The actual type of this object is composed out of several building blocks (policy classes) and placed by the builder as a template parameter on the WiringDescriptor of the individual ProcNode. This happens in the WiringFactory in file {{{nodewiring.cpp}}}, which consequently contains all the possible combinations (pre)generated at compile time.

!building blocks
* ''Caching'': whether the result frames of this processing step will be communicated to the Cache and thus could be fetched from there instead of actually calculating them.
* ''Process'': whether this node does any calculations on its own or just pulls from a source
* ''Inplace'': whether this node is capable of processing the result "in-place", thereby overwriting the input buffer
* ''Multiout'': whether this node produces multiple output channels/frames in one processing step
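Combining these building blocks at compile time could look roughly as follows; this is a schematic illustration using boolean flags, not the actual WiringDescriptor / WiringFactory code in {{{nodewiring.cpp}}}:
{{{
#include <iostream>

// schematic illustration of composing a node's call strategy from policy flags at
// compile time -- not the actual WiringDescriptor / WiringFactory code

template<bool caching, bool process, bool inplace, bool multiout>
struct CallStrategy                              // one (pre)generated combination
{
    void pull() const
    {
        if constexpr (caching)  std::cout << "query the cache first; allocate output within the cache\n";
        if constexpr (process)  std::cout << "pull from predecessors, then invoke process()\n";
        else                    std::cout << "fetch the data directly from a source\n";
        if constexpr (inplace)  std::cout << "reuse the input buffer for the output\n";
        if constexpr (multiout) std::cout << "handle several output channels/frames in one step\n";
    }
};

int main()
{
    // the factory would (pre)generate all combinations and hand out the one
    // matching the flags the builder determined for a given node
    CallStrategy<true, true, false, false> cachingProcessor;
    cachingProcessor.pull();
}
}}}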

!!implementation
!!!!Caching
When a node participates in ''caching'', a result frame may be pulled immediately from cache instead of calculating it. Moreover, //any output buffer//&nbsp; of this node will be allocated //within the cache.// Consequently, caching interferes with the ability of the next node to calculate "in-Place". In the other case, when ''not using the cache'', the {{{pull()}}} call immediately starts out with calling down to the predecessor nodes, and the allocation of output buffer(s) is always delegated to the parent state (i.e. the StateProxy pulling results from this node). 

Generally, buffer allocation requests from predecessor nodes (while being pulled by this node) will either be satisfied by using the "current state", or treated as if they were our own output buffers when this node is in-Place capable.

!!!!Multiple Outputs
Some simplifications are possible in the default case of a node producing just ''one single output'' stream. Otherwise, we'd have to allocate multiple output buffers, and then, after processing, select the one needed as a result and deallocate the superfluous further buffers.

!!!!in-Place capability
If a node is capable of calculating the result by ''modifying its input'' buffer(s), an important performance optimization is possible, because in a chain of in-place capable nodes, we don't need any buffer allocations. But, on the other hand, this optimization may collide with the caching, because a frame retrieved from cache must not be modified.
Without this optimization, in the base case each processing step needs an input and an output buffer. Exceptionally, we could think of special nodes which //require// in-place processing, in which case we'd need to provide them with a copy of the input buffer to work on.

!!!!Processing
If ''not processing'', we don't have any input buffers; instead we get our output buffers from an external source.
Otherwise, in the default case of actually ''processing'' our output, we have to organize input buffers, allocate output buffers, call the {{{process()}}} function of the WiringDescriptor and finally release the input buffers.
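The described call sequence, reduced to its bare skeleton (purely illustrative; in the real engine, buffer allocation goes through the cache or the parent state, as discussed above):
{{{
#include <iostream>
#include <vector>

// bare skeleton of the call sequence described above -- purely illustrative

struct Buffer { std::vector<float> data; };

struct Node
{
    std::vector<Node*> predecessors;

    Buffer pull()
    {
        // 1) organise input buffers by pulling from all predecessors
        std::vector<Buffer> inputs;
        for (Node* pred : predecessors)
            inputs.push_back (pred->pull());

        // 2) allocate the output buffer (in the real engine: from the cache or the parent state)
        Buffer output;

        // 3) invoke the actual processing function
        process (inputs, output);

        // 4) the input buffers are released here (end of scope)
        return output;
    }

    virtual void process (std::vector<Buffer> const& in, Buffer& out)
    {
        std::cout << "processing " << in.size() << " input buffer(s)\n";
        out.data.assign (1, 0.0f);
    }
    virtual ~Node() = default;
};

int main()
{
    Node source, effect;
    effect.predecessors.push_back (&source);
    effect.pull();
}
}}}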
{{red{This is an early draft as of 9/08}}}
Wiring requests rather belong to the realm of the high-level model, but play an important role in the build process, because the result of "executing" a wiring request will be to establish an actual low-level data connection. Wiring requests will be created automatically in the course of the build process, but they can be created manually and attached to the media objects in the high-level model as a ''wiring plug'', which is a special kind of LocatingPin (&rarr; [[Placement]])

Wiring requests are small stateful value objects. They will be collected, sorted and processed in order, so as to finish the building and get a completely wired network of processing nodes. Obviously, there needs to be some reference or smart pointer to the objects to be wired, but this information remains opaque.
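The general shape of such a value object might look like this; the fields shown (a target ~Pipe-ID, an ordering key, an opaque handle) are assumptions for illustration, not a defined layout:
{{{
#include <algorithm>
#include <memory>
#include <string>
#include <vector>

// a guess at the general shape of a wiring request: a small value object holding
// an opaque handle to the object to be wired -- field names are hypothetical

struct MObject;                                    // remains opaque at this level

struct WiringRequest
{
    std::string              targetPipeID;         // where the connection should lead
    int                      order = 0;            // some key used for sorting during the build
    std::shared_ptr<MObject> subject;              // opaque reference to the object to be wired
};

// collect, sort and process the requests in order, to finish wiring the node network
inline void processRequests (std::vector<WiringRequest>& requests)
{
    std::sort (requests.begin(), requests.end(),
               [](WiringRequest const& a, WiringRequest const& b){ return a.order < b.order; });
    //for (WiringRequest& r : requests)
    //    establishConnection (r);                 // hypothetical connection step
}
}}}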

&rarr; ConManager
The purpose of automation is to vary a parameter of some data processing instance in the course of time while rendering. Thus, automation encompasses all the variability within the render network //which is not a structural change.//


!Parameters and Automation

[[Automation]] is treated as a function over time. Everything beyond this definition is considered an implementation detail of the [[parameter provider|ParamProvider]] used to yield the value. Thus automation is closely tied to the concept of a [[Parameter]], which also plays an important role in the communication with the GUI and while [[setting up and wiring the render nodes|BuildRenderNode]] in the course of the build process (&rarr; see [[tag:Builder|Builder]])
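As a minimal sketch of "automation as a function over time": a ParamProvider yields a value for any given time, and an automation implementation could interpolate between keyframes. Names and signatures are illustrative only, not the actual Parameter/ParamProvider interfaces:
{{{
#include <iterator>
#include <map>

// minimal sketch: automation treated as a function over time -- illustrative only

using Time  = double;                   // stand-in for the real time type
using Value = double;

struct ParamProvider                    // anything able to yield a parameter value for a given time
{
    virtual ~ParamProvider() = default;
    virtual Value getValue (Time t) const = 0;
};

struct Automation : ParamProvider       // one possible implementation: keyframes with interpolation
{
    std::map<Time,Value> keyframes;

    Value getValue (Time t) const override
    {
        if (keyframes.empty()) return Value{};
        auto next = keyframes.lower_bound (t);
        if (next == keyframes.end())   return keyframes.rbegin()->second;   // after the last keyframe
        if (next == keyframes.begin()) return next->second;                 // before the first keyframe
        auto prev = std::prev (next);                                       // linear interpolation
        double f = (t - prev->first) / (next->first - prev->first);
        return prev->second + f * (next->second - prev->second);
    }
};
}}}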
Definition of commonly used terms and facilities...