There are some basic experimental presumptions that I'd like to try out in the Einstein's Dreams work. There aren't many, but they go to the heart of research in how we compose the behavior of rich responsive environments. The TML starts from where most of the world of interactive environments stops.
One of those places is how events evolve. The obvious ways include: timeline (graph, cues), random (stochastic), decision tree (if-then).
This is where things were at up to the 1990s -- well, even now if you ask most programmers and conventional time-based media artists.
I started designing responsive environments with a profoundly different approach to media choreography. And that's been a core part of the TML's radically different way of making rich responsive environments that are more like ecologies. This approach learns from continuous state evolution characteristic of tangible, physical, ecological systems.
Practically, how do we do this? That's an open question. The Topological Media Lab is for exploring open questions, rather than producing artwork reproducing convention. And it is not the case that sprinkling some "AI" will save the day. Given that learning methods such as HMM, PCA, and ICA are all retrospective (Bergson's critique of mechanism), and given that scores, scripts, clocks, and timelines cage action, we set them aside in favor of techniques that give us the maximum nuance and potential for expressive invention over conditioned space. The most powerful alternative, which we've only begun to exploit, is the dynamical systems approach. Rather than rehashing it, let me attach some references, like the "Theater Without Organs" essay (for artists, writers), and the more precise Ozone ACM Multimedia paper.
Most fundamentally, the ED project is to really push on these fronts:
• Move from time-line, random, or decision-tree logics that are typical of engineered environments to dynamical systems modeled on ecologies. (See pages 16-19 of "Theater Without Organs.")
• Acting & Articulating vs. Depicting or Allegory
The TML is about making environments that palpably condition experience in definite ways, not environments that display representations (pictures) or models of experience. The radical experimental challenge of ED is about inducing rich experiences of dynamics, change, rhythm, not making an image (representation) of some model of time. The latter is merely allegory. Easy. The former is alchemically transmuted experience. Hard. Given enough skill, making representations is easy. We built the TML, got the ED seed grant, and coordinated the temporality seminars these past 3 years to do something hard: inducing a different mode of temporality -- a sense of temporal change.
• Rhythm ≠ Isochrony (regularly periodic)
There are no mathematically regular periods "in nature" -- that's an artifact of delusions imposed by our models of mechanical time -- frozen in by computers.
Adrian Freed has a rich way of thinking about this, and a rich way of making things that reflect this.
No matter what "curves" you draw, if the pattern is repeated, then you have imposed an isochronous pattern. So we've cheated life by pushing / pumping with an artificial "beat." Instead, ED includes how pseudo-regularities emerge from the dynamical system.
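To make the contrast concrete, here is a minimal sketch (in Python, purely illustrative; our actual instruments live in Max) of how a pseudo-regular "beat" can emerge from a dynamical process rather than being imposed by a metronome. The function name and the drive signals are assumptions for the example:

```python
# Sketch: pulses emerge from a relaxation process, not from a clock.
# All names here are illustrative, not part of any TML codebase.
def relaxation_beats(drive, threshold=1.0):
    """Accumulate an input signal; emit a pulse index whenever the
    accumulator crosses threshold, then reset. The inter-pulse
    intervals depend on the drive -- no fixed period is imposed."""
    level, pulses = 0.0, []
    for t, x in enumerate(drive):
        level += x
        if level >= threshold:
            pulses.append(t)
            level = 0.0
    return pulses

# A steady drive yields near-isochrony; a fluctuating drive yields
# only pseudo-regular rhythm, as in living movement.
steady = [0.21] * 30
wobbly = [0.21 + 0.1 * ((i * 7) % 5 - 2) / 10 for i in range(30)]
print(relaxation_beats(steady))
print(relaxation_beats(wobbly))
```

The point of the sketch: the rhythm is a by-product of the system's state evolution and its input, so "pushing" the system differently changes the felt periodicity instead of fighting a hard-coded beat.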
• Give up (geometric) time as an independent parameter
Also, since a few years ago, when people like Adrian and David Morris's students came on board, some people have taken up a challenge I put forward: to radicalize our notion and use of "time."
Instead of using time as an independent parameter, in fact, instead of using any parameter as a "clock" driving the event, use our sensors -- cameras and mics -- to pick up what is happening, and from the contingent action derive the changes of the responsive environment.
100 years ago Bergson insightfully criticized what he called the cinematic conceit of time / temporal experience. (This is part of the point of the Ontogenesis group this past year with Magda, Will, Felix, Liza, Harry, Adrian, and myself.) We don't need to fall back into those naiveties.
Even more fundamentally, let's be mindful of Maturana and Varela's profound observation that "time" is itself just a linguistic description rather than some thing in the stuff of our bodies and the stuff of the world:
Time as a Dimension
Any mode of behavioral distinction between otherwise equivalent interactions, in a domain that has to do with the states of the organism and not with the ambience features which define the interaction, gives rise to a referential dimension as a mode of conduct. This is the case with time. It is sufficient that as a result of an interaction (defined by an ambience configuration) the nervous system should be modified with respect to the specific referential state (emotion of assuredness, for example) which the recurrence of the interaction (regardless of its nature) may generate for otherwise equivalent interactions to cause conducts which distinguish them in a dimension associated with their sequence, and, thus, give rise to a mode of behavior which constitutes the definition and characterization of this dimension. Therefore, sequence as a dimension is defined in the domain of interactions of the organism, not in the operation of the nervous system as a closed neuronal network. Similarly, the behavioral distinction by the observer of sequential states in his recurrent states of nervous activity, as he recursively interacts with them, constitutes the generation of time as a dimension of the descriptive domain. Accordingly, time is a dimension in the domain of descriptions, not a feature of the ambience. (H. Maturana & F. Varela, p 133. Autopoiesis and Cognition. See also Henri Bergson's example of the arcing arm, Creative Evolution, chap 1.)
Why not tug the sun as a controller rather than passively watch it sail out of reach!
As I said before, I think time is an effect not an "independent parameter." This permits a more profound interpretation of Lightman's novel beyond its "time is…" syntax.
• Functional relation ≠> Determinism
A curve f(t), eg f(t) = sin(t), can provide an utterly precise and reproducible result simply because it is a FUNCTION. For example, f(t) could govern the height and intensity of the "sun" in the Blackbox. However f(t) need NOT be fed parameters t1, t2, t3 ... in a regularly incremented monotone sequence. There just needs to be a (reversible) function in order to have reproducibility of the event when the action is reproduced.
There is a profound performative difference in live experience between a fixed curve -- a graph which is traced from left to right in order -- and a function f(t) = sin(t) ready to be evaluated given any input.
In the example above, "t" is the INDEPENDENT parameter. y = f(t) is the DEPENDENT parameter. In a realistic system, there is no reason to presume that the world runs on only one independent parameter. (the "unidimensionality" fallacy)
There is no contradiction with the graphs that J drew. In fact we have used this in many places in Ozone code for 10 years, in the form of the Max function object (you draw the curve yourself). Jerome could in fact use Morgan and Navid's function-clocks instead of re-inventing the wheel :). But we called those abstractions "fake state" because we knew that they simply imposed a uni-dimensional sequence on the entire event.
Indeed we can write in any number of FUNCTIONAL, even REVERSIBLE (invertible), relations at the parameter level, yielding an arbitrary number of dimensions of deterministic relation between parameters, e.g. optical flow and number of particles; color of input and wind (potential) force on fluid (MF: red => heat => flow up against gravity); scratchiness of sound and brittleness of floor. ALL of these can be functions of action, and even of each other. That way, the human can perform richly nuanced action, and even drive the event in a fully definite manner, because the parameters are deterministically coupled to action. But the relation is mapped from action in as many dimensions as the instruments can sense (either as raw or cooked sensor channels).
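A minimal sketch of this idea (Python for illustration; the mappings, channel names, and constants are all invented for the example, not taken from Ozone): several media parameters, each a deterministic function of a sensed channel, evaluated whenever the sensors report, with no master clock anywhere.

```python
import math

# Illustrative mappings: media parameter -> (sensed channel, function).
# Each function is monotone, hence invertible on its range.
mappings = {
    "particle_count": ("flow",    lambda x: int(50 + 950 * x)),
    "wind_force":     ("redness", lambda x: 9.8 * (x - 0.5)),
    "brittleness":    ("scratch", lambda x: 1 - math.exp(-3 * x)),
}

def respond(sensed):
    """Map a dict of sensed channels to media parameters.
    Same action in -> same parameters out: fully deterministic,
    yet driven by action in several dimensions, not by time."""
    return {name: fn(sensed[src]) for name, (src, fn) in mappings.items()}

print(respond({"flow": 0.2, "redness": 0.9, "scratch": 0.5}))
```

Reproducibility comes from the functions being functions, not from any sequencing: replaying the same action replays the same event, in any order and at any rate.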
See Ozone documentation by Mani Thivierge on TML Private WIKI for precise description. Since we are short on time, I propose that composers read the Ozone document merely for the notation and the approach. In this workshop, I propose we try only this notation "on paper" as a way of thinking about composing an event. If the composers have time, they are welcome to write state engines, but that is not necessary this round.
• COMPLEXITY vs RICH, COHERENT ACTION
We do not control complexity by imposing a small number of independent parameters. In fact, as long as we can engineer functional relations, then the human and nonhuman agents can drive the event by ACTION. Actions can be compact and coherent -- e.g. Everyone huddle together and stay huddled together in one place. OR everyone huddle together but move about in a compact group around the floor. Etc. Even if this maps to multiple parameters there should be no need for us actors / inhabitants to think in terms of parameters as we act.
SCRIPTED CONTROL vs LIVE ACTION
There's a fundamental difference in attitude between code state as a trace of what's going on, vs. code as a driver of action.
There are at least three modes of agency: script (machine), human, and medium.
(A) Clock drives event
For example, some software code animates a light simulating the sun rising in the course of a day. The shadow of a pole shortens and lengthens as a function of clock-time.
(B) Human drives event
For example: a human lifts a lantern. An overhead camera sees the shadow of a fixed pole shorten on the floor. Code uses the length of the shadow to move an image of the sun...
A & B may look quite similar. The downstream media code may even be identical, driven by OSC -- that is, via a FUNCTIONAL, hence DETERMINISTIC, dependence. But the KEY difference is that A is driven by a clock, and B can be nuanced by LIVE action. The actor can "scrub" through the event by moving his or her arm up or down in any manner.
• (C) MAKING A MEDIUM rather than a movie of a medium's particular action.
Nothing precludes programming a zone or instrument as a living medium with its own dynamics -- think of creating not a movie of a ripple spreading across the floor, but a whole sheet of "water" that ripples in response to any number of fingers or toes or stones doing any action in it.
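As a toy version of such a medium (Python sketch only; not the TML's actual fluid or Ozone code, and every constant here is an assumption): a damped one-dimensional wave over a ring of cells. Any number of "touches" inject energy anywhere; afterwards the medium keeps rippling on its own terms.

```python
# A "sheet of water" as a medium with its own dynamics: a damped
# discrete wave. Illustrative sketch, not an existing instrument.
N = 16
u = [0.0] * N          # displacement per cell
v = [0.0] * N          # velocity per cell
C, DAMP = 0.3, 0.995   # coupling (wave speed), damping per step

def step(touches=()):
    """Advance the medium one tick; touches = [(cell_index, force), ...]."""
    for i, f in touches:
        v[i] += f                      # fingers, toes, stones...
    for i in range(N):
        # discrete Laplacian: each cell pulled toward its neighbors
        lap = u[(i - 1) % N] + u[(i + 1) % N] - 2 * u[i]
        v[i] = (v[i] + C * lap) * DAMP
    for i in range(N):
        u[i] += v[i]

step(touches=[(4, 1.0), (11, -0.5)])   # two simultaneous "fingers"
for _ in range(20):
    step()                             # the medium evolves with no input
print([round(x, 2) for x in u])
```

The contrast with a "movie of a ripple" is exactly the contrast between (A) and (B) above: the ripple here is a consequence of state plus perturbation, so it composes with any action the sensors can deliver.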
• An embodied second order EVENT DESIGN METHOD
(inspired by Harry Smoak, Matthew Warne's Thick/N 2004)
NOT as actual scenography, just as a design method: lay out several ZONES on the floor of the Blackbox, each with its own dynamics. Then we can try walking from zone to zone in many different sequences, to get a feel for what transitions might feel like. Imagine what players / inhabitants should be doing in order for the state of the event to change from zone A to zone B. THEN we can design a state topology of those as POTENTIAL transitions, which actualize only when the inhabitants and the system actually act accordingly (as picked up by the sensors).
(The Topological Media Lab's Ozone media choreography architecture as coded in Max / C already does this.)
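On paper, such a state topology is just a graph of potential transitions, each guarded by a sensed condition. Here is a minimal Python sketch (zone names, channels, and thresholds are all invented for illustration; Ozone's actual state engine in Max/C is richer than this):

```python
# Zone topology as a graph of POTENTIAL transitions. A transition
# actualizes only when inhabitants act so that its guard holds.
transitions = {
    ("A", "B"): lambda s: s["huddle"] > 0.8,        # group huddles -> B
    ("B", "C"): lambda s: s["motion"] > 0.5,        # huddle moves -> C
    ("C", "A"): lambda s: s["sound_level"] < 0.1,   # stillness -> A
}

def evolve(zone, sensed):
    """Actualize at most one potential transition out of `zone`,
    given the current dict of sensed channels."""
    for (src, dst), guard in transitions.items():
        if src == zone and guard(sensed):
            return dst
    return zone  # no guard satisfied: the event stays where it is

zone = "A"
zone = evolve(zone, {"huddle": 0.9, "motion": 0.0, "sound_level": 0.3})
print(zone)
```

Note that nothing sequences the zones in advance: the topology only says what is possible; the inhabitants' action says what happens.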
Henri Bergson, Creative Evolution, Dover 1998 (1907), Chapter 1: “The evolution of life – Mechanism and teleology”; Chapter 4: “The cinematographical mechanism of thought and the mechanistic illusion – A glance at the history of systems – Real becoming and false evolutionism.”
Broglio, Ron, “Thinking about stuff: Posthumanist phenomenology and cognition,” in Special Issue on Poetic and Speculative Architecture in Public Space, AI & Society 26.2, 2011, p. 187-192.
Humberto Maturana and Francisco Varela, Autopoiesis and Cognition: The Realization of the Living, Reidel 1980.
Maurice Merleau-Ponty, Phenomenology of Perception, tr. Donald Landes, Routledge 2012 (1945).
Sha Xin Wei, Michael Fortin, Tim Sutton, Navid Navab, “Ozone: Continuous state-based media choreography system for live performance,” ACM Multimedia 2010.
Sha Xin Wei, Poiesis and Enchantment in Topological Matter, MIT Press, forthcoming 2013. (Preface and Chapter 1).
_____, “Theater without organs” (2013).
(March 2013, updated for Synthesis Workshop February - March 2014, ASU)
Scatter / gather
Your shadow splits. The shadows run away from you. The shadows quiver with tension & intention.
Follow spot lights you up. Other spots lurk in the shadows, come after you with persona.
(Use Julian's rhythm abstractions to record corporeal/analog movements, and play back. Use analog rhythms as a cheap way to get a huge variety of very subtle NON-regularity, to avoid dead mechanical beats. Also can improvise.)
Deposit snapshots of yourself.
Use "flash" timing : charging increase in tension. Snap!
Alternatively: NO tension, just fill zone with flashes.
Use MF VP8 to take webcam images and project them kaleidoscopically throughout zone, with time warp, delay, reversals.
The world is fissured, and sutured: as you walk you see/hear into discontiguous parts of the room.
Portals! Use MFortin's VP8, plus Jitter intermediation (e.g. timespace), to introduce time dilation effects.
Motion, oil slick, molasses
Every action is weighted down, slooooooowwwweeedd asymptotically but never quite stilling. Every action causes all the pieces of the world to slide as if on an air hockey table, but powered so they accelerate like crazy. Map the room to a TORUS so the imagery is always visible. Use zeno-divide-in-half or any tunable asymptotic (Max expr object) to decelerate or accelerate.
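The zeno-divide-in-half idea is a one-liner; a Python sketch for clarity (illustrative only -- in practice this would live in a Max expr object, and `rate` would be the slider-tunable knob):

```python
# Asymptotic "zeno" slowing: each tick, move a fixed fraction of the
# remaining distance toward the target -- forever approaching, never
# quite stilling. Illustrative sketch.
def zeno(value, target, rate=0.5):
    """Move `rate` of the way from value toward target."""
    return value + rate * (target - value)

x = 10.0
trail = []
for _ in range(6):
    x = zeno(x, 0.0, rate=0.5)   # rate=0.5 is literal divide-in-half
    trail.append(x)
print(trail)  # halves each step: 5.0, 2.5, 1.25, ... never reaching 0
```

With `rate` above 1 the same expression overshoots and can be used to accelerate instead; tuning one number moves between molasses and air hockey.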
Fray actions, images, sounds into noise
(e.g. ye olde jit.streak + feedback example )
Spin the world -- every linear movement or "line" of sound becomes drawn into a giant vortex, that sucks into the earth. OR reverse.
Stepped video-strobe in the concentric rings, tunable by OSC + Midi sliders like everything.
Need to step carefully. If not, hear and see pending catastrophe: cracking ice underfoot ...
Or sometimes pure acoustic in darkness or whiteout strobe.
Use Navid’s adaptive scaler to shrink the sensitivity range down, so that the smallest movement causes catastrophe in strobe + massive sound. Use subs to add preparatory sound, like Earth grinding her teeth before breaking loose.
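For composers unfamiliar with the idea, here is a minimal sketch of *an* adaptive scaler in this spirit (Python, illustrative; this is not Navid's implementation, and the `shrink` constant is an invented assumption): it tracks the recent range of a sensor channel and rescales it to [0, 1], so that even the tiniest movements can span the full output range.

```python
# Sketch of an adaptive scaler: normalize a channel by its running
# min/max, while slowly contracting that window so sensitivity
# re-adapts to whatever the inhabitants are currently doing.
class AdaptiveScaler:
    def __init__(self, shrink=0.999):
        self.lo, self.hi = float("inf"), float("-inf")
        self.shrink = shrink   # how slowly the tracked range relaxes

    def __call__(self, x):
        self.lo, self.hi = min(self.lo, x), max(self.hi, x)
        if self.hi == self.lo:
            return 0.0
        out = (x - self.lo) / (self.hi - self.lo)
        # contract the window toward its midpoint a little each call
        mid = (self.lo + self.hi) / 2
        self.lo += (mid - self.lo) * (1 - self.shrink)
        self.hi -= (self.hi - mid) * (1 - self.shrink)
        return out

scale = AdaptiveScaler()
tiny = [0.500, 0.501, 0.4995, 0.5008]   # a barely-moving sensor
print([round(scale(v), 2) for v in tiny])
```

Because the window keeps contracting, stillness makes the instrument hypersensitive: exactly the "step carefully" dramaturgy described above.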
Stasis ( hot or cold )
Sitting in the bowl of the desert (Sahara or Himalayas )
No-time sonic field. Noise-field, snow-blindness video (black or white majority), or Bill Viola ultra-slow-mo (better than 25:1 speed reduction, with no motion blur)?
Use heat lamps or fans to subtly add heat or cold?
Visual -- take video from a given location, but send to multiple locations (using VP8 + repeated stills) or map to OpenGL polygons...
Audio -- use OMAX, with coarse grain
Infection / Dark Light
Use video, e.g. use particles -- thickened as necessary -- as sources of light. Cluster around movement or around bodies' presence as sources of light.
IMPORTANT OZONE ARCHITECTURE (Julian with MF, working with Navid, Jerome, Omar’s instruments): Each of your instruments should expose key parameters to be tunable by OSC + Midi sliders, so someone OTHER than programmer can play with the instruments qualitatively. OSC gives access to handheld MF’s Max/iOS client so we can walk around IN the space under the effects and vary the instruments IN SITU, LIVE.
ALSO: Crucial that any TML experimentalist can walk in with her / his laptop, tap into the video feed, and emit her own video into the projectors, and control where her video shows up, and with what alpha blend. S/he must be able to do this without Jerome babysitting on call 24 x 7.
Ditto sound & lighting & OSC feed -- Julian’s got good design for this. Navid for sensor channels. I hope the sensor channels work transparently on top of Julian’s OSC resource discovery code.
Sara Reichert: Fernanda and I constructed a water circuit system that interacts, via 20 IR sensors, with the movement of the people in the room. The dynamics of the pumps were controlled by a microcontroller. We followed the plants on the 9 m high ceiling with transparent water pipes, and the water came down through invisible tiny holes into a brass funnel. The sound of the falling drops was amplified.
For every one of his fantastic shots, photographer Peter Funch stakes out a busy New York City street corner, capturing hundreds of moments over the course of several weeks from the same spot. Then, he digitally combines real scenes into one surreal super scene. Suddenly, all the people who passed him while dressed in black re-appear simultaneously. Every single person in Times Square is a picture-snapping tourist. Everyone on a Lower East Side street has a little dog. Everyone is yawning. Everyone has balloons. The happenings in his “fictional documentary” series Babel Tales aren’t lies, they’re truths exaggerated. Enjoy the views in our gallery.
-- Jérôme Delapierre, delapierrejerome.weebly.com (photography / graphic & web design / video / interactive design)
Morgan, XW, Navid, Tyr
1. Finish multispecies particles to support chemistry.
2. Map Kinect via Externals + Jitter to lighting
3. Map particle data to sound micro-states
Previous notes ☞ http://topological.posterous.com/
Wednesday March 23, 13:30- 14:15, TML
Morgan, Jerome, Navid, Tyr, XW
Morgan will chair. Scrummed by Michael F.