[Synthesis] background for "Einstein's Dreams" project

Dear Synthesis researchers -- TT + grads:

As we build the portal tools, we are laying in instruments for second- and third-order experiments.  I'll explain as we go, next week in the live meeting.  For example: the Einstein's Dreams work on simultaneity and latency, and the mutation of the experienced topology of space-time.

Here's the link to background work for the Einstein's Dreams research stream that Synthesis will carry forward in three settings:

• Marked theatrical / artistic events (e.g. iStage Blackbox, Beall Center UCI)
• Everyday, unmarked activity in ordinary spaces (e.g. Brickyard commons and Stauffer office)
• Public spaces (e.g. ASU Turrell sculpture, Santa Monica)

Xin Wei

PS Please cc all thoughts and scrapbook notes re ED to post@einsteinsdreams.posthaven.com

PPS The apparatus could also be used in a very different set of embodied rehearsal-speculations with the climate modellers that Dehlia Hannah & I would like to bring to Synthesis over the coming year, e.g. Andrew Harding.

__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Einstein's Dreams: a series of experiments and exhibitions about temporality

Einstein's Dreams is a series of experiments and exhibitions about temporality: NOT "time" as an abstraction, but the lived experience of change, dynamics, rhythm -- of temporal texture (if that makes sense).

Einstein's Dreams as a research stream dates back to about 2010, when choreographer/filmmaker Michael Montanaro and I discovered common inspiration in Alan Lightman's novel.  The Topological Media Lab has mounted a series of installation-experiments over the past few years, most notably a set of Hexagram blackbox experiments in 2013 and a set of installations / actions at U Chicago's Play Symposium.

There are rich background notes to this creation-research stream:

I propose we post emails about Einstein's Dreams to post@einsteinsdreams.posthaven.com

Please post more general or more theoretical and technical notes and links about textural approaches to temporal experience (change, dynamics, rhythm, etc.) to post@textures.posthaven.com


David — or Ed or Katie — can you flesh out the notes from our jam session?  I may not be able to do this while the memory is hot.

Time frame: 2015 (- 2016)
Formats:
Sprint (yielding documents, diagrams, scores, recipes, maquettes)
Exhibition (LACMA, Yerba Buena)
Exhibition (Exploratorium)

Here are the snaps:

 

Cheers,
Xin Wei



Einstein's Dreams: An Ecological Approach to Media Choreography

There are some basic experimental presumptions that I'd like to try out in the Einstein's Dreams work.  There aren't many, but they go to the heart of research in how we compose the behavior of rich responsive environments.  The TML starts from where most of the world of interactive environments stops.

One of those places is how events evolve.   The obvious ways include:  timeline (graph, cues), random (stochastic), decision tree (if-then).

But is this all there is?  Not in life nor in art.

This is where things stood up to the 1990s -- and even now, if you ask most programmers and conventional time-based media artists.

I started designing responsive environments with a profoundly different approach to media choreography.  And that's been a core part of the TML's radically different way of making rich responsive environments that are more like ecologies.  This approach learns from continuous state evolution characteristic of tangible, physical, ecological systems.

Practically, how do we do this?  That's an open question.  The Topological Media Lab is for exploring open questions, rather than producing artwork that reproduces convention.  And it is not the case that sprinkling some "AI" will save the day.  Given that learning methods such as HMM, PCA, and ICA are all retrospective (cf. Bergson's critique of mechanism), and given that scores, scripts, clocks, and timelines cage action, we set them aside in favor of techniques that give us the maximum nuance and potential for expressive invention over conditioned space.  The most powerful alternative, which we've only begun to exploit, is the dynamical systems approach.  Rather than rehashing it, let me attach some references, like the "Theater Without Organs" essay (for artists, writers) and the more precise Ozone ACM Multimedia paper.

Most fundamentally, the ED project is to really push on these fronts:

• Move from time-line, random, or decision-tree logics that are typical of engineered environments to dynamical systems modeled on ecologies.  (See pages 16-19 of "Theater Without Organs.")

Acting & Articulating vs. Depicting or Allegory

The TML is about making environments that palpably condition experience in definite ways, not about displaying representations (pictures) or models of experience.  The radical experimental challenge of ED is about inducing rich experiences of dynamics, change, rhythm -- not making an image (representation) of some model of time.  The latter is merely allegory.  Easy.  The former is alchemically transmuted experience.  Hard.  Given enough skill, making representations is easy.  We built the TML, got the ED seed grant, and coordinated the temporality seminars these past 3 years to do something hard: inducing a different mode of temporality -- a sense of temporal change.

Rhythm ≠ Isochrony (regularly periodic)

There are no mathematically regular periods "in nature"  -- that's an artifact of delusions imposed by our models of mechanical time -- frozen in by computers.

Adrian Freed has a rich way of thinking about this, and a rich way of making things that reflect this.

No matter what "curves" you draw, if the pattern is repeated, then you have imposed an isochronous pattern.  So we've cheated life by pushing / pumping with an artificial "beat."  Instead, ED includes how pseudo-regularities emerge from the dynamical system.
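To make the contrast concrete, here is a minimal sketch in plain Python (not TML/Ozone code; the oscillator choice is my illustration) of a dynamical system whose pseudo-regular beat emerges from its own state evolution rather than from an imposed clock.  The van der Pol oscillator settles into a limit cycle whose period is a consequence of the dynamics (the parameter mu), not of any metronome:

```python
def van_der_pol_steps(mu=1.0, x=0.5, v=0.0, dt=0.001, n=200000):
    """Integrate x'' = mu*(1 - x^2)*x' - x with semi-implicit Euler.
    The quasi-periodic 'beat' is a limit cycle that emerges from the
    dynamics; no clock or score imposes the period."""
    xs = []
    for _ in range(n):
        a = mu * (1.0 - x * x) * v - x   # acceleration from current state
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

def zero_crossings(xs):
    """Count upward zero crossings, i.e. completed 'cycles'."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a < 0.0 <= b)
```

Change mu and the rhythm changes character (more relaxed, more spiked) without anyone ever specifying a period -- the pseudo-regularity is a property of the whole system, which contingent action (perturbing x or v live) can bend.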

Give up (geometric) time as an independent parameter

Since a few years ago, when people like Adrian and David Morris's students came on board, some people have taken up a challenge I posed: to radicalize our notion and use of "time".

Instead of using time as an independent parameter -- in fact, instead of using any parameter as a "clock" driving the event -- use our sensors (cameras and mics) to pick up what is happening, and from the contingent action derive the changes of the responsive environment.

100 years ago Bergson insightfully criticized what he called the cinematic conceit of time / temporal experience.  (This is part of the point of the Ontogenesis group this past year with Magda, Will, Felix, Liza, Harry, Adrian, and myself.)  We don't need to fall back into those naiveties.

Even more fundamentally, let's be mindful of Maturana and Varela's profound observation that "time" is itself just a linguistic description rather than some thing in the stuff of our bodies and the stuff of the world:

Time as a Dimension

Any mode of behavioral distinction between otherwise equivalent interactions, in a domain that has to do with the states of the organism and not with the ambience features which define the interaction, gives rise to a referential dimension as a mode of conduct.  This is the case with time. It is sufficient that as a result of an interaction (defined by an ambience configuration) the nervous system should be modified with respect to the specific referential state (emotion of assuredness, for example) which the recurrence of the interaction (regardless of its nature) may generate for otherwise equivalent interactions to cause conducts which distinguish them in a dimension associated with their sequence, and, thus, give rise to a mode of behavior which constitutes the definition and characterization of this dimension.  Therefore, sequence as a dimension is defined in the domain of interactions of the organism, not in the operation of the nervous system as a closed neuronal network. Similarly, the behavioral distinction by the observer of sequential states in his recurrent states of nervous activity, as he recursively interacts with them, constitutes the generation of time as a dimension of the descriptive domain. Accordingly, time is a dimension in the domain of descriptions, not a feature of the ambience.  (H. Maturana & F. Varela, p 133. Autopoiesis and Cognition. See also Henri Bergson's example of the arcing arm, Creative Evolution, chap 1.)

Why not tug the sun as a controller rather than passively watch it sail out of reach!

As I said before, I think time is an effect not an "independent parameter."  This permits a more profound interpretation of Lightman's novel beyond its "time is…" syntax.

Functional relation ≠> Determinism

A curve f(t), e.g. f(t) = sin(t), can provide an utterly precise and reproducible result simply because it is a FUNCTION.  For example, f(t) could govern the height and intensity of the "sun" in the Blackbox.  However, f(t) need NOT be fed parameters t1, t2, t3 ... in a regularly incremented monotone sequence.  There just needs to be a (reversible) function in order to have reproducibility of the event when the action is reproduced.

There is a profound performative difference in live experience between a fixed curve -- a graph which is traced from left to right in order -- and a function f(t) = sin(t) ready to be evaluated given any input.

In the example above, "t" is the INDEPENDENT parameter; y = f(t) is the DEPENDENT parameter.  In a realistic system, there is no reason to presume that the world runs on only one independent parameter (the "unidimensionality" fallacy).
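As a toy illustration (a Python sketch with invented names, not Ozone code): the same function gives identical outputs whether its inputs arrive as a monotone clock ramp or in the arbitrary order of a performer "scrubbing".  Reproducibility comes from the function, not from the sequencing:

```python
import math

def sun_height(t):
    """Deterministic mapping: the same input t always yields the same
    output, whether t comes from a clock or from live action."""
    return math.sin(t)

# A clock would feed t as a monotone ramp...
clock_inputs = [0.0, 0.1, 0.2, 0.3, 0.4]
# ...but a performer 'scrubbing' can supply the same values in any order.
scrubbed_inputs = [0.3, 0.1, 0.4, 0.0, 0.2]

# Same set of inputs -> same set of outputs: reproducibility comes from
# the FUNCTION, not from a regularly incremented sequence.
assert sorted(sun_height(t) for t in clock_inputs) == \
       sorted(sun_height(t) for t in scrubbed_inputs)
```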

Functions

There is no contradiction with the graphs that J drew.  In fact we have used this in many places in Ozone code for 10 years, in the form of the Max function object (you draw the curve yourself).  Jerome could in fact use Morgan and Navid's function-clocks instead of re-inventing the wheel :).  But we called those abstractions "fake state" because we knew that they simply imposed a uni-dimensional sequence on the entire event.

Indeed we can write in any number of FUNCTIONAL, even REVERSIBLE (invertible), relations at the parameter level, yielding an arbitrary number of dimensions of deterministic relation between parameters: e.g. optical flow and number of particles; color of input and wind (potential) force on fluid (MF: red => heat => flow up against gravity); scratchiness of sound and brittleness of floor.  ALL of these can be functions of action, and even of each other.  That way, the human can perform richly nuanced action, and even drive the action in a fully definite manner, because the parameters are deterministically coupled to action.  But the relation is mapped from action in as many dimensions as the instruments can sense (either as raw or cooked sensor channels).
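For instance (a hedged sketch -- the channel names and ranges are illustrative assumptions, not Ozone's actual mappings), a reversible affine coupling between a sensed channel and a media parameter preserves full determinism in both directions while leaving the performer free to drive it by action:

```python
def flow_to_particles(flow, lo=50.0, hi=5000.0):
    """Invertible affine coupling: normalized optical-flow magnitude
    in [0, 1] -> particle count in [lo, hi]."""
    return lo + (hi - lo) * flow

def particles_to_flow(n, lo=50.0, hi=5000.0):
    """Exact inverse of the coupling, so the relation is reversible."""
    return (n - lo) / (hi - lo)

# Round trip: action determines the parameter, and the parameter
# recovers the action -- deterministic in both directions.
f = 0.37
assert abs(particles_to_flow(flow_to_particles(f)) - f) < 1e-12
```

Any number of such couplings (color -> heat, scratchiness -> brittleness) can run side by side, each a function of action or of each other, without collapsing the event onto a single independent parameter.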

See the Ozone documentation by Mani Thivierge on the TML private wiki for a precise description.  Since we are short on time, I propose that composers read the Ozone document merely for the notation and the approach.  In this workshop, I propose we try only this notation "on paper" as a way of thinking about composing an event.  If the composers have time, they are welcome to write state engines, but that is not necessary this round.

COMPLEXITY vs RICH, COHERENT ACTION

We do not control complexity by imposing a small number of independent parameters.  In fact, as long as we can engineer functional relations, then the human and nonhuman agents can drive the event by ACTION.  Actions can be compact and coherent -- e.g. Everyone huddle together and stay huddled together in one place.  OR everyone huddle together but move about in a compact group around the floor.  Etc.  Even if this maps to multiple parameters there should be no need for us actors / inhabitants to think in terms of parameters as we act.

SCRIPTED CONTROL vs LIVE ACTION

There's a fundamental difference in attitude between code state as a trace of what's going on, vs. code as a driver of action.

There are at least three modes of agency: script (machine), human, and medium.  

(A) Clock drives event

For example, some software code animates a light simulating the sun rising in the course of a day.  The shadow of a pole shortens and lengthens as a function of clock-time.

(B) Human drives event

For example: a human lifts a lantern.  An overhead camera sees the shadow of a fixed pole shorten on the floor.  Code uses the length of the shadow to move an image of the sun...

A & B may look quite similar.  Downstream media code may even be identical, driven by OSC -- that is, via a FUNCTIONAL, hence DETERMINISTIC, dependence.  But the KEY difference is that A is driven by a clock, while B can be nuanced by LIVE action.  The actor can "scrub" through the event by moving his or her arm up or down in any manner.
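A minimal sketch of the A/B distinction (Python, with illustrative names only): the downstream mapping is one and the same function; only the source of its input differs.

```python
import math

def sun_position(u):
    """Downstream media code, identical for A and B: u in [0, 1]
    sweeps the 'sun' through its arc."""
    return math.sin(math.pi * u)

# (A) Clock drives the event: u is elapsed time over a fixed duration.
def clock_driver(elapsed_s, duration_s=60.0):
    return min(elapsed_s / duration_s, 1.0)

# (B) Live action drives the event: u is derived from the sensed
# length of the pole's shadow (short shadow -> sun high overhead).
def shadow_driver(shadow_len, max_len=2.0):
    return 1.0 - min(shadow_len / max_len, 1.0)

# Halfway through the clock's day, or a lantern held so the shadow is
# half its maximum: the downstream function cannot tell the difference.
assert sun_position(clock_driver(30.0)) == sun_position(shadow_driver(1.0))
```

The actor scrubbing the lantern feeds shadow_driver with any sequence of lengths, in any order and at any tempo, while A can only march forward.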


(C) MAKING A MEDIUM rather than a movie of a medium's particular action

Nothing precludes programming a zone or instrument as a living medium with its own dynamics -- think of creating not a movie of a ripple spreading across the floor, but a whole sheet of "water" that ripples in response to any number of fingers or toes or stones doing any action in it.

• An embodied second-order EVENT DESIGN METHOD

(inspired by Harry Smoak and Matthew Warne's Thick/N 2004)

NOT as actual scenography, just as a design method: lay out several ZONES on the floor of the Blackbox, each with its own dynamics.  Then we can try walking from zone to zone in many different sequences, to get a feel for what transitions might feel like.  Imagine what players / inhabitants should be doing in order for the state of the event to change from zone A to zone B.  THEN we can design a state topology of those as POTENTIAL transitions, which actualize only when the inhabitants and the system actually act accordingly (as picked up by the sensors).

(The Topological Media Lab's Ozone media choreography architecture as coded in Max / C already does this.)
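As a paper sketch of such a state topology (Python; the zone names and sensed features are invented for illustration, not the actual Ozone state engine), transitions are listed as potentials guarded by conditions on sensed activity, and actualize only when the inhabitants' action satisfies them:

```python
# Zones are states; transitions are POTENTIAL until sensed activity
# actually satisfies them. All names and thresholds are illustrative.
POTENTIAL_TRANSITIONS = {
    # (from_zone, to_zone): predicate on sensed features
    ("scatter", "freeze"):  lambda s: s["motion"] < 0.05,   # stillness
    ("freeze",  "vortex"):  lambda s: s["motion"] > 0.8,    # sudden burst
    ("vortex",  "scatter"): lambda s: s["spread"] > 0.6,    # group disperses
}

def step(zone, sensed):
    """Actualize a transition only when the inhabitants' sensed action
    warrants it; otherwise the event stays in its current zone."""
    for (src, dst), condition in POTENTIAL_TRANSITIONS.items():
        if src == zone and condition(sensed):
            return dst
    return zone

z = "scatter"
z = step(z, {"motion": 0.02, "spread": 0.1})   # stillness actualizes freeze
assert z == "freeze"
z = step(z, {"motion": 0.9, "spread": 0.1})    # burst actualizes vortex
assert z == "vortex"
```

No clock appears anywhere: the topology constrains what CAN happen, and only live action (via the sensor channels) determines what DOES happen, and when.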

REFERENCES

Henri Bergson, Creative Evolution, Dover 1998 (1907), Chapter 1: “The evolution of life – Mechanism and teleology”; Chapter 4: “The cinematographical mechanism of thought and the mechanistic illusion – A glance at the history of systems – Real becoming and false evolutionism.”

Broglio, Ron, “Thinking about stuff: Posthumanist phenomenology and cognition,” in Special Issue on Poetic and Speculative Architecture in Public Space, AI & Society 26.2, 2011, pp. 187-192.

Humberto Maturana and Francisco Varela, Autopoiesis and Cognition: The Realization of the Living, Reidel 1980.

Maurice Merleau-Ponty, Phenomenology of Perception, tr. Donald Landes, Routledge 2012 (1945).

Sha Xin Wei, Michael Fortin, Tim Sutton, Navid Navab, “Ozone: Continuous state-based media choreography system for live performance,” ACM Multimedia 2010.

Sha Xin Wei, Poiesis and Enchantment in Topological Matter, MIT Press, forthcoming 2013. (Preface and Chapter 1).

_____, “Theater without organs” (2013),

Einstein's Dreams: Scenarios / Zones, Mechanics, Design approaches

(March 2013, updated for Synthesis Workshop February - March 2014, ASU)

SCENARIOS

Scatter / gather

Your shadow splits.  The shadows run away from you.  The shadows quiver with tension & intention.

A follow spot lights you up.  Other spots lurk in the shadows and come after you with persona.

(Use Julian's rhythm abstractions to record corporeal / analog movements, and play them back.  Use analog rhythms as a cheap way to get a huge variety of very subtle NON-regularity, to avoid dead mechanical beats.  Also can improvise.)


Freeze

Deposit snapshots of yourself.  

Use "flash" timing :  charging increase in tension. Snap!  

Alternatively: NO tension, just fill zone with flashes.

Use MF VP8 to take webcam images and project them kaleidoscopically throughout zone, with time warp, delay, reversals.


Sutures

The world is fissured, and sutured:  as you walk you see/hear into discontiguous parts of the room.  

Portals!  Use MFortin's VP8 + Jitter intermediation (e.g. timespace, to introduce time dilation effects).


Motion, oil slick, molasses

Every action is weighted down, slooooooowwwweeedd asymptotically but never quite stilling.  Every action causes all the pieces of the world to slide as if on an air hockey table, but powered so they accelerate like crazy.  Map the room to a TORUS so the imagery is always visible.  Use zeno-divide-in-half or any tunable asymptotic (Max expr object) to decelerate or accelerate.
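The zeno-divide-in-half idea is just this (a sketch; in Max it would live in an expr object, here plain Python): each step moves a tunable fraction of the remaining distance, so motion decelerates asymptotically but never quite stills.

```python
def zeno_decelerate(position, target, factor=0.5):
    """Move a fraction of the remaining distance each step: asymptotic
    slowing that approaches the target but never quite reaches it."""
    return position + factor * (target - position)

p = 0.0
for _ in range(20):
    p = zeno_decelerate(p, 1.0)
assert 0.999 < p < 1.0  # approaches but never reaches the target
```

Setting factor above 1.0 (overshoot) or varying it live from a slider turns the same one-liner into an accelerator or a tunable molasses.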


Blur wind

Fray actions, images, sounds into noise 

(e.g. ye olde jit.streak + feedback example)


Vortex, dizzy

Spin the world -- every linear movement or "line" of sound becomes drawn into a giant vortex, that sucks into the earth.   OR reverse.

Stepped video-strobe in the concentric rings, tunable by OSC + Midi sliders like everything. 


Brittle, crack

Need to step carefully.  If not, hear and see pending catastrophe: cracking ice underfoot ...

Or sometimes pure acoustic in darkness or whiteout strobe.

Use Navid’s adaptive scaler to shrink sensitivity down to smallest movement causing catastrophe in strobe + massive sound.   Use subs to add pre-preparatory sound like Earth grinding her teeth before breaking loose.


Stasis ( hot or cold )

Sitting in the bowl of the desert (Sahara or Himalayas)

No-time sonic field.  Noise-field, snow-blindness video (black or white majority), or Bill Viola ultra-slow-mo (better than 25:1 speed reduction, with no motion blur)?

Use heat lamps or fans to subtly add heat or cold?



Repetition

Visual -- take video from a given location, but send to multiple locations (using VP8 + repeated stills) or map to OpenGL polygons...

Audio -- use OMAX, with coarse grain


Infection / Dark Light

Use video, e.g. use particles -- thickened as necessary -- as sources of light.  Cluster around movement or around bodies' presence as the source of light.


MECHANICS

IMPORTANT OZONE ARCHITECTURE (Julian with MF, working with Navid, Jerome, and Omar's instruments): Each of your instruments should expose key parameters to be tunable by OSC + MIDI sliders, so someone OTHER than the programmer can play with the instruments qualitatively.  OSC gives access to MF's handheld Max/iOS client, so we can walk around IN the space under the effects and vary the instruments IN SITU, LIVE.


ALSO: It is crucial that any TML experimentalist can walk in with her / his laptop, tap into the video feed, emit her own video into the projectors, and control where her video shows up, and with what alpha blend.  S/he must be able to do this without Jerome babysitting on call 24 x 7.


Ditto sound & lighting & OSC feed -- Julian’s got good design for this.  Navid for sensor channels.   I hope the sensor channels work transparently on top of Julian’s OSC resource discovery code.

Berlin engineer Sara Reichert for "Einsteins Traum"

Peter Funch, Babel Tales, composite photographs of New York

We should also try compositing time in video.

http://flavorwire.com/188589/amazing-composite-street-photographs-of-new-york-city

Amazing Composite Street Photographs of New York City
1:30 pm Monday Jun 20, 2011 by Marina Galperina

For every one of his fantastic shots, photographer Peter Funch stakes out a busy New York City street corner, capturing hundreds of moments over the course of several weeks from the same spot. Then, he digitally combines real scenes into one surreal super scene. Suddenly, all the people who passed him while dressed in black re-appear simultaneously. Every single person in Times Square is a picture-snapping tourist. Everyone on a Lower East Side street has a little dog. Everyone is yawning. Everyone has balloons. The happenings in his “fictional documentary” series Babel Tales aren’t lies, they’re truths exaggerated. Enjoy the views in our gallery.

Peter Funch, Babel Tales, composite photographs of New York
http://www.peterfunch.com/index.php?/ongoing/babel-tales/

Also: make a video that passes through only those pixels with a particular speed in the optical flow.
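A sketch of that filter (plain Python; it assumes a per-pixel flow field is already available from some optical-flow estimator -- computing the flow itself is not shown):

```python
import math

def speed_band_mask(flow, lo=2.0, hi=8.0):
    """flow: H x W grid of (dx, dy) per-pixel motion vectors.
    Returns an H x W grid of booleans: True where the optical-flow
    speed lies in the chosen band [lo, hi] (pixels per frame)."""
    return [[lo <= math.hypot(dx, dy) <= hi for (dx, dy) in row]
            for row in flow]

def filter_frame(frame, flow, lo=2.0, hi=8.0):
    """Black out (zero) every pixel whose flow speed is outside the band."""
    mask = speed_band_mask(flow, lo, hi)
    return [[px if m else 0 for px, m in zip(frow, mrow)]
            for frow, mrow in zip(frame, mask)]

# Toy check: one pixel moving at speed 5 passes; a static pixel is masked.
flow  = [[(3.0, 4.0), (0.0, 0.0)]]
frame = [[255, 255]]
assert filter_frame(frame, flow) == [[255, 0]]
```

Making lo and hi tunable from OSC + MIDI sliders (per the mechanics note above) would let someone sweep the speed band live and watch different strata of motion appear and vanish.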

necessary, but not sufficient (was: Here is the show I saw with 3D mapping - Must see!)

It's impressive -- we want to own that sort of functionality!  I don't see any rocket science in their tech, which is great, so it would be straightforward to assemble the same solution ourselves.  Perhaps this can be accomplished with a much smaller budget: they seem to be assembling a lot of different design shops to do what could be assembled by one well-trained engineering team.

I'm curious how it feels live.  In terms of effect, after a while it (and all these other building projections) seems kind of boring, because it's just mapping textures to walls -- very screeny.  (Notice that the interviews show the graphic designers working on screens, rather than painting / beaming with handhelds directly on, and walking around, their mock-up models.)

I saw no compositional, dramatic development in the first half of the video.  ("Dramatic" can be non-anthropocentric, in our work.)

What MM's doing with what I'm calling the "building character" approach to building-scale events will be very interesting, I think, because it will actually give character to buildings and breathe relationships into them and their "actions".  So there's an extra layer of meaning, magic, and breath-taking as-if-accident that will emerge.  All the practice of staging, blocking, action, choreography, and composition, extended from the anthropocentric to the scale of buildings, will be really powerful.

Let me remind us of a practical goal that I've been urging for ten years: let's get away from screen-centric design!!!  For example, we could make mock-up panels out of foamcore + fiducials (for auto-tracking by DL2's) and walk with them / move them by hand to rapidly prototype the effect of shifting perspectives as a function of the movement of visitor and of physical architecture.  We can take the RP 3D "prints" and light them with real light, and walk around them with our little Mitsubishis (or even jury-rig our own small pen-cameras and pen-projectors, and ears), and puppets!

We should schedule some workshops with Mark Sussman now, for when?  

Thanks for the conversation-provoking example :)
Let's post lots more examples to post@alkemie.posterous.com !

Xin Wei

On 2011-06-02, at 11:32 PM, Jérôme Delapierre wrote:


Ozone & Einstein's Dreams workshops in 2011

Here is my current list of potential and proposed opportunities to do workshops related to Ozone & Einstein's Dreams in 2011.
Some are fundable.
I will be working on funding these... depending on who can accomplish what.

March (during intersession break) UC Berkeley
1 or 2 members of the Ozone team demo in Lisa Wymore's lab
Meyer Sound work

March 5 - April 30 Bain St Michel, Montreal

April 15-23 Berkeley Dance Productions 8, Berkeley
  Workshop in Lisa Wymore's lab?
Maybe only media techniques

May 1-8 Hexagram, Montreal ??
Michael Montanaro & visitors Lisa Wymore (UC Berkeley) and Sheldon Smith (Mills College) ?

August 1-10 Hawaii
SECT Seminar in Experimental Critical Theory
technoscientific knowledge production and urban experience in Asia

Xin Wei