Bill Forsythe: Nowhere and Everywhere at the same time No. 2 (pendulums)

Two works by two choreographers, Dimitris Papaioannou and Bill Forsythe,
with very different and interesting approaches to causality and temporal texture…

- Xin Wei

On Jul 20, 2014, at 12:55 AM, Michael Montanaro <michael.montanaro@concordia.ca> wrote:

A beautifully choreographed work: NOWHERE (2009) / central scene / for Pina
from Dimitris Papaioannou



Begin forwarded message:

From: "Vangelis Lympouridis" <vl_artcode@yahoo.com>
Date: July 22, 2014 at 8:39:27 AM GMT+2
To: "Adrian Freed" <Adrian.Freed@asu.edu>, "'Sha Xin Wei'" <shaxinwei@gmail.com>, "'John MacCallum'" <john@cnmat.berkeley.edu>

When you have a second, please watch this 2 min video of Forsythe’s piece Nowhere and Everywhere at the same time No. 2.

I think it is SO to the core of what we are reasoning about… :)

Vangelis Lympouridis, PhD
Visiting Scholar, School of Cinematic Arts, University of Southern California
Senior Research Consultant, Creative Media & Behavioral Health Center, University of Southern California
Whole Body Interaction Designer
Tel: +1 (415) 706-2638

[Synthesis] networked cameras; potential response to activity vs actual activity

Given Tain, Cooper, and other folks’ findings, IMHO we should use wifi networked cameras but not GoPros.  (Can we borrow two from somewhere short term to get the video network going?)  Comments on networked cameras?
But let’s keep the focus on portals for sound and gesture / rhythm.

Keep in mind that the point of the video portal is not to look AT an image in a rectangle, but to suture different regions of space together.

Cooper’s diagram (see attached) offers a chance to make another distinction about Synthesis research strategy, carrying on from the TML, that is experientially and conceptually quite different from representationalist or telementationalist uses of signs.  

(Another key tactic: to eschew allegorical art.)

A main Synthesis research interest here is not looking at an out-of-reach image far from the bodies of the inhabitants of the media space,
but how to either:
(1) use varying light fields to induce senses of rhythm and co-movement, or
(2) animate puppets (whether made of physical material, light, or light on matter).

The more fundamental research concerns not the actual image or sound, but the artful design of the potential responsivity of the activated media to action.
This goes for projected video, lighting, acoustics, synthesized sound, as well as kinetic objects.
All we have to do this summer is build out enough kits or aspects of the media system to demonstrate these ways of working, in the form of some attractive installations in the ordinary space.

Re. the proto-research goals this month: we do NOT have to pre-think the design of the whole end-user experience, but rather build some kits that would allow us to demonstrate the promise of this approach to VIP visitors in July and students in August, and to work with in September when everyone’s back.

Cheers for the smart work to date,
Xin Wei

streaming video from small cameras in a local net: GoPro fact-finding research

There are many solutions for a suite of video cams.  It’s not a bad idea to repurpose some cheaper commercial solutions for a network of video+audio cameras.  Analog solutions have much less latency than these digital solutions (even after an A2D converter), and they may still be a lot cheaper for a given equivalent resolution.

Is there (still?) a greater variety of lens options available for analog cameras?

Be sure to get wide angle lenses for whatever camera solution you come up with.

Also, threaded lenses may make it easier to affix IR pass filters as necessary.
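
For reference, pulling frames off a networked camera over the local net is only a few lines once the camera exposes an RTSP or MJPEG stream. A minimal sketch in Python/OpenCV, with a placeholder URL standing in for whichever camera we end up choosing:

    # Minimal check that a networked camera can be read over the local net.
    # The RTSP URL is a placeholder for whatever camera is chosen.
    import cv2

    cap = cv2.VideoCapture("rtsp://192.168.1.50/live")   # placeholder address
    ok, frame = cap.read()
    print("got frame:", ok, frame.shape if ok else None)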

Cheers,
Xin Wei

__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Brickyard media systems architecture; notes for Thursday BY check-in

Hi Everyone,

This note is primarily about media systems, but should connect with the physical interior design, furniture and lighting etc.

Thanks, Pete, for the proposed media systems architecture summary.  I started to edit the doc but then got bogged down because the architecture was computer-centric; it’s clearer as a diagram.
 
Instead of designing around the 2 Mac Minis and the Mac Pro computers, I’d like to design the architecture around relatively autonomous i/o devices:

Effectively a network of:
Lamps (for now addressed via DMX dimmers), floor and desk lamps as needed to provide comfortable lighting
5 Wifi GoPro audio-video cameras (3 in BY, 1 Stauffer, 1 iStage)
xOSC-connected sensors and stepper motors (ask Byron)
1 black Mac Pro on internet switch
    processing video streams, e.g. extracting video features emitted as OSC
1 Mac Mini on internet switch
    running the lighting master-control Max patch and the Ozone media state engine (later)
1 Mac Mini on internet switch
    processing audio, e.g. extracting audio features emitted as OSC
1 iPad running the Ozone control panel under Mira
1 internet switch in the Synthesis incubator room for drop-in laptop processing
8 x transducers (or Genelecs; distinct from use in the lab or iStage, I prefer not to have “visible” speaker boxes or projectors in BY: perhaps flown in corners or inserted inside Kevin’s boxes, but not on tables or the floor)

The designer-inhabitants should be able to physically move the lamps, GoPros, and stepper-motored objects around the room, re-plugging into the few cabled interface devices (dimmer boxes) as necessary.

Do the GoPros have audio in?  Can we run a contact, boundary, or air mic into one?

Could we run ethernet from our 3 computers to the nearest wall ethernet port?

Drop-in researchers should be able to receive via OSC whatever sensor data streams are available (e.g. video and audio feature streams cooked by the fixed black Mac Pro and the audio Mini),
and also emit audio / video / control OSC into this BY network.
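
As a minimal sketch of what that OSC traffic could look like in Python (assuming the python-osc library; the /by/... address names, host, and port are placeholders, since the actual address space is still to be designed):

    # Sketch of BY-network OSC traffic using python-osc (pip install python-osc).
    # Addresses like /by/video/motion, the host, and the port are placeholders.
    from pythonosc.udp_client import SimpleUDPClient
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    # On the black Mac Pro: emit a video feature (e.g. overall motion energy)
    # to a listener on the BY network.
    def emit_video_feature(motion_energy, host="127.0.0.1", port=9000):
        client = SimpleUDPClient(host, port)
        client.send_message("/by/video/motion", motion_energy)

    # On a drop-in laptop: print every feature stream that arrives.
    def run_listener(port=9000):
        dispatcher = Dispatcher()
        dispatcher.set_default_handler(lambda address, *args: print(address, args))
        server = BlockingOSCUDPServer(("0.0.0.0", port), dispatcher)
        server.serve_forever()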

DITTO iStage.

Can we organize an architecture session sometime Thursday?
I’m sorry, but since I have to get my mother into post-accident care here in Washington DC, I don’t know when I’ll be free.

Perhaps Thursday sometime between 2 and 5 PM Phoenix time?  I will make myself available for this because I’d like to take a more active role now, thanks to your smart work.

Looking forward to it!
Xin Wei


PS. Please let’s email our design-build notes to post@synthesis.posthaven.com so we can capture the design process and build on accumulated knowledge at http://synthesis.posthaven.com

__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________




Mira, TouchOSC, lighting in BY

Hi all, 
   Xin Wei + tech team, I shall carry on the discussion here re: TouchOSC vs. Mira, as this relates to the lighting thread.

Matt and I have discussed taking on everyday lighting starting now and carrying on through the Fall.

A la TML/iStage, I wanted to set up a Mira controller for the conference/lunch area with various presets: conference / lunch / party mode / mood lighting / dynamic.  I had Mira in mind for its smooth user interface abilities, but I hadn't thought about using TouchOSC (or OSC Control, which is what I have found for free; TouchOSC appears to cost some $4.99).  Either is ideal for scaling up.  Using Julian's tml.dmx_op (+ or -), we can have multiple controllers going into the same DMX universe without discrepancy.
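
Outside of Max, the "multiple controllers, one universe" idea is just a shared channel table behind a single OSC port. A rough Python sketch (tml.dmx_op is the Max abstraction we'd actually use; the /dmx/<channel> address scheme and port here are made up for illustration):

    # Merge several OSC controllers (Mira, TouchOSC, ...) into one DMX universe.
    # The /dmx/<channel> address scheme and port 8000 are made up; the real
    # patch uses Julian's tml.dmx_op in Max.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    universe = [0] * 512   # one shared DMX universe, channels 1..512

    def on_message(address, *args):
        if not address.startswith("/dmx/"):
            return
        channel = int(address.rsplit("/", 1)[-1])            # e.g. /dmx/12 -> 12
        universe[channel - 1] = max(0, min(255, int(args[0])))
        # ...here the universe would be pushed out to the dimmers...

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(on_message)
    server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
    server.serve_forever()   # every controller on port 8000 writes into the same table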

Here are some links to some encouraging models, some of them more theatrical or black box space-y, but all using conventional, everyday lighting apparatuses:

 

I'm not familiar with the Riemannian work, but Leibniz's monads do seem to resonate with this kind of work.  The monad as a unit of a kind of proto-systems theory provides an interesting model for assemblages of lighting, clusters and pairings of energies.  I've seen the monad used, as Leibniz himself did, as a mode of viewing Western musical structures.  However, most interesting are attempts to account not only for structure, but also for those timbral sonic parameters neglected by Western musical notation.

A parallel in lighting could perhaps be found as a starting point.  Musical structure might be akin to the spatial placement (topological or geometrical) of each light bulb.  How they're structured, how they're bound to one another, how (and whether) they hang, what kind of bulb we're using, wattage: these are all important timbral considerations which define how each monad functions within the larger ecology.

This is also defined by movement/rhythm, a seemingly critical aspect of identifying relationships between monads (lights).  This sort of work has also been done at the TML; my hope is that we can build on it together as we go forth over the next few months.

Matthew has done some great research into proper thread cables and light bulbs for the space, and has spec'd what promises to make the environment here a much warmer space.  He has, with Luke, ordered supplies to create 20-30 lights.  We should have them in ~7 business days, and then we can start throwing lights up.  Ideally we will pair this with the grid beam / desks.

Other lighting possibilities are non-traditional; Katie purchased an old-school overhead projector from Surplus, which I've hooked up to DMX.  It might be interesting to find some materials (tissue paper?) which provide an interesting degree of translucence.  Perhaps they could be manipulated by some servos (Byron?).  Robotic shadow puppets could also be an interesting endeavor.  Byron also mentioned sand...

I would also like to continue to work with RGB LEDs, although there is a general fear of disco @ BY :-)  Although the testing with LEDs has been admittedly "disco-oriented," it would be nice to build them into window boxes.  Using a simple pixel selector (thanks Julian), we can sample incoming video feeds to detect the sky's hue and reproduce it in artificial windows (rectangular, oval, asymmetrical, as we please) which do not have to be on walls.
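
For the sky-hue idea, a rough sketch of the pixel-selector logic in Python/OpenCV (the real patch uses Julian's pixel selector in Max; the camera source, the region of interest, and the send_rgb() helper are placeholders):

    # Rough sketch: average the colour of a "sky" region of an incoming video
    # feed and forward it to the LED window boxes. The camera source, the ROI,
    # and send_rgb() are placeholders for the actual patch.
    import cv2

    def sky_rgb(frame, roi=(0, 0, 200, 100)):
        """Average colour of a sky region (x, y, w, h), returned as (r, g, b)."""
        x, y, w, h = roi
        b, g, r = frame[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)  # OpenCV is BGR
        return int(r), int(g), int(b)

    def send_rgb(rgb):
        # Placeholder: would become a DMX or OSC message to the window boxes.
        print("window box colour:", rgb)

    cap = cv2.VideoCapture(0)   # or a networked camera URL
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        send_rgb(sky_rgb(frame))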

All best,
Garrett 


On Sun, Jun 22, 2014 at 9:24 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi, 

I’d like to propose a research sub-thread on lighting design for the Brickyard commons this Fall
inspired and guided by specific practical and aesthetic intents.

For foveal work, when someone is at a work site:
Lighting either work zones or ceilings with floor lamps.
BUT the research goal is NOT to beam light onto objects or work surfaces a la electric-era lighting design,
but to emulate and learn from skylight and shafts of sunbeams (preferably natural light*) directed by agencies
summing both the individual as well as the ambient
(e.g. outdoor sky colour temp => colour temp of interior washes,
amount of activity in other parts of the floor => …
sound timbre => …)
Criteria: enlivening, and able to serve
either focused foveal work or reverie, depending on
the inhabitant’s phenomenological disposition, which can flicker.

For night and default-empty lighting (when people leave the commons):
use a cloud of bulbs in amorphous, bunched constellations,
softened by animation (ramped brighten / darken) and by slightly intermingling clusters.

Clusters can brighten over the heads of people where they gather.
(Use motion to initiate, presence to stay on…)
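
One way to read “motion to initiate, presence to stay on” as a minimal control loop, with set_cluster_level() standing in for the actual DMX output (a sketch, not the implementation):

    # Sketch of "motion to initiate, presence to stay on" for one bulb cluster.
    # set_cluster_level(), motion_detected(), and presence_detected() are
    # placeholders for the actual sensing and DMX output.
    import time

    def set_cluster_level(level):
        print(f"cluster level: {level:.2f}")     # 0.0 = dark, 1.0 = full

    def run_cluster(motion_detected, presence_detected,
                    hold_time=10.0, step=0.05, step_period=0.15):
        level, last_presence = 0.0, 0.0
        while True:
            now = time.time()
            if presence_detected():
                last_presence = now
            if motion_detected() or (now - last_presence) < hold_time:
                level = min(1.0, level + step)   # ramped brighten
            else:
                level = max(0.0, level - step)   # ramped darken
            set_cluster_level(level)
            time.sleep(step_period)              # 20 steps x 0.15 s ~ 3 s ramp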

Message and actuality: even when empty of people, the room is still working;
research is still happening with non-human agencies.

Below I include a snip from the Fountain to give a sense of the volumetric density
and constant shifting of perspective that we can perhaps induce as people move through the space.
This has a direct, poetic connection with the conceptual work referencing
Deleuze & Guattari, Leibniz’s monadology, and
Riemannian geometric approaches to design (special volume being prepared).


But I’d like to clearly distinguish this from the iStage blackbox atmosphere; we can use
theatrical technologies with non-blackbox results.

Of course this will connect with everyone’s work in the BY, 
but who will be responsible for the research direction and push the aesthetic-technical work with me?
Who can own this as a focus?

Cheers,
Xin Wei

* It seems ethico-aesthetically ugly to use electrical light when we have so much sunlight to draw from.
It takes careful thought to see how to respond to this, subject to our practical time+budget constraints.



-- 
Garrett L. Johnson
Music History TA 
Arizona State University 
School of Music

[Synthesis][TML]: table-portal test (Was: [Portal research stream] opening up the audio-video portals between Synthesis Brickyard and TML studios )

Hi,

This is a good example of the sort of poetic portal behaviours and forms we might explore in great variety over a brief period of time.

Bravo Evan! :)

Portal = porthole experiment

What I’d like to see us experiment with as well is
multiple pairs of small (20 cm) disk portals with similar wipe => reveal behaviours,
set up on 24x7 live streams.  Use cameras and/or sub-regions of video streams that yield restricted views
of only a small field of view.
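
A quick sketch of what one such disk portal could look like in code (OpenCV; the stream URL is a placeholder, and the wipe is simply a radius that grows over time):

    # Sketch of one small "disk portal": show only a circular sub-region of a
    # remote live stream, revealed by a growing radius (the wipe => reveal).
    # The stream URL is a placeholder.
    import cv2
    import numpy as np

    remote = cv2.VideoCapture("rtsp://remote-portal.local/stream")  # placeholder
    radius, max_radius = 0, 100

    while True:
        ok, frame = remote.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.circle(mask, (w // 2, h // 2), radius, 255, -1)   # filled disk
        portal = cv2.bitwise_and(frame, frame, mask=mask)     # everything else stays black
        radius = min(max_radius, radius + 2)                  # slow reveal
        cv2.imshow("disk portal", portal)
        if cv2.waitKey(30) & 0xFF == 27:                      # Esc to quit
            break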

Then we can explore the invention of social protocols for tailoring the topology of gaze / portal by physically angling the cameras and pico-projectors on the spot, in live, in situ tests.
Let’s unbolt projectors and cameras from fixed locations and fixed perspectives!

The point: Deleuze & Guattari’s rhizomes are not simply networks of identical nodes, as is commonly understood in the mechanical interpretation, but manifolds of perspectival monads that vary continuously.  And ethico-aesthetic reflexivity demands that we inhabitants be permitted to tailor the topological gaze identification ourselves.  This is not for the programmer-engineer-designer to over-think and "solve" ahead of the dwelling.  I’ll relay Luke’s comments as well in a subsequent email.

Xin Wei

__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________


Begin forwarded message:

From: Michael Montanaro <michael.montanaro@concordia.ca>
Subject: Test table
Date: June 21, 2014 at 12:20:39 PM PDT
To: Xin Wei Sha <Xinwei.Sha@asu.edu>, Sha Xin Wei <shaxinwei@gmail.com>, "Evan Montpellier" <evan.montpellier@gmail.com>

Have a look. Will test live portal tomorrow with Niko in Greece. 



take care
m


Michael Montanaro, Chair
Department of Contemporary Dance
Concordia University
Artist/Researcher /  HEXAGRAM 
Centre for Research-Creation in Media Arts and Technologies