"correlation" experiments with Mike K et al ( & Xin Wei ) during Lighting and Rhythm Workshop

Dear Mike, Julie, Varsha, Pavan,

Everyone: Beginning to carry out some "correlation" experiments as core scientific research of the LRR is a very high priority for Synthesis. So thanks to Mike for doing this. (We'll want to continue this through the year until we generate some publishable insights. :)

I would like to make sure that Pavan is in the loop here with his group. The goal for "correlation experiments" during the LRR Nov 17-26 is pretty concrete. We want to discover some non-isomorphic relations within co-movement, i.e. movements by ensembles (or individuals) that those people themselves claim are

(1) “correlated”, mutually responding in some way, or
(2) anticipating or retrospecting activity,

either among the humans or the light fields.

(1) Julie, Varsha, and Mike can go through some movement "games" / studies responding to the questions laid out in the Lighting and Rhythm workshop. See the Etudes Motivating Questions.

(2) Xin Wei, Pavan, and Pavan's group can witness this work, talk with Julie, Mike, and Varsha,
review the videos alongside the concurrent data streams,
and look at Mike's accelerometer data and correlation computations.
Mike has recorded his data streams concurrent with video of the activity.
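For concreteness, here is a minimal sketch (in Python rather than Max, purely illustrative) of the kind of lagged-correlation computation meant here. The 30 Hz rate, the synthetic streams, and the function name are assumptions for the example, not Mike's actual pipeline.

import numpy as np

def lagged_correlation(a, b, max_lag):
    """Correlation of a[t] with b[t + lag], for lag = -max_lag .. max_lag."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    lags, out = range(-max_lag, max_lag + 1), []
    for lag in lags:
        if lag > 0:
            x, y = a[:-lag], b[lag:]
        elif lag < 0:
            x, y = a[-lag:], b[:lag]
        else:
            x, y = a, b
        out.append(float(np.mean(x * y)))
    return list(lags), out

# Example: two 30 Hz accelerometer-magnitude streams; B echoes A 6 samples (~0.2 s) later.
fs = 30
t = np.arange(0, 20, 1 / fs)
dancer_a = np.abs(np.sin(2 * np.pi * 0.5 * t)) + 0.1 * np.random.randn(t.size)
dancer_b = np.roll(dancer_a, 6)
lags, corr = lagged_correlation(dancer_a, dancer_b, max_lag=fs)
best = lags[int(np.argmax(corr))]
print(f"peak correlation at lag {best} samples ({best / fs:.2f} s): B trails A")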

(Hint: QT screen-recording off of Jitter patches does the job:
a lesson for TMLabbers who would like to do parallel research in this domain :)


Chris R: I'll make myself available after 5:30 or on the weekends too, to fit Julie's, Mike's, and Varsha's availability.

Xin Wei

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________



On Nov 7, 2014, at 3:29 PM, Julie Akerly <j.akerly@live.com> wrote:

Yes, I would really like to be a part of this, but things during the weekdays do not work for me.
Can you schedule some workshop times that are after 5:30 or on the weekends?

Julie Akerly
Co-Director of NueBox
Artistic Director of J.A.M.

The dancer's body is simply the luminous manifestation of the soul.
~Isadora Duncan




Date: Fri, 7 Nov 2014 15:12:52 -0700
Subject: Meetings with Xin Wei during Rhythm and Lighting Workshop
From: mkrzyzan@asu.edu
To: cmrober2@asu.edu, j.akerly@live.com, varshaiyengar@hotmail.com, shaxinwei@gmail.com

Chris,

Xin Wei wanted me to ask you if we can schedule time together in iStage during the Rhythm and Lighting workshop to work on dancer correlation. He suggested the Tuesday and Thursday afternoons (Nov 18, 20, 25, 27), and possibly on the afternoon of the 24th when Michael Montanaro is here.

Julie and Varsha, what are your schedules like at these times? Julie, I know you work during the day, but we can work around your schedules, both of you...

Mike 

-- 
Sine coffa, vita nihil est!!!

Re: Door Portals : prototypes delivered by Nov 15, bomb tested by Nov 21, and deployed by Nov 24 for WIP event

To be clear,
what happens inside the frame of the Door in terms of visual effect, beyond the DEFAULT behaviours
(0) Mirror
(1) Live streams: Stauffer Reception, iStage, Brickyard Commons

is really up to the ingenuity of the realtime video artists, starting with Prof. David Tinapple.
David can curate other realtime video “instruments” for Dec 2 or beyond.
All we need for Dec 2 is the realtime feeds, with a blend between them.

Anything beyond that, such as David's clever "gaze" tracking, is icing on the cake.  That would be fantastic.

David (or Ozzie or Garrett):
Can you look at Ozzie’s code for the Stauffer display, and tweak it to serve as a robust SHELL program?
We need a good shell program that will switch among video instruments on a bang
(e.g. from a handclap, so we should put in audio input with calibration).  The video instruments will
be other students' works.
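As a sketch of the clap-trigger logic such a shell needs (a calibrated amplitude threshold plus a refractory period, so one clap yields one bang), here is an illustrative Python version; the threshold, hold time, and class name are assumptions, and the real thing would live in the Max shell patch.

import numpy as np

class ClapSwitcher:
    def __init__(self, n_instruments, threshold=0.4, hold_s=0.5, sr=44100):
        self.n = n_instruments
        self.threshold = threshold          # set during audio calibration
        self.hold = int(hold_s * sr)        # refractory period in samples
        self.cooldown = 0
        self.current = 0

    def process(self, block):
        """Feed one block of audio samples; returns the active instrument index."""
        peak = float(np.max(np.abs(block)))
        if self.cooldown > 0:
            self.cooldown = max(0, self.cooldown - block.size)
        elif peak > self.threshold:
            self.current = (self.current + 1) % self.n   # the "bang": switch instruments
            self.cooldown = self.hold
        return self.current

# Usage: feed successive audio blocks from the room mic.
switcher = ClapSwitcher(n_instruments=3)
quiet = np.zeros(512)
clap = np.concatenate([np.zeros(200), 0.9 * np.ones(30), np.zeros(282)])
print(switcher.process(quiet), switcher.process(clap), switcher.process(quiet))  # -> 0 1 1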

Xin Wei

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________



On Nov 1, 2014, at 10:44 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Dear Pete, David, Nicole, (technical advisor Ozzie)

I would like Pete to be in charge of installations for WIP Dec 2 following my event design — (For the dramaturgy I’ll of course rely on experienced folks — Chris Z as well as Pete, Garth, Todd, Ed, David.)

For Dec 2, I would like monitors mounted in portrait on some walls in Stauffer and iStage: hence the name DOOR PORTALS, DoorPorts

BEHAVIOR

Each single-sided Door Portal has a camera facing out from its front.

In iStage: when you stand 20' away from the Door Portal, you see a live feed of the reception desk at Stauffer.  When no one is present, the Door Portal should show past people, faintly.

As you occlude the Door Portal (background subtraction, using tml.jit.Rokeby), it turns into a mirror.
As you walk toward this Door Portal, at 15' the mirror fades to the live feed from another Door Portal B.
At 10' that feed fades to the live feed from Door Portal C.
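A rough sketch of the distance-to-blend mapping described above (Python, illustrative only). The occlusion-triggered switch from the default feed to the mirror is assumed to have already happened, and the 5-foot fade spans are assumptions, since the text only gives the 15' and 10' trigger points.

def blend_weights(distance_ft):
    """Return (mirror, feed_B, feed_C) weights for a viewer at distance_ft."""
    if distance_ft >= 15:
        return (1.0, 0.0, 0.0)                    # mirror (viewer is occluding the camera)
    if distance_ft >= 10:
        t = (15 - distance_ft) / 5.0              # 0 at 15', 1 at 10'
        return (1.0 - t, t, 0.0)                  # mirror -> Portal B
    t = max(0.0, min(1.0, (10 - distance_ft) / 5.0))
    return (0.0, 1.0 - t, t)                      # Portal B -> Portal C

for d in (18, 15, 12.5, 10, 7.5, 5):
    print(d, blend_weights(d))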


LOCATIONS, SIZES

(1) In iStage in the location that we discussed — next to the door to offices, flush with the floor.  Largest size.

(2) On a wall in Stauffer B, facing the reception desk (in addition to or instead of the display behind the receptionist's head).
Cables must be hidden — we cannot have exposed cable in the receptionist area.  Let the Space committee take note.

(3) Mounted on a wall in the dark Commons of the BY.

All these should be PORTRAIT — no landscape, please — let’s avoid knee-jerk screenic response!   Bordered images are boring.

NICOLE, your idea for prototyping is very smart.  If you want, please feel free to make paper mockups — just
ask Ozzie or Pete for dimensions of our monitors, cut paper to 1:1 scale, and bring some mock doors that we can pin to the walls in the BY and iStage.

Re. scale, I'm ok with the doors not being human height — 1:1 body size is too literal (= boring).  Plus I want child-sized apparatus in our office spaces.  Our office spaces are all designed for 5-6' tall people, nominally adults — this is socially sterile.

NOTE: I think we do NOT need a big screen in the foyer — it is too expensive and a waste of a monitor (unless CSI buys one :)
For that Brickyard foyer, if people — e.g. Chris R — want a porthole (≠ portal), then let's put a pair of iPads, one on each side of that foyer sub-wall, and stream live video between them.


TIMETABLE (ideal)

Nov 15
prototypes delivered

Nov 21
bomb tested, 

Nov 24
Deployed for WIP event
(If that is documentation day for the LRR, then this is when we should invite Sarah H to send her team to shoot the Door Portal along with the best of the LRR for Herberger (gratis!))


________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________


Matt Briggs: lighting instrument experiments in everyday space (i.e. Brickyard)



I may have found a way to incorporate Byron's mechatronics to utilize natural light.
I have many laser-cut "icicles" of dichromatic acrylic (picture attached).

These "icicles" do a great job of capturing light. I could hang many of these from a panel 
which either may be recessed in place of a ceiling panel or hung from a ceiling
panel. This panel could be made of a flexible material that already exists:
( )
or out of a material that is laser cut to become flexible. The panel could be animated from
a number of control points using Byron's mechatronics. This would create an effect similar to the
combination of:
+
= potential light installation
We could then create an array of panels, or clusters of panels, which could take in rhythm data
via xOSC and output it as movement of the "icicle" positions in space.
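A sketch of what "rhythm data in, icicle movement out" could look like, assuming the python-osc package and an invented /rhythm address; the port, message format, and 0-180 servo range are placeholders, not the actual x-OSC configuration.

import math
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

N_CONTROL_POINTS = 4
positions = [90.0] * N_CONTROL_POINTS      # neutral servo angle for each control point

def on_rhythm(address, tempo_bpm, phase):
    """Map tempo/phase to a slow travelling wave across the control points."""
    amplitude = min(45.0, tempo_bpm / 4.0)  # faster rhythm -> larger sweep
    for i in range(N_CONTROL_POINTS):
        offset = amplitude * math.sin(2 * math.pi * phase + i * math.pi / 2)
        positions[i] = 90.0 + offset
    print(address, [round(p, 1) for p in positions])

dispatcher = Dispatcher()
dispatcher.map("/rhythm", on_rhythm)       # expects messages: /rhythm <bpm> <phase 0..1>
server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()                     # positions would then be sent on to the mechatronics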

It is important to have a certain amount of light in the space to create a visual equilibrium and
relieve tension in the eyes. These "icicles" will not provide a significant introduction of light
into the space. However, they will produce a refraction of the existing light in a dynamic and non-
intrusive way. If this light is paired with Josh's light tube system or the already existing natural light,
I believe it could create a small amplification of that light, but more importantly a psychological effect
that may have equal weight on the users of the space. I think it is important to have subtle but
aesthetically pleasing sources of light in the workspace, and this may function in that regard.

Any thoughts or criticisms?

I could fabricate these panels and install a small cluster by the workshop date.
Byron do you think it would be possible to animate them by this time?
I will leave the bag of "icicles" on your desk at synthesis for reference.

Side note: I leave Monday for the AR2U conference in Ames, Iowa.
So I won't be able to continue on this project until I get back on the 10th.

A follow up,

I just spoke with Byron and hypothesized some potential applications of the "icicles."

In the application above we could create movement based on the refraction of the
natural light, or interject movement based on an applied rhythm.

We also spoke of using this light refraction as a feedback loop, as a form of rhythm.
This could be applied to movement in forms beyond the panels, using
the natural light as a parameter.

A suggested form could be a desk lamp or a corner extension. The "icicles" 
could be stacked from top to bottom and rotated from the base to create a vertical
or horizontal deflection. The "icicles" could also be installed in a flexible panel in
a more rigid application to give direct control over their movement.

We discussed the fact that the application of the "icicles" needs to be architectural
rather than sculptural. I think this can be achieved in all of these applications
but would like some input on them. Unfortunately I cannot find
any examples of the other applications, but I can create sketches if it is unclear.

Byron also assured me that the mechatronics would work in these scenarios.

PS (I've attached the images in this email)

thanks,

-matthew briggs
 undergraduate researcher @synthesiscenter

lighting instrument experiments in everyday space (i.e. Brickyard)

I’d like to (re)introduce you and Omar Faleh, who is coming from the TML for the LRR in November.

In parallel to the iStage work, we should push forward on the animation of lighting in the Brickyard by
• Bringing natural sunlight into the BY commons (Josh)
• Modulating that natural light using Byron's mechatronics
• Making motorized desk lamps using Byron's mechatronics
• Designing rhythms (ideas from Omar, using Julian's rhythm kit)

Can we see what you have come up with this coming week?

Meanwhile let me encourage you to communicate with Omar to see what you can create by the time of his arrival circa Nov 17.

To be clear, this is distinct from the LRR in the iStage, but as strategically important in that it is the extension of that work into everyday space, which is one of Synthesis’ main research extensions this year.

I’d like to write this work into a straw proposal for external sponsorship of your work… so it’d be great to have some pix soon!

Cheers, Xin Wei

LIGHT-ECOLOGY GAME: Chiaroscuro instrument

I know that Garrett, Omar, or any of the cc'd Max-perts can do this, but

Ian and Michael, I propose that you do this in Max as an exercise,
consulting with Pete, Garrett, and Julian;
document it, and enter it into our github so we can play with it.

Omar or Garrett can write and share the code implementing the logic described below, extracting "rhythm" from the movement detected from a fiducial.  (Ask Qiao if that is possible.)

We’ve discussed the lighting hardware instruments. 
Let’s now propose some behaviours AKA instruments and games:

LIGHT-ECOLOGY GAME: Chiaroscuro instrument

The array of overhead lights is kept at a medium intensity, flickering with a pattern derived by downsampling a video of glowing embers in closeup.  [Julian's code should have a patch to downsample video to DMX, but it takes minutes to write this in Max/Jitter.]
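A sketch of the video-to-DMX downsampling idea, as a Python stand-in for the Jitter patch: reduce each frame to one brightness value per lamp. The 3 x 6 grid comes from the equipment list below; the DMX value band and the random stand-in frame are assumptions.

import numpy as np

GRID_ROWS, GRID_COLS = 3, 6

def frame_to_dmx(frame, lo=60, hi=160):
    """frame: (H, W) grayscale array in 0..255 -> list of 18 DMX values in lo..hi."""
    h, w = frame.shape
    rows = np.array_split(np.arange(h), GRID_ROWS)
    cols = np.array_split(np.arange(w), GRID_COLS)
    dmx = []
    for r in rows:
        for c in cols:
            cell = frame[np.ix_(r, c)].mean() / 255.0   # average brightness of this cell
            dmx.append(int(lo + cell * (hi - lo)))       # keep the lamps in a medium band
    return dmx

# Stand-in for one frame of the embers video:
frame = (np.random.rand(120, 160) * 255).astype(np.uint8)
print(frame_to_dmx(frame))   # 18 values, one per lamp, ready to write to the DMX universe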

People who walk onto the floor pick up a trackable fiducial. Depending on which one they pick up, they become a Lightener or Darkener.  [MoCap in Stauffer B123 supports this — I don't know about iStage.  How many fiducials can be tracked at a time, at most?]

Where they stand, their presence brightens or darkens the lamp under which they stand — it adds a +/- offset to the lamp value.

Waving the trackable magnifies their effect — multiply the offset by a ratio that is a function of the averaged speed.  Calibrate so that it does not take much effort, and make the slide up very short but the slide down many, many samples — so the decay happens over, say, 60-300 sec.  Maybe the onset of movement should increment (or decrement) immediately, but continued movement extends relaxation back to the default average level.
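A sketch of that attack/decay logic in Python, purely illustrative: the update rate, the 120 s decay constant, and the presence/speed scaling are assumptions within the ranges given above, not a worked-out design.

UPDATE_HZ = 30
DECAY_S = 120.0                                  # somewhere in the 60-300 s range
DECAY = 1.0 - 1.0 / (DECAY_S * UPDATE_HZ)        # per-tick multiplier for the slow release

class Participant:
    def __init__(self, role):                    # role: +1 Lightener, -1 Darkener
        self.sign = role
        self.level = 0.0                         # current magnitude of the offset

    def update(self, avg_speed):
        """avg_speed: smoothed speed of this person's fiducial, roughly 0..1."""
        target = min(1.0, 0.2 + avg_speed)       # presence alone gives a small offset
        if target > self.level:
            self.level = target                  # attack: jump up immediately
        else:
            self.level *= DECAY                  # release: slide back down over minutes
        return self.sign * self.level            # signed offset to add to that lamp's value

p = Participant(role=+1)
for speed in [0.0, 0.8, 0.8, 0.0, 0.0, 0.0]:
    print(round(p.update(speed), 5))             # fast rise, then a very slow decay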

[Synthesis] lighting instrument experiments in everyday space (i.e. Brickyard)

Following Matt and Byron --

How about 

(0) Shafts of sunlight beaming through the space as if through a forest canopy — how can we focus diffuse sunlight from Josh's mylar tunnels?

(1) Accordioned paper lanterns, floor-standing, the size of a body?
Motorized to turn their heads slowly, tracking the sun like a sunflower, but spilling light ...

(2) A layer of flat clouds of round curved "chips" about 5-10 cm wide, made of frosted diffuser material — or, even better, cut into intricate lace?
Each cloud could be pretty compact — no more than, say, 1 m wide, so each array would be about 100 pieces at the very most.
Each piece could be suspended on a string.  Some strings are on motors.


Diverse images, none of which capture what I’m imagining.






Lenka Novakova

Suspended 1 m wide cast glass disks into which she projected video of rivers








From body shadows to bodily attention: Automatic orienting of tactile attention driven by cast shadows

(Thanks to David Morris )
From body shadows to bodily attention: Automatic orienting of tactile attention driven by cast shadows
• Francesco Pavani
• Paola Rigo
• Giovanni Galfano
 http://www.sciencedirect.com/science/article/pii/S1053810014001202

Abstract
Body shadows orient attention to the body-part casting the shadow. We have investigated the automaticity of this phenomenon, by addressing its time-course and its resistance to contextual manipulations. When targets were tactile stimuli at the hands (Exp.1) or visual stimuli near the body-shadow (Exp.2), cueing effects emerged regardless of the delay between shadow and target onset (100, 600, 1200, 2400 ms). This suggests a fast and sustained attention orienting to body-shadows, that involves both the space occupied by shadows (extra-personal space) and the space the shadow refers to (own body). When target type became unpredictable (tactile or visual), shadow-cueing effects remained robust only for tactile targets, as visual stimuli showed no overall reliable effects, regardless of whether they occurred near the shadow (Exp.3) or near the body (Exp.4). We conclude that mandatory attention shifts triggered by body-shadows are limited to tactile targets and, instead, are less automatic for visual stimuli.

Keywords
• Body perception; 
• Shadow; 
• Spatial attention; 
• Multisensory; 
• Touch; 
• Vision

[Synthesis] Protentive and retentive temporality (Was: Notes for Lighting and Rhythm Residency Nov (13)17-26)

Casting the net widely to AME + TML + Synthesis: here follow very raw notes that will turn into the plans for Synthesis Center's Lighting and Rhythm Residency, Nov (13)17-26.  Forgive me for the roughness, but I wanted to give as early a note as possible about what we are trying to do here.  Think of the work here as live, experientially rich yet expertly built experiments on temporality -- a sense of change, dynamic, rhythm, or more generally temporal texture.

Please propose experiential provocations relevant to temporality, especially those that use modulated, animate lighting. 

I draw special attention to this phenomenological, Husserlian proposition:

“… something more ambitious attempted along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory  sound of a foot fall which messes with the neat and tidy
notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?"

This would be an apt application of bread and butter statistical DSP methods!
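One bread-and-butter sketch, assuming a tracked 2-D foot (or body-centroid) position: constant-velocity extrapolation a fraction of a second ahead, so the rendered shadow or silhouette leads the mover. All constants and names are illustrative.

import numpy as np

def anticipate(positions, lead_s, fs):
    """positions: (N, 2) recent tracked x,y samples at fs Hz.
    Returns the position extrapolated lead_s seconds into the future."""
    p = np.asarray(positions, dtype=float)
    velocity = np.mean(np.diff(p[-5:], axis=0), axis=0) * fs   # smoothed velocity in px/s
    return p[-1] + velocity * lead_s

# Foot moving right at 100 px/s, sampled at 100 Hz; render the shadow 0.25 s ahead.
track = np.column_stack([np.linspace(0, 100, 101), np.full(101, 240.0)])
print(anticipate(track, lead_s=0.25, fs=100))   # -> approximately [125. 240.]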

TODOS:

• Chris Z. and I are reaching toward a strand of propositions that are both artistically relevant and phenomenologically informative.  We can start with a double strand of fragments of prior art and prior questions relevant to "first person" / geworfen temporality (versus spectatorship as a species of vorhanden attitude, which is not in Synthesis' scope, by mandate).
We need to seed this with some micro-etudes, but we expect that we'll discover more as we go.  This requires that we do all the tech development prior to the event: all gear acquisition, installation, and software engineering should be done prior to Nov 13.  The micro scale is a way to fight the urge to make a performance or an installation out of this scratch work.

Informed by conversation with Chris Z and all parties interested in contributing ideas on what we do during the LRR Nov 17-26, Chris R (and I) will
• Agree on outcomes
• Set a timeline
• Organize teams
• Plan publicity and documentation


Begin forwarded message:

From: Xin Wei Sha <Xinwei.Sha@asu.edu>
Subject: Re: [Synthesis] Notes for Lighting and Rhythm Residency Nov (13)17-26
Date: October 4, 2014 at 2:31:10 PM MST

Please please please before we dive into more gear specs

What are the experiential provocations being proposed?

For example, Omar, everyone, can you please write into some common space more example micro-studies  similar to Adrian’s examples?
(See the movement exercises that MM has drawn up for past experiments for more examples.)
Here at Synthesis, I must insist on this practice, prior to buying gear, so that we have a much greater ratio of
propositions : gadgets.

Thank you, let’s play.
Xin Wei

_________________________________________________________________________________________________

On Oct 4, 2014, at 12:53 PM, Omar Faleh <omar@morscad.com> wrote:

I got the chance lately to work with the Philips Nitro strobes, which are intensely stronger than the Atomic 3000s, for example. It is an LED strobe, so you can pulse, flicker, and keep it on for quite a while without having to worry about discharge and re-charge... and being an all-LED strobe, it isn't as voltage-hungry as the Atomics...

The LED surface is split into 6 sub-rectangles that you can address individually or animate with preset effects, which allows for a nice play with shadows with only one light (all DMX-controlled),
and there is an RGB version of it too... so no need for gels and colour changers.

I am also looking into some individually-addressable RGB LED strips. I am placing the order today, so I will hopefully be able to test and report the findings soon.


_________________________________________________________________________________________________

On 2014-10-04, at 3:30 PM, Adrian Freed <adrian@adrianfreed.com> wrote:

Sounds like a fun event!

Does the gear support simple temporal displacement modulations, e.g., delaying one's shadow or a projected image of oneself?

This is rather easy to do with the right gear.
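For the simple delayed-shadow or delayed-image case, the core is just a frame ring buffer; here is a Python sketch with stand-in frames (the 30 fps rate and the ~1 s delay are illustrative, and in practice this would sit in the video chain, e.g. a Jitter matrix delay).

from collections import deque
import numpy as np

class FrameDelay:
    def __init__(self, delay_frames):
        # keep delay_frames + 1 frames so the oldest is exactly delay_frames behind
        self.buf = deque(maxlen=delay_frames + 1)

    def process(self, frame):
        """Push the newest frame and return the one from delay_frames ago
        (or the oldest available while the buffer is still filling)."""
        self.buf.append(frame)
        return self.buf[0]

delay = FrameDelay(delay_frames=30)          # ~1 s at 30 fps
for i in range(35):
    out = delay.process(np.full((4, 4), i))  # tiny stand-in "frames" labelled by index
print(int(out[0, 0]))                        # -> 4, i.e. exactly 30 frames behind frame 34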

I would like to see something more ambitious attempted along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory sound of a foot fall which messes with the neat and tidy
notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?

It would also be interesting to modulate a scrambling of oneself and connect its intensity to movement intensity. Navid has done things similar to this with sound. The experience was rather predictable but might well be different visually.


_________________________________________________________________________________________________

On Oct 4, 2014, at 11:53 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Chris, Garrett, Julian, Omar, Chris Z, Evan, Byron, Prashan, Althea, Mike B, Ian S, Aniket, et al.

This is a preliminary note to sound out who is down for what for the coming residency on Lighting and Rhythm (LRR).  

The goal is to continue the work on temporality from the IER last Feb-March, and this time to really seriously, experimentally muck with your sense of time by modulating lighting or your vision as you physically move.  First-person experience, NOT designing for the spectator.

We need to identify a more rigorous scientific direction for this residency.  I have been asking people for ideas — I'll go ahead and decide soon!

Please think carefully about:
Core Questions to extend:  http://improvisationalenvironments.weebly.com/about.html
Playing around with lights: https://vimeo.com/tml/videos/search:light/sort:date
Key Background:  http://textures.posthaven.com


The idea is to invite Chris and his students to work [richly] on site in the iStage and have those of us who are hacking time via lighting play in parallel with Chris.  Pavan & students and interested scientists/engineers should be explicitly invited to kibitz.




• Lighting and Rhythm
The way things are shaping up — we are gathering some gadgets to prepare for it.

Equipment requested (some already installed thanks to Pete, Ozzie, and the TML)
Ozone media system in iStage
Chris Ziegler's Wald Forest system (MUST be able to lift off out of the way as necessary within minutes — can an inexpensive motorized solution be installed?)
3 x 6 ? grid of light fixtures with RGB gels, beaming onto floor
IR illuminators and IR-pass camera for tracking
Robe Robin MiniMe Moving Light/ Projector
Hazer (?)
Strobe + diffuser (bounce?)
+ Oculus DK1 (Mike K knows)
+ Google Glass (Chris R can ask Cooper, Ruth @ CSI)

We need to make sure we have a few rich instruments (NOT one-off hacked tableaux!) coded up ahead of time -- hence the call to Max-literate students who would like to try out what we have in order to adapt them for playing in the LRR by November.

Note 1:
Let's be sure to enable multiplexing of the iStage to permit two other groups:
• Video portal - windows: Prashan, Althea Pergakis, Jen Weiler
• Shadow puppetry: Prashan working with Byron

Note 2:
Garth's Singing Bowls are there.  Think about how to integrate such field effects.
Mike, can you provide a Max patch to control them — ideally via OSC — but at least to fade them up/down without having to physically touch any of the SB hardware?

Note 3:
This info should go on the lightingrhythm.weebly.com  experiment website that the LRR leads should create Monday unless someone has a better solution — it must be editable by the researchers and experiment leads themselves.  Clone from http://improvisationalenvironments.weebly.com !

Xin Wei

_________________________________________________________________________________________________


On Sep 4, 2014, at 8:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

I am afraid I can't be very helpful here. I don't do MIR work myself. The field for the most part does
offline analyses of large data sets using musicologically naive Western musical concepts of pitch and rhythm.

One exception to the realtime/offline choice is from our most recent graduate student to work on the
beat tracking problem, Eric Battenberg. Here is his dissertation: http://escholarship.org/uc/item/6jf2g52n#page-3
There is interesting machine learning going on in that work, but it presumes that one can make a reliable
onset detector, which is a reasonable (but narrow) assumption for certain percussion sounds and drumming practice.

The questions of phase and "in sync" raised below interest me greatly. There is no ground truth to the beat
(up or down or on the "beat"). I remember being shocked recently to discover that a bunch of research on dance/music entrainment relied as a reference on hand-labeled visual beat markings from "expert listeners in the computer music lab next door". Various concepts such as "perceptual onset time" have been developed to sufficiently complicate this question and explain the difficulty people have observing consensus on musical event timing and relating a particular beat measurement to features of the acoustic signals.
Even a "simple" case, bass and drums, is extremely difficult to unravel. The bass being a low-frequency instrument complicates the question of "onset" or moment of the beat. The issue of who in this pair is determining the tempo
is challenging, and the usual handwaving that the tempo is an emergent coproduction of the performers is not very helpful in itself in elaborating the process or identifying which features of the action and sound are relevant to the entrainment. My guess is that we will find models like the co-orbital arrangement of Saturn's moons Epimetheus and Janus.
What are the system identification tools to reveal these sorts of entrainment structures? Can this be done from the sound
alone or do we have to model the embodied motions that produce the sounds?


NOTE from Adrian, XW, Mike Krzyzaniak on the Percival-Tzanetakis Tempo Estimator:

_________________________________________________________________________________________________


On Sep 3, 2014, at 6:38 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Phase: I'm interested both in the convention of syncing on peaks
and in the larger range of temporal entrainment phenomena that Adrian has identified with suggestive terminology.
In practice, I would apply several different measures in parallel.

Yes, it would be great to have a different measure.  For example, one that detects when a moderate number (dozens to 100) of irregular rhythms have a larger number of simultaneous peaks.  This is a weaker criterion than being in phase, and does not require periodicity.
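Such a measure could be sketched as follows (Python, assuming scipy; the peak-picking prominence, the 100 ms coincidence window, and the synthetic test streams are all illustrative choices, not a worked-out proposal).

import numpy as np
from scipy.signal import find_peaks

def coincident_peaks(streams, fs, window_s=0.1):
    """streams: list of equal-length 1-D arrays. Returns, per sample, how many
    streams have a local peak within +/- window_s of that sample."""
    n = len(streams[0])
    half = int(window_s * fs)
    counts = np.zeros(n, dtype=int)
    for s in streams:
        peaks, _ = find_peaks(s, prominence=np.std(s))   # per-stream peak picking
        hit = np.zeros(n, dtype=bool)
        for p in peaks:
            hit[max(0, p - half):p + half + 1] = True
        counts += hit
    return counts

# Example: 40 irregular streams that share one occasional common burst.
rng = np.random.default_rng(0)
fs, n = 100, 1000
common = np.zeros(n); common[500] = 1.0
streams = [np.convolve(rng.random(n) * 0.3 + common,
                       np.hanning(9), mode="same") for _ in range(40)]
counts = coincident_peaks(streams, fs)
print(counts.max(), int(np.argmax(counts)))   # many streams peak near sample 500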

Xin Wei


_________________________________________________________________________________________________


On Sep 2, 2014, at 5:07 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Of course we could reduce the 6 second lag by reducing the window sizes and increasing the hop sizes, at the expense of resolution. Also, rather than using the OSS calculation provided, perhaps we could just use a standard amplitude follower that sums the absolute value of the signal with the absolute value of the Hilbert Transform of the signal and filters the result. This would save us from decimating the signal on input and reduce the amount of time needed to gather enough samples for autocorrelation (at the expense of accuracy, particularly for slow tempi).
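A sketch of that kind of amplitude follower, in Python with scipy rather than the Marsyas code: scipy's hilbert() returns the analytic signal, so the Hilbert transform is its imaginary part; the cutoff and the test signal are illustrative.

import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_follower(x, fs, cutoff_hz=10.0):
    analytic = hilbert(x)
    raw_env = np.abs(x) + np.abs(analytic.imag)     # the |x| + |H{x}| sum described above
    b, a = butter(2, cutoff_hz / (fs / 2))          # 2nd-order low-pass to smooth the ripple
    return filtfilt(b, a, raw_env)

# Test: a 220 Hz tone whose loudness pulses at 2 Hz; the follower should trace the pulse.
fs = 4000
t = np.arange(0, 2, 1 / fs)
x = (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 220 * t)
env = amplitude_follower(x, fs)
print(round(float(env.min()), 3), round(float(env.max()), 3))
# ~0 at the quiet points, ~1.3 at the loud ones (the |x| + |H{x}| sum overshoots the true amplitude a bit)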

What are you ultimately using this algorithm for? Percival-Tzanetakis also doesn't keep track of phase. If you plan on using it to take some measure of metaphorical rhythm between, say, humans as they interact with each other or the environment, then it seems like phase would be highly important. Are we in sync or syncopated? Am I on your upbeats or do we together make a flam on the downbeats?

Mike

_________________________________________________________________________________________________


On Tue, Sep 2, 2014 at 4:09 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi Adrian,

Mike pointed out what for me is a serious constraint in the Percival-Tzanetakis tempo estimator: it is not realtime.
I wonder if you have any suggestions on how to modify the algorithm to run more "realtime", with less buffering, if that's the right word for it…

Anyway, I'd trust Mike to talk with you, since this is more your competence than mine.  Cc me for my edification and interest!

Xin Wei

_________________________________________________________________________________________________

On Sep 2, 2014, at 12:06 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Hi Xin Wei,

I read the paper last night and downloaded the Marsyas source, but only the MATLAB implementation is there. I can work on getting the C++ version and porting it, but the algorithm has some serious caveats that I want to run by you before I get my hands too dirty.

The main caveat is that it was not intended to run in real-time. The implementations they provide take an audio file, process the whole thing, and spit back one number representing the overall tempo.

"our algorithm is more accurate when these estimates are accumulated for an entire audio track"

It could be adapted to run in sort-of real time, but at 44.1k the tempo estimation will always lag by 6 seconds, and at a control rate of 30 ms (i.e. the rate touchOSC uses to send accelerometer data from an iPhone) the algorithm as described will have to gather data for over 2 hours to make an initial tempo estimate and will only update once every 5 minutes.

Once I get the C++ source I can give an estimate of how difficult it might be to adapt (in the worst-case scenario it would be time-consuming, but not terribly difficult, to re-implement the whole thing in your language of choice).

If you would still like me to proceed let me know and I will contact the authors about the source.

Mike

________________________________________________________________________________________________



On Mon, Sep 1, 2014 at 3:45 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
beat~ hasn't worked well for our research purposes so I'm looking for a better instrument.

I'm no expert, but P & T carefully analyze the extant techniques.
The keyword is 'streamlined'.

Read the paper.  Ask Adrian and John.

Xin Wei

IMU's and sensor fusion source

A thread I wanted to share from Mariantina, Adrian, et al.

Begin forwarded message:

From: Adrian Freed <adrian@cnmat.berkeley.edu>
Subject: Re: good comparison of IMU's and sensor fusion source
Date: August 23, 2014 at 12:02:17 PM MST
To: Sha Xin Wei <shaxinwei@gmail.com>
Cc: Vangelis Lympouridis <vl_artcode@yahoo.com>, John MacCallum <john@cnmat.berkeley.edu>, Garth Paine <Garth.Paine@asu.edu>, Todd Ingalls <TestCase@asu.edu>, Assegid Kidane <Assegid.Kidane@asu.edu>, post@synthesis.posthaven.com, Teoma Naccarato <teomajn@gmail.com>

Vangelis has been tracking the ready-to-wear IMU space more carefully than I.
I am hoping IMU's are a temporary bootstrap and that we will have less encumbering techniques with
absolute position measurements such as the upcoming Sixense Stem system.

My fear is that we will be surrounded by even cheaper, slower,  uncalibratable IMU's before the situation
improves substantially.

Keep an eye out for the next-gen x-OSC with a built-in charger and better IMU.


On Aug 23, 2014, at 6:50 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Cool, thanks.  Adrian suggested last year a ready-to-wear IMU that went for ~$200-$250.
Can't recall the make.
Xin Wei


On Aug 22, 2014, at 7:11 PM, Vangelis Lympouridis <vl_artcode@yahoo.com> wrote:

That's great! Thanks a lot Adrian.

Vangelis Lympouridis, PhD
Visiting Scholar,
School of Cinematic Arts
University of Southern California

Senior Research Consultant,
Creative Media & Behavioral Health Center
University of Southern California
http://cmbhc.usc.edu

Whole Body Interaction Designer
www.inter-axions.com

vangelis@lympouridis.gr
Tel: +1 (415) 706-2638

-----Original Message-----
From: Adrian Freed [mailto:adrian@cnmat.berkeley.edu]
Sent: Friday, August 22, 2014 10:47 AM
To: Xin Wei Sha; Vangelis L
Cc: John MacCallum
Subject: good comparison of IMU's and sensor fusion source

https://github.com/kriswiner/MPU-6050/wiki/Affordable-9-DoF-Sensor-Fusion
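For orientation, a minimal complementary-filter sketch of the accelerometer + gyro fusion idea. This is not taken from the linked repository, which covers full 9-DoF filters such as Madgwick's; the axes, rates, and constants here are illustrative assumptions.

import math

def complementary_pitch(prev_pitch, gyro_rate_dps, accel_x, accel_z, dt, alpha=0.98):
    """prev_pitch in degrees; gyro_rate_dps = pitch rate from the gyro;
    accel_x/z = accelerometer axes; dt = sample interval in seconds."""
    gyro_estimate = prev_pitch + gyro_rate_dps * dt               # integrate the gyro (good short-term)
    accel_estimate = math.degrees(math.atan2(accel_x, accel_z))   # gravity direction (good long-term)
    return alpha * gyro_estimate + (1 - alpha) * accel_estimate   # blend the two

# Board held still at ~10 degrees pitch, with a small constant gyro bias of 0.5 deg/s:
pitch = 0.0
for _ in range(200):                       # 2 s at 100 Hz
    pitch = complementary_pitch(pitch, 0.5, math.sin(math.radians(10)),
                                math.cos(math.radians(10)), 0.01)
print(round(pitch, 1))                     # converges toward ~10 degrees despite the gyro bias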





On Aug 23, 2014, at 11:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

I would add that the observation/performance pair problems are connected to the problems of signal/noise - both dependent
on POV and preschema. Another tactic I have started to explore is the material agency of "lenses" (or filters as lenses are framed in the signal processing literature). This points to bringing in the material aspects of intersubjectivity - one of the key conundrums of quantum theory that has had to invoke a lot of magic around the macroscopic and microscopic properties
of "apparatus" to keep the rest of the theory coherent.