lighting instrument experiments in everyday space (i.e. Brickyard)

I’d like to (re)introduce you and Omar Faleh, who is coming from the TML for the LRR in November.

In parallel to the iStage work, we should push forward on the animation of lighting in the Brickyard by
• Bringing natural sunlight into the BY commons (Josh)
• Modulating that natural light using Byron’s mechatronics
• Making motorized desk lamps using Byron’s mechatronics
• Designing rhythms (ideas from Omar, using Julian’s rhythm kit)

Can we see what you have come up with this coming week?

Meanwhile let me encourage you to communicate with Omar to see what you can create by the time of his arrival circa Nov 17.

To be clear, this is distinct from the LRR in the iStage, but it is as strategically important, in that it is the extension of that work into everyday space, which is one of Synthesis’ main research extensions this year.

I’d like to write this work into a straw proposal for external sponsorship of your work… so it’d be great to have some pix soon!

Cheers, Xin Wei

LIGHT-ECOLOGY GAME: Chiaroscuro instrument

I know that Garrett, Omar, or any of the cc’d Max-perts can do this, but 

Ian and Michael, I propose that you do this in Max as an exercise,
consulting with Pete, Garrett and Julian,
document it, and enter it into our GitHub so we can play with it.

Omar or Garrett can write and share the code implementing the logic described below, extracting “rhythm" from the movement detected from a fiducial.  (Ask Qiao if that is possible.)

We’ve discussed the lighting hardware instruments. 
Let’s now propose some behaviours AKA instruments and games:

LIGHT-ECOLOGY GAME: Chiaroscuro instrument

The array of overhead lights is kept at a medium intensity, flickering with a pattern derived by downsampling a video of glowing embers in closeup.  [Julian’s code should have a patch to downsample video to DMX, but it takes minutes to write this in Max/Jitter.]
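Until the Max/Jitter patch exists, here is a minimal sketch of the downsampling step, assuming OpenCV for video decode and a 3 x 6 lamp grid (the grid size and flicker depth are guesses; the actual DMX send depends on whatever controller is installed):

```python
import cv2                      # assumption: OpenCV available for video decode
import numpy as np

GRID_ROWS, GRID_COLS = 3, 6     # assumption: one value per overhead lamp
BASE, DEPTH = 128, 60           # medium base intensity, modest flicker depth

def ember_video_to_dmx(path):
    """Yield one flat list of DMX intensities (0-255) per video frame."""
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        # Downsample the whole frame to one luminance sample per lamp.
        coarse = cv2.resize(gray, (GRID_COLS, GRID_ROWS),
                            interpolation=cv2.INTER_AREA)
        # Center the flicker around the medium base level.
        dmx = np.clip(BASE + DEPTH * (2.0 * coarse - 1.0), 0, 255)
        yield dmx.astype(np.uint8).flatten().tolist()
    cap.release()
```

Each yielded list would be written to the lamp array at (or below) the video’s frame rate; the per-person offsets described below are then added per lamp.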

People who walk onto the floor pick up a trackable fiducial. Depending on which one they pick up, they become a Lightener or a Darkener.  [MoCap in Stauffer B123 supports this — I don't know about iStage.  How many fiducials can be tracked at a time, at most?]

Where they stand, their presence brightens or darkens the lamp under which they stand — it adds a +/- offset to the lamp value.

Waving the trackable magnifies their effect — multiply the offset by a ratio that is a function of the averaged speed.  Calibrate so that it does not take much effort, and make the slide-up very short but the slide-down many, many samples — so the decay happens over, say, 60-300 sec.  Maybe the onset of movement should increment (or decrement) immediately, while continued movement extends relaxation to the default average level.
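A sketch of that envelope, assuming per-frame fiducial positions are already coming in; the asymmetric slide mimics Max’s slide object, and every constant is a placeholder to be calibrated on the floor:

```python
import math

FPS = 30.0                       # assumption: tracking frame rate
SLIDE_UP = 0.5                   # fast attack: onset registers within frames
SLIDE_DOWN = 1.0 / (120 * FPS)   # slow release: ~120 s decay (tune 60-300 s)
BASE_OFFSET = 20.0               # resting +/- DMX offset while standing still
SPEED_GAIN = 0.05                # how strongly waving magnifies the effect

class Fiducial:
    def __init__(self, sign):    # sign = +1 Lightener, -1 Darkener
        self.sign, self.prev = sign, None
        self.avg_speed, self.offset = 0.0, 0.0

    def update(self, pos):
        """Call once per frame with (x, y); returns the current lamp offset."""
        if self.prev is not None:
            speed = math.dist(pos, self.prev) * FPS
            self.avg_speed += 0.2 * (speed - self.avg_speed)  # short average
        self.prev = pos
        target = self.sign * BASE_OFFSET * (1.0 + SPEED_GAIN * self.avg_speed)
        # Asymmetric slide: jump toward a bigger effect, relax back slowly.
        coeff = SLIDE_UP if abs(target) > abs(self.offset) else SLIDE_DOWN
        self.offset += coeff * (target - self.offset)
        return self.offset        # add this to the lamp's ember-flicker value
```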

Re: [Synthesis] lighting instrument experiments in everyday space (i.e. Brickyard)

Hi everyone,

I spent a few minutes playing (!) with the dichromatic shafts that Matt created. These have a lot of interesting potential. I attached a few photos and a couple of very rough videos (one-handed/no-handed capture on an old phone) just to give a few hints at this potential. It goes without saying that these perceptual experiences are far richer in person.

The static images show the effects on the wall of light reflected off of a random pile of these pieces. This static configuration produces a projection that changes shape and color as the sun moves across the sky. While this is cool, it is certainly nothing new.

The videos show a couple of quick experiments moving the shafts. Rotating them, one can get shafts of light that move like clock hands. Imagine a large cloud of these on a wall, all rotating at their own frequencies (all of which could be variable). Most interesting of all (at least to me) is the effect created by bending the shafts. This allows the light to be focused and spread in organic shapes. With several of these simultaneously bent (not well documented), the effect is quite striking, since they all bend slightly differently and distort in a broadly cohesive but individually independent way. This process could produce the "shafts of light through trees" effect that Xin Wei suggested. I would be very interested in seeing how this works with the light guide that Josh is designing.

Cheers,
Byron



On Thu, Oct 30, 2014 at 1:41 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Following Matt and Byron --

How about 

(0) Shafts of sunlight beaming through the space as if through a forest canopy — How can we focus diffuse sunlight from Josh’s mylar tunnels?

(1) Accordioned paper lanterns, floor-standing, the size of a body?
Motorized to turn its head slowly, tracking the sun like a sunflower, but spilling light…

(2) A layer of flat clouds of round curved “chips” about 5-10 cm wide, made of frosted diffuser material — or even better cut into intricate lace?
Each cloud could be pretty compact — no more than say 1m wide, so each array would be about 100 pieces at the very most.
Each piece could be suspended on a string.   Some strings are on motors.


Diverse images, none of which capture what I’m imagining.






Lenka Novakova

Suspended 1m wide cast glass disks into which she projected video of rivers







On Oct 29, 2014, at 5:56 PM, Matthew Briggs <matthewjbriggsis@gmail.com> wrote:

A follow up,

I just spoke with Byron and hypothesized some potential applications of the "icicles."

In the application above we could create movement based on the refraction of the
natural light, or interject movement based on an applied rhythm. 

We also spoke of using this light refraction as a feedback loop, as a form of rhythm.
This approach could be applied to movement in forms beyond the panels, using
the natural light as a parameter.

A suggested form could be a desk lamp or a corner extension. The "icicles" 
could be stacked from top to bottom and rotated from the base to create a vertical
or horizontal deflection. The "icicles" could also be installed in a flexible panel in
a more rigid application to give direct control over their movement.

We discussed the fact that the application of the "icicles" needs to be architectural
rather than sculptural. I think this can be achieved in all of these applications,
but I would like some input on them. Unfortunately I cannot find
any examples of the other applications, but I can create sketches if it is unclear.

Byron also assured me that the mechatronics would work in these scenarios. 

PS (I've attached the images in this email)

thanks,

-matthew briggs
 undergraduate researcher @synthesiscenter

On Wed, Oct 29, 2014 at 4:43 PM, Matthew Briggs <matthewjbriggsis@gmail.com> wrote:
Hello all,

I may have found a way to incorporate Byron's mechatronics to utilize natural light.
I have many laser-cut "icicles" of dichromatic acrylic (picture attached).

These "icicles" do a great job of capturing light. I could hang many of these from a panel 
which either may be recessed in place of a ceiling panel or hung from a ceiling
panel. This panel could be made of a flexible material that already exists:
or out of a material that is laser cut to become flexible. The panel could be animated from
a number of control points using Byron's mechatronics. This would create an effect similar to the
combination of:
+
= potential light installation
We could then create an array of panels, or clusters of panels, which could take in rhythm data
via xOSC and output it as movement of the "icicles" in space. 
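As one hedged guess at what that mapping could look like in code (the OSC address, port, and scaling below are all made up; the real x-OSC address space and the mechatronic interface would replace them), using python-osc:

```python
import math
from pythonosc.dispatcher import Dispatcher        # assumption: python-osc
from pythonosc.osc_server import BlockingOSCUDPServer

N_POINTS = 6                                       # control points per panel
PHASES = [i * 2 * math.pi / N_POINTS for i in range(N_POINTS)]
state = {"intensity": 0.0}

def on_rhythm(addr, value):
    # Hypothetical address for whatever rhythm feature the kit actually sends.
    state["intensity"] = max(0.0, min(1.0, float(value)))

def control_point_targets(t):
    """Displacement targets in [0, 1], one per control point; the phase
    offsets make the panel ripple instead of pumping in unison."""
    a = state["intensity"]
    return [0.5 + 0.5 * a * math.sin(2 * math.pi * 0.2 * t + p)
            for p in PHASES]

dispatcher = Dispatcher()
dispatcher.map("/rhythm/intensity", on_rhythm)     # placeholder address
server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
# In practice: run the server in a thread and poll control_point_targets().
```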

It is important to have a certain amount of light in the space to create a visual equilibrium and
relieve tension in the eyes. These "icicles" will not provide a significant introduction of light
into the space. However, they will refract the existing light in a dynamic and non-intrusive way.
If this light is paired with Josh's light tube system or the already existing natural light,
I believe it could create a small amplification of that light but, more importantly, a psychological
effect that may have equal weight on the users of the space. I think it is important to have subtle
but aesthetically pleasing sources of light in the workspace, and this may function in that regard.

Any thoughts or criticisms?

I could fabricate these panels and install a small cluster by the workshop date.
Byron do you think it would be possible to animate them by this time?
I will leave the bag of "icicles" on your desk at synthesis for reference.

Side note: I leave Monday for the AR2U conference in Ames, Iowa.
So I won't be able to continue on this project until I get back on the 10th.

thanks,

-matthew briggs
 undergraduate researcher @synthesiscenter

On Sun, Oct 26, 2014 at 6:41 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Dear Matt, Kevin, and Josh,

I’d like to (re)introduce you and Omar Faleh, who is coming from the TML for the LRR in November.

In parallel to the iStage work, we should push forward on the animation of lighting in the Brickyard by
• Bringing natural sunlight into the BY commons (Josh)
• Modulating that natural light using Byron’s mechatronics
• Making motorized desk lamps using Byron’s mechatronics
• Designing rhythms (ideas from Omar, using Julian’s rhythm kit)

Can we see what you have come up with this coming week?

Meanwhile let me encourage you to communicate with Omar to see what you can create by the time of his arrival circa Nov 17.

To be clear, this is distinct from the LRR in the iStage, but it is as strategically important, in that it is the extension of that work into everyday space, which is one of Synthesis’ main research extensions this year.

I’d like to write this work into a straw proposal for external sponsorship of your work… so it’d be great to have some pix soon!

Cheers,
Xin Wei


From body shadows to bodily attention: Automatic orienting of tactile attention driven by cast shadows

(Thanks to David Morris)
From body shadows to bodily attention: Automatic orienting of tactile attention driven by cast shadows
• Francesco Pavani
• Paola Rigo
• Giovanni Galfano
 http://www.sciencedirect.com/science/article/pii/S1053810014001202

Abstract
Body shadows orient attention to the body-part casting the shadow. We have investigated the automaticity of this phenomenon, by addressing its time-course and its resistance to contextual manipulations. When targets were tactile stimuli at the hands (Exp.1) or visual stimuli near the body-shadow (Exp.2), cueing effects emerged regardless of the delay between shadow and target onset (100, 600, 1200, 2400 ms). This suggests a fast and sustained attention orienting to body-shadows, that involves both the space occupied by shadows (extra-personal space) and the space the shadow refers to (own body). When target type became unpredictable (tactile or visual), shadow-cueing effects remained robust only for tactile targets, as visual stimuli showed no overall reliable effects, regardless of whether they occurred near the shadow (Exp.3) or near the body (Exp.4). We conclude that mandatory attention shifts triggered by body-shadows are limited to tactile targets and, instead, are less automatic for visual stimuli.

Keywords
• Body perception; 
• Shadow; 
• Spatial attention; 
• Multisensory; 
• Touch; 
• Vision

[Synthesis] Protentive and retentive temporality (Was: Notes for Lighting and Rhythm Residency Nov (13)17-26)

Casting the net widely to AME + TML + Synthesis, here follow very raw notes that will turn into the plans for Synthesis Center’s Lighting and Rhythm Residency Nov (13)17-26.  Forgive me for the roughness, but I wanted to give as early a note as possible about what we are trying to do here.  Think of the work here as live, experientially rich, yet expertly built experiments on temporality -- a sense of change, dynamic, rhythm, or more generally temporal texture.

Please propose experiential provocations relevant to temporality, especially those that use modulated, animate lighting. 

I draw special attention to the phenomenological, Husserlian proposition:

“… something more ambitious attempted along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory sound of a footfall, which messes with the neat and tidy notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?”

This would be an apt application of bread-and-butter statistical DSP methods!

TODOS:

• Chris Z. and I are reaching toward a strand of propositions that are both artistically relevant and phenomenologically informative.  We can start with a double strand of fragments of prior art and prior questions relevant to “first person” / geworfen temporality (versus spectatorship as a species of vorhanden attitude, which is not in Synthesis’ scope, by mandate).
We need to seed this with some micro-etudes, but we expect that we’ll discover more as we go.  This requires that we do all the tech development prior to the event: all gear acquisition, installation, and software engineering should be done prior to Nov 13.  The micro-etude is a way to fight the urge to make a performance or an installation out of this scratch work.

Informed by conversation with Chris Z and all parties interested in contributing ideas on what we do during the LRR Nov 17-26, Chris R (and I) will:
• Agree on outcomes
• Set a timeline
• Organize teams
• Plan publicity and documentation


Begin forwarded message:

From: Xin Wei Sha <Xinwei.Sha@asu.edu>
Subject: Re: [Synthesis] Notes for Lighting and Rhythm Residency Nov (13)17-26
Date: October 4, 2014 at 2:31:10 PM MST

Please please please before we dive into more gear specs

What are the experiential provocations being proposed?

For example, Omar, everyone, can you please write into some common space more example micro-studies similar to Adrian’s examples?
(See the movement exercises that MM has drawn up for past experiments for more examples.)
Here at Synthesis, I must insist on this practice, prior to buying gear, so that we have a much greater ratio of
propositions : gadgets.

Thank you, let’s play.
Xin Wei

_________________________________________________________________________________________________

On Oct 4, 2014, at 12:53 PM, Omar Faleh <omar@morscad.com> wrote:

I got the chance lately to work with the Philips Nitro strobes which is intensely stronger than the atomics 3000 for example. It is an LED strobe,  so you can pulse, flicker, and keep on for quite a while without having to worry about discharge and re-charge.. and being an all-led strobe, it isn't as voltage- hungry as the atomics..

 The LED surface is split into 6 sub rectangle that you can address individually or can animate by preset effects, which allows for a nice play with shadows with only one light (all DMX-controlled)
and there is an RGB version of it too.. so no need for gels and colour changers.

I am also looking into some individually-addressable RGB LED strips . Placing the order today so I will hopefully be able to test and report the findings soon


_________________________________________________________________________________________________

On 2014-10-04, at 3:30 PM, Adrian Freed <adrian@adrianfreed.com> wrote:

Sounds like a fun event!

Does the gear support simple temporal displacement modulations, e.g., delaying one's shadow or a projected image of oneself?

This is rather easy to do with the right gear.

I would like to see something more ambitious attempted along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory sound of a footfall, which messes with the neat and tidy notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?
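Both variants reduce to a signed time offset on the tracked position: a ring buffer replays the past (the delayed shadow), and a velocity extrapolation guesses the future (the anticipatory one). A minimal sketch, assuming per-frame (x, y) tracking of a foot or body center:

```python
from collections import deque

class TimeShiftedBody:
    """Position of the body shifted in time by `lead` seconds: negative lead
    replays the past (delayed shadow), positive lead extrapolates into the
    future (anticipatory shadow)."""
    def __init__(self, fps=30.0, lead=0.3):
        self.fps, self.lead = fps, lead
        self.history = deque(maxlen=int(fps * 10))   # keep ~10 s of positions

    def update(self, pos):
        self.history.append(pos)
        if self.lead < 0:                            # delayed: look backward
            n = min(len(self.history), int(-self.lead * self.fps) + 1)
            return self.history[-n]
        if len(self.history) < 2:
            return pos
        (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
        vx, vy = (x1 - x0) * self.fps, (y1 - y0) * self.fps
        # Crude linear prediction; in practice, smooth the velocity estimate.
        return (x1 + vx * self.lead, y1 + vy * self.lead)
```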

It would also be interesting to modulate a scrambling of oneself and connect its intensity to movement intensity. Navid has done things similar to this with sound. The experience was rather predictable but might well be different visually.
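A sketch of that scrambling, assuming numpy image frames and a movement-intensity value in [0, 1] from whatever tracker is in use; tiles of the self-image are swapped in proportion to intensity:

```python
import numpy as np

def scramble(frame, intensity, tile=32, rng=np.random.default_rng()):
    """Swap random tile pairs: no movement, no scrambling."""
    rows, cols = frame.shape[0] // tile, frame.shape[1] // tile
    out = frame.copy()
    for _ in range(int(intensity * rows * cols)):
        r1, c1 = rng.integers(0, rows), rng.integers(0, cols)
        r2, c2 = rng.integers(0, rows), rng.integers(0, cols)
        a = (slice(r1 * tile, (r1 + 1) * tile), slice(c1 * tile, (c1 + 1) * tile))
        b = (slice(r2 * tile, (r2 + 1) * tile), slice(c2 * tile, (c2 + 1) * tile))
        out[a], out[b] = out[b].copy(), out[a].copy()
    return out
```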


_________________________________________________________________________________________________

On Oct 4, 2014, at 11:53 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Chris, Garrett, Julian, Omar, Chris Z, Evan, Byron, Prashan, Althea, Mike B, Ian S, Aniket, et al.

This is a preliminary note to sound out who is down for what for the coming residency on Lighting and Rhythm (LRR).  

The goal is to continue work on temporality from the IER last Feb-March, and this time really seriously experimentally mucking with your sense of time by modulating lighting or your vision as you physically move.  First-person experience, NOT designing for spectator.    

We need to identify a more rigorous scientific direction for this residency.  I have been asking people for ideas — I’ll go ahead and decide soon!

Please think carefully about:
Core Questions to extend:  http://improvisationalenvironments.weebly.com/about.html
Playing around with lights: https://vimeo.com/tml/videos/search:light/sort:date
Key Background:  http://textures.posthaven.com


The idea is to invite Chris and his students to work [richly] on site in the iStage and have those of us who are hacking time via lighting play in parallel with Chris.   Pavan & students and interested scientists/engineers should be explicitly invited to kibitz.




• Lighting and Rhythm
The way things are shaping up — we are gathering some gadgets to prepare for the residency.

Equipment requested (some already installed thanks to Pete, Ozzie, and TML)
Ozone media system in iStage
Chris Ziegler’s Wald Forest system (MUST be able to lift off out of the way as necessary within minutes — can an inexpensive motorized solution be installed?)
3 x 6 ? grid of light fixtures with RGB gels, beaming onto floor
IR illuminators and IR-pass camera for tracking
Robe Robin MiniMe Moving Light/ Projector
Hazer (?)
Strobe + diffuser (bounce?)
+ Oculus DK1, (Mike K knows )
+ Google Glass (Chris R can ask Cooper, Ruth @ CSI)

We need to make sure we have a few rich instruments (NOT one-off hacked tableaux!) coded up ahead of time -- hence the call to Max-literate students who would like to try out what we have in order to adapt them for playing in the LRR by November.

Note 1:
Let’s be sure to enable multiplex of iStage to permit two other groups:
• Video portal - windows: Prashan, Althea Pergakis, Jen Weiler
• Shadow puppetry: Prashan working with Byron

Note 2:
Garth’s Singing Bowls are there.  Think about how to integrate such field effects.
Mike, can you provide a Max patch to control them — ideally via OSC — but at least to fade up/down without having to physically touch any of the SB hardware?
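In the meantime, a minimal sketch of the fade logic, assuming python-osc and a made-up /bowl/<n>/gain address (whatever address space Mike’s patch exposes would replace it):

```python
import time
from pythonosc.udp_client import SimpleUDPClient   # assumption: python-osc

client = SimpleUDPClient("192.168.1.50", 7000)     # placeholder host/port

def fade(bowl, start, end, seconds, steps=100):
    """Ramp one bowl's gain up or down without touching the hardware."""
    for i in range(steps + 1):
        gain = start + (end - start) * i / steps
        client.send_message(f"/bowl/{bowl}/gain", gain)  # hypothetical address
        time.sleep(seconds / steps)

fade(1, 0.0, 0.8, 5.0)   # e.g. a 5 s fade-up of bowl 1
```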

Note 3:
This info should go on the lightingrhythm.weebly.com  experiment website that the LRR leads should create Monday unless someone has a better solution — it must be editable by the researchers and experiment leads themselves.  Clone from http://improvisationalenvironments.weebly.com !

Xin Wei

_________________________________________________________________________________________________


On Sep 4, 2014, at 8:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

I am afraid I can't be very helpful here. I don't do MIR work myself. The field for the most part does
offline analyses of large data sets using musicologically naive Western musical concepts of pitch and rhythm.

One exception to the realtime/offline choice is from our most recent graduate student to work on the
beat tracking problem, Eric Battenberg. Here is his dissertation: http://escholarship.org/uc/item/6jf2g52n#page-3
There is interesting machine learning going on in that work but it presumes that one can make a reliable
onset detector which is a reasonable (but narrow) assumption for certain percussion sounds and drumming practice.

The questions of phase and "in sync" raised below interest me greatly. There is no ground truth to the beat
(up or down or on the "beat"). I remember being shocked recently to discover that a bunch of research on dance/music entrainment relied as a reference on hand-labeled visual beat markings from "expert listeners in the computer music lab next door". Various concepts such as "perceptual onset time" have been developed to sufficiently complicate this question and explain the difficulty people have observing consensus on musical event timing and relating a particular beat measurement to features of the acoustic signals.
Even a "simple" case, bass and drums, is extremely difficult to unravel. The bass being a low frequency instrument complicates the question of "onset" or moment of the beat. The issue of who in this pair is determining the tempo
is challenging and the usual handwaving that the tempo is an emergent coproduction of the performers is not very helpful in itself in elaborating the process or identifying which features of the action and sound are relevant to the entrainment. My guess is that we will find models like the co-orbital arrangment of Saturn's moons Epimetheus and Janus.
What are the system identification tools to reveal these sorts of entrainment structures? Can this be done from the sound
alone or do we have to model  embodied motions that produce the sounds?


NOTE from Adrian, XW, and Mike Krzyzaniak on the Percival-Tzanetakis tempo estimator:

_________________________________________________________________________________________________


On Sep 3, 2014, at 6:38 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Phase: I’m interested both in the convention of syncing on peaks
and in the larger range of temporal entrainment phenomena that Adrian has identified with suggestive terminology.
In practice, I would apply several different measures in parallel.

Yes, it would be great to have a different measure.  For example, one that detects when a moderate number (dozens to 100) of irregular rhythms have a larger number of simultaneous peaks.  This is a weaker criterion than being in phase, and does not require periodicity.
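One way to sketch such a measure, assuming each rhythm has already been reduced to a sorted array of peak times in seconds; it simply asks what fraction of the streams peak inside the same short window, with no periodicity assumed:

```python
import numpy as np

def simultaneity(peak_times, window=0.1, hop=0.05):
    """peak_times: list of sorted arrays of peak times, one per rhythm.
    Returns (window centers, fraction of rhythms peaking in each window)."""
    t_end = max(t[-1] for t in peak_times if len(t))
    centers = np.arange(0.0, t_end, hop)
    frac = np.array([
        sum(np.any(np.abs(t - c) <= window / 2) for t in peak_times)
        / len(peak_times)
        for c in centers
    ])
    return centers, frac

# e.g. flag moments where over 60% of dozens of irregular rhythms coincide:
# centers, frac = simultaneity(streams); moments = centers[frac > 0.6]
```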

Xin Wei


_________________________________________________________________________________________________


On Sep 2, 2014, at 5:07 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Of course we could reduce the 6 second lag by reducing the window sizes and increasing the hop sizes, at the expense of resolution. Also, rather than using the OSS calculation provided, perhaps we could just use a standard amplitude follower that sums the absolute value of the signal with the absolute value of the Hilbert transform of the signal and filters the result. This would save us from decimating the signal on input and reduce the amount of time needed to gather enough samples for autocorrelation (at the expense of accuracy, particularly for slow tempi).
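For reference, a sketch of that follower, assuming scipy; the |x| + |H(x)| sum is the cheap variant described above, while the magnitude of the analytic signal is the textbook envelope, so both are shown:

```python
import numpy as np
from scipy.signal import hilbert, butter, lfilter

def amplitude_follower(x, fs, cutoff=10.0, cheap=True):
    """Rectify via the Hilbert transform, then low-pass to a control signal."""
    analytic = hilbert(x)                 # imaginary part = Hilbert transform
    # Note: hilbert() is FFT-based, so in real time it runs per buffer.
    if cheap:
        env = np.abs(x) + np.abs(analytic.imag)   # the sum described above
    else:
        env = np.abs(analytic)                    # textbook envelope
    b, a = butter(2, cutoff / (fs / 2))           # 2nd-order low-pass
    return lfilter(b, a, env)             # causal, so usable block-by-block
```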

What are you ultimately using this algorithm for? Percival-Tzanetakis also doesn't keep track of phase. If you plan on using it to take some measure of metaphorical rhythm between, say, humans as they interact with each other or the environment, then it seems like phase would be highly important. Are we in sync or syncopated? Am I on your upbeats or do we together make a flam on the downbeats?

Mike

_________________________________________________________________________________________________


On Tue, Sep 2, 2014 at 4:09 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi Adrian,

Mike pointed out what for me is a serious constraint in the Percival-Tzanetakis tempo estimator: it is not realtime.
I wonder if you have any suggestions on how to modify the algorithm to run more “realtime” with less buffering, if that’s the right word for it…

Anyway, I’d trust Mike to talk with you, since this is more your competence than mine.  Cc me for my edification and interest!

Xin Wei

_________________________________________________________________________________________________

On Sep 2, 2014, at 12:06 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Hi Xin Wei,

I read the paper last night and downloaded the Marsyas source, but only the MATLAB implementation is there. I can work on getting the C++ version and porting it, but the algorithm has some serious caveats that I want to run by you before I get my hands too dirty.

The main caveat is that it was not intended to run in real-time. The implementations they provide take an audio file, process the whole thing, and spit back one number representing the overall tempo.

"our algorithm is more accurate when these estimates are accumulated for an entire audio track"

It could be adapted to run in sort-of real time, but at 44.1k the tempo estimation will always lag by 6 seconds, and at a control rate of 30 ms (i.e. the rate touchOSC uses to send accelerometer data from an iPhone) the algorithm as described will have to gather data for over 2 hours to make an initial tempo estimation and will only update once every 5 minutes.
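Those figures follow from the window bookkeeping; assuming the paper's OSS hop of 128 samples and an autocorrelation window of 2048 OSS samples (treat both as approximate), a quick check:

```python
hop, win = 128, 2048               # OSS hop and autocorrelation window (paper)

fs_audio = 44100.0                 # audio-rate input
print(win * hop / fs_audio)        # ~5.9 s: the "6 second" lag

fs_ctrl = 1 / 0.030                # 30 ms control-rate input (~33 Hz)
print(win * hop / fs_ctrl / 3600)  # ~2.2 h before a first estimate
print(hop * hop / fs_ctrl / 60)    # ~8 min per update, same order as "5 min"
```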

Once I get the C++ source I can give an estimation of how difficult it might be to adapt (in the worst-case scenario it would be time-consuming, but not terribly difficult, to re-implement the whole thing in your language of choice).

If you would still like me to proceed let me know and I will contact the authors about the source.

Mike

________________________________________________________________________________________________



On Mon, Sep 1, 2014 at 3:45 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
beat~ hasn't worked well for our research purposes so I'm looking for a better instrument.

I'm no expert, but P & T carefully analyze the extant techniques.
The keyword is 'streamlined'.

Read the paper.  Ask Adrian and John.

Xin Wei

IMU's and sensor fusion source

A thread I wanted to share from Mariantina, Adrian, et al.

Begin forwarded message:

From: Adrian Freed <adrian@cnmat.berkeley.edu>
Subject: Re: good comparison of IMU's and sensor fusion source
Date: August 23, 2014 at 12:02:17 PM MST
To: Sha Xin Wei <shaxinwei@gmail.com>
Cc: Vangelis Lympouridis <vl_artcode@yahoo.com>, John MacCallum <john@cnmat.berkeley.edu>, Garth Paine <Garth.Paine@asu.edu>, Todd Ingalls <TestCase@asu.edu>, Assegid Kidane <Assegid.Kidane@asu.edu>, post@synthesis.posthaven.com, Teoma Naccarato <teomajn@gmail.com>

Vangelis has been tracking the ready-to-wear IMU space more carefully than I.
I am hoping IMU's are a temporary bootstrap and that we will have less encumbering techniques with
absolute position measurements such as the upcoming Sixense Stem system.

My fear is that we will be surrounded by even cheaper, slower,  uncalibratable IMU's before the situation
improves substantially.

Keep an eye out for the next-gen x-OSC with a built-in charger and better IMU.


On Aug 23, 2014, at 6:50 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

cool, thanks.  Adrian suggested last year a ready-to-wear IMU that went for ~$200-250.
Can’t recall the make.
Xin Wei


On Aug 22, 2014, at 7:11 PM, Vangelis Lympouridis <vl_artcode@yahoo.com> wrote:

That's great! Thanks a lot Adrian.

Vangelis Lympouridis, PhD
Visiting Scholar,
School of Cinematic Arts
University of Southern California

Senior Research Consultant,
Creative Media & Behavioral Health Center
University of Southern California
http://cmbhc.usc.edu

Whole Body Interaction Designer
www.inter-axions.com

vangelis@lympouridis.gr
Tel: +1 (415) 706-2638

-----Original Message-----
From: Adrian Freed [mailto:adrian@cnmat.berkeley.edu]
Sent: Friday, August 22, 2014 10:47 AM
To: Xin Wei Sha; Vangelis L
Cc: John MacCallum
Subject: good comparison of IMU's and sensor fusion source

https://github.com/kriswiner/MPU-6050/wiki/Affordable-9-DoF-Sensor-Fusion





On Aug 23, 2014, at 11:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

I would add that the observation/performance pair problems are connected to the problems of signal/noise - both dependent
on POV and preschema. Another tactic I have started to explore is the material agency of "lenses" (or filters as lenses are framed in the signal processing literature). This points to bringing in the material aspects of intersubjectivity - one of the key conundrums of quantum theory that has had to invoke a lot of magic around the macroscopic and microscopic properties
of "apparatus" to keep the rest of the theory coherent.

[Synthesis] signal correlation as a measure of synchronicity in dance

On the hw wireless sensor side of the story — 

For the work reported in Ubicomp and ISWC 200, TML @ GaTech (Atlanta) pioneered use of the TinyOS platforms in two form factors (the size of an xOSC, and of a US quarter!).
We had a crack team — Giovanni Iachello, Steven Dow (now prof at CMU, a friend to our incoming prof Stacey Kuznetsov :), and Yoichiro Serita (from Sony Labs).

There followed 10 more years of hackers in the movement art + tech world making their own naive solutions, 
and not even getting to either the interesting art or the significant problems to be solved in EE, which lay beyond their scientific judgment.*

The good news is that good-enough on-body IMU’s are affordable to AME.  So Pavan and Ozzie are going to get us a set (soon!)

I am in a hurry though to form a crack team to tackle the 
actual research challenges of 

(1) Understanding non-sonic rhythm as an example of apperception, and

(2) Scaffolding different senses of resonating temporal texture, using for example spectral analogies that can generalize from classical DSP to higher-dimensional time-varying fields.
(This is not as deep as it sounds — it should be amenable to smart engineering, of which ASU has an abundance.
I don’t know if it’s an area with lots of readymade techniques; some expert needs to tell us.
But nothing should stop us from doing the first step ourselves:
implement the Aylward-Paradiso algorithm in Max/MSP
and run multiple time series through it, from whatever has adequate fps,
from interestingly rich movement (Chris Ziegler + students; maybe with our Visiting Artist friends from Montreal and Copenhagen).)
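Before the Max/MSP port, a sketch of the windowed cross-covariance at the heart of Aylward-Paradiso, assuming two equally sampled sensor streams (e.g. gyro magnitudes from two dancers); peak height tracks similarity of activity, and peak lag tracks who leads:

```python
import numpy as np

def windowed_xcov(x, y, win=128, hop=32, max_lag=32):
    """Per-window cross-covariance of x and y over lags -max_lag..max_lag.
    Returns (peak values, peak lags in samples), one pair per window."""
    peaks, lags = [], []
    n = min(len(x), len(y))
    for start in range(max_lag, n - win - max_lag, hop):
        xw = x[start:start + win] - np.mean(x[start:start + win])
        cov = []
        for lag in range(-max_lag, max_lag + 1):
            yw = y[start + lag:start + lag + win]
            cov.append(np.dot(xw, yw - np.mean(yw)) / win)
        cov = np.asarray(cov)
        peaks.append(cov.max())
        lags.append(int(cov.argmax()) - max_lag)
    return np.asarray(peaks), np.asarray(lags)
```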

Thanks for the great reference, Mike!  
(Can someone show me that MSP external connected with Adrian and John’s oriented normal odot kit, so we can advance the temporality research?)


Political-Epistemological Rant:
AME and its cousins CIDSE, ECEE, SEMTE can get ahead of the shallow uses that artists and engineers have made of each other to date.

Political-Economic Rant:
The foundations in the EU and Quebec that funded us lucky bastards gave too much and not enough funding:  
So much $ that artists could hire their own students with just enough EE / CS knowledge to hack naive solutions;
not enough $$ to fund cohorts of grad students that could make it worthwhile for an EE / CS professor to dedicate 2-3 MA students over 6+ years of continuous trial and error projects in daily studio + bench work with movement artists who could be subsidized to NOT produce productions for a significant % of that time.


On Oct 4, 2014, at 4:18 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Yes, this is a tool I have wanted since 2003 in the oz/math/ section of Ozone.


We need all these conditions:
robust hardware with lightweight battery (ASU has some good battery guys),
high sensor-ensemble fps,
low-latency transmission,
some maths like Aylward-Paradiso to play with in our Max toolkit

What we would do in place of Paradiso’s naive notion of music is to map to electroacoustic synthesis, etc.

In fact, if Julian or Mike or ... could point us to the Max external that implements cross-correlation (not auto-correlation), we could play with it right away on acoustic input
and think about how to handle control-rate data…  I think there is one already in McGill’s or IRCAM’s vector processing toolkits.

If someone is interested, I’d be happy to work with him/her to implement this and map it more directly to organized sound (with the help of our sound artists) for rich feedback.
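Until such an external turns up, the idea can be played with offline in a few lines; a sketch assuming a recent scipy (for correlation_lags) and two control-rate or envelope signals sampled at fs:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def best_lag(a, b, fs):
    """Time offset (s) between two signals at the peak of their
    cross-correlation; sign convention per scipy.signal.correlation_lags."""
    a, b = a - a.mean(), b - b.mean()
    c = correlate(a, b, mode="full")           # cross-, not auto-correlation
    lags = correlation_lags(len(a), len(b), mode="full")
    return lags[np.argmax(c)] / fs
```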

On Oct 4, 2014, at 3:50 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Hi,

Do you guys know this paper? They put gyroscopes on dancers and used realtime (windowed) cross-covariance to measure time lag between several dancers. I believe this is similar to what Xin Wei has in mind as part of studying temporality.

Mike


[Synthesis] Transmutations Online

YES YES!

Hence Transmutations Online: I attach the proposal.


I'm talking with various possible allies, or sibling projects in Copenhagen, Moscow, and Beijing…

But Dehlia and I would like to push forward our own issue #2, which can take forms appropriate to the theme and contributions, quite different from the inaugural issue:


We wanted to use Jhave Johnston’s MUPS code, which Jhave graciously offered years ago.  Now it may be good to recode it in HTML5. (Who is good enough to do that — Jen?)



__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2846
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________