Dear Pete, David, Nicole (technical advisor: Ozzie),

I would like Pete to be in charge of installations for WIP Dec 2, following my event design. (For the dramaturgy I'll of course rely on experienced folks: Chris Z as well as Pete, Garth, Todd, Ed, David.)

For Dec 2, I would like monitors mounted in portrait on some walls in Stauffer and iStage: hence the name DOOR PORTALS (DoorPorts).

BEHAVIOR
Each single-sided Door Portal has a camera facing out from its front.
In iStage: when you stand 20' away from the Door Portal, you see a live feed of the receptionist desk at Stauffer. When no one is present, the Door Portal should show past people, faintly.
As you occlude the Door Portal (background subtraction; use tml.jit.Rokeby), it turns into a mirror.
As you walk toward this Door Portal, at 15' the mirror fades to a live feed from another Door Portal B.
At 10' that live feed fades to the live feed from Door Portal C.
(A sketch of this distance-driven crossfade follows at the end of this message.)

LOCATIONS, SIZES
(1) In iStage, in the location that we discussed: next to the door to the offices, flush with the floor. Largest size.
(2) On a wall in Stauffer B, facing the reception desk (in addition to, or instead of, the display behind the receptionist's head). Cables must be hidden; we cannot have exposed cable in the receptionist area. Let the Space committee take note.
(3) Mounted on a wall in the dark Commons of the BY.

All these should be PORTRAIT — no landscape, please — let's avoid the knee-jerk screenic response! Bordered images are boring.

NICOLE, your idea for prototyping is very smart. If you want, please feel free to make paper mockups: just ask Ozzie or Pete for the dimensions of our monitors, cut paper to 1:1 scale, and bring some mock doors that we can pin to the walls in the BY and iStage.

Re. scale, I'm OK with the doors not being human height; 1:1 body size is too literal (= boring). Plus, I want child-sized apparatus in our office spaces. Our office spaces are all designed for 5-6'-tall people, nominally adults; this is socially sterile.

NOTE: I think we do NOT need a big screen in the foyer; it is too expensive and a waste of a monitor (unless CSI buys one :). For that Brickyard foyer, if people (e.g. Chris R) want a porthole (≠ portal), then let's put a pair of iPads, one on each side of that foyer sub-wall, and stream live video between them.

TIMETABLE (ideal)
Nov 15: prototypes delivered
Nov 21: bomb-tested
Nov 24: deployed for WIP event
(If that is documentation day for LRR, then this is when we should invite Sarah H to send her team to shoot the Door Portal along with the best of LRR for Herberger (gratis!))

________________________________________________________________________________________
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab • topologicalmedialab.net
_________________________________________________________________________________________________
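For whoever builds the Dec 2 patch, here is a minimal sketch (Python) of the distance-driven crossfade described under BEHAVIOR above. The 20'/15'/10' thresholds come from the brief; the 2 ft fade width, the source names, and the occlusion gate are assumptions to be tuned in the actual Max/Jitter patch, where these weights would drive the video mix alongside the tml.jit.Rokeby background subtraction.

    def clamp01(x):
        return max(0.0, min(1.0, x))

    def portal_mix(d_ft, occluded):
        """Blend weights for one Door Portal's four video sources,
        keyed to the 20'/15'/10' thresholds. Weights sum to 1."""
        FADE = 2.0  # feet over which each crossfade completes (assumed)
        c = clamp01((10.0 - d_ft) / FADE)               # Door Portal C feed
        b = clamp01((15.0 - d_ft) / FADE) * (1.0 - c)   # Door Portal B feed
        m = (1.0 - b - c) if (occluded and d_ft < 20.0) else 0.0  # mirror
        a = 1.0 - m - b - c        # receptionist feed / faint past people
        return {"ambient": a, "mirror": m, "feed_b": b, "feed_c": c}

    # Someone occluding the portal at 12' sees mostly Door Portal B:
    print(portal_mix(12.0, occluded=True))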
Dear Matt, Kevin, and Josh,

I'd like to (re)introduce you and Omar Faleh, who is coming from the TML for the LRR in November.
In parallel with the iStage work, we should push forward on the animation of lighting in the Brickyard by:
• Bringing natural sunlight into the BY commons (Josh)
• Modulating that natural light using Byron's mechatronics
• Making motorized desk lamps using Byron's mechatronics
• Designing rhythms (ideas from Omar, using Julian's rhythm kit)
Can we see what you have come up with this coming week?
Meanwhile let me encourage you to communicate with Omar to see what you can create by the time of his arrival circa Nov 17.
To be clear, this is distinct from the LRR in the iStage, but just as strategically important, in that it is the extension of that work into everyday space, which is one of Synthesis' main research extensions this year.
I’d like to write this work into a straw proposal for external sponsorship of your work… so it’d be great to have some pix soon!
Cheers, Xin Wei
A follow-up: I just spoke with Byron and hypothesized some potential applications of the "icicles."

In the application above we could create movement based on the refraction of the natural light, or interject movement based on an applied rhythm. We also spoke of using this light refraction as a feedback loop, as a form of rhythm. This could be applied to movement in forms beyond the panels, using the natural light as a parameter.

A suggested form could be a desk lamp or a corner extension. The "icicles" could be stacked from top to bottom and rotated from the base to create a vertical or horizontal deflection. The "icicles" could also be installed in a flexible panel in a more rigid application to give direct control over their movement.

We discussed the fact that the application of the "icicles" needs to be architectural rather than sculptural. I think this can be achieved in all of these applications, but I would like some input on them. Unfortunately I cannot find any examples of the other applications, but I can create sketches if anything is unclear. Byron also assured me that the mechatronics would work in these scenarios.

PS: I've attached the images to this email.

On Wed, Oct 29, 2014 at 4:43 PM, Matthew Briggs <matthewjbriggsis@gmail.com> wrote:

Hello all,

I may have found a way to incorporate Byron's mechatronics to utilize natural light. I have many laser-cut "icicles" of dichroic acrylic (picture attached). These "icicles" do a great job of capturing light. I could hang many of them from a panel, which may either be recessed in place of a ceiling panel or hung from a ceiling panel. This panel could be made of a flexible material that already exists, or out of a material that is laser-cut to become flexible. The panel could be animated from a number of control points using Byron's mechatronics. This would create an effect similar to the combination of the two attached reference images (= a potential light installation).
We could then create an array of panels, or clusters of panels, which could take in rhythm data via xOSC and output it as movement of the "icicles" in space. (A control sketch follows this paragraph.)
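As a rough sketch of that xOSC mapping (python-osc standing in for the eventual Max patch; the /rhythm/pulse address, port 9000, and four control points per panel are all assumptions):

    import random
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    N_POINTS = 4                # control points per panel (assumed)
    targets = [0.5] * N_POINTS  # normalized 0..1 positions

    def on_pulse(address, strength):
        # Scatter the control points in proportion to pulse strength,
        # so stronger beats ripple the panel more.
        for i in range(N_POINTS):
            targets[i] = min(1.0, max(0.0,
                targets[i] + random.uniform(-1.0, 1.0) * strength))
        print(address, ["%.2f" % t for t in targets])
        # From here, forward `targets` to Byron's mechatronics,
        # e.g. as servo angles over serial or another OSC hop.

    dispatcher = Dispatcher()
    dispatcher.map("/rhythm/pulse", on_pulse)
    BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()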
It is important to have a certain amount of light in the space, to create a visual equilibrium and relieve tension in the eyes. These "icicles" will not introduce a significant amount of light into the space; however, they will refract the existing light in a dynamic and non-intrusive way. If this is paired with Josh's light-tube system or the existing natural light, I believe it could create a small amplification of that light, but more importantly a psychological effect that may have equal weight on the users of the space. I think it is important to have subtle but aesthetically pleasing sources of light in the workspace, and this may function in that regard.

Any thoughts or criticisms? I could fabricate these panels and install a small cluster by the workshop date. Byron, do you think it would be possible to animate them by this time? I will leave the bag of "icicles" on your desk at Synthesis for reference.

Side note: I leave Monday for the AR2U conference in Ames, Iowa, so I won't be able to continue on this project until I get back on the 10th.

thanks,
matthew briggs
undergraduate researcher @synthesiscenter

On Sun, Oct 26, 2014 at 6:41 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
(quoted message above)
• Francesco Pavani
• Paola Rigo
• Giovanni Galfano
http://www.sciencedirect.com/science/article/pii/S1053810014001202
From: Xin Wei Sha <Xinwei.Sha@asu.edu>
Subject: Re: [Synthesis] Notes for Lighting and Rhythm Residency Nov (13)17-26
Date: October 4, 2014 at 2:31:10 PM MST
Please, please, please: before we dive into more gear specs, what are the experiential provocations being proposed?

For example, Omar, everyone: can you please write into some common space more example micro-studies similar to Adrian's examples? (See the movement exercises that MM has drawn up for past experiments for more examples.)

Here at Synthesis I must insist on this practice, prior to buying gear, so that we have a much greater ratio of propositions : gadgets.

Thank you, let's play.
Xin Wei
_________________________________________________________________________________________________
On Oct 4, 2014, at 12:53 PM, Omar Faleh <omar@morscad.com> wrote:

I recently got the chance to work with the Philips Nitro strobes, which are considerably stronger than the Atomic 3000, for example. It is an LED strobe, so you can pulse, flicker, and keep it on for quite a while without having to worry about discharge and recharge; and being an all-LED strobe, it isn't as power-hungry as the Atomics. The LED surface is split into six sub-rectangles that you can address individually or animate with preset effects, which allows for a nice play with shadows using only one light (all DMX-controlled), and there is an RGB version too, so there is no need for gels and colour changers.

I am also looking into some individually addressable RGB LED strips. I am placing the order today, so I will hopefully be able to test and report the findings soon.
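For anyone prototyping the control side before the fixture arrives, here is a sketch that drives six individually addressable cells over Art-Net from Python. The ArtDmx packet layout is standard; the node address and the one-intensity-channel-per-cell map starting at DMX channel 1 are assumptions, so check the Nitro's actual DMX chart and mode.

    import socket, struct, time

    def artdmx(universe, levels):
        """Build an ArtDmx (OpCode 0x5000) packet for a list of 0-255 levels."""
        data = bytes(levels) + b"\x00" * (len(levels) % 2)  # pad to even length
        return (b"Art-Net\x00"
                + struct.pack("<H", 0x5000)     # OpCode: ArtDmx
                + struct.pack(">H", 14)         # protocol version
                + bytes([0, 0])                 # sequence, physical
                + struct.pack("<H", universe)   # SubUni + Net
                + struct.pack(">H", len(data))  # data length
                + data)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    NODE = ("192.168.1.50", 6454)  # hypothetical Art-Net node, standard port

    # Chase a single bright cell across the strobe's six sub-rectangles.
    for step in range(60):
        levels = [255 if i == step % 6 else 0 for i in range(6)]
        sock.sendto(artdmx(0, levels), NODE)
        time.sleep(0.05)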
_________________________________________________________________________________________________
On 2014-10-04, at 3:30 PM, Adrian Freed <adrian@adrianfreed.com> wrote:

Sounds like a fun event!
Does the gear support simple temporal displacement modulations, e.g., delaying one's shadow or a projected image of oneself?
This is rather easy to do with the right gear.
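For instance, the delayed image of oneself is just a ring buffer of camera frames. A minimal sketch with OpenCV, assuming a default webcam and a fixed two-second delay (in the iStage the same loop would read the tracking camera and feed the projector):

    import collections
    import cv2

    DELAY_S, FPS = 2.0, 30                       # assumed camera frame rate
    buf = collections.deque(maxlen=int(DELAY_S * FPS))

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        buf.append(frame)
        if len(buf) == buf.maxlen:               # once the buffer is full,
            cv2.imshow("delayed self", buf[0])   # show oneself 2 s ago
        if cv2.waitKey(1) & 0xFF == 27:          # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()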
I would like to see something more ambitious attempted, along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory sound of a footfall, which messes with the neat and tidy notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?
It would also be interesting to modulate a scrambling of oneself and connect its intensity to movement intensity. Navid has done similar things with sound. The experience was rather predictable, but it might well be different visually.
_________________________________________________________________________________________________
On Oct 4, 2014, at 11:53 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Chris, Garrett, Julian, Omar, Chris Z, Evan, Byron, Prashan, Althea, Mike B, Ian S, Aniket, et al.:
This is a preliminary note to sound out who is down for what for the coming residency on Lighting and Rhythm (LRR).
The goal is to continue the work on temporality from the IER last Feb-March, and this time to get really serious about experimentally mucking with your sense of time by modulating lighting, or your vision, as you physically move. First-person experience, NOT designing for a spectator.
We need to identify a more rigorous scientific direction for this residency. I have been asking people for ideas — I'll go ahead and decide soon!
Please think carefully about:
Core Questions to extend: http://improvisationalenvironments.weebly.com/about.html
Playing around with lights: https://vimeo.com/tml/videos/search:light/sort:date
Key Background: http://textures.posthaven.com

The idea is to invite Chris and his students to work [richly] on site in the iStage, and to have those of us who are hacking time via lighting play in parallel with Chris. Pavan & students and interested scientists/engineers should be explicitly invited to kibitz.
• Lighting and Rhythm
The way things are shaping up, we are gathering some gadgets to prepare for the residency.
Equipment requested (some already installed, thanks to Pete, Ozzie, and TML):
Ozone media system in iStage
Chris Ziegler's Wald (forest) system (MUST be able to lift out of the way as necessary within minutes — can an inexpensive motorized solution be installed?)
3 × 6 (?) grid of light fixtures with RGB gels, beaming onto the floor
IR illuminators and IR-pass camera for tracking
Robe Robin MiniMe Moving Light / Projector
Hazer (?)
Strobe + diffuser (bounce?)
+ Oculus DK1 (Mike K knows)
+ Google Glass (Chris R can ask Cooper, Ruth @ CSI)
We need to make sure we have a few rich instruments (NOT one-off hacked tableaux!) coded up ahead of time -- hence the call to Max-literate students who would like to try out what we have in order to adapt them for playing in the LRR by November.
Note 1:
Let’s be sure to enable multiplex of iStage to permit two other groups:
• Video portal windows: Prashan, Althea Pergakis, Jen Weiler
• Shadow puppetry: Prashan, working with Byron
Note 2:
Garth’s Singing Bowls are there. Think about how to integrate such field effects.
Mike, can you provide a Max patch to control them (ideally via OSC), but at least to fade them up/down without having to physically touch any of the SB hardware?
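Something like the following would suffice, whether in Max or, as sketched here, in Python with python-osc; the host, port, and /bowls/gain address are placeholders for whatever namespace the patch actually exposes:

    import time
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("192.168.1.60", 8000)  # hypothetical bowls host

    def fade(start, end, seconds, steps=50):
        """Ramp the bowls' gain linearly from start to end."""
        for i in range(steps + 1):
            client.send_message("/bowls/gain", start + (end - start) * i / steps)
            time.sleep(seconds / steps)

    fade(0.0, 1.0, 5.0)  # fade up over 5 s
    fade(1.0, 0.0, 5.0)  # fade down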
Note 3:
This info should go on the lightingrhythm.weebly.com experiment website that the LRR leads should create Monday unless someone has a better solution — it must be editable by the researchers and experiment leads themselves. Clone from http://improvisationalenvironments.weebly.com !
Xin Wei
_________________________________________________________________________________________________
On Sep 4, 2014, at 8:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:
I am afraid I can't be very helpful here. I don't do MIR work myself. The field for the most part does
offline analyses of large data sets using musicologically naive Western musical concepts of pitch and rhythm.
One exception to the realtime/offline choice is from our most recent graduate student to work on the beat-tracking problem, Eric Battenberg. Here is his dissertation: http://escholarship.org/uc/item/6jf2g52n#page-3
There is interesting machine learning going on in that work, but it presumes that one can make a reliable onset detector, which is a reasonable (but narrow) assumption for certain percussion sounds and drumming practices.
The questions of phase and "in sync" raised below interest me greatly. There is no ground truth to the beat (up or down or on the "beat"). I remember being shocked recently to discover that a bunch of research on dance/music entrainment relied, as a reference, on hand-labeled visual beat markings from "expert listeners in the computer music lab next door". Various concepts such as "perceptual onset time" have been developed to sufficiently complicate this question and explain the difficulty people have reaching consensus on musical event timing and relating a particular beat measurement to features of the acoustic signals.
Even a "simple" case, bass and drums, is extremely difficult to unravel. The bass being a low frequency instrument complicates the question of "onset" or moment of the beat. The issue of who in this pair is determining the tempo
is challenging and the usual handwaving that the tempo is an emergent coproduction of the performers is not very helpful in itself in elaborating the process or identifying which features of the action and sound are relevant to the entrainment. My guess is that we will find models like the co-orbital arrangment of Saturn's moons Epimetheus and Janus.
What are the system identification tools to reveal these sorts of entrainment structures? Can this be done from the sound
alone or do we have to model embodied motions that produce the sounds?
NOTE: from Adrian, Xin Wei, and Mike Krzyzaniak on the Percival-Tzanetakis tempo estimator:
_________________________________________________________________________________________________
On Sep 3, 2014, at 6:38 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Phase: I'm interested both in the convention of syncing on peaks and in the larger range of temporal entrainment phenomena that Adrian has identified with suggestive terminology.
In practice, I would apply several different measures in parallel.
Yes, it would be great to have a different measure: for example, one that detects when a moderate number (dozens to 100) of irregular rhythms have a large number of simultaneous peaks. This is a weaker criterion than being in phase, and does not require periodicity.
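A sketch of such a measure, assuming regularly sampled streams: bin time into short windows and, per window, report the fraction of streams that peak in it. This needs neither periodicity nor a phase reference; the 50 ms window and the bare scipy peak picker are assumptions to be tuned.

    import numpy as np
    from scipy.signal import find_peaks

    def coincidence(signals, fs, window_s=0.05):
        """signals: array of shape (n_streams, n_samples), sampled at fs.
        Returns, per window, the fraction of streams with >= 1 peak."""
        n, length = signals.shape
        win = max(1, int(window_s * fs))
        n_windows = length // win
        hits = np.zeros((n, n_windows), dtype=bool)
        for i, s in enumerate(signals):
            peaks, _ = find_peaks(s)
            hits[i, np.minimum(peaks // win, n_windows - 1)] = True
        return hits.mean(axis=0)  # 1.0 = every stream peaked in that window

    # e.g. flag moments where at least 60% of the streams peak together:
    # moments = np.where(coincidence(sig_array, fs=100.0) > 0.6)[0]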
Xin Wei
_________________________________________________________________________________________________
On Sep 2, 2014, at 5:07 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Of course we could reduce the 6-second lag by reducing the window sizes and increasing the hop sizes, at the expense of resolution. Also, rather than using the OSS calculation provided, perhaps we could just use a standard amplitude follower that sums the absolute value of the signal with the absolute value of the Hilbert transform of the signal and filters the result. This would save us from decimating the signal on input and reduce the amount of time needed to gather enough samples for autocorrelation (at the expense of accuracy, particularly for slow tempi).
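A sketch of that follower with scipy, using the sum-of-absolute-values form from this email (the exact envelope would be the magnitude of the analytic signal); the 20 Hz low-pass cutoff is an assumed value:

    import numpy as np
    from scipy.signal import hilbert, butter, lfilter

    def amplitude_follower(x, fs, cutoff_hz=20.0):
        """|x| + |Hilbert{x}|, low-pass filtered: a cheap onset-strength
        stand-in for the Percival-Tzanetakis OSS calculation."""
        analytic = hilbert(x)                        # x + j * Hilbert{x}
        env = np.abs(analytic.real) + np.abs(analytic.imag)
        b, a = butter(2, cutoff_hz / (fs / 2.0))     # 2nd-order low-pass
        return lfilter(b, a, env)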
What are you ultimately using this algorithm for? Percival-Tzanetakis also doesn't keep track of phase. If you plan on using it to take some measure of metaphorical rhythm between, say, humans as they interact with each other or the environment, then it seems like phase would be highly important. Are we in sync or syncopated? Am I on your upbeats or do we together make a flam on the downbeats?
Mike
_________________________________________________________________________________________________
On Tue, Sep 2, 2014 at 4:09 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi Adrian,
Mike pointed out what for me is a serious constraint in the Percival-Tzanetakis tempo estimator: it is not realtime.
I wonder if you have any suggestion on how to modify the algorithm to run more “realtime” with less buffering if that’s the right word for it…
Anyway I’d trust Mike to talk with you since this is more your than my competence. cc me for my edification and interest!
Xin Wei
_________________________________________________________________________________________________
On Sep 2, 2014, at 12:06 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Hi Xin Wei,
I read the paper last night and downloaded the Marsyas source, but only the MATLAB implementation is there. I can work on getting the C++ version and porting it, but the algorithm has some serious caveats that I want to run by you before I get my hands too dirty.
The main caveat is that it was not intended to run in real-time. The implementations they provide take an audio file, process the whole thing, and spit back one number representing the overall tempo.
"our algorithm is more accurate when these estimates are accumulated for an entire audio track"
It could be adapted to run in sort-of real time, but at 44.1k the tempo estimation will always lag by 6 seconds, and at a control rate of 30ms (i.e. the rate touchOSC uses to send accelerometer data from iPhone) the algorithm as described will have to gather data for over 2 hours to make an initial tempo estimation and will only update once every 5 minutes.
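Those figures roughly check out, assuming the paper's defaults of a 128-sample OSS hop and a 2048-frame autocorrelation window (an assumption about which parameters are meant):

    hop, acf_win = 128, 2048
    print(acf_win * hop / 44100.0)         # ~5.9 s lag at 44.1 kHz
    print(acf_win * hop * 0.030 / 3600.0)  # ~2.2 h to a first estimate at 30 ms
    print(hop * hop * 0.030 / 60.0)        # ~8 min between updates, same order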
Once I get the C++ source I can give an estimate of how difficult it might be to adapt (in the worst-case scenario it would be time-consuming, but not terribly difficult, to re-implement the whole thing in your language of choice).
If you would still like me to proceed let me know and I will contact the authors about the source.
Mike
________________________________________________________________________________________________
On Mon, Sep 1, 2014 at 3:45 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
beat~ hasn't worked well for our research purposes so I'm looking for a better instrument.
I'm no expert but P & T carefully analyze the extant techniques.
the keyword is 'streamlined'
Read the paper. Ask Adrian and John.
Xin Wei
From: Adrian Freed <adrian@cnmat.berkeley.edu>
Subject: Re: good comparison of IMU's and sensor fusion source
Date: August 23, 2014 at 12:02:17 PM MST
To: Sha Xin Wei <shaxinwei@gmail.com>
Cc: Vangelis Lympouridis <vl_artcode@yahoo.com>, John MacCallum <john@cnmat.berkeley.edu>, Garth Paine <Garth.Paine@asu.edu>, Todd Ingalls <TestCase@asu.edu>, Assegid Kidane <Assegid.Kidane@asu.edu>, post@synthesis.posthaven.com, Teoma Naccarato <teomajn@gmail.com>
Vangelis has been tracking the ready-to-wear IMU space more carefully than I.
I am hoping IMUs are a temporary bootstrap and that we will have less encumbering techniques with absolute position measurements, such as the upcoming Sixense STEM system. My fear is that we will be surrounded by even cheaper, slower, uncalibratable IMUs before the situation improves substantially.
Keep an eye out for the next-gen x-OSC with a built-in charger and better IMU.
On Aug 23, 2014, at 6:50 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Cool, thanks. Adrian suggested last year a ready-to-wear IMU that went for ~$200-250. Can't recall the make.
Xin Wei
On Aug 22, 2014, at 7:11 PM, Vangelis Lympouridis <vl_artcode@yahoo.com> wrote:

That's great! Thanks a lot, Adrian.
Vangelis Lympouridis, PhD
Visiting Scholar,
School of Cinematic Arts
University of Southern California
Senior Research Consultant,
Creative Media & Behavioral Health Center
University of Southern California
http://cmbhc.usc.edu
Whole Body Interaction Designer
www.inter-axions.com
vangelis@lympouridis.gr
Tel: +1 (415) 706-2638
-----Original Message-----
From: Adrian Freed [mailto:adrian@cnmat.berkeley.edu]
Sent: Friday, August 22, 2014 10:47 AM
To: Xin Wei Sha; Vangelis L
Cc: John MacCallum
Subject: good comparison of IMU's and sensor fusion source
https://github.com/kriswiner/MPU-6050/wiki/Affordable-9-DoF-Sensor-Fusion
I would add that the observation/performance pair problems are connected to the problems of signal/noise - both dependent on POV and preschema. Another tactic I have started to explore is the material agency of "lenses" (or filters, as lenses are framed in the signal-processing literature). This points to bringing in the material aspects of intersubjectivity - one of the key conundrums of quantum theory, which has had to invoke a lot of magic around the macroscopic and microscopic properties of "apparatus" to keep the rest of the theory coherent.