new materials: printed sheets of LEDs

Here is Prof. Ellen Hansen’s Lighting Design program @ Aalborg University:

Ozzie, can you please add this to a database of active materials?

This could be another element in our future lighting and rhythm research work:
http://www.rohinni.com/#technology

Xin Wei

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________

Re: machine diagram for O4 in iStage, launch protocols on each machine's desktop, three-level user accounts

Hi Garrett, Connor, Pete,

Can you all please update the whiteboard in the iStage with what you know,
perhaps redrawing it on the larger whiteboard there?

Then email a photo snapshot of it to synthesis-operations@googlegroups.com  
for a timely report.

Then we’ll get someone to draw the diagram.

Thanks,
Xin Wei





On Dec 2, 2014, at 8:53 PM, Garrett Laroy Johnson <garrett.laroy.johnson@gmail.com> wrote:

Hi all, 
  Just bumping this thread to keep it in mind. I’m happy to draw something up, but it would be great if someone with some InDesign (or Adobe Illustrator, or GIMP) chops could render it as something a bit more legible. I’m afraid my drawings tend to be a bit crude!
Garrett L. Johnson
Musicology MA candidate @ ASU  
Synthesis Center - research assistant  
LORKAS (Laptop Orchestra of Arizona State) - director
__

On Nov 24, 2014, at 11:20 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Hi Garrett, Chris, Mike,

I asked Connor to label the machines and draw a machine diagram.
Can you please update this, drawing a more complete one on the same whiteboard.
Then someone can take a photo and transfer it to the SC private website.*

Let’s make up the launch cheat sheets first as paper post-its, then write them up as a desktop README?

That way we can customize the launch protocol and instructions by type of user:

Administrator
Synthesis Developer
Synthesis Guest

Xin Wei

* Here is the PUBLIC website, launched as of yesterday (!) http://synthesis.ame.asu.edu


<IMG_3521.jpeg>



[Synthesis] LRR final runs Monday and Tuesday collecting rhythms from quotidian activities with ALL sensor channels in parallel

Mike, Garrett, Rushil, Qiao, Pavan, Jessica, 
Todd if you’re up for it,

For the capstone LRR experiments, my available times are
Monday: 10:00 - 1:00, 2:30 - 5:00
Tuesday: 11:00 - 1:30*, 2:15 - 3:30

• Experiments: final runs
Can we get together sometime to do group everyday movements,
exploring entrainment, anticipation, and retrospection, recording all available
sensor channels in parallel?

• Outcomes: plans 
I would like to also review this correlation work with Mike, Rushil, Qiao, et al in Pavan’s group
to plan the next steps in the correlation experiments through February.

Garrett, Mike, let's integrate multiple sources into Ozone:

(1) Jessica Rajko’s xOSCs, which she may bring in,
(2) all of Garrett's sources,
(3) IMUs, when Ozzie gets them in.

Can we please set it up with mics + camera so we — everyone — can try introducing rhythms extemporaneously?

Remember the point of the LRR is also to study quotidian movement,
so it is very important Monday and Tuesday to try some everyday movement
coordinated by furniture or small everyday props.
Pete, may we set up a test table on a whiteboard in the middle of the floor?  We’ll do it ourselves with your OK.

Let’s think of some everyday activities.  Jessica suggested lining up and arranging chairs…
Garrett, can we set up to record ALL sensors in parallel, without restricting to just one sensor modality?

Chris R: can you and Ozzie help all participants make sure we save all the data and videos in the Synthesis AFP net archive** that Tain set up?

* Sylvia: can we push a meeting with JV Tuesday 11:15-11:45 till Wednesday or after Thanksgiving?

** TAIN BARZSO: HOW TO connect to the Synthesis net fileserver shares:

1. Browse to sslvpn.asu.edu

This site should attempt an automatic installation of the Cisco AnyConnect
Secure Mobility Client. In my experience, it rarely works.  If/when it
fails, you will have the option to manually download the software.  Please
do so.

2. After installing and running, it will ask you for a server name.  This
will be, again, sslvpn.asu.edu

3. When it asks for username and password, use your ASURITE credentials.

At this point, you will be connected to the ASU VPN network. 

4. Connect to the share afp://amenas.dhcp.asu.edu
Username: synthesis
Password: (ask Ozzie, assuming you are eligible according to our research access policy)

Connect to share name "Synthesis"
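For anyone who wants to script steps 4 onward, the connection can be sketched in Python. This is a hypothetical helper, assuming macOS's mount_afp command and the server/share names given above; the password is deliberately omitted:

```python
import subprocess
from urllib.parse import quote

def afp_url(server: str, share: str, user: str = "") -> str:
    """Build an afp:// URL for a network share (percent-encoding the
    username so special characters survive)."""
    cred = f"{quote(user)}@" if user else ""
    return f"afp://{cred}{server}/{share}"

def mount_share(url: str, mountpoint: str = "/Volumes/Synthesis") -> None:
    """Attach the share at `mountpoint` using macOS's mount_afp."""
    subprocess.run(["mount_afp", url, mountpoint], check=True)

# Example (password omitted; ask Ozzie per the access policy):
# mount_share(afp_url("amenas.dhcp.asu.edu", "Synthesis", "synthesis"))
```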


Entrainment, anticipation, retrospection!
Xin Wei

Fwd: Rhythm workshop # 2: Heartbeat, February or January 2015

Hi,

Let’s plan a visit from Teoma Naccarato and John MacCallum for Rhythm workshop #2
with AME PhDs interested in pursuing some “correlation” studies toward
understanding coordinated, non-isomorphic gesture.  (Maybe Adrian, if he’s up to it in February?)

What dates would work for the scientific team?

Let’s first talk about what experiment we might try to run with John, Teoma, and local undergrad / grad performers.

We definitely should work with quotidian as well as rehearsed movers (aka dance students)
in everyday as well as choreographed scenarios.

As I said to Mike et al., at the end of the workshop,
i.e. on Monday and Tuesday, I’d like each hosted research cluster to draft some outcome, like

• Research proposal (e.g. notes for a grant )
• Notes on experience for future publication
• Data — o4.track — video + OSC streams recorded by Garrett
should be archived in Synthesis webspace accessible to
Pavan’s group, Mike and related SC researchers

as well as
• Documentation (video, images, text) usable for print as well as web pub.
• Code for Synthesis-TML commons
• Gear for AME-Synthesis space
• Know-how as tech documentation

Xin Wei

Begin forwarded message:

From: Teoma Naccarato <teomajn@gmail.com>
Subject: Re: Rhythm workshop # 2: Heartbeat, February or January 2015
Date: November 19, 2014 at 1:41:34 AM MST
To: Sha Xin Wei <shaxinwei@gmail.com>
Cc: John MacCallum <john.m@ccallum.com>, Pavan Turaga <pturaga@asu.edu>, Michael Krzyzaniak <mkrzyzan@asu.edu>, Julian Stein <julian.stein@gmail.com>, synthesis-operations@googlegroups.com

Hi Xin Wei and company,

 

Apologies for our delayed response… we just returned (to Paris) from England where we gave talks at the MIPTL lab at Sussex University, and the Digital Studio Lab at Goldsmiths (Freida says hello!). I hope the Rhythm Workshop is going well this week! 

  

Xin Wei, we are looking forward to the “Rhythm Workshop Part 2” you have proposed.  It will be great to learn about the rhythm measurement and analysis tools being developed at AME. Our software and hardware tools for sensing, processing, and documenting cardiac and respiratory activity during a range of body motion are developing well, thanks to John and Adrian, as well as Emmanuel Fletty here at IRCAM.

 

We are wondering whether it would be possible to spend one or two weeks, either prior to or after the collective workshop, in residency at the Synthesis Lab to focus on questions and tools specific to our project. We would love to give a presentation of our work to date and guide focused research experiments with interested participants at the lab, in music and in dance. Would this be of interest?

 

Some questions that are guiding our current research at IRCAM, and that we hope to share with collaborators at AME in the New Year, include:

 

1. What patterns of temporal correlation between cardiac, respiratory, and nervous function - during a range of physical activities and conditions - can be measured and observed via bio-sensing?

2. How can such patterns be used to choreograph and compose intentional arcs in the heart activity of performers over time?

3. How do humans and media negotiate between:
   i. internal, felt sense of time - as perceived
   ii. performative time - as realized
   iii. score time - as prescribed

4. To what extent does the existence of a feedback loop between humans and media - real or perceived - impact the experience of performers, and of observers?

We look forward to discussing further. Thank you very much!

 

Best wishes,

Teoma and John

Julian's o4.track records video concurrent with (OSC) data streams

o4.track,
which records video concurrent with OSC data streams,
is now in the TML-Synthesis GitHub.
Note the happy end to this thread.

Let’s make sure we use this during LRR group work this coming week!

Xin Wei

On Nov 10, 2014, at 1:07 AM, Julian Stein <julian.stein@gmail.com> wrote:

Hello all,

I just pushed some new things to the O4.synthesis GitHub repository. This features a major overhaul of the rhythm tools, which has been in the works for several months now. The system as a whole is both more stable and more flexible, and should be better suited to the research coming up this month and beyond. Several examples are included within the bundle, and more documentation will come soon (hopefully this week). As discussed with a few of you, I'll try to make myself as available as I can to help get these things up and running from afar. :)

Also included in O4.rhyth_abs is a folder labeled o4.track. This features a simple system for recording and playing video with a synchronized OSC data stream.

best,

Julian
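The synchronization idea behind o4.track (logging OSC messages with timestamps relative to the start of the video, so stream and footage can later be replayed together) can be sketched outside Max. This is a hypothetical Python illustration of the idea, not the actual patch:

```python
import time

class OSCTrackRecorder:
    """Log OSC messages with timestamps relative to the moment video
    recording starts, so the stream can later be replayed in sync."""

    def __init__(self):
        self.t0 = None
        self.events = []  # (seconds since start, address, args)

    def start(self):
        # Call at the same moment the video recording begins.
        self.t0 = time.monotonic()

    def record(self, address, *args):
        self.events.append((time.monotonic() - self.t0, address, args))

    def replay(self, send):
        """Feed events to `send(address, *args)`, sleeping between
        events to reproduce the original timing."""
        last = 0.0
        for t, address, args in self.events:
            time.sleep(t - last)
            last = t
            send(address, *args)
```

Coarse sync of this kind is all that is asked for elsewhere in this thread: the OSC streams stay mutually consistent, and the video is only for human eyeballing.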



On Oct 29, 2014, at 3:36 PM, Evan Montpellier <evan.montpellier@gmail.com> wrote:

Jinx, Julian! Please send that patch my way as well.

Evan

On Wed, Oct 29, 2014 at 6:35 PM, Evan Montpellier <evan.montpellier@gmail.com> wrote:
Hello all,

Mike, forwarding a conversation from February on a related topic (see below) - apologies if you've seen it before. A decent video recording patch (never mind one with recording and syncing of accompanying data) is a glaring absence from the current ASU/TML repertoire, at least as far as I'm aware. In terms of the video aspect, I'd like to build something around the Hap codec developed by Vidvox (Max implementation here), since it seems like the best solution for high-efficiency playback (Hap clips are read directly onto the GPU) - although I welcome alternate suggestions!

Evan

On Thu, Feb 6, 2014 at 4:24 PM, <adrian@adrianfreed.com> wrote:
qmetro  just adds jitter and latency unless the machine is loaded in
which case an indeterminate number of frames will be dropped. In fact
the whole
notion of frame rate is corrupted in Jitter because when times get tough
it just grabs a frame from a buffer that is being overwritten.
I vaguely remember that there is a flag in one of the objects that tries
to suppress this (jit.grab or jit.record or something like that).

As for Xin Wei's wish I can say that there is nothing but silly
technical silos in the way of recording video streams along with the
OSC context with o.record. Transcoding to OSC blobs is the quick way to
do this and was in fact why the BLOB type is part of OSC.
Jeff Lubow here at CNMAT is interested in this sort of thing. We should
add a function to the "o." library that transcodes a jitter matrix
as a BLOB. It is more glamorous to truly translate between the formats
but for Xin Wei's purpose simply viewing OSC bundles as an
encapsulation format is sufficient.

We have another project that will soon need this so I will try to scare
up some cycles to move it along.
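Adrian's suggestion of viewing OSC bundles as an encapsulation format, packing a matrix's bytes as an OSC blob, can be illustrated with the standard blob encoding. A minimal Python sketch; the frame bytes are a stand-in, not a real Jitter matrix:

```python
import struct

def osc_blob(data: bytes) -> bytes:
    """Encode raw bytes as an OSC blob: a big-endian int32 byte
    count, the data, then zero padding to a 4-byte boundary."""
    pad = (-len(data)) % 4
    return struct.pack(">i", len(data)) + data + b"\x00" * pad

# Stand-in for the bytes of a (small) Jitter matrix frame:
frame = b"\x01\x02\x03\x04\x05"
blob = osc_blob(frame)
assert len(blob) % 4 == 0  # OSC requires 4-byte alignment
```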


> -------- Original Message --------
> Subject: Re: documenting the discussions and exercises
> From: Sha Xin Wei <shaxinwei@gmail.com>
> Date: Thu, February 06, 2014 3:32 am

>
>
> [I’m widening this very thin technical thread to AME colleagues who may also have a stake in journaled monitored video + sensor data streams in OSC / Max-MSP-Jitter for research purposes.  - Xin Wei ]
>
> Thanks Julian.   Does qmetro introduce accumulating drift or “merely” local indeterminacy in time?   If it’s accumulating then we have a problem that may require ugly hacks like restart and stitching files.
>
> Just remember that for this work I do not care to have msec sync.     Very coarse sync between low-res video and the OSC stream is good enough for the sort of eyeballing I have in mind.   We need the OSC streams to stay in sync but the heavy media (video+video) stream is merely there for the human to imagine and talk about what was going on on the floor at that time.   Imagine scrolling through time (in Jamoma CueManager or autopattr interpolator) and always being able to see a small video showing what was going on at that time.
>
> If I had truly godlike power — like the authors of OSC and o.dot -- I’d like to not only build time tags  into the OSC standard, but some human-readable representation of the scene — i.e. a “photo / video" track.  So by default, in addition to the torrents of numbers, there’s always a way to see what the hell was going on in human-legible physical space at any OSC-moment.   This is not a rational engineering desire but an experimentalist’s desire :)
>
> Ideally the video would be the same as the input video feeds, so you can develop offline months later, and basically run the recorded input video and all sensor input against the modified code.   That way you can “reuse” the live activity in order to develop refined activity-feature detectors and new synthesis instruments ourselves, without the expense of flying everyone together for technical work.   It’s a practical workflow matter of letting the technical development and the physical play interleave with less stuttering.
>
> IMHO, it is crucial for TML  that O4 engineers make instruments that collaborating dancers, paper artists, interior design students, stroke rehab experimenters all over the world can use without you guys babysitting at the console.
>
>
> Julian, if you run short on time, you might confer with local experts — students of Garth and Todd or Pavan — or John?
>
> Happy to see you all soon!
> Xin Wei
>
> On Feb 5, 2014, at 7:33 PM, Julian Stein <julian.stein@gmail.com> wrote:
>
> > sure, I'll see what I can do with this -- probably makes most sense to use the o.table object. I'll keep in mind Adrian's concerns with qmetro. Perhaps I can synchronize the osc data and the jitter material with the same clock?
> >
> > best,
> >
> > Julian
> >
> >
> > On Wed, Feb 5, 2014 at 7:49 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
> > Speaking of videos,
> >
> > A lot of the videos we shoot for “documentation” are inadequate for scientific / phenomenological work.
> > What we must do this time around is to simultaneously record
> >
> > VIDEO (low res for human eyeballing)
> > All OSC streams.
> >
> > Is this easy with Jamoma CueManager?
> >
> > Julian, do you have, or can you write, this tool that stores all OSC streams in sync with all input videos (which can be low resolution), consulting Navid?  It’d be good to test it with an ASU student before coming here — Chinmay perhaps.
> >
> > • We need to recruit a reliable team of documenters to
> > — babysit the video streams or time-lapse (it could even be 4 iPhone cameras running time-lapse still shots, or we could borrow some fancier gear)
> > — replay the buffer at the end of every session (or whenever asked for by the participants)
> > — Take text notes at the Recap hours (12-1 and 4-5 every day.  There’s an error in the calendar, which had the afternoon sessions too long, 2-6.)
> >
> > I cc Chris as the coordinator.  "Freed, Inigo" <inigo.freed@prescott.edu>, Adrian’s son, offered to bring a video team down from Prescott, AZ to ASU for the workshop.   I’m not sure that will work because we need volunteer participants to run these cameras 4+ hours a day x 3 weeks.
> >
> > In fact, Adrian said, based on his experience, that to really capture a lot of subtleties we should roll cameras before anyone steps into the room.   I totally agree.   It introduces a lot of phenomenological framing to “set things up” and then check ___ then “roll camera.”   This model works for reframed art like cinema production, but does not work for studying the phenomenology of events.
> >
> > Q: Do we need moving video?  I think lo-res motion or time-lapse medium-res stills should suffice.  Remember this is merely to stimulate Recap discussion after every two hours.  If we run out of space we could throw away all but salient stills, which would need to be chosen on the spot and copied to a common drive…  etc. etc. etc.
> >
> > Anyway, the  Coordinators can plan this out in consultation with me or Adrian, and we need a team.
> >
> > Cheers,
> > Xin Wei

Re: Lighting Ecology: Darkener and Lighteners game.

Byron's experience brings us back to Ozzie's caution.  Maybe we should not switch out the tower, since we know it works for demo purposes.
And we have no time to fix hardware, do LRR, and prep for WIP Dec 2 in the same period.

We can still order a computer, but we don't have to change hardware unless it is crucial for the LRR research études ...

Xin Wei

Re: Lighting Ecology: Darkener and Lighteners game.

Maybe you and Ozzie can check with Caroline whether you can borrow a FireWire Mac for at least the rest of the fall term, assuming all the Macs are imaged with Max?

Xin Wei






On Nov 10, 2014, at 11:20 AM, Byron Lahey <byron.lahey@asu.edu> wrote:

I thought I had ironed out all the issues with the ReacTIVision fiducial CV system last week when I discovered the new update of the software, but I've hit another snag. Maybe someone has an idea to solve or work around this problem. The problem is that the new computers all have Thunderbolt ports rather than native FireWire, and ReacTIVision does not recognize the cameras once they are connected with an adapter. I didn't encounter this issue in my earlier testing because my computer is old enough to still have a FireWire port. Interestingly, Max has no problem seeing the FireWire camera when it is connected through the Thunderbolt adapter, so it is not just a power issue or complete incompatibility.

The simplest solution I can think of is to purchase new cameras that are compatible both with the computer at a hardware level and with the ReacTIVision software. Tracking down cameras that we can reasonably guarantee will work shouldn't be too difficult, but I don't know if we want to prioritize this as an investment or not. The key (at least for my camera-projector system) is to have a camera with a variable zoom so that the image sensing can be closely matched to the projected image. For an overhead tracking system, the key factors would be a wide-angle lens, high resolution (to minimize the required size of the fiducials), and reasonably high speed (for stable image tracking).

Another solution is to simply use older computers that have FireWire ports, so that we can use existing cameras. This would certainly be feasible for the camera-projector system. We would have to test it out for the overhead tracking system. The best thing to do for the overhead system is to try out the existing camera arrangement to see how well it works with the fiducials. I can try that out this afternoon (assuming the space is open for testing).

In general I think a fiducial based computer vision system is a good way to reliably track individuals and to afford conscious interaction with the system. 

Best,
Byron

On Sun, Nov 9, 2014 at 8:19 PM, Sha Xin Wei <Xinwei.Sha@asu.edu> wrote:

After the Monday tests of hardware and network mapping to lights in the iStage with Pete, Omar and I can code the actual logic the rest of the week with Garrett, and maybe with some scheduled consults with Evan and Julian for ideas.  Omar’s got code already working on the body lights.  I want to see it working via the overhead lights beaming onto the floor.  For this game we need to know who’s who.

Byron, Ozzie, anyone: what is the easiest way in the iStage for us to track individual identities?   Not blob tracking, because the blobs will get confused.

Byron: IF indeed you’ve got the fiducial code working, can you show this to Omar (and Garrett) so they can work on the Darkener and Lighteners ecology with me?   Can we have people carry fiducials visible to the camera from above?  Maybe they can place a fiducial next to them in order to draw light.   Can the fiducials be printed at a legible size that is not ludicrously large?   How about if we print a bunch of fiducials as “LIGHT COINS” of three different denominations — RGB — each good for a certain amount of elapsed time.   Exposing a fiducial to the camera charges it against your account (it can be used only once per session).  Maybe waving it can increase the intensity but shorten the time (reciprocal constraint).   People can then choose to band together, or in the +/- version, subtract, etc.

Vangelis Lympouridis: Greek notions relevant to Lighting & Rhythm Residency Workshop Nov 17-26

Thanks Vangelis @ USC for 
expanding on Adrian Freed's suggestive semblance typology of entrainment.


On Nov 10, 2014, at 12:22 AM, "Vangelis Lympouridis" <vl_artcode@yahoo.com> wrote:

Oh well, I did my best. I wish I could be more helpful but you may get something out of that...

Syn-tonic - From the Greek «Τόνος» (Tonos), which is the word for the accent mark you often see in Greek words (΄), so it can refer to the dominant frequency or emphasis. The Greek Τονικότητα (eng. tonality) refers to the harmonic epicenter of a musical system (gamut). The closest to syntonic is συντονισμός, which is close to tuning (sync) (same frequency, same phase). Makes sense as “together in frequency”.

Syn-chronic – From the Greek «Χρόνος» (Chronos), eng. time. The Greek equivalent is συγχρονισμός (synchronization), meaning at the same exact time; maybe more appropriate for “together in time”.

Syn-tropic - From the Greek «Τρόπος» (Tropos). Τρόπος is the way something is happening, emerging, evolving. There is no word in Greek that combines these two, but there is a term introduced in 2006 at the EU called Comodalité that got translated into Greek as συν-τροπικ-ότητα (meaning every way possible). If I am right, orientation and direction are the same thing, except that direction implies movement. Tropic and tropical also refer to a cyclical direction, so in a way it can refer to the same direction.

Iso-tropic – There is a word ισοτροπία in Greek (eng. isotropy), mainly found in physics and chemistry, which is the condition where the properties of something are not related to the way they are measured, or where you get the same results regardless of the direction or dimension of the study. I think that “together in orientation” could be based on the other big family of the Παρα- prefix (para-, parallax), which means at the side of, facing the same direction. Two things can move in parallel or point in parallel ways... although I cannot think of something that sounds good. Parallelic sounds really bad...

Syn-tenic – Τείνω (verb), Ταινία; refers to length, trajectory, and we also call a movie «ταινία». Great use for “along the same path”.

Syn-Plectic – From the Greek Πλέξη which means knitting, interlacing, braids. Συμπλέκτης (syn and plexis) is the car clutch. Very good for “plaited together”

Syn-gamic – From the Greek verb Γαμέω (eng. gametes) and refers to fertilization (conception). Syngamic might bring giggles as syn and gameo is like a threesome; gametes with companion!

Syn-detic – From the Greek verb Δένω (eng. binding), which exists in many derivatives of things combined. Συν-δετικό, though, refers to something that joins two or more things together, while two things combined are συν-δεδεμένα.  Syndemic sounds closer, although the word is already in use in the health context.

Syn-biotic – From the Greek Bios ; things living together. Very nice.



Vangelis Lympouridis, PhD
Visiting Scholar,
School of Cinematic Arts
University of Southern California

Senior Research Consultant,
Creative Media & Behavioral Health Center
University of Southern California
http://cmbhc.usc.edu

Whole Body Interaction Designer
www.inter-axions.com

vangelis@lympouridis.gr
Tel: +1 (415) 706-2638 



getting rhythms into Ozone OSC via Julian's rhythm kit

I’d say the exercise would be to plug M Bateman’s drum kit — or any MIDI drum pad that AME may have in its DC pile — into a Mac set aside for rhythm work, as Garrett suggested.

M Bateman: run my scratch patch on the Mac that Ozzie (or Pete) assigns for rhythm work in the iStage.  Tweak my dummy code to send the MIDI to some of Julian’s lighting control patch’s parameters.  You can ask Pete or any of the ultra-friendly AME / SC media artist-researchers for tips.  Or one of us (Pete or Garrett or ...) can do that for you.  Mapping MIDI note (or channel) to DMX channel and the scaled velocity to light intensity would be a good and very, very easy start.  (If the hardware is plugged into the Ozone lighting control, it should be a 5-10 minute exercise.)
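The note-to-channel, velocity-to-intensity mapping described above can be sketched as follows. Hypothetical Python; the actual mapping would live in the Max patch, and the modulo fold of notes onto channels is an assumption:

```python
def midi_to_dmx(note: int, velocity: int) -> tuple:
    """Map a MIDI drum hit to a DMX command: the note number picks
    the DMX channel (1-512; the modulo fold is an assumption), and
    velocity (0-127) is scaled to intensity (0-255)."""
    channel = note % 512 + 1
    intensity = round(velocity * 255 / 127)
    return channel, intensity
```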


Then we can talk —

I’d take you through some simple ways to take a sequence and play it back —
with delay, reverse, echo, loop, etc.  Hint: [ zl group ] and some other [ zl ] objects for Max’s standard list manipulation, together with Max’s delay object, etc., could be a start ….
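The playback manipulations mentioned (delay, reverse, loop) can be sketched as list transforms on onset times. Hypothetical Python, standing in for the [ zl ] and delay objects, operating on a pattern of hit times in seconds:

```python
def delay(onsets, dt):
    """Shift every onset later by dt (like Max's delay object)."""
    return [t + dt for t in onsets]

def reverse(onsets):
    """Mirror the onsets within the span of the pattern."""
    end = max(onsets)
    return sorted(end - t for t in onsets)

def loop(onsets, n, period):
    """Repeat the pattern n times, one period apart."""
    return [t + k * period for k in range(n) for t in onsets]
```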

But I’m sure Julian’s got many many such logics already — so it may be most efficient to simply schedule a G+ for Bateman, Julian and one local Max artist like Garrett or you
with screenshare.


Again the key is simply to do the easy step of connecting Bateman’s drum kit to iStage lights.  Everything else will follow in small easy steps with other folks doing the other mappings to sound and video etc.  with existing instruments. 

So we would have five ways of getting rhythms into the iStage network (Ozone):

1) video cam feed,
2) mic feed,
3) drums,
4) accelerometers from Chris Z’s iPods, maybe,
5) accelerometers from Mike K’s kit.


Omar, can you and Julian make sure you can use everything in Julian’s latest shiny kit, so you can bring Julian’s refinements to Phoenix?

I’d like to dedicate Monday morning to plugging cables and code together to see that we can get these different ways of getting rhythm into the iStage Ozone network, mapped to the lights.

Then try it out briefly with Mike K and dancers at 5:30, after Chris Z is finished.

This is just a test parallel to the Lighting Ecology: Darkener and Lighteners game. 

BTW, TouchDesigner would get in the way of getting rhythm exploration going.  And we don’t need it since we’re working collectively with other talents at AME + Synthesis + TML.

Onward!
Xin Wei


On Nov 9, 2014, at 6:33 PM, Peter Weisman <peter.weisman@asu.edu> wrote:

Hi All,
 
I will need a little guidance for this Thursday. I think I know how. But, I am open to any suggestions.
 
I will make arrangements to be available next Monday at 5:30pm.
 
I will be hanging the lights as soon as they get in this week. I will try to make myself available as much as I can.
 
Pete
AME Technical Director
(480)965-9041(O)
 
From: Xin Wei Sha 
Sent: Sunday, November 09, 2014 8:19 AM
To: Michael Krzyzaniak; Julie Akerly; Jessica Rajko; Varsha Iyengar (Student); Ian Shelanskey (Student); Michael Bateman; Peter Weisman; Garrett Johnson
Cc: synthesis-operations@googlegroups.com; Julian Stein; Assegid Kidane; Sylvia Arce
Subject: Adrian Freed: fuelling imagination for inventing entrainments
 
Garrett, Mike K, Julie, Jessica, Varsha, Ian, Bateman, Pete, 
 
Can we schedule time to create and try out some entrainment exercises during the LRR Nov 17 -26?
 
How about we make our first working session as a group on
 
Monday Nov 17 5:30 in the iStage.
 
I’ll work with people in smaller groups.
 
Tuesday afternoon Nov 12, I’ll putter around Brickyard,
or by appointment in the iStage.
 
Ian, Mike Bateman, Pete: let’s run through the lighting controls as Julian gives them to us
Let’s look at Julian’s Jitter interface together.
I asked Omar to prep a video to video mapper patch that he, you and I can 
modify during the workshop to implement different rhythm-mappings.
 
Pete can map Bateman's percussion midi into the lights (it’s a snap via our Max :)
Thursday 11/14, at 3:30  in iStage ?
 
Garrett, Mike K: can you work with Pete (or Ozzie re. video) to map the mic and video inputs in iStage into Julian's rhythm kit:
percussion midi
video camera on a tripod,
overhead video,
 
1 or more audio microphone on tripod,
wireless lapel microphones. 
Julian has code that maps video into rhythm.
Can we try that out Wed 11/12 or Thurs 11/13 in the iStage?
 
 
Let’s find rich language about rhythm, entrainment, temporality, and processuality in movement and sound,
relevant to entrainment and co-ordinated rhythm, as the Lighting and Rhythm workshop looms.
 
On Nov 8, 2014, at 11:54 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
I would like to enrich our imagination before we lock too much into talking about “synchronization” 
and regular periodic clocks in the Lighting and Rhythm workshop.
 
 
On Nov 8, 2014, at 9:41 PM, Adrian Freed <Adrian.Freed@asu.edu> wrote:
 

Adrian Freed: fuelling imagination for inventing entrainments

Here’s a note relevant to entrainment and co-ordinated rhythm, as the Lighting and Rhythm workshop looms.

Adrian Freed has collected a list of what he calls a “semblance typology of entrainments.”
Notice that he does NOT say “types” but merely a typology of semblances, which helps us avoid reification errors.

Let’s think of this as a way to enrich our vocabulary for rhythm, entrainment, temporality, processuality in movement and sound.
Let’s not use this — or any other list of categories — as an absolute universal set of categories sans context. 
See the comments below.


On Nov 8, 2014, at 11:54 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
I would like to enrich our imagination before we lock too much into talking about “synchronization” 
and regular periodic clocks in the Lighting and Rhythm workshop.




On Nov 8, 2014, at 10:01 PM, Adrian Freed <Adrian.Freed@asu.edu> wrote:
I haven't thought about this in a while, but the move to organize the words using the term "together", which I did for the talk at Oxford, is interesting because it allows a formalization in mereotopology à la Whitehead. I would have to provide an interpretation of enclosure and overlap that involves correlation metrics in some structure, for example CPCA
(Correlational Principal Component Analysis): http://www.lv-nus.org/papers%5C2008%5C2008_J_6.pdf



On Nov 9, 2014, at 7:05 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:


__________________________________________________________________________________
Sha Xin Wei, Ph.D. • xinwei@mindspring.com • skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________



Thanks Adrian,

Then I wonder if rank statistics — ordering vs cardinal metrics — could be a compromise.
David Tinapple and Loren Olson here have invented a web system for peer critique called CritViz
that has students rank each other’s projects.  It’s an attention-orienting thing…
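Rank statistics of this sort can be made concrete with Spearman's rank correlation, which compares orderings while ignoring cardinal magnitudes. An illustrative sketch (not CritViz code; no tied values assumed):

```python
def ranks(xs):
    """Rank values 1..n (ties broken by order of appearance)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation: 1 - 6*sum(d^2) / (n*(n^2-1)),
    where d is the difference between paired ranks."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```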

Of course there are all sorts of problems with it, the most serious one being
herding toward mediocrity, or at best herding toward spectacle;
and it is a bandage invented by the necessity of dealing with high student/teacher ratios in studio classes.

The theoretical question is: can we approximate a mereotopology on a space of
Whiteheadian or Simondonian processes using rank ordering,
which may do away with the requirement for coordinate loci?

The Axiom of Choice gives us a well-ordering on any set, so that’s a start,
but there is no effectively decidable way to compute an ordering for an arbitrary set.
I think that’s a good thing.   And it should be drilled into every engineer.
This means that the responsibility for ordering 
shifts to ensembles in milieu rather than individual people or computers.

Hmmm, so where does that leave us?

We turn to our anthropologists, historians, 
and to the canaries in the cage — artists and poets…

There’s a group of faculty here, including Cynthia Selin, who are doing what they call scenario [ design | planning | imagining ]
as a way for people to deal with wicked, messy situations like climate change or developing economies.   They seem very prepared for
narrative techniques applied to ensemble events but don’t know anything about theater or performance.
It seems like a situation ripe for exploration, if we can get past the naive phase of slapping conventional narrative genres from
community theater or gamification or info-visualization onto this.

Very hard to talk about, so I want to build examples here.

Xin Wei