Ozone LECE instruments post BuB review

It was fun to do the Atmosphere panel yesterday.   Let’s review the instruments…   

My main comment: instead of creating custom code for a specific state or scenario that displaces the other instruments, we need to make the suite of instruments in Ozone available in parallel as well. The way to do this is to keep learning the suite of Ozone instruments — a fairly small number (5 or so), all resident in memory on the trashcan — where each can be varied widely (and wildly) in look and feel by parameterization and presets. Rather than code from scratch, let's see what you can do by modifying parameters of the rich suite of code that already exists.
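To make the preset idea concrete, here is a toy sketch — made-up parameter and preset names, not the actual Ozone code — of how an instrument's look and feel could be varied by merging a preset over its default parameters:

// Toy sketch of preset-driven parameterization (hypothetical names, not the Ozone code).
// Each instrument exposes a flat parameter dictionary; a preset is a partial override.
const defaults = {
  density: 0.5,   // how much activity: particles, voices, lamps
  hue: 0.1,       // base color
  decay: 2.0,     // seconds for activity to fade back to rest
  gain: 0.8       // output level
};

const presets = {
  sparse_cool: { density: 0.15, hue: 0.55, decay: 6.0 },
  dense_warm:  { density: 0.9,  hue: 0.05, decay: 1.0, gain: 1.0 }
};

// Apply a preset by merging it over the defaults; unspecified parameters keep their defaults.
function applyPreset(name) {
  return Object.assign({}, defaults, presets[name] || {});
}

console.log(applyPreset("sparse_cool"));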

We'll need the full suite for rehearsals this week, because the Dean has invited the President and Provost for April 7-9. A visit may or may not happen that week, but that is the earliest date by which we need to be able to demo the full spectrum of Ozone++, including timespace and elastic.

Ideally these should be addressable under Mira interface tabs on multiple iPads — say 3 iPads (ask Sylvia for the iPad mini so we can install code on that one for me) — so different people can walk on the floor simultaneously, varying different layers.

Visuals:
The visuals from yesterday were OK, but far too sparse, simple, and specialized to be adequate for the VIP shows to come. We need the visuals to be cross-fadable to the older (and visually richer) instruments:
timespace, 
particles, 
Navier-Stokes, 
elastic time,
live portal feed,
canned feeds (in a jit.submatrix + jit.rota),
vectorfield

We must be able to layer in the other instruments. Connor: this was already written in Ozone, so I'd like you to integrate your particular Jitter instruments into the framework that Evan left behind.

Project line art — most of the textures do not work very well for demo purposes, and they never appear as clearly as line art. (Connor: I'll send separately the simple vectorfield patch so you can incorporate it into our kit of video instruments as a basic utility. It would be worthwhile making a particle system that's driven by the vector field without Navier-Stokes, so that the particles will flow unbound.)
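To spell out what I mean by particles driven by the vector field without Navier-Stokes, here is a minimal standalone sketch — the field and names are made up, and in practice this would live in jit.gen or a js object inside the Jitter patch — where each particle simply samples the field at its position and takes an Euler step, so the particles flow without any fluid solve:

// Sketch: particles advected by a fixed vector field, no Navier-Stokes solve.
function field(x, y) {
  // Example field: a gentle swirl around the center of the unit square.
  const dx = x - 0.5, dy = y - 0.5;
  return { u: -dy, v: dx };
}

// Start particles at random positions in the unit square.
const particles = Array.from({ length: 1000 }, () => ({ x: Math.random(), y: Math.random() }));

function step(dt) {
  for (const p of particles) {
    const f = field(p.x, p.y);
    p.x += f.u * dt;              // plain Euler integration along the field
    p.y += f.v * dt;
    p.x = (p.x + 1) % 1;          // wrap at the edges so particles keep flowing unbound
    p.y = (p.y + 1) % 1;
  }
}

for (let i = 0; i < 100; i++) step(1 / 60);
console.log(particles[0]);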

Sound:
Ditto.  This is in a better state of playability.  The main needs are
(1) True spatialization — Julian and Navid had this, so it must be exposed. Don't be shy about asking Mike, Concordia, CNMAT, or IRCAM folks for help. The rich ambisonics recordings are pretty hard to appreciate mixed down as they are.

(2) Much clearer relation to movement (and location) — for ideation purposes, let's put in a sonic reticulation. Julian's jitgrid2snd, Navid's wind, and Garrett's tuned scales are the most successful examples. Ask Todd for the percussive code, or write it based on Julian's rhythm.

Garrett has a great idea: write a separate Mira tab for rhythm flow from instrument to instrument — cross-modally:
lights
fans
video
sound
Each of these instruments should be able to register an i/o parameter of type rhythm_type, which is an FTM matrix a la Julian's rhythm kit.
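As a placeholder for what the shared rhythm i/o could look like — this is only a guess at the shape, not Julian's actual FTM matrix format — think of a rhythm as a list of (onset time, weight) rows that any instrument can publish or subscribe to:

// Guess at a rhythm_type shape: an array of [onsetTimeSeconds, weight] rows,
// plus a tiny registry so lights, fans, video, and sound can exchange rhythms.
const registry = {};   // instrument name -> list of callbacks

function registerRhythmInput(instrument, callback) {
  (registry[instrument] = registry[instrument] || []).push(callback);
}

function sendRhythm(instrument, rhythm) {
  for (const cb of registry[instrument] || []) cb(rhythm);
}

// Example: fans subscribe, and sound publishes a simple 4-onset pattern.
registerRhythmInput("fans", r => console.log("fans got", r.length, "onsets"));
sendRhythm("fans", [[0.0, 1.0], [0.5, 0.4], [1.0, 0.8], [1.5, 0.4]]);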

Connor: you should run the default sound instrument patches directly on your own visual computer, so that they run with sound even if they have to play out through your own local computer speakers. All video instruments must make (local) sound by default, if only as an experiential design aid. This means that the video computer should have its own audio out to the sound mixer, etc.

Lighting:
All lights need to be addressed under a uniform interface.
Right now the floor lamps, overhead LEDs, and fans
seem separate. Even if the device-level interfaces are distinct, they should be addressable by a uniform channel map.
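What I mean by a uniform channel map, sketched with made-up device writers (the real ones would speak DMX, a serial fan controller, etc.): one table of logical channels, so an instrument only ever calls setChannel(n, value) and never needs to know what kind of device sits behind the channel.

// Sketch of a uniform channel map over distinct device-level interfaces.
function writeFloorLamp(id, v)   { console.log("floor lamp", id, "->", v); }
function writeOverheadLED(id, v) { console.log("LED", id, "->", v); }
function writeFan(id, v)         { console.log("fan", id, "->", v); }

// Logical channel -> device writer. Adding a device means adding rows, not changing instruments.
const channelMap = [
  { write: v => writeFloorLamp(1, v) },
  { write: v => writeFloorLamp(2, v) },
  { write: v => writeOverheadLED(1, v) },
  { write: v => writeFan(1, v) }
];

function setChannel(n, value) {
  channelMap[n].write(Math.max(0, Math.min(1, value)));  // clamp to 0..1
}

setChannel(3, 0.7);   // drive the fan without knowing it is a fan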

I’m not sure how to treat the iCue and Byron’s motorized mount.  I’m inclined to treat them as a new class of Ozone instruments: bodies.
For now these bodies are merely objects made of light.  They should take a parameter of type rhythm to introduce warbles in time (i.e. speed), and space (displacement).
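A rough sketch of what a rhythm parameter warbling a body could mean — hypothetical code, not the iCue or motorized-mount API: treat the rhythm's current weight as a modulation on the body's nominal speed and position.

// Sketch: warble a light-body's speed (time) and displacement (space) from a rhythm.
// rhythm: array of [onsetTimeSeconds, weight]; return the weight of the latest onset at time t.
function rhythmWeightAt(rhythm, t) {
  let w = 0;
  for (const [onset, weight] of rhythm) if (onset <= t) w = weight;
  return w;
}

function bodyState(t, rhythm) {
  const w = rhythmWeightAt(rhythm, t % 2.0);              // loop a 2-second rhythm
  return {
    speed: 1.0 + 0.5 * w,                                  // warble in time (speed)
    displacement: 0.1 * w * Math.sin(2 * Math.PI * t)      // warble in space (offset)
  };
}

const rhythm = [[0.0, 1.0], [0.5, 0.3], [1.0, 0.8], [1.5, 0.3]];
console.log(bodyState(1.2, rhythm));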

The fans can be part of yet another new class of Ozone instruments : field.

Nikolaos Chandolias: choreographer Dimitris Papaioannou, Still Life

From Nikolaos Chandolias <nikos.chandolias@gmail.com>:

I just thought of these projects; they are not necessarily concerned with climate change, but the techniques they use could be re-appropriated in an artistic context to help us think about climate change.

Angela Morelli is an Italian information and graphic designer based in London. She became known through her research on The Global Water Footprint of Humanity, presented at TEDxOslo, where she shared her research into the water crisis and eloquently illustrated the value of information design for communicating global issues: http://www.angelamorelli.com/water/

The choreographer Dimitris Papaioannou and his performance titled STILL LIFE, in which a sheet of nylon hanging from the ceiling could be raised or lowered over a performance space filled with haze, creating the sense of a cloud. The performers were able to manipulate it with different props or their hands. I was thinking that a motorised, movable nylon ceiling with some fans and some haze could be very suitable for the iMonsoon event. Please find attached some stunning images:




Finally, the water-activated art by Peregrine Church. He decorates public spaces with graffiti that only appears in the rain. The trick is achieved through copious use of superhydrophobic materials: in effect, the painted parts of the sidewalk remain dry. The material, called NeverWet, can be bought at Home Depot stores and is fairly cheap. This could also be an analog prop used in relation to the digital media in the space, creating interesting explorations of the theme of climate.

Cheers,
Nikos :)

Connor Rawls: O4 setup in iStage

On Jan 28, 2015, at 8:05 PM, Connor Rawls <Connor.Rawls@asu.edu> wrote:

I finished my diagram of how O4 is set up in the iStage environment (attached). Just some notes from when I was in there today:

-There are some extra power cables on the floor. I think they are from the setup for Tim Place's talk in the space.

-The Max 6 installation for the new Sound computer under the student account has been changed. There are now 2 installations of Max 6 (6.1.7 & 6.1.9) on the account and neither will load the sound patches. I had to run the sound today via the "Synthesis" user account, which seems to have a normal installation (6.1.9).

-I have figured out why we've been having issues on both the old & new sound computers with iPad connectivity for Mira. It turns out that both computers run a keyed license of Max, which requires a constant internet connection. When we make a wireless network for the iPad, the internet connection is lost and the key-access error occurs. With this error, most of Max's features are disabled, including networking. Possible workarounds would be to put a fixed license on the machine, or to turn the sound patches into standalone executables.

Atmosphere and Place@Balance/Unbalance March 2015 Ideas

Atmosphere and Place@Balance/Unbalance
March 2015

The basic idea is to run a set of events in AME's iStage blackbox for Balance/Unbalance —

Synthesis and AME will host in the Matthews Center iStage[1] an atmosphere for thought in which we would stage a series of dialogues on sustainability topics.    We will configure the iStage as a responsive environment whose potential response to activity can be dialled to different microclimates. 

Each microclimate would host one presentation at a time.  For example one microclimate could be a Chorusing space in which vocal gestures or movements are subtly multiplied in sound or shadow.   Another microclimate could be a Chiaroscuro space in which people who speak draw light (or dark) to themselves.   Other microclimates may soften, elongate or shorten activity using sound and visuals and modest props (aka objects to think with). 

Depending on acoustic interference and how many people can fit comfortably in this large blackbox space, more than one microclimate may be active at the same time in the space.   The important point is that there is no fixed seating in the space so each presenter ensemble can arrange the visitors and themselves as they see fit: e.g. a discussion circle, a conventional speaker plus a turn-taking baton, or an event in which presenters and visitors remain in motion.

Within this space Atmosphere and Place would like to have a series of conversations — not 20-minute academic papers but dialogues among two or three people. These dialogues would be rehearsed ahead of time and, like a jazz band or structured improvisational ensemble, the conversation as actually "performed" would touch on the rehearsed points but not as formally prepared, written-and-read talks.[2] Several presenter ensembles would like to "perform"/think/talk in this space. Conventional imagery (i.e. prepared slides of images) may be presented on the walls or floor or ceiling, depending on the availability of equipment.

Ron Broglio is working with the ASU Museum of Walking (Angela Ellsworth) in a conversation about walking as an environmental practice — invoking walking artists from Richard Long to Janet Cardiff.

Sha Xin Wei will work with an ensemble to create a zone in which small hand or walking movements will connote or conjure different textures of wind or rain in sound and lighting. In this zone, Sha, Dehlia Hannah and participants will discuss and experience the differences between representing versus inhabiting climate.

[
Other presenter ensemble topics include…
]



________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________

El Bulli, Ferran Adrià: art research vs art production

Every year, Ferran closes his restaurant El Bulli for 6 months and moves to Barcelona to research ideas for his next season.   This video shows their painstaking, detailed, inch by inch, drop by drop research during those quiet months, with no production and no audience but the maker-researchers themselves.

This shows patience, exactitude, intricate teamwork and rigorous documentation that we would do well to emulate in our own research atelier.
 ________________________________________________
Ferran Adrià listening to suggestions from the French sommelier.
The difference between art research — as in TML or Synthesis — and art production, as in a stage production for a "show" with an audience other than the makers.

26:49
That's just the sort of input we want.

Our problem is, there are a thousand combinations.
Now we know which direction.

At the moment, the taste doesn’t matter to us.
That comes later.

At the moment, what matters
is whether something is magical
and whether it opens up a new path.

And later, in the restaurant, the dishes are created.

Constructed.
Yes.

Now it’s more research,
and there [at the El Bulli restaurant] it’s more: research with creativity.
27:25

 ________________________________________________
Senior chef advising a junior chef on how to document work more meticulously so as to make the research available for others, later, to use in production. — Documenting your own code and your own practice with this rigour converts hacking into research, generating information that is portable and sharable beyond the event of its first birth.

40:41
Here there's not even a recipe yet.
Let’s write the recipe here.

That's the same dish.

Yes but let’s separate it, 
So you can see the two ideas.
At El Bulli, we won’t find anything any more.

Then somebody else should write it.

Nobody else, just more text.
Let’s not scrimp on text.
For dummies, so to speak.
At least for me.

Yes, and I’m dumb too.

I’m dumb and  I want things detailed.

Ferran: You should already start sorting the reports according to 1, 2, or 3 stars…
41:33
 ________________________________________________

HCII experiments, for paper write up for FEB 18

Hi

We agreed to a timetable to do a bit more thoughtful experimenting:

Friday 11 AM (??)

Weekend: implement solutions in BY390

Tuesday 12 noon - 1:00  EXPERIMENT

Wed 11:00 - 1:00 EXPERIMENT
night: post low-res, good-audio raw video so SC + TML can see both videos

Thurs write up
individual paragraphs
lit reviews!

Fri: review paper over Skype
XW review over the weekend; Evan or XW submit




Please think about SOCIAL etiquette to solve these questions, and type ideas into the Google doc so we can look them over.

• How should people coordinate and interleave their interventions using tokens, gesture, vocal signals, etc.?

• How can we handle time zone differences: three hours between Montreal and Phoenix, 8 or 9 hours between Phoenix and London or Athens?

• How do people mix live events with recorded audio, video or documents?

• How do people mix the table with “foveal” media such as the talking-heads videos of remote interlocutors?


We have no more time for real engineering — only for hardware tweaks such as moving gear, installing a better microphone, etc.

Need MUCH clearer video on the table top.
Use a monitor on the table?? We cannot solve the monitor feedback problem this week, so
we need to figure out a different solution to improve video quality:
install a much better projector OR mount the projector CLOSE (1-meter image) to the table?
Garrett: work with Connor + Ozzie on video display.

Try out baton ideas:
fake fiducial markers

Install omni mic:
borrow one now, order one for BY390?

Think about how we can deal with the time zone difference
and implement social etiquette.

Review other solutions for augmented office meetings.
Xerox PARC, MIT, and labs in Japan did a HUGE amount of research on this:
who can REVIEW that literature
and write a paragraph for this paper?

Survey freehand analog drawing-based communication:
see solutions with chalkboards and very smart people.

heartbeat workshop Feb 15-20; organizational meeting Monday Feb 16, afternoon iStage

Hi 

Let's plan on meeting John & Teoma in the iStage on Monday Feb 16 for a planning session. Hour TBA (probably after noon).

John and Teoma — can you check the preliminary schedule?   Please edit freely!   Baton is yours.

Please coordinate with 
Mike Krzyzaniak (PhD music grad) on the correlation + dance; 
Garrett on exchanging data into Julian Stein’s (FTM) matrix representation via OSC (like last year?) and finding musicians to work with;
Dr. Chris Roberts (Synthesis) for overall coordination.

I list the people and contacts I could imagine being interested / interesting & available this week.


Mike, Julie + Varsha (and Jessica) — Mike, can you organize / put out the word for some dancers to come work with Teoma and John next week, regularly through the week?

For their "heartbeat" project (https://ccinternaltime.wordpress.com/) it is best to have the same two dancers work with the music and techniques over the entire period. But some rotation is fun and fine too.

John would like to work with some musicians as well, so let me rely on Garrett and you to put out the word when John, Teoma, and you work out improv session(s). I'll also cc Prof. Kotoka Suzuki in the composition faculty and the Synthesis grad students.

Kristi, Garrett — please let people know the URL for the workshop again.

 



o4.track (video +sensor osc data), gyro (Was: Synthesis Center / Inertial Sensor Fusion)

Great!

Mike, can you generate data in Julian's data structure and store it in a shared directory for us all,
along with the journaled video? Julian Stein wrote the object for journaling data.

On Nov 10, 2014, at 1:07 AM, Julian Stein <julian.stein@gmail.com> wrote:
Also included in the O4.rhyth_abs is a folder labeled o4.track. This features a simple system for recording and playing video with a synchronized osc data stream.
I’ll cc this to the SC team so they can point out those utilities on our github.

It’d be great if you can give the Signal Processing group some Real Live Data to matlab offline this week, as a warmup to Teoma + John’s data the week of Feb 15.

We must have video journaled as well, always.
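For the offline MATLAB work the essential point is that every data record carries a timestamp relative to the video's start, so the two streams can be re-aligned later. A generic sketch of such a journal — not the o4.track API, just the idea:

// Generic sketch: journal incoming sensor data against a video recording,
// one JSON line per message, timestamped in seconds since the video started.
const fs = require("fs");

const videoStart = Date.now();                 // capture when the video starts rolling
const out = fs.createWriteStream("session_journal.txt");

function journal(address, args) {
  const t = (Date.now() - videoStart) / 1000;  // seconds since video start
  out.write(JSON.stringify({ t, address, args }) + "\n");
}

// Example records, as if they arrived over OSC.
journal("/gyro/1", [0.01, -0.02, 0.98]);
journal("/accel/1", [0.0, 0.1, 9.7]);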

I’d be interested in seeing an informal brownbag talk about Lyapunov exponents one of those mornings of the week of Feb 15, together with some analysis of the data. 

Let’s cc Adrian Freed and John MacCallum on this “gyro" thread —
Adrian’s got the most insight into this  and could help us make some actual scientific headway
toward publishable results.

My question is : By doing some stats on clouds of orientation measurements
can we get some measure of collective intention (coordinated attention)
not necessarily at any one instant of time (a meaningless notion in a relativistic world like ours) — 
but in some generalized (collective) specious present?
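One concrete way to start, under simplifying assumptions: represent each person's orientation by a unit vector (say, the sensor's forward axis) and measure how tightly the cloud of vectors clusters over a sliding window. The mean resultant length is 1 when everyone points the same way and falls toward 0 as orientations scatter — a crude first stand-in for coordinated attention.

// Sketch: mean resultant length of a cloud of orientation vectors.
function meanResultantLength(vectors) {
  let sx = 0, sy = 0, sz = 0;
  for (const v of vectors) {
    const n = Math.hypot(v[0], v[1], v[2]);    // normalize, in case inputs are not unit length
    sx += v[0] / n; sy += v[1] / n; sz += v[2] / n;
  }
  return Math.hypot(sx, sy, sz) / vectors.length;  // 1 = aligned, near 0 = scattered
}

// Three people roughly facing the same way vs. three facing different ways.
console.log(meanResultantLength([[1, 0, 0], [0.9, 0.1, 0], [0.95, -0.05, 0.1]]));  // near 1
console.log(meanResultantLength([[1, 0, 0], [0, 1, 0], [0, 0, 1]]));               // lower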

Let’s plan John and Teoma’s workshop hour by hour schedule this coming week at a tea?

Kristi or Garrett, or __: please let us know when the "heartbeat" workshop weebly site is posted and linked from the Synthesis research page, OK?

Cheers,
Xin Wei

On Feb 6, 2015, at 12:13 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

I translated Seb's sensor fusion algorithm into Javascript to be used within Max/MSP:


There was still quite a bit of drift when I tested it, but I was only using a 100 Hz sample rate, which I suspect may have been the main issue.
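For reference while debugging the drift: the bare-bones complementary-filter idea that Madgwick-style fusion elaborates is to integrate the gyro for short-term orientation and pull slowly toward the accelerometer's gravity estimate for the long term. The sketch below is not Seb's algorithm — it assumes one particular axis convention and does no magnetometer/yaw correction, which is one reason yaw still drifts.

// Simplified complementary filter for pitch and roll (radians). Not Madgwick's algorithm.
let pitch = 0, roll = 0;
const alpha = 0.98;   // trust the gyro 98%, the accelerometer 2% per step

// gyro: [gx, gy, gz] in rad/s; accel: [ax, ay, az] in m/s^2; dt in seconds.
// Assumes x forward, y right, z up; other conventions change the formulas.
function update(gyro, accel, dt) {
  // Short term: integrate angular velocity (roll about x, pitch about y).
  const rollGyro  = roll  + gyro[0] * dt;
  const pitchGyro = pitch + gyro[1] * dt;

  // Long term: gravity direction from the accelerometer.
  const rollAcc  = Math.atan2(accel[1], accel[2]);
  const pitchAcc = Math.atan2(-accel[0], Math.sqrt(accel[1] ** 2 + accel[2] ** 2));

  roll  = alpha * rollGyro  + (1 - alpha) * rollAcc;
  pitch = alpha * pitchGyro + (1 - alpha) * pitchAcc;
}

// At 100 Hz, dt = 0.01 s; a higher sample rate shrinks the integration error per step.
update([0.01, 0.0, 0.0], [0.0, 0.3, 9.76], 0.01);
console.log({ pitch, roll });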

Mike


 On Sat, Jan 31, 2015 at 3:45 PM, Adrian Freed <adrian@adrianfreed.com> wrote:
  Thanks Xin Wei.
  It would indeed be good to at least develop a road map for this important work. We should bring the folks from x-io
  into the discussion because they have moved their considerable energies and skills further into this space in 2015.
 
  I also want to clarify my relative silence on this front. As well as weathering some perfect storms last year, I found
  the following project attractive from the perspective of separating concerns for this orientation work: http://store.sixense.com/collections/stem
  They are still unfortunately in pre-order land with a 2-3 month shipping time. Such a system would complement commercial and inertial measuring systems well
  by providing a "ground truth" ("ground fib") anchored to their beacon transmitter. The Sixense system has limited range for many of our applications,
  which brings up the question (again as a separation of concerns, not to limit our perspectives) of scale. Many folks are thinking about orientation and inertial
  sensing for each digit of the hand (via rings).
 
  For the meeting we should prepare to share something about our favored use scenarios.

 On Jan 31, 2015, at 1:37 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
 
 
  Can you — Adrian and Mike — Doodle a Skype to talk about who should do what, and when, to get gyro information from multiple (parts of) bodies
  into our Max platform so Mike and the signal-processing maths folks can look at the data?
 
  This Skype should include at least one of our signal processing PhDs as well?
 
  Mike can talk about what he's doing here, and get your advice on how we should proceed:
  write our own gyro (orientation) feature accumulator
  get a pre-alpha version of x-OSC hw + sw from Seb Madgwick that incorporates that data
  adapt something from the odot package that we can use now
  WAIT till orientation data can be integrated easily (when, 2015?)
 
  Half an hour should suffice.
  I don’t have to be at this Skype as long as there’s a precise outcome and productive decision that’ll lead us to computing some (cor)relations on streams of orientations as a start...
 
  Cheers,
  Xin Wei
 
  __________


On Jan 31, 2015, at 1:27 PM, Vangelis <vl_artcode@yahoo.com> wrote:

 Hello!
  Yes, there is great demand for something that works for sensor fusion of inertial sensors, but I think the best way to do it is as part of o., so as to benefit every inertial setup out there. It will take ages for Seb to implement it for x-OSC, and that would be an exclusive benefit. Seb's PhD is out there and I am sure he will help by sharing new code for solving the problem. The question is: can we do this? :)
  My warm regards to everyone!
  v


On Jan 30, 2015 6:45 PM, Adrian Freed <adrian@adrianfreed.com> wrote:

  Hi.
  The experts on your question work at x-io. Seb Madgwick wrote the code a lot of people around the world are using for sensor
  fusion in IMUs.
  Are you using their IMU (x-OSC) as a source of inertial data?
 
  We started to integrate Seb's code into Max/MSP but concluded it would be better to wait for Seb
  to build it into x-OSC itself. There are some important reasons that this is a better approach, e.g.,
  reasoning about sensor fusion in a context with packet loss is difficult.
 
  It is possible Vangelis persisted with the Max/MSP route
 
 
  On Jan 30, 2015, at 3:01 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:
 
  Hi Adrian,
 
  I am a PhD student at ASU and I work with Xin Wei at the Synthesis Center. We are interested in fusing inertial sensor data (accel/gyro/mag) to give us reliable orientation (and possibly position) information. Do you have an implementation of such an algorithm that we can use in (or port to) Max/MSP?
 
  Thanks,
  Mike