3 orchestrations for Serra: TIME, BODY-MORPH, TENDRILS

Dear Oana, Todd, Chris,

How about these Working Definitions for Serra (from media choreography work): 

orchestration = a suite of instruments with a definite conditioning of the environment
(e.g. breathing, cusp of collapse, prone to chiaroscuro, prone to freezing, etc.)

instrument = a set of Jitter patches (abstractions) with a definite set of parameters


To recap from our design meeting in my office, I think we only have 3 orchestrations to code for Sep:

• Time — explore fresh ways to use / generate index videos in timespace + elastic time (what Evan produced in April), but rigged with video from Oana (Janelle, Connor)

• Body morph — like IL Y A (Evan to update the code he wrote for IL Y A)

•  Tropism / tendrils — inter-body 

By varying its parameters, a richly designed Max abstraction (code) can manifest as widely different instruments: an infinite variety within a single family.
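To make the abstraction-vs-instrument distinction concrete, here is a minimal sketch (in Python, not Max — the class, parameter names, and values are illustrative stand-ins, not from our patches): the code is fixed, and the parameter set alone picks out the instrument.

    class VisualInstrument:
        """One abstraction; the parameter set alone picks out the instrument."""
        def __init__(self, **params):
            self.params = {"decay": 0.9, "turbulence": 0.0, "palette": "neutral"}
            self.params.update(params)

        def step(self, field):
            # Stand-in for the per-frame processing a Jitter patch would do.
            return [v * self.params["decay"] for v in field]

    # Same code, widely different instruments:
    breathing  = VisualInstrument(decay=0.98, turbulence=0.05, palette="warm")
    collapsing = VisualInstrument(decay=0.60, turbulence=0.90, palette="chiaroscuro")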

The last orchestration — Horizon — is not so much code as scenography, lighting, etc.
I do wish we could have the moving light fixed sooner; then I would have asked Julian to show Ian his moving-light code from the 2012-13 Einstein's Dreams workshop, and to show Ginette how we would work expressively with lighting. This should be a Fall project.

Connor Synthesis Core Dev / Jitter: First test of charged bodies with particles

These are really promising dynamics!  Good work.
It’s worth reading up on actual lattice physics, as I recommended to Josh M. — e.g. chapters in Numerical Recipes, and searches on computational physics, lattice physics, and computational chemistry.

(1) Can you show Josh how this works, so he can think about writing some code to compute flow or density features according to our conversations with Melissa and Dee? (See the sketch below.) I will want to work with Josh on the maths algorithms today, approx. 1:15 - 1:40, after my visit to Stauffer Lounge to see Ian and Garrett’s work.

(2) Everyone, please cc documentation of your work to post@synthesis.posthaven.com
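
For (1), here is a minimal sketch of the kind of flow/density feature computation I mean — plain NumPy, with the grid size and bounds as placeholder assumptions, not a spec:

    import numpy as np

    def density_and_flow(pos, vel, grid=(64, 64), bounds=((0.0, 1.0), (0.0, 1.0))):
        """Bin N particles onto a lattice: per-cell density plus mean velocity (flow).
        pos, vel: (N, 2) arrays of particle positions and velocities."""
        (x0, x1), (y0, y1) = bounds
        gx, gy = grid
        # Lattice cell index for each particle, clipped to stay on the grid.
        ix = np.clip(((pos[:, 0] - x0) / (x1 - x0) * gx).astype(int), 0, gx - 1)
        iy = np.clip(((pos[:, 1] - y0) / (y1 - y0) * gy).astype(int), 0, gy - 1)
        flat = ix * gy + iy
        # Density = particle count per cell.
        density = np.bincount(flat, minlength=gx * gy).astype(float)
        # Flow = mean particle velocity per cell (sum of velocities / count).
        flow = np.stack([np.bincount(flat, weights=vel[:, k], minlength=gx * gy)
                         for k in range(2)], axis=-1)
        flow /= np.maximum(density, 1.0)[:, None]
        return density.reshape(gx, gy), flow.reshape(gx, gy, 2)

Spatial gradients of the density grid, or divergence/curl of the flow grid, would then give candidate derived features.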


On Jul 1, 2015, at 12:05 PM, Connor Rawls <Connor.Rawls@asu.edu> wrote:

I attached the video of my first test of using the charged bodies code I've been working on with the fluids and particles. I used two of the puzzle pieces as objects to drive the interaction. The parameters could use some refinement to make the interaction more visible, but it looks neat in an abstract visual sense.

-Connor

<IMG_0141.MOV>

AME story pitches for ASU News

Thanks Kristi :)  You can also read and post in-house questions and notes via post@synthesis.posthaven.com
This research blog is the scratch space for ongoing work at Synthesis.

On Jun 18, 2015, at 9:15 PM, Kristi Garboushian <Kristi.Garboushian@asu.edu> wrote:

Hi Everyone,

Marshall, “our” ASU News reporter, enjoys writing about the arts, so let’s give him something to write about! All the things, actually. Please send any story pitches to me, and I’ll pass them along to him for consideration. He actually asked me to contact him once a week with any ideas, so don’t hesitate if you have an idea that might be even remotely newsworthy!


Thank you,
Kristi

Today 1:00 - 2:00 PM (Phoenix time): check-in and plan Ozone visual instrument development

Dear Evan, Connor, Todd and Chris R, (FYI Oana):

Let’s Skype today 9 PM UK = 4 PM Montreal = 1 PM Phoenix to plan Ozone visual instrument development relevant to Serra.

Quorum: Evan, Connor, XW.
Ideally Todd & Chris R. too if you are available, so we can all be on the same page with your expertise and coordination.

One of us should email notes to post@synthesis.posthaven.com (posted at synthesis.posthaven.com) for Oana and anyone who cannot make it to this discussion.

I’ll be at my host’s home in Cambridge, so I hope to have wifi at that hour.

Cheers,
Xin Wei


On Jun 18, 2015, at 8:04 PM, Christopher Roberts <Christopher.M.Roberts@asu.edu> wrote:

Hi Evan, 

Can you and Connor Rawls set up a time to talk about the current state of the SERRA instruments: what has been done, what is waiting to be done, etc.?

Thanks

Chris

Christopher M Roberts
Assistant Research Professor
School of Arts, Media + Engineering
Arizona State University

two essays by Vera Bühlmann: "The creative conservativeness of computation" & "Reclaiming the Role of the Mathematical in Understanding Media [and the Technics of Digital Communication]" with Michel Serres

Vera Bühlmann is one of the most challenging new thinkers of media and technology around.  
(Applied Virtuality Lab, ETH Zürich, http://www.caad.arch.ethz.ch/)
Well worth engaging.

The creative conservativeness of computation
http://monasandnomos.org/2015/01/29/the-creative-conservativeness-of-computation/

The Sun and its Inverse: Reclaiming the Role of the Mathematical in Understanding Media [and the Technics of Digital Communication] with Michel Serres

Fwd: Touch sensing traces

Begin forwarded message:

Subject: Re: Touch sensing traces

If you want proximity, it has to be capacitive / e-field.
E-fields are non-linear, so you will have to linearize the values.
Paper is environmentally sensitive, especially to humidity, so some kind of calibration/compensation might be needed.
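
A minimal sketch of one way to do that linearization, assuming the raw reading falls off roughly as a power law of distance (the model form and calibration points are illustrative assumptions, not measured values):

    import numpy as np

    # Assumed model: raw = a * distance**(-b) + baseline.
    # Fit a, b from two known-distance calibration readings; re-sample the
    # baseline (reading with no hand present) periodically to track humidity drift.

    def fit_power_law(d1, r1, d2, r2, baseline):
        b = np.log((r1 - baseline) / (r2 - baseline)) / np.log(d2 / d1)
        a = (r1 - baseline) * d1 ** b
        return a, b

    def linearize(raw, a, b, baseline):
        # Invert the model: distance = (a / (raw - baseline)) ** (1/b)
        return (a / np.maximum(raw - baseline, 1e-9)) ** (1.0 / b)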

This ink works on most paper: http://www.electroninks.com/faq/
Here is that pen in action by yours truly: https://www.youtube.com/watch?v=ytDGMQSrJJ0

I am also exploring AgIC ink, which only works on certain coated papers.

As for the exact layout, I can't help you yet. My conductive ink jet printer hasn't arrived yet, so I have done 0 experiments.

I can tell you that the most efficient tiling for uniformly sampling the plane is the triangular one, not the square (by roughly 30%, as I recall). This tiling may be harder to wire up than a square tiling, though. I like to ground all the traces I am not sensing with, and then read capacitance from each sensor area in turn. The keyboard patterns in the e-field sensing article I circulated are pretty well thought out and a good starting point.
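
A sketch of that scan discipline, with a hypothetical pad interface — ground(), release(), and read_capacitance() are stand-ins for whatever the hardware provides, not a real library:

    def scan(pads):
        """Read each sensor pad in turn, grounding all the others as a shield."""
        readings = []
        for active in pads:
            for pad in pads:
                if pad is not active:
                    pad.ground()
            active.release()
            readings.append(active.read_capacitance())
        return readings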

You can prototype this patterning problem with a sharp knife and conductive tape.
I try to make it look easy in that video, but it isn't unless you have good penmanship, lots of paper to experiment with, and quite a few pens. Mistakes are a challenge to deal with.

The main contribution of my video is to show that you can use connectors designed for flat cable in these paper applications. You may have to reinforce the paper with card stock if you expect to pull the paper in and out of the connector often.



On Sep 29, 2014, at 8:35 AM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

Dear Adrian and Natalie,
We have a need for interdigitated traces on paper over an area of about 6 inches by 6 inches, divided into four quadrants of equal size, each 3 inches by 3 inches. We need to sense each quadrant with very high sensitivity, so I need help with what conductive material or paint to use, what distance to leave between the interdigitated fingers, and what shape to use. The patterns need to provide as much information as possible regarding the proximity and position of the hand and fingers, and since you have already done a lot of research in this area we are hoping to save ourselves much time. In short, we need to know what type/brand of conductive paint to use, and the spacing, width, length, and shape of the interdigitated traces to achieve maximum sensitivity when a finger or a hand approaches the paper. Hopefully the sensing data generated will also be linear in terms of distance of the hand or fingers.

Thank you for your help.


Assegid


Connor Rawls: Mira presets for video

This afternoon I managed to isolate and solve the issue that was causing the Mira presets to irrevocably make the particles disappear. Apparently there were 3 shaders I had missed including in the executable build. With those files included in my latest compile, Mira for video is fully functional again!

Also for the Synthesis Dev Team Etudes:
I put together 2 template patches for using the o4.net send/receive objects (1 for sending, 1 for receiving) and 1 large patch that shows blob tracking from camera input all the way through sorting the blobs and normalizing the blob data (x/y/mass). I put the patches on the desktop of the video computer in folders labeled "Network Templates" and "Video Templates".
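
For reference, the sort-and-normalize step in that large patch amounts to something like this — a Python sketch, assuming blobs arrive as (x, y, mass) tuples in pixel units (the function name and argument layout are illustrative, not the patch's actual interface):

    def normalize_blobs(blobs, width, height):
        """Sort blobs by mass, largest first, and normalize x/y/mass to 0..1.
        blobs: list of (x, y, mass) tuples in pixel units, as from blob tracking."""
        blobs = sorted(blobs, key=lambda b: b[2], reverse=True)
        area = float(width * height)
        return [(x / width, y / height, m / area) for x, y, m in blobs]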