AME story pitches for ASU News

Thanks Kristi :)  You can also read and post in-house questions and notes via post@synthesis.posthaven.com
This research blog is the scratch space for ongoing work at Synthesis.

On Jun 18, 2015, at 9:15 PM, Kristi Garboushian <Kristi.Garboushian@asu.edu> wrote:

Hi Everyone,

Marshall, “our” ASU News reporter, enjoys writing about the arts, so let’s give him something to write about! All the things, actually. Please send any story pitches to me, and I’ll pass them along to him for consideration. He actually asked me to contact him once a week with any ideas, so don’t hesitate if you have an idea that might be even remotely newsworthy!


Thank you,
Kristi

Today 1:00 - 2:00 PM (Phoenix time): check-in and plan Ozone visual instrument development

Dear Evan, Connor, Todd and Chris R (FYI Oana):

Let’s Skype today 9 PM UK = 4 PM Montreal = 1 PM Phoenix to plan Ozone visual instrument development relevant to Serra.

Quorum: Evan, Connor, XW.
Ideally Todd & Chris R. too if you are available, so we can all be on the same page with your expertise and coordination.

One of us should email notes to post@synthesis.posthaven.com (they get posted on synthesis.posthaven.com) for Oana and those who cannot make it to this discussion.

I’ll be at my host’s home in Cambridge, so I hope to have wifi at that hour.

Cheers,
Xin Wei


On Jun 18, 2015, at 8:04 PM, Christopher Roberts <Christopher.M.Roberts@asu.edu> wrote:

Hi Evan, 

Can you and Connor Rawls set up a time to talk about the current state of the SERRA instruments: what has been done, what is waiting to be done, etc.?

Thanks

Chris

Christopher M Roberts
Assistant Research Professor
School of Arts, Media + Engineering
Arizona State University

two essays by Vera Bühlmann: "The creative conservativeness of computation" & "Reclaiming the Role of the Mathematical in Understanding Media [and the Technics of Digital Communication]" with Michel Serres

Vera Bühlmann is one of the most challenging new thinkers of media and technology around.  
(Applied Virtuality Lab, ETH Zürich, http://www.caad.arch.ethz.ch/  )
Well worth engaging.

The creative conservativeness of computation
http://monasandnomos.org/2015/01/29/the-creative-conservativeness-of-computation/

The Sun and its Inverse: Reclaiming the Role of the Mathematical in Understanding Media [and the Technics of Digital Communication] with Michel Serres

Fwd: Touch sensing traces

Begin forwarded message:

Subject: Re: Touch sensing traces

If you want proximity it has to be capacitive/e-field.
E-field response is non-linear, so you will have to linearize the values.
Paper is environmentally sensitive, especially to humidity. Some kind of calibration/compensation might be needed.
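A minimal numpy sketch of the kind of linearization meant here, assuming the raw reading follows an inverse power law in distance (raw ≈ baseline + k/d^p); the model, the baseline, and the calibration numbers are placeholders, and per the note above the fit would need redoing as humidity drifts:

import numpy as np

def fit_efield_model(distances_cm, raw_values, baseline):
    # Fit log(raw - baseline) = log(k) - p*log(d) as a straight line.
    log_d = np.log(np.asarray(distances_cm, dtype=float))
    log_r = np.log(np.asarray(raw_values, dtype=float) - baseline)
    slope, intercept = np.polyfit(log_d, log_r, 1)
    return np.exp(intercept), -slope            # k, p

def reading_to_distance(raw, baseline, k, p):
    # Invert the fitted model so downstream code sees a (roughly) linear distance.
    return (k / max(raw - baseline, 1e-9)) ** (1.0 / p)

# Hypothetical calibration samples taken at 2, 4, and 8 cm:
k, p = fit_efield_model([2, 4, 8], [910, 430, 190], baseline=60)
print(reading_to_distance(500, baseline=60, k=k, p=p))   # distance estimate in cm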

This ink works on most paper: http://www.electroninks.com/faq/
Here is that pen in action by yours truly: https://www.youtube.com/watch?v=ytDGMQSrJJ0

I am also exploring AgIC ink, which only works on certain coated papers.

As for the exact layout, I can't help you yet. My conductive inkjet printer hasn't arrived yet, so I have done zero experiments.

I can tell you that the most efficient tiling for uniformly sampling the plane is the triangular one, not the square one (by roughly 30%, as I recall). This tiling may be harder to wire up than a square tiling, though. I like to ground all the traces I am not currently sensing and then read capacitance from each sensor area in turn. The keyboard patterns in the e-field sensing article I circulated are pretty well thought out and a good starting point.
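A quick way to sanity-check that figure (purely illustrative; the grid sizes, pitch, and probe region below are arbitrary): lay out equal-density square and triangular grids and measure the worst-case distance from a random point to the nearest sensor site.

import numpy as np
from scipy.spatial import cKDTree

a = 1.0                                  # square-grid pitch (arbitrary units)
s = a * np.sqrt(2.0 / np.sqrt(3.0))      # triangular pitch with the same sensor density

def square_lattice(n, pitch):
    i, j = np.mgrid[-n:n + 1, -n:n + 1]
    return np.column_stack([i.ravel() * pitch, j.ravel() * pitch])

def triangular_lattice(n, pitch):
    i, j = np.mgrid[-n:n + 1, -n:n + 1]
    x = i * pitch + (j % 2) * pitch / 2.0     # offset every other row
    y = j * pitch * np.sqrt(3.0) / 2.0
    return np.column_stack([x.ravel(), y.ravel()])

probes = np.random.default_rng(0).uniform(-3, 3, size=(200_000, 2))  # stay well inside both grids

for name, pts in [("square", square_lattice(20, a)),
                  ("triangular", triangular_lattice(20, s))]:
    gap, _ = cKDTree(pts).query(probes)
    print(name, "worst-case gap:", round(float(gap.max()), 3))

# Prints roughly 0.71 (square) vs 0.62 (triangular): to match the triangular
# layout's worst-case coverage, a square grid needs about (0.71/0.62)**2 ≈ 1.3x
# as many sensors, consistent with the ~30% figure recalled above.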

You can prototype this patterning problem with a sharp knife and conductive tape.  
I try to make it look easy in that video, but it isn't unless you have good penmanship, lots of paper to experiment with, and quite a few pens. Mistakes are a challenge to deal with.

The main contribution of my video is to show that you can use connectors designed for flat cable in these paper applications.
You may have to reinforce the paper with card stock if you expect to pull the paper in and out of the connector often.



On Sep 29, 2014, at 8:35 AM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

Dear Adrian and Natalie,
We need interdigitated traces on paper over an area of about 6 inches by 6 inches, divided into four quadrants of equal size, i.e. each quadrant 3 inches by 3 inches. We need to sense each quadrant with very high sensitivity, so I need help with what conductive material or paint to use, what spacing to use between the interdigitated fingers, and what shape to use. The patterns need to provide as much information as possible about the proximity and position of the hand and fingers, and since you have already done a lot of research in this area, we are hoping to save ourselves much time. Specifically, we need to know what type/brand of conductive paint to use, the spacing between the interdigitated traces, the width and length of the traces, and the shape of the traces, to achieve maximum sensitivity when a finger or a hand approaches the paper. Ideally, the sensing data generated will also be linear in the distance of the hand or fingers.

Thank you for your help.


Assegid

From mobile

Connor Rawls: Mira presets for video

This afternoon I managed to isolate and solve the issue that was causing the Mira presets to irrevocably make the particles disappear. Apparently there were 3 shaders I had missed including in the executable build. With those files included in my latest compile, Mira for video is fully functional again!

Also for the Synthesis Dev Team Etudes:
I put together 2 template patches for using the o4.net send/receive objects (1 for sending, 1 for receiving) and 1 large patch that shows blob tracking from camera input all the way through sorting the blobs and normalizing the blob data (x/y/mass). I put the patches on the desktop of the video computer in folders labeled "Network Templates" and "Video Templates".
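Not the patch itself, but a plain-Python sketch of what "sorting and normalizing" blob data means here, so the convention is written down somewhere (the field order and scaling are my assumptions, not read off Connor's patch):

def normalize_blobs(blobs, frame_w, frame_h):
    # blobs: list of (x_px, y_px, mass_px) tuples from the blob tracker.
    normalized = [(x / frame_w,                  # 0..1 across the frame
                   y / frame_h,                  # 0..1 down the frame
                   mass / (frame_w * frame_h))   # fraction of the frame area
                  for x, y, mass in blobs]
    # Largest blob first, so downstream patches can simply take blob 0.
    return sorted(normalized, key=lambda b: b[2], reverse=True)

print(normalize_blobs([(320, 240, 1200), (40, 60, 300)], 640, 480))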

zeroconf

As for naming, can we fork a new branch of the codebase and retain backward compatibility by using the same inlets/outlets (and arguments) as the old objects, so there's source-level compatibility in Max patches?
Can rewriting the zeroconf objects to use Max SDK threading be transparent to the Max coder who invokes the zeroconf object?

We could start inventing a new series:
sc.zeroconf.*
…I suppose, to mark this new era more clearly. But it seems like less principled software engineering :)

Xin Wei

_________________________________________________________________________________________________

On May 21, 2015, at 9:13 AM, Todd Ingalls <TestCase@asu.edu> wrote:

Hi
The networking infrastructure of much of the tml code relies on the zeroconf objects, which let one publish services such as OSC streams that others can discover. It is a nice mechanism, but the objects currently have problems when listening for services on a local wired network while wireless is also on, which means the computers used to run the system have no external network connection. I have modified them to be able to listen on only a selected network interface, but I have come across another problem: there is some incompatibility between the current Max SDK and the threading code in the objects, which causes them to crash Max when the objects are freed. Since they do not use the Max SDK threading API, they should probably be switched to it. I am willing to put that work in, since it has been 4 years since the original objects were updated, but it does bring up some questions for me. For the sustainability of SC, how much effort should go into objects that are not being actively updated/fixed by others (for instance, was this the best choice without having someone who could update the objects if necessary)? And secondly, what do we want to call the objects we write? I could simply call the new objects zeroconf2.*, or do we call them sc.zeroconf.*, or some other convention?
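The tml objects themselves are Max externals written in C against the Max SDK, so the snippet below is not that code; it is only a python-zeroconf sketch of the two ideas in Todd's message: discovering advertised OSC streams, and restricting discovery to one chosen (wired) interface so a second wireless interface doesn't interfere. The service type and interface address are made-up examples.

import time
from zeroconf import Zeroconf, ServiceBrowser

class OscListener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print("found", name, "on port", info.port)

    def remove_service(self, zc, type_, name):
        print("lost", name)

    def update_service(self, zc, type_, name):
        pass

# Bind mDNS to the wired interface only (hypothetical address), analogous to
# "listen on only a selected network interface" above.
zc = Zeroconf(interfaces=["10.0.0.17"])
browser = ServiceBrowser(zc, "_osc._udp.local.", listener=OscListener())
try:
    time.sleep(10)      # let service announcements arrive
finally:
    zc.close()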

Re: Dollhouse Poincaré Section etude (was: optitrack system in istage)

Poincaré Section etude for Dollhouse (Ian, using Connor's templates in Jitter): project video from overhead onto a board as it is moved around, held at various heights above the floor.

On May 21, 2015, at 3:54 PM, Todd Ingalls <TestCase@asu.edu> wrote:

I thought it was quite easy to calibrate as long as the cameras can see the markers.

I thought so. So, Ozzie, can we set it up ASAP in the iStage?

The work then shifts downstream to computing an affine transform of arbitrary video input streams and projecting the result from the overhead projector onto a 1m x 2m handheld board with markers affixed to its edges. Aim: realtime, zero latency, max resolution, max framerate (to avoid jerky playback).
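The actual etude lives in Jitter (Connor's templates), but as an illustration of the affine-transform step, here is an OpenCV/numpy sketch: given three tracked board corners already converted to projector pixel coordinates by some prior calibration (assumed, not shown), warp each incoming frame onto the board. All coordinates below are hypothetical.

import cv2
import numpy as np

PROJ_W, PROJ_H = 1920, 1080                    # projector raster size

def warp_frame_to_board(frame, board_corners_px):
    # board_corners_px: three board corners (e.g. top-left, top-right, bottom-left)
    # in projector pixels, updated from the tracker every frame.
    h, w = frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [0, h]])  # matching corners of the video frame
    dst = np.float32(board_corners_px)
    M = cv2.getAffineTransform(src, dst)
    # Map the video into projector space; pixels off the board stay black.
    return cv2.warpAffine(frame, M, (PROJ_W, PROJ_H))

frame = np.zeros((480, 640, 3), np.uint8)               # stand-in for a video frame
corners = [(700, 300), (1300, 350), (650, 760)]          # tracked board corners, made up
out = warp_frame_to_board(frame, corners)                # send `out` to the projector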

We can drop the last requirement (max framerate) and simply project a static image.

Xin Wei

_______________________________________________________________________________________

On May 21, 2015, at 3:26 PM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

The cameras produce their own IR beams, which they use for detecting the markers. In fact, the previous discussion is based on the assumption that other ambient artificial and natural light is reduced to a minimum. The performance of the system while projecting video on the floor will have to be tested. The difficulty in calibration results from not enough cameras detecting the three 1/2" markers on the calibration wand at the same time, so one has to move through the space long enough until adequate data samples are collected. If the plan is to use larger markers or objects covered with retroreflective tape, the cameras may have an easier time detecting the object, although data accuracy may suffer a bit due to the larger size of the object.

From: Xin Wei Sha
Sent: Thursday, May 21, 2015 3:04 PM
To: Assegid Kidane
Cc: Todd Ingalls; Peter Weisman; Ian Shelanskey; Joshua Gigantino (Student); Christopher Roberts
Subject: Re: optitrack system in istage

15’ x 15’ is fine for demo purposes.
My bigger concern is the time it takes for calibration.  Can this be semi-automated?
Can we standardize lighting and target conditions (since we'd be using boards, not bodies) and use presets?

Let's invite Ian to be part of this conversation since he is a lighting expert.  
And Josh and Ian should be part of this etude of projecting textures  onto trackable objects, hence cc Josh.

Xin Wei


_________________________________________________________________________________________________

On May 21, 2015, at 2:51 PM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

Todd,

As you remember, we had a 12-camera OptiTrack system installed in the center of the iStage, using mounts around a 16 x 16' area, with an even smaller capture volume within that area. Even for that small volume, calibration took several minutes, and users were not very happy with the data quality. Depending on how many of the cameras left over from the flood are still dependable, and adding the leftovers from MRR, we may be able to create a capture volume of 15 x 15 x 10'. Do you want to proceed?