Fwd: Touch sensing traces

Begin forwarded message:

Subject: Re: Touch sensing traces

If you want proximity it has to be capacitive/e-field.
E-fields are non-linear, so you will have to linearize the values.
Paper is environmentally sensitive, especially to humidity, so some kind of calibration/compensation might be needed.
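As a minimal sketch of that linearization plus drift compensation (assuming the raw count grows roughly as an inverse power of distance; the exponent, noise floor, and smoothing constant here are placeholders to be tuned against the real sensor):

```python
# Sketch: linearize e-field proximity readings and track slow environmental drift.
# Assumption: the raw count grows as a hand approaches, roughly as 1/distance**EXPONENT.

EXPONENT = 2.0          # assumed non-linearity of field strength vs. distance
BASELINE_ALPHA = 0.001  # how fast the baseline tracks humidity/temperature drift
NOISE_FLOOR = 3         # deltas below this count as "no hand present"

baseline = None

def proximity(raw):
    """Feed in one raw reading; returns a value roughly proportional to distance, or None."""
    global baseline
    if baseline is None:
        baseline = float(raw)                          # first reading seeds the baseline
    delta = raw - baseline
    if delta < NOISE_FLOOR:
        baseline += BASELINE_ALPHA * (raw - baseline)  # adapt the baseline while idle
        return None
    return delta ** (-1.0 / EXPONENT)                  # smaller value = hand closer

# Simulated readings: quiet sensor, then a hand approaching
for r in [100, 100, 101, 102, 140, 200, 400]:
    print(proximity(r))
```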

This ink works on most paper: http://www.electroninks.com/faq/
Here is that pen in action by yours truly: https://www.youtube.com/watch?v=ytDGMQSrJJ0

I am also exploring AgIC silver ink, which only works on certain coated papers.

As for the exact layout, I can't help you yet. My conductive inkjet printer hasn't arrived, so I have done zero experiments.

I can tell you that the most efficient tiling for uniformly sampling the plane is the triangular one, not the square one (by roughly 30%, as I recall). This tiling may be harder to wire up than a square tiling, though. I like to ground all the traces I am not sensing and then read capacitance from each sensor area in turn. The keyboard patterns in the e-field sensing article I circulated are pretty well thought out and a good starting point.
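As a rough illustration of that ground-the-rest-and-scan approach, here is a CircuitPython sketch for four pads (the pin names, scan rate, and board are assumptions, not a tested layout):

```python
# Sketch: scan four capacitive pads one at a time, grounding the unused traces.
# Assumes a CircuitPython board with touch-capable pins A0-A3; adapt pin names.
import time
import board
import touchio
import digitalio

PADS = [board.A0, board.A1, board.A2, board.A3]

def read_pad(index):
    """Read one pad's raw value while holding the other traces at ground."""
    grounded = []
    for i, pin in enumerate(PADS):
        if i != index:
            d = digitalio.DigitalInOut(pin)
            d.switch_to_output(value=False)   # drive the unused trace low
            grounded.append(d)
    pad = touchio.TouchIn(PADS[index])
    value = pad.raw_value
    pad.deinit()                              # release the pins for the next pass
    for d in grounded:
        d.deinit()
    return value

while True:
    print([read_pad(i) for i in range(len(PADS))])
    time.sleep(0.05)
```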

You can prototype this patterning problem with a sharp knife and conductive tape.
I try to make it look easy in that video, but it isn't unless you have good penmanship, lots of paper to experiment with, and quite a few pens. Mistakes are a challenge to deal with.

The main contribution of my video is to show that you can use connectors designed for flat cable in these paper applications.
You may have to reinforce the paper with card stock if you expect to pull the paper in and out of the connector often.



On Sep 29, 2014, at 8:35 AM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

Dear Adrian and Natalie,
We have a need for interdigitated traces on paper, over an area of about 6 inches by 6 inches, divided into four quadrants of equal size (3 inches by 3 inches each). We need to sense each quadrant with very high sensitivity, so I need help with what conductive material or paint to use, what spacing to use between the interdigitated fingers, and what shape to use. The patterns need to provide as much information as possible about the proximity and position of the hand and fingers, and since you have already done a lot of research in this area, we are hoping to save ourselves much time. So for each 3-inch by 3-inch quadrant we need to know what type/brand of conductive paint to use, the spacing between the interdigitated traces, the width and length of the traces, and the shape of the traces to achieve maximum sensitivity when a finger or a hand approaches the paper. We also hope that the sensing data will be linear in the distance of the hand or fingers.

Thank you for your help.


Assegid

From mobile

Connor Rawls: Mira presets for video

This afternoon I managed to isolate and solve the issue that was causing the Mira presets to irrevocably make the particles disappear. Apparently there were 3 shaders I had missed including in the executable build. With those files included in my latest compile, Mira for video is fully functional again!

Also for the Synthesis Dev Team Etudes:
I put together 2 template patches for using the o4.net send/receive objects (1 for sending, 1 for receiving) and 1 large patch that shows blob tracking from camera input all the way through sorting the blobs and normalizing the blob data (x/y/mass). I put the patches on the desktop of the video computer in folders labeled "Network Templates" and "Video Templates".
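For anyone who wants the shape of that pipeline outside Max, a rough OpenCV/Python sketch of the same idea (the threshold and largest-first sort are assumptions; the template patches themselves are the reference):

```python
# Sketch: detect blobs in a camera frame, sort them, and normalize x/y/mass to 0..1.
import cv2

def blobs_from_frame(gray, thresh=60):
    """Return a list of (x, y, mass) triples normalized by frame size."""
    h, w = gray.shape
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    blobs = []
    for i in range(1, n):                          # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        cx, cy = centroids[i]
        blobs.append((cx / w, cy / h, area / float(w * h)))
    blobs.sort(key=lambda b: b[2], reverse=True)   # largest mass first
    return blobs

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(blobs_from_frame(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
cap.release()
```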

zeroconf

As for naming, can we fork a new branch of the codebase and retain backward compatibility by using the same inlets / outlets (and arguments) as the old objects, so there's source-level compatibility in Max patches?
Can rewriting zeroconf to use Max SDK threading be transparent to the Max coder who invokes the zeroconf object?

We could start inventing a new series:
sc.zeroconf.*
…I suppose, to mark this new era more clearly. But it seems like less principled software engineering :)

Xin Wei

_________________________________________________________________________________________________

On May 21, 2015, at 9:13 AM, Todd Ingalls <TestCase@asu.edu> wrote:

Hi
The networking infrastructure of much of the TML code relies on the zeroconf objects, which allow one to publish services such as OSC streams that others can discover. It is a nice mechanism, but the objects currently have problems when listening for services on a local wired network while wireless is also on. This means the computers used to run the system have no external network connection. I have modified them to listen on only a selected network interface, but I have come across another problem: there is an incompatibility between the current Max SDK and the threading code in the objects, which causes them to crash Max when the objects are freed. Since they do not use the Max SDK threading API, they should probably be switched to it. I am willing to put in that work, because it has been 4 years since the original objects were updated, but it does bring up some questions for me. For the sustainability of SC, how much effort should go into objects that are not being actively updated/fixed by others (for instance, was this the best choice without having someone who could update the objects if necessary)? And secondly, what do we want to call the objects we write? I could simply call the new objects zeroconf2.*, or we could call them sc.zeroconf.*, or use some other convention.

Re: Dollhouse Poincaré Section etude: (was track : optitrack system in istage)

Poincaré Section etude for Dollhouse (Ian, using Connor's templates in Jitter): project video from overhead onto a board as it is moved around, held at various heights above the floor.

On May 21, 2015, at 3:54 PM, Todd Ingalls <TestCase@asu.edu> wrote:

I thought it was quite easy to calibrate as long as the cameras can see the markers.

I thought so. So, Ozzie, can we set it up ASAP in the iStage?

The work then shifts downstream to computing an affine transform of arbitrary video input streams and projecting them from the overhead projector onto a 1 m x 2 m handheld board with markers affixed to its edges. Aim: realtime, zero latency, max resolution, max framerate (to avoid jerky playback).
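A sketch of that remapping step, using a full perspective (homography) warp, which covers the affine case; the corner ordering and projector resolution below are assumptions, and the four corners are presumed to come from the tracked markers mapped into projector coordinates:

```python
# Sketch: warp a video frame onto the tracked quadrilateral of the handheld board.
import cv2
import numpy as np

PROJ_W, PROJ_H = 1920, 1080      # projector output resolution (assumed)

def warp_to_board(frame, board_corners):
    """board_corners: four (x, y) points in projector space, ordered
    top-left, top-right, bottom-right, bottom-left."""
    h, w = frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(board_corners)
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, (PROJ_W, PROJ_H))
```

Latency then comes down to how quickly the marker positions arrive and how fast the per-frame warp runs.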

We can drop the last requirement and simply project a static image.

Xin Wei

_______________________________________________________________________________________

On May 21, 2015, at 3:26 PM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

The cameras produce their own IR beams they use for detecting the markers. In fact, the previous discussion is based on the assumption that other ambient artificial and natural light is reduced to a minimum. The performance of the system while projecting video on the floor will have to be tested. The difficulty in calibration results from not enough cameras detecting the three 1/2" markers on the calibration wand at the same time, so one has to move through the space long enough until adequate data samples are collected. If the plan is to use larger markers or objects covered with retroreflective tape, the cameras may have an easier time detecting the object although data accuracy may suffer a bit due to the larger size of the object.

From: Xin Wei Sha
Sent: Thursday, May 21, 2015 3:04 PM
To: Assegid Kidane
Cc: Todd Ingalls; Peter Weisman; Ian Shelanskey; Joshua Gigantino (Student); Christopher Roberts
Subject: Re: optitrack system in istage

15’ x 15’ is fine for demo purposes.
My bigger concern is the time it takes for calibration. Can this be semi-automated?
Can we standardize lighting and target conditions (since we'd be using boards, not bodies) and use presets?

Let's invite Ian to be part of this conversation since he is a lighting expert.  
And Josh and Ian should be part of this etude of projecting textures  onto trackable objects, hence cc Josh.

Xin Wei


_________________________________________________________________________________________________

On May 21, 2015, at 2:51 PM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

Todd,

As you remember, we had a 12-camera OptiTrack system installed in the center of the iStage, using mounts around a 16' x 16' area, with an even smaller capture volume within that area. Even for that small volume, calibration took several minutes, and users were not very happy with the data quality. Depending on how many of the cameras left over from the flood are still dependable, and adding the leftovers from MRR, we may be able to create a capture volume of 15' x 15' x 10'. Do you want to proceed?

Ozone github history. O4_ASU ≠ tgvu-code

Synthesis and TML should have a common archival repository of code,
which is different from the reduced and cleaned-up set that runs the iStage and Lounge, called Synthesis/O4_ASU/.

Unfortunately, I'm not sure of the name of the most complete archival version.

MaxLibraries_ASU/tgvu-code/
does not have the TML work by key Ozone authors:
Michael Fortin
JS Rousseau
Tim Sutton

Evan was most recently the Ozone master, before him Julian.

TML Ozone authors started publishing their own specialist branches by medium x author, instead of building a synthesis architecture.
Julian and then Evan had overall systems architecture synthesis as part of their brief. Ozone programmers focused on specific instrument kits but did not rework all of the old functionality (e.g. tgvu and JS Rousseau's extensive toolkit) as needed to prep the codebase for use by the next generation of researchers.

Evan kindly uploaded the older tgvu-code codebase as is, with no warranty implied or explicit.
There are useful and clever instruments and externals, especially in Michael Fortin's work, that need a programmer trained in C-level programming (CS) to unpack for us. Examples:
• Michael Fortin's Python hook for custom forces in the state engine (a rough illustrative sketch of such a hook follows below),
• Hooks for implementing chemistry between particles.
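Purely as an illustration of the idea (this is not Michael Fortin's actual API, which still needs to be unpacked from the code), a Python custom-force hook for a particle state engine might look like:

```python
# Illustrative sketch only: a state engine step that accepts a user-supplied force hook.
import numpy as np

def swirl_force(positions, velocities, t):
    """Example hook: gentle rotation around the origin plus drag."""
    x, y = positions[:, 0], positions[:, 1]
    swirl = np.stack([-y, x], axis=1) * 0.5
    drag = -0.1 * velocities
    return swirl + drag

def step(positions, velocities, force_hook, dt=1.0 / 60, t=0.0):
    """Advance the particle state one frame using whatever force hook is supplied."""
    forces = force_hook(positions, velocities, t)
    velocities = velocities + forces * dt
    positions = positions + velocities * dt
    return positions, velocities

pos = np.random.rand(100, 2)      # 100 particles in 2D
vel = np.zeros((100, 2))
pos, vel = step(pos, vel, swirl_force)
```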

History:

tgvu dates from the TML @ Georgia Institute of Technology GVU (formerly Graphics, Visualization and Usability): the principal author was Yoichiro Serita, with strong contributions from Erik Conrad, Delphine Nain, et al. (see TML papers from 2001-2005). Many of the functionality wheels have been re-invented and made a little more robust in later layers of Max, MXR, and now Jamoma.

However, there are key utilities for, e.g., the state engine, affine remapping of video, sensor sensitivity conditioning, and scatter computations. Julian and Evan were the last TMLabbers I know of who spelunked through the archives.

The “t”  in “tgvu” comes from TGarden, the founding play space for the TML.

And “T” refers multiply to
transformation
topology
time
tea


On May 5, 2015, at 11:51 AM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

Greetings,

Can one of you give me a brief tutorial, or point me to material that describes how the .mxt files in MaxLibraries_ASU/tgvu-code/visuals on GitHub are used? What does 'tgvu' stand for? There seem to be frequent changes to files in the tgvu-code directory.

Assegid Kidané
Engineer
School of Arts, Media and Engineering

Ozone video in Matthews Fishbowl: Timespace+Elastic time

Connor 

(or ideally a Jitter understudy: Janelle or one of the grads?)

It would be good to run Timespace as well as Caroline's loop in the Fishbowl.

Maybe via IL Y A logic:
Run Caroline's loop on the glass until we enter the camera view, then fade to Timespace (in proportion to presence inside the room).
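As a sketch of that fade logic only (in Jitter this would just be a crossfade driven by a presence value; the presence measure itself is a placeholder):

```python
# Sketch: crossfade between Caroline's loop and Timespace in proportion to presence.
import numpy as np

def mix_frames(loop_frame, timespace_frame, presence):
    """presence in 0..1: 0 = empty room (loop only), 1 = fully present (Timespace only)."""
    a = float(np.clip(presence, 0.0, 1.0))
    mixed = (1.0 - a) * loop_frame + a * timespace_frame
    return mixed.astype(loop_frame.dtype)
```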

Xin Wei

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab 
_________________________________________________________________________________________________

Ozone video in Matthews iStage: other "naturalistic" video to beam on the floor

Janelle, and Connor :

Can you please prep hi-res videos for us?

I'd like to beam two different "naturalistic" videos onto the floor (as well as onto a vertical surface like the one that I hope Megan (Pete) will be able to hang).

(1) The first one is natural terrain: the Ohio River, with a flowing-water texture clipped to the sinuous contour of the map (a rough sketch of that clipping step appears below). Let's try to make it as high-res as possible.

(2) The second is urban patterns, from overhead POV:

Here are some examples — but find better ones if you can.

2.1 Chicago aerial view day (daylight PREFERRED ) http://www.shutterstock.com/video/clip-4247351-stock-footage-aerial-sunset-cityscape-view-chicago-skyline-chicago-illinois-usa-shot-on-red-epic.html?src=recommended/4246655:8

2.2 Chicago aerial view night http://www.shutterstock.com/video/clip-4246655-stock-footage-aerial-vertical-illuminated-night-view-chicago-river-trump-tower-downtown-skyscrapers-chicago.html&download_comp=1

2.3 An absolute classic: The Social Life of Small Urban Spaces by William H. Whyte. The Vimeo copy is poor quality; we should get a high-res archival version, maybe with Library help?!

Pay special attention to the first 90 seconds and 6:52-11:10. https://vimeo.com/111488563

Fascinating observations.

Notice that Ozone:Urban aims to present not an abstract or God's-eye view but the lived experience (e.g. 10:02 - 10:28). So we can switch fluidly between:
• Data view: vector fields
• God's-eye view: overhead video (Chicago from an airplane)
• Live POV: a city walk (like a better-quality version of Whyte 10:02 - 10:28) projected onto the vertical screen suspended from the grid.

Don't worry about watermarks; let's just grab the highest res we can and project it on the floor to try it out.
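As noted under (1), a rough OpenCV sketch of clipping a flowing-water texture to the river contour (the file names here are placeholders; the mask is assumed to be a grayscale image, white where the river is):

```python
# Sketch: clip a flowing-water texture to the contour of a river map.
import cv2

mask = cv2.imread("ohio_river_contour_mask.png", cv2.IMREAD_GRAYSCALE)  # white = river
cap = cv2.VideoCapture("flowing_water_texture.mp4")
ok, frame = cap.read()
while ok:
    m = cv2.resize(mask, (frame.shape[1], frame.shape[0]))
    clipped = cv2.bitwise_and(frame, frame, mask=m)   # keep only the river area
    cv2.imshow("river", clipped)
    if cv2.waitKey(30) & 0xFF == 27:                  # Esc to quit
        break
    ok, frame = cap.read()
cap.release()
cv2.destroyAllWindows()
```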

Cheers! Xin Wei

PS I will ask the Ozone core team to check in on Monday at 4:30 in the iStage, those who can come. At minimum I'd like to project a variety of videos on the floor and on the walls.

HMM in Max

On Fri, Apr 24, 2015 at 5:12 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Where can we get the best publicly available HMM external for Max,
as a general-purpose HMM package?

Should we extend / modify gf (which we have via the IRCAM license),
and can we use it easily for non-audio data? People claim to have tried it on video.
It seems that the real work is the preliminary feature extraction, where a lot of interpretation happens.
What are examples of code that do this in interesting ways?

Xin Wei
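For a sense of the feature-extraction-then-HMM shape of the problem outside Max, here is a sketch using the hmmlearn Python package; the single feature used here (mean frame difference) is an arbitrary stand-in for the real interpretive work of feature extraction:

```python
# Sketch: fit an HMM to non-audio (video-derived) features with hmmlearn.
import cv2
import numpy as np
from hmmlearn import hmm

def motion_features(video_path):
    """One toy feature per frame: mean absolute frame difference ('motion energy')."""
    cap = cv2.VideoCapture(video_path)
    feats, prev = [], None
    ok, frame = cap.read()
    while ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            feats.append([np.mean(np.abs(gray - prev))])
        prev = gray
        ok, frame = cap.read()
    cap.release()
    return np.array(feats)

X = motion_features("example.mov")                 # placeholder file name
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
model.fit(X)                                       # unsupervised state discovery
states = model.predict(X)                          # most likely hidden state per frame
```

(Navid's MuBu suggestion below keeps the equivalent pipeline inside Max.)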

Navid Navab wrote:

While FTM is somewhat discontinued, all of this is being moved to IRCAM's free MuBu package: download the package and quickly check some of their example patches.

It contains optimized algorithms building on gf, FTM, CataRT, PiPo, etc. While MuBu is audio-centric, it is not necessarily audio-specific. MuBu buffers can work with multiple data modalities and use a variety of correlation methods to move between these layers... This makes for a fairly wholesome platform without the need to move back and forth between gf, FTM, concatenative synthesis instruments, multimodal data handling, analysis, etc.

As with most current IRCAM releases, it is highly under-documented. Besides gf, which is distributed with their package, the mubu.hhmm object might be a good place to start for what you are looking for.


Their xmm object might also be of interest.