Begin forwarded message:
Subject: Re: Touch sensing traces
If you want proximity sensing, it has to be capacitive/e-field.
E-field response is non-linear, so you will have to linearize the values.
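A minimal sketch of what that linearization could look like, assuming the raw reading falls off roughly as an inverse power of distance; the constants and exponent here are placeholders you would fit from your own calibration data, not measured values:

```c
/* Linearization sketch (hypothetical constants; fit them to your sensor).
 * Assumes the raw reading behaves roughly as: reading ~ A / d^N + OFFSET. */
#include <math.h>
#include <stdio.h>

#define CAL_A      5000.0  /* gain fitted from calibration data */
#define CAL_N      2.0     /* exponent fitted from calibration  */
#define CAL_OFFSET 120.0   /* reading with no hand present      */

/* Invert the fitted model to get an approximate distance in cm. */
static double reading_to_distance(double raw)
{
    double delta = raw - CAL_OFFSET;
    if (delta <= 0.0)
        return INFINITY;           /* at or below baseline: nothing near */
    return pow(CAL_A / delta, 1.0 / CAL_N);
}

int main(void)
{
    double samples[] = { 170.0, 320.0, 1400.0 };
    for (int i = 0; i < 3; i++)
        printf("raw %.0f -> ~%.1f cm\n", samples[i],
               reading_to_distance(samples[i]));
    return 0;
}
```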
Paper is environmentally sensitive, especially to humidity, so some kind of calibration/compensation might be needed.
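And a minimal sketch of one way to compensate for that drift: keep a slowly adapting baseline that only updates when nothing appears to be near the sensor. The threshold and adaptation rate are hypothetical and would need tuning on the real paper sensor:

```c
/* Drift-compensation sketch: track a slowly moving baseline so humidity
 * changes in the paper don't look like a hand. Constants are placeholders. */
#include <stdio.h>

#define BASELINE_ALPHA   0.001  /* how fast the baseline follows slow drift   */
#define TOUCH_THRESHOLD  50.0   /* delta above baseline that counts as a hand */

static double baseline = 0.0;

/* Returns the drift-compensated delta for one raw sample. */
static double compensate(double raw)
{
    double delta = raw - baseline;
    /* Only let the baseline adapt when nothing seems to be near the
     * sensor, so a resting hand is not absorbed into the baseline. */
    if (delta < TOUCH_THRESHOLD)
        baseline += BASELINE_ALPHA * (raw - baseline);
    return delta;
}

int main(void)
{
    baseline = 120.0;                       /* e.g. seeded at power-up */
    double stream[] = { 121.0, 122.0, 124.0, 400.0, 410.0, 123.0 };
    for (int i = 0; i < 6; i++)
        printf("raw %.0f -> delta %.1f\n", stream[i], compensate(stream[i]));
    return 0;
}
```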
This ink works on most paper: http://www.electroninks.com/faq/
Here is that pen in action by yours truly: https://www.youtube.com/watch?v=ytDGMQSrJJ0
I am also exploring a silver (Ag) ink that only works on certain coated papers.
As for the exact layout, I can't help you yet. My conductive inkjet printer hasn't arrived, so I have done zero experiments.
I can tell you that the most efficient tiling for uniformly sampling the plane is the triangular one, not the square one (by roughly 30%, as I recall), though the triangular tiling may be harder to wire up. I like to ground all the traces I am not sensing with and then read capacitance from each sensor area in turn (sketched below). The keyboard patterns in the e-field sensing article I circulated are pretty well thought out and a good starting point.
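Here is a rough sketch of that scan-in-turn idea. The gpio_drive_low(), gpio_release(), and measure_capacitance() calls are hypothetical stand-ins for whatever your microcontroller toolchain provides; the stubs are only there so the sketch compiles:

```c
/* Scan sketch: ground every trace except the one being measured, read its
 * capacitance, then move on to the next trace. */
#include <stdio.h>

#define NUM_TRACES 4

/* --- hypothetical hardware layer, stubbed so this compiles ---------- */
static void   gpio_drive_low(int pin)       { (void)pin; }
static void   gpio_release(int pin)         { (void)pin; }
static double measure_capacitance(int pin)  { return 100.0 + pin; }
/* -------------------------------------------------------------------- */

static const int trace_pin[NUM_TRACES] = { 2, 3, 4, 5 };

void scan_traces(double readings[NUM_TRACES])
{
    for (int i = 0; i < NUM_TRACES; i++) {
        /* Ground every trace we are not sensing with... */
        for (int j = 0; j < NUM_TRACES; j++)
            if (j != i)
                gpio_drive_low(trace_pin[j]);
        /* ...then read capacitance from the selected trace. */
        gpio_release(trace_pin[i]);
        readings[i] = measure_capacitance(trace_pin[i]);
    }
}

int main(void)
{
    double r[NUM_TRACES];
    scan_traces(r);
    for (int i = 0; i < NUM_TRACES; i++)
        printf("trace %d: %.1f\n", i, r[i]);
    return 0;
}
```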
You can prototype this patterning problem with a sharp knife and conductive tape.
I try to make it look easy in that video, but it isn't unless you have good penmanship, lots of paper to experiment with, and quite a few pens. Mistakes are a challenge to deal with.
The main contribution of my video is to show that you can use connectors designed for flat cable in these paper applications.
You may have to reinforce the paper with card stock if you expect to pull the paper in and out of the connector often.
On Sep 29, 2014, at 8:35 AM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:
Dear Adrian and Natalie,
We need interdigitated traces on paper over an area of about 6 inches by 6 inches, divided into four quadrants of equal size (each 3 inches by 3 inches). We need to sense each quadrant with very high sensitivity, so I would like your help on what type/brand of conductive material or paint to use, what spacing to use between the interdigitated fingers, the width and length of the traces, and what trace shape achieves maximum sensitivity when a finger or a hand approaches the paper. The patterns need to provide as much information as possible about the proximity and position of the hand and fingers, and since you have already done a lot of research in this area, we are hoping to save ourselves much time. Ideally the sensing data will also be linear with respect to the distance of the hand or fingers.
Thank you for your help.
Assegid
From mobile
Hi
The networking infrastructure of much of the tml code relies on zeroconf objects, which allow one to publish services such as OSC streams that others can discover. It is a nice mechanism, but the objects currently have problems when listening for services on a local wired network while wireless is also on, which means the computers used to run the system can have no external network connection. I have modified them to listen on only a selected network interface, but I have come across another problem: there is some incompatibility between the current Max SDK and the threading code in the objects, which causes them to crash Max when the objects are freed. Since they do not use the Max SDK threading API, they should probably be switched to it.

I am willing to put that work in, because it has been 4 years since the original objects were updated, but it does bring up some questions for me. For the sustainability of SC, how much effort should go into objects that are not being actively updated/fixed by others (for instance, was this the best choice without having someone who could update the objects if necessary)? And secondly, what do we want to call the objects we write? I could simply call the new objects zeroconf2.*, or we could call them sc.zeroconf.*, or adopt some other convention.
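For what it's worth, the usual fix for that class of crash is to have the free routine stop and join the listener thread before the object's memory is released. Here is a minimal sketch of that teardown pattern, written with plain pthreads purely for illustration (not the Max SDK's own threading API), with a hypothetical t_zeroconf_browser struct standing in for the external's instance:

```c
/* Teardown sketch: signal the listener thread to stop and join it before
 * the object is freed, so the thread can never touch a dead object. */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>
#include <unistd.h>

typedef struct _zeroconf_browser {      /* hypothetical external instance */
    pthread_t     thread;
    volatile bool running;
} t_zeroconf_browser;

static void *browse_loop(void *arg)
{
    t_zeroconf_browser *x = arg;
    while (x->running)
        usleep(100 * 1000);             /* stand-in for the mDNS poll loop */
    return NULL;
}

t_zeroconf_browser *browser_new(void)
{
    t_zeroconf_browser *x = calloc(1, sizeof(*x));
    x->running = true;
    pthread_create(&x->thread, NULL, browse_loop, x);
    return x;
}

void browser_free(t_zeroconf_browser *x)
{
    x->running = false;                 /* ask the thread to exit...       */
    pthread_join(x->thread, NULL);      /* ...wait until it actually has,  */
    free(x);                            /* and only then release memory.   */
}

int main(void)
{
    t_zeroconf_browser *x = browser_new();
    usleep(300 * 1000);
    browser_free(x);                    /* no crash: thread is gone first */
    return 0;
}
```

(Compile with -pthread; the Max SDK's own threading calls would take the place of the pthread ones.)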
I thought it was quite easy to calibrate as long as the cameras can see the markers.
The cameras produce their own IR light, which they use to detect the markers. In fact, the previous discussion is based on the assumption that other ambient artificial and natural light is reduced to a minimum. The performance of the system while projecting video on the floor will have to be tested. The difficulty in calibration comes from not enough cameras detecting the three 1/2" markers on the calibration wand at the same time, so one has to move through the space long enough for adequate data samples to be collected. If the plan is to use larger markers or objects covered with retroreflective tape, the cameras may have an easier time detecting the object, although data accuracy may suffer a bit due to the larger size of the object.
From: Xin Wei Sha
Sent: Thursday, May 21, 2015 3:04 PM
To: Assegid Kidane
Cc: Todd Ingalls; Peter Weisman; Ian Shelanskey; Joshua Gigantino (Student); Christopher Roberts
Subject: Re: optitrack system in istage
15’ x 15’ is fine for demo purposes.
My bigger concern is the time it takes for calibration. Can this be semi-automated?
Can we standardize lighting and target conditions (since we’d be using boards, not bodies) and use presets?
Let's invite Ian to be part of this conversation since he is a lighting expert. And Josh and Ian should be part of this etude of projecting textures onto trackable objects, hence cc Josh.
Xin Wei
On May 21, 2015, at 2:51 PM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:
Todd,
As you remember, we had a 12-camera OptiTrack system installed in the center of the iStage, using mounts around a 16' x 16' area and an even smaller capture volume within that area. Even for that small volume, calibration took several minutes, and users were not very happy with the data quality. Depending on how many of the cameras left over from the flood are still dependable, and adding the leftovers from MRR, we may be able to create a capture volume of 15' x 15' x 10'. Do you want to proceed?
Greetings,
Can one of you give me a brief tutorial, or point me to material that describes how the .mxt files in MaxLibraries_ASU/tgvu-code/visuals on GitHub are used? What does 'tgvu' stand for? There seem to be frequent changes to files in the tgvu-code directory.
Assegid Kidané
Engineer
School of Arts, Media and Engineering
Janelle and Connor:
Can you please prep hi-res videos for us? —>
I’d like to beam two different "naturalistic" videos onto the floor (as well as onto a vertical surface like the one that I hope Megan (Pete) will be able to hang).
(1) The first one is natural terrain: the Ohio River, with a flowing river texture clipped to the sinuous contour of the map. Let’s try to make it as high-res as possible.
(2) The second is urban patterns, from overhead POV:
Here are some examples — but find better ones if you can.
2.1 Chicago aerial view, day (daylight PREFERRED): http://www.shutterstock.com/video/clip-4247351-stock-footage-aerial-sunset-cityscape-view-chicago-skyline-chicago-illinois-usa-shot-on-red-epic.html?src=recommended/4246655:8
2.2 Chicago aerial view night http://www.shutterstock.com/video/clip-4246655-stock-footage-aerial-vertical-illuminated-night-view-chicago-river-trump-tower-downtown-skyscrapers-chicago.html&download_comp=1
2.3 An absolute classic: The Social Life of Small Urban Spaces, by William H. Whyte. The Vimeo is poor quality — we should get a high-res archival version, maybe with the Library's help?!
Pay special attention to the first 90 seconds and 6:52-11:10. https://vimeo.com/111488563
Fascinating observations.
Notice that Ozone:Urban aims to present not an abstract or god's-eye view but the lived experience, e.g. 10:02 - 10:28. So we can switch fluidly between:
Data view — vector fields
God's-eye view — overhead video (Chicago from an airplane)
Live POV — a projected city walk (like a better quality version of Whyte 10:02 - 10:28) onto a vertical screen suspended from the grid.
Don’t worry about watermarks — let’s just grab the highest res we can and project it on the floor to try it out.
Cheers! Xin Wei
PS I will ask the Ozone core team to check in on Monday 4:30 in iStage - those who can come. At minimum I’d like to project a variety of videos on the floor, and on the walls.
Where can we get the best publicly available HMM external for Max,
as a general-purpose HMM package?
Should we extend / modify gf (which we have via IRCAM license),
and can we use it easily for non-audio data? People claim to have tried it on video.
It seems that the real work is the preliminary feature extraction, where a lot of interpretation happens (see the sketch below).
What are examples of code that do this in interesting ways?
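As a point of reference, here is a minimal sketch of the kind of windowed statistics (mean, RMS energy, a crude slope) that typically get computed before anything is handed to an HMM; the window size, hop, and synthetic input are arbitrary placeholders, not a recommendation for any particular package:

```c
/* Feature-extraction sketch: windowed statistics over a sensor stream,
 * of the sort usually computed before HMM training/decoding. */
#include <math.h>
#include <stdio.h>

#define WIN 8   /* samples per analysis window */
#define HOP 4   /* hop between windows         */

typedef struct { double mean, rms, slope; } features_t;

static features_t extract(const double *w)
{
    features_t f = { 0.0, 0.0, 0.0 };
    for (int i = 0; i < WIN; i++) {
        f.mean += w[i];
        f.rms  += w[i] * w[i];
    }
    f.mean /= WIN;
    f.rms   = sqrt(f.rms / WIN);
    f.slope = (w[WIN - 1] - w[0]) / (WIN - 1);   /* crude trend estimate */
    return f;
}

int main(void)
{
    double signal[32];
    for (int i = 0; i < 32; i++)
        signal[i] = sin(i * 0.4);                /* stand-in sensor stream */

    for (int start = 0; start + WIN <= 32; start += HOP) {
        features_t f = extract(&signal[start]);
        printf("t=%2d  mean=% .3f  rms=%.3f  slope=% .3f\n",
               start, f.mean, f.rms, f.slope);
    }
    return 0;
}
```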
Xin Wei
Navid Navab wrote