Hi
The networking infrastructure of much of the tml code relies on the zeroconf objects, which allow one to publish services such as OSC streams that others can discover. It is a nice mechanism, but the objects currently have problems when listening for services on a local wired network while wireless is also on. This means the computers used to run the system have no external network connection. I have modified them to listen on only a selected network interface, but I have come across another problem: there is some incompatibility between the current Max SDK and the threading code in the objects, which causes them to crash Max when the objects are freed. Since they do not use the Max SDK threading API, they should probably be switched to it. I am willing to put that work in, since it has been 4 years since the original objects were updated, but it does bring up some questions for me. For the sustainability of SC, how much effort should go into objects that are not being actively updated/fixed by others (for instance, was this the best choice without having someone who could update objects if necessary)? And secondly, what do we want to call the objects we write? I could simply call the new objects zeroconf2.*, or do we call them sc.zeroconf.*, or some other convention?
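The free-time crash described above is the classic pattern of an object being deallocated while its listener thread is still running. A real Max external would fix this with the SDK's systhread calls; the sketch below is a hedged stand-in in Python (all class and method names are illustrative, not from the actual objects) showing the essential ordering: signal the thread to stop, join it, and only then tear down the state it touches.

```python
# Illustrative sketch only: signal-then-join teardown so the listener
# thread is guaranteed dead before its state is freed. In a Max
# external this ordering would live in the object's free method,
# using the SDK's threading API instead of Python's.
import threading
import time

class ServiceListener:
    def __init__(self):
        self._stop = threading.Event()
        self._results = []           # state the worker thread mutates
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # Polling loop standing in for the mDNS browse callback.
        while not self._stop.is_set():
            self._results.append("service-event")
            time.sleep(0.01)

    def free(self):
        # The crucial ordering: stop and join BEFORE tearing down
        # any state the thread touches. Freeing first is the crash.
        self._stop.set()
        self._thread.join()
        self._results = None         # now safe: no thread can reach it

listener = ServiceListener()
time.sleep(0.05)
listener.free()
```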
I thought it was quite easy to calibrate as long as the cameras can see the markers.
The cameras emit their own IR light, which they use to detect the markers. In fact, the previous discussion is based on the assumption that other ambient artificial and natural light is reduced to a minimum. The performance of the system while projecting video on the floor will have to be tested. The difficulty in calibration comes from not enough cameras detecting the three 1/2" markers on the calibration wand at the same time, so one has to move through the space long enough until adequate data samples are collected. If the plan is to use larger markers or objects covered with retroreflective tape, the cameras may have an easier time detecting the object, although data accuracy may suffer a bit due to the larger size.
From: Xin Wei Sha
Sent: Thursday, May 21, 2015 3:04 PM
To: Assegid Kidane
Cc: Todd Ingalls; Peter Weisman; Ian Shelanskey; Joshua Gigantino (Student); Christopher Roberts
Subject: Re: optitrack system in istage
15’ x 15’ is fine for demo purposes. My bigger concern is the time it takes for calibration. Can this be semi-automated? Can we standardize lighting and target conditions (since we’d be using boards not bodies) and use presets? Let's invite Ian to be part of this conversation since he is a lighting expert. And Josh and Ian should be part of this etude of projecting textures onto trackable objects, hence cc Josh.

Xin Wei
_________________________________________________________________________________________________
On May 21, 2015, at 2:51 PM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

Todd,
As you remember, we had a 12-camera OptiTrack system installed in the center of iStage using mounts around a 16' x 16' area, with an even smaller capture volume within that area. Even for that small volume, calibration took several minutes, and users were not very happy with the data quality. Depending on how many of the cameras left over from the flood are still dependable, and adding the leftovers from MRR, we may be able to create a 15' x 15' x 10' capture volume. Do you want to proceed?
Greetings,

Can one of you give me a brief tutorial or point me to material that describes how the .mxt files in MaxLibraries_ASU/tgvu-code/visuals on GitHub are used? What does 'tgvu' stand for? There seem to be frequent changes to files in the tgvu-code directory.

Assegid Kidané
Engineer
School of Arts, Media and Engineering
Janelle and Connor:
Can you please prep hi res videos for us? —>
I’d like to beam two different "naturalistic" videos onto the floor (as well as onto a vertical surface like the one that I hope Megan (Pete) will be able to hang).
(1) The first one is natural terrain: the Ohio river with flowing river texture clipped to the sinuous contour of the map. Let’s try to make it as high res as possible.
(2) The second is urban patterns, from overhead POV:
Here are some examples — but find better ones if you can.
2.1 Chicago aerial view day (daylight PREFERRED ) http://www.shutterstock.com/video/clip-4247351-stock-footage-aerial-sunset-cityscape-view-chicago-skyline-chicago-illinois-usa-shot-on-red-epic.html?src=recommended/4246655:8
2.2 Chicago aerial view night http://www.shutterstock.com/video/clip-4246655-stock-footage-aerial-vertical-illuminated-night-view-chicago-river-trump-tower-downtown-skyscrapers-chicago.html&download_comp=1
2.3 An absolute classic! The Social Life of Small Urban Spaces by William H. Whyte. The Vimeo version is poor quality; we should get a high-res archival version, maybe with the Library's help?!
Pay special attention to the first 90 seconds and 6:52-11:10. https://vimeo.com/111488563
Fascinating observations.
Notice that Ozone:Urban aims to present not an abstract or god's-eye view but the lived experience, e.g. 10:02 - 10:28. So we can switch fluidly between:
Data view: vector fields
God's-eye view: overhead video (Chicago from an airplane)
Live POV: a projected city walk (like a better quality version of Whyte 10:02 - 10:28) onto a vertical screen suspended from the grid.
Don’t worry about watermarks — let’s just grab highest res we can and project on the floor to try it out.
Cheers! Xin Wei
PS I will ask the Ozone core team to check in on Monday 4:30 in iStage - those who can come. At minimum I’d like to project a variety of videos on the floor, and on the walls.
Where can we get the best publicly available HMM external for Max, as a general-purpose HMM package?
Should we extend / modify gf (which we have via the IRCAM license), and can we use it easily for non-audio data? People claim to have tried it on video.
It seems that the real work is in the preliminary feature extraction, where a lot of the interpretation happens.
What are examples of code that do this in interesting ways?
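To make the feature-extraction point concrete: a minimal sketch (not from gf or any specific package; the function name and window parameters are illustrative) of turning a raw 1-D stream (an audio envelope, optical-flow magnitude from video, etc.) into the low-dimensional observation vectors an HMM would consume. The interpretive choices live entirely in which features and window sizes you pick.

```python
# Hedged sketch of pre-HMM feature extraction: sliding-window
# statistics over a raw 1-D signal. Window/hop sizes and feature
# choices are illustrative assumptions, not from any real package.
import math

def window_features(samples, win=8, hop=4):
    """Per-window mean, RMS energy, and zero-crossing rate."""
    feats = []
    for start in range(0, len(samples) - win + 1, hop):
        w = samples[start:start + win]
        mean = sum(w) / win
        rms = math.sqrt(sum(x * x for x in w) / win)
        zcr = sum(1 for a, b in zip(w, w[1:]) if a * b < 0) / (win - 1)
        feats.append((mean, rms, zcr))
    return feats

# Toy input: a sinusoid standing in for an extracted signal envelope.
signal = [math.sin(0.5 * i) for i in range(32)]
obs = window_features(signal)   # sequence of 3-D observation vectors
```

Each tuple in `obs` is one observation for the HMM; swapping in spectral or motion features changes what the model can "see" without touching the HMM itself.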
Xin Wei
Navid Navab wrote
Please join us to celebrate the launch of the Graduate Certificate in Critical Theory. This will be a chance to learn a bit about the certificate, to meet other theory-leaning faculty and grad students, and to find out what we've planned for the future. Please circulate this info to other faculty and grad students and/or relevant listservs. There are a lot of folks campus-wide working in this area; we want to extend theoretical and material hospitality to all.
Ron
Ron Broglio
Director of Graduate Studies
Department of English
Senior Scholar at the Global Institute of Sustainability
Provost Humanities Fellow
http://www.public.asu.edu/~rbroglio/