Re: Dollhouse Poincaré Section etude: (was track : optitrack system in istage)

Poincaré Section etude for Dollhouse (Ian, using Connor’s templates in Jitter): Project video from overhead onto a board as it is moved around, held at various heights above the floor.

On May 21, 2015, at 3:54 PM, Todd Ingalls <TestCase@asu.edu> wrote:

I thought it was quite easy to calibrate as long as the cameras can see the markers.

I thought so. So, Ozzie, can we set it up ASAP in the iStage?

The work then shifts downstream to computing an affine transform of arbitrary video input streams and projecting them from the overhead projector onto a 1m x 2m handheld board with markers affixed to its edges. Aim: realtime, zero latency, max resolution, max framerate (to avoid jerky playback).

We can drop the last requirement and simply project a static image.
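[Editor's note: a minimal sketch of the downstream computation described above. Assumptions: three tracked corner markers give the board's pose in projector pixel coordinates (the specific coordinates below are hypothetical), and the warp is a plain 2D affine map solved from three point correspondences; the actual Jitter patch would do the equivalent on the GPU.]

```python
def solve3(A, b):
    # Solve a 3x3 linear system A x = b by Gaussian elimination
    # with partial pivoting (no external libraries needed).
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def affine_from_points(src, dst):
    # Fit the 2x3 affine matrix mapping three source points
    # (e.g. video-frame corners) to three destination points
    # (tracked marker positions in projector pixels).
    A = [[x, y, 1.0] for x, y in src]
    row_x = solve3(A, [x for x, _ in dst])
    row_y = solve3(A, [y for _, y in dst])
    return row_x, row_y

def apply_affine(M, p):
    # Map one point through the fitted affine transform.
    (a, b, c), (d, e, f) = M
    x, y = p
    return a * x + b * y + c, d * x + e * y + f

# Hypothetical example: unit square corners -> marker positions.
M = affine_from_points([(0, 0), (1, 0), (0, 1)],
                       [(10, 20), (12, 20), (10, 23)])
print(apply_affine(M, (1, 1)))  # -> (12.0, 23.0)
```

This would be re-solved every tracking frame as the board moves; per-frame cost is negligible next to rendering, so latency is dominated by the camera-to-projector pipeline, not the math.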

Xin Wei

_______________________________________________________________________________________

On May 21, 2015, at 3:26 PM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

The cameras produce their own IR beams, which they use for detecting the markers. In fact, the previous discussion assumes that other ambient artificial and natural light is reduced to a minimum. The performance of the system while projecting video on the floor will have to be tested. The difficulty in calibration results from not enough cameras detecting the three 1/2" markers on the calibration wand at the same time, so one has to move the wand through the space until adequate data samples are collected. If the plan is to use larger markers or objects covered with retroreflective tape, the cameras may have an easier time detecting the object, although data accuracy may suffer a bit due to the larger size.

From: Xin Wei Sha
Sent: Thursday, May 21, 2015 3:04 PM
To: Assegid Kidane
Cc: Todd Ingalls; Peter Weisman; Ian Shelanskey; Joshua Gigantino (Student); Christopher Roberts
Subject: Re: optitrack system in istage

15’ x 15’ is fine for demo purposes.
My bigger concern is the time it takes for calibration.  Can this be semi-automated?
Can we standardize lighting and target conditions (since we’d be using boards, not bodies) and use presets?

Let's invite Ian to be part of this conversation since he is a lighting expert.  
And Josh and Ian should be part of this etude of projecting textures onto trackable objects, hence cc Josh.

Xin Wei


_________________________________________________________________________________________________

On May 21, 2015, at 2:51 PM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

Todd,

As you remember, we had a 12-camera OptiTrack system installed in the center of the iStage, using mounts around a 16' x 16' area, with an even smaller capture volume within that area. Even for that small volume, calibration took several minutes, and users were not very happy with the data quality. Depending on how many of the cameras left over from the flood are still dependable, and adding the leftovers from MRR, we may be able to create a 15' x 15' x 10' capture volume. Do you want to proceed?