RE: new materials : printed sheets of LED's

The Rohinni lighting looks similar to Elastolight technology but brighter. If it can be produced at affordable prices, it could be big. I have sent them an email asking whether samples are available. I have bookmarked their site and will continue to follow their product updates.

From: Xin Wei Sha [Xinwei.Sha@asu.edu]
Sent: Wednesday, December 03, 2014 9:58 PM
To: Assegid Kidane; Xin Wei Sha; Ellen Kathrine Hansen; Joshua Gigantino; Todd Ingalls; Matthew Briggs
Cc: synthesis-operations@googlegroups.com; post@synthesis.posthaven.com
Subject: new materials : printed sheets of LED's

Here is Prof. Ellen Hansen’s Lighting Design program @ Aalborg University:

Ozzie, can you please add this to a database of active materials?

This could be another element in our future lighting and rhythm research work:
http://www.rohinni.com/#technology

Xin Wei

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________


Re: machine diagram for O4 in iStage, launch protocols in each machine's desktop, three level user accounts

Hi Garrett, Connor, Pete,

Can you all please update the whiteboard in the iStage with what you know,
perhaps redrawing it on the larger whiteboard there?

Then email a photo snapshot of it to synthesis-operations@googlegroups.com  
for a timely report.

Then we’ll get someone to draw the diagram.

Thanks,
Xin Wei

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________




On Dec 2, 2014, at 8:53 PM, Garrett Laroy Johnson <garrett.laroy.johnson@gmail.com> wrote:

Hi all, 
Just bumping this thread to keep it in mind. I’m happy to draw something up, but it would be great if someone with some InDesign (or Adobe Illustrator, or GIMP) chops could render it as something a bit more legible. I’m afraid my drawings tend to be a bit crude!
Garrett L. Johnson
Musicology MA candidate @ ASU  
Synthesis Center - research assistant  
LORKAS (laptop orchestra of arizona state) - director 
__

On Nov 24, 2014, at 11:20 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Hi Garrett, Chris, Mike,

I asked Connor to label the machines and draw a machine diagram.
Can you please update this, drawing a more complete one on the same whiteboard?
Then someone can take a photo and xfer to the SC private website.*

Let’s make up the launch cheat sheets first as paper post-its, then write them up as a desktop README?

That way we can customize the launch protocol and instructions by type of user:

Administrator
Synthesis Developer
Synthesis Guest

Xin Wei

* Here is the PUBLIC website, launched as of yesterday (!) http://synthesis.ame.asu.edu


<IMG_3521.jpeg>


________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________

[Synthesis] LRR final runs Monday and Tuesday collecting rhythms from quotidian activities with ALL sensor channels in parallel

Mike, Garrett, Rushil, Qiao, Pavan, Jessica, 
Todd if you’re up for it,

For the capstone LRR experiments, my available times are:
Monday: 10:00 - 1:00 and 2:30 - 5:00
Tuesday: 11:00 - 1:30* and 2:15 - 3:30

• Experiments: final runs
Can we get together sometime to do group everyday movements,
exploring entrainment, anticipation, and retrospection, recording all available
sensor channels in parallel?

• Outcomes: plans 
I would also like to review this correlation work with Mike, Rushil, Qiao, et al. in Pavan’s group
to plan the next steps in the correlation experiments through February.

Garrett, Mike, let's integrate multiple sources into Ozone:

(1) Jessica Rajko’s xOSCs that she may bring in,
(2) all of Garrett's sources,
(3) IMUs when Ozzie gets them in.

Can we please set it up with mics + camera so that everyone can try introducing rhythms extemporaneously?

Remember the point of the LRR is also to study quotidian movement,
so it is very important on Monday and Tuesday to try some everyday movement
coordinated by furniture or small everyday props.
Pete, may we set up a test table on a whiteboard in the middle of the floor?  We’ll do it ourselves with your OK.

Let’s think of some everyday activities.  Jessica suggested lining up and arranging chairs…
Garrett, can we set up to record ALL sensors in parallel, without restricting to just one sensor modality?

Chris R: can you and Ozzie help all participants make sure we save all the data and videos in the Synthesis AFP net archive** that Tain set up?

* Sylvia:  Can we push a meeting with JV Tuesday 11:15-11:45 till Wednesday or after Thanksgiving?

** TAIN BARZSO: HOW TO connect to the Synthesis net fileserver shares:

1. Browse to sslvpn.asu.edu

This site should attempt an automatic installation of the Cisco AnyConnect
Secure Mobility Client. From my experience, it rarely works.  If/when it
fails, you will have the option to manually download the software.  Please
do so.

2. After installing and running the client, it will ask you for a server name.  This
will be, again, sslvpn.asu.edu

3. When it asks for username and password, use your ASURITE credentials.

At this point, you will be connected to the ASU VPN network. 

4. Connect to the share  afp://amenas.dhcp.asu.edu
Username: synthesis
Password:  ( ask Ozzie, assuming  you are eligible according to our research access policy )

Connect to share name "Synthesis"
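
For the archiving step itself, here is a minimal Python sketch (an illustration, not an existing Synthesis script): it assumes the "Synthesis" share has already been mounted via the steps above at the default macOS mount point /Volumes/Synthesis, and the folder layout shown is only a placeholder.

# archive_session.py -- copy a recorded session folder to the mounted Synthesis share.
import shutil
import sys
from pathlib import Path

SHARE = Path("/Volumes/Synthesis")      # default mount point once the AFP share is connected
DEST = SHARE / "LRR" / "sessions"       # placeholder layout; adjust to the real archive structure

def archive(session_dir: str) -> None:
    src = Path(session_dir)
    if not SHARE.exists():
        sys.exit("Synthesis share not mounted -- connect to afp://amenas.dhcp.asu.edu first.")
    DEST.mkdir(parents=True, exist_ok=True)
    target = DEST / src.name
    shutil.copytree(src, target)        # copies video + OSC logs together
    print(f"archived {src} -> {target}")

if __name__ == "__main__":
    archive(sys.argv[1])                # e.g. python archive_session.py 2014-11-24_LRR_run1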


Entrainment, anticipation, retrospection!
Xin Wei

Fwd: Rhythm workshop # 2: Heartbeat, February or January 2015

Hi,

Let’s plan a visit from Teoma Naccarato and John MacCallum for the Rhythm workshop #2
with AME PhDs interested in pursuing some “correlation” studies toward
understanding coordinated, non-isomorphic gesture.  (Maybe Adrian too, if he’s up to it in February?)

What dates would work for the scientific team?

Let’s first talk about what experiment we might try to run with John, Teoma, and local undergrad / grad performers.

We definitely should work with quotidian as well as rehearsed movers (aka dance students)
in everyday as well as choreographed scenarios.

As I said to Mike et al., at the end of the workshop,
i.e. on Monday and Tuesday, I’d like each hosted research cluster to draft some outcomes, such as:

• Research proposal (e.g. notes for a grant)
• Notes on experience for future publication
• Data — o4.track — video + OSC streams recorded by Garrett
should be archived in Synthesis webspace accessible to
Pavan’s group, Mike and related SC researchers

as well as
• Documentation (video, images, text) usable for print as well as web pub.
• Code for Synthesis-TML commons
• Gear for AME-Synthesis space
• Know-how as tech documentation

Xin Wei

Begin forwarded message:

From: Teoma Naccarato <teomajn@gmail.com>
Subject: Re: Rhythm workshop # 2: Heartbeat, February or January 2015
Date: November 19, 2014 at 1:41:34 AM MST
To: Sha Xin Wei <shaxinwei@gmail.com>
Cc: John MacCallum <john.m@ccallum.com>, Pavan Turaga <pturaga@asu.edu>, Michael Krzyzaniak <mkrzyzan@asu.edu>, Julian Stein <julian.stein@gmail.com>, synthesis-operations@googlegroups.com

Hi Xin Wei and company,

Apologies for our delayed response… we just returned (to Paris) from England where we gave talks at the MIPTL lab at Sussex University, and the Digital Studio Lab at Goldsmiths (Freida says hello!). I hope the Rhythm Workshop is going well this week!

Xin Wei, we are looking forward to the “Rhythm Workshop Part 2” you have proposed.  It will be great to learn about the rhythm measurement and analysis tools being developed at AME. Our software and hardware tools for sensing, processing, and documenting cardiac and respiratory activity during a range of body motion are developing well, thanks to John and Adrian, as well as Emmanuel Fletty here at IRCAM.

We are wondering whether it would be possible to spend one or two weeks, either prior to or after the collective workshop, in residency at the Synthesis Lab, to focus on questions and tools specific to our project. We would love to give a presentation of our work to date, and to guide focused research experiments with interested participants at the lab, in music and dance. Would this be of interest?

Some questions that are guiding our current research at IRCAM, and that we hope to share with collaborators at AME in the New Year, include:

1. What patterns of temporal correlation between cardiac, respiratory, and nervous function - during a range of physical activities and conditions - can be measured and observed via bio-sensing?

2. How can such patterns be used to choreograph and compose intentional arcs in the heart activity of performers over time?

3. How do humans and media negotiate between:
   i. Internal, felt sense of time - as perceived
   ii. Performative time - as realized
   iii. Score time - as prescribed

4. To what extent does the existence of a feedback loop between humans and media - real or perceived - impact the experience of performers, and of observers?

We look forward to discussing further. Thank you very much!

Best wishes,

Teoma and John

Julian's o4.track records video concurrent with (OSC) data streams

o4.track records video concurrent with OSC data streams, and is now in the TML-Synthesis GitHub.
Note the happy end to this thread.

Let’s make sure we use this during LRR group work this coming week!

Xin Wei

On Nov 10, 2014, at 1:07 AM, Julian Stein <julian.stein@gmail.com> wrote:

Hello all,

I just pushed some new things to the O4.synthesis github repository. This features a major overhaul of the rhythm tools, which has been in the works for several months now. The system as a whole is both more stable and more flexible, and should be better suited to the research coming up this month and beyond. Several examples are included within the bundle, and more documentation will come soon (hopefully this week). As discussed with a few of you, I'll try to make myself as available as I can to help get these things up and running from afar. :)

Also included in O4.rhyth_abs is a folder labeled o4.track. This features a simple system for recording and playing video with a synchronized OSC data stream.

best,

Julian
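
A minimal Python sketch of the idea behind o4.track's data side, for readers without Max at hand: it logs incoming OSC messages with wall-clock timestamps so the log can later be lined up, coarsely, with a separately recorded video. It uses the python-osc library; the port and log file name are arbitrary choices, and it is not the actual o4.track implementation.

# osc_logger.py -- illustration only: timestamp and log every incoming OSC message.
import time
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

LOGFILE = "osc_log.tsv"   # tab-separated: wall-clock time, address, arguments
PORT = 7000               # hypothetical port; match it to whatever the sensors send on

def log_message(address, *args):
    # One line per message: seconds since the epoch, OSC address, then the arguments.
    with open(LOGFILE, "a") as f:
        f.write(f"{time.time():.6f}\t{address}\t{args}\n")

dispatcher = Dispatcher()
dispatcher.set_default_handler(log_message)   # catch every address

if __name__ == "__main__":
    server = BlockingOSCUDPServer(("0.0.0.0", PORT), dispatcher)
    print(f"logging OSC on port {PORT} to {LOGFILE} (Ctrl-C to stop)")
    server.serve_forever()

Coarse alignment with video then only needs the wall-clock time at which the video recording started.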



On Oct 29, 2014, at 3:36 PM, Evan Montpellier <evan.montpellier@gmail.com> wrote:

Jinx, Julian! Please send that patch my way as well.

Evan

On Wed, Oct 29, 2014 at 6:35 PM, Evan Montpellier <evan.montpellier@gmail.com> wrote:
Hello all,

Mike, forwarding a conversation from February on a related topic (see below) - apologies if you've seen it before. A decent video recording patch (never mind one with recording and syncing of accompanying data) is a glaring absence from the current ASU/TML repertoire, at least as far as I'm aware. In terms of the video aspect, I'd like to build something around the Hap codec developed by Vidvox (Max implementation here), since it seems like the best solution for high-efficiency playback (Hap clips are read directly onto the GPU) - although I welcome alternate suggestions!

Evan

On Thu, Feb 6, 2014 at 4:24 PM, <adrian@adrianfreed.com> wrote:
qmetro just adds jitter and latency unless the machine is loaded, in which case an
indeterminate number of frames will be dropped. In fact the whole notion of frame rate
is corrupted in Jitter, because when times get tough it just grabs a frame from a buffer
that is being overwritten.
I vaguely remember that there is a flag in one of the objects that tries
to suppress this (jit.grab or jit.record or something like that).

As for Xin Wei's wish I can say that there is nothing but silly
technical silos in the way of recording video streams along with the
OSC context with o.record. Transcoding to OSC blobs is the quick way to
do this and was in fact why the BLOB type is part of OSC.
Jeff Lubow here at CNMAT is interested in this sort of thing. We should
add a function to the "o." library that transcodes a jitter matrix
as a BLOB. It is more glamorous to truly translate between the formats
but for Xin Wei's purpose simply viewing OSC bundles as an
encapsulation format is sufficient.

We have another project that will soon need this so I will try to scare
up some cycles to move it along.
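
Adrian's point about viewing OSC bundles as an encapsulation format can be sketched outside Max and the o. library as well. The fragment below (Python with python-osc; the address, port, and dummy frame are placeholders) packs raw frame bytes as an OSC blob inside a timetagged bundle:

# frame_as_blob.py -- sketch: encapsulate a video frame as an OSC blob in a timetagged bundle.
import time
from pythonosc.osc_message_builder import OscMessageBuilder
from pythonosc.osc_bundle_builder import OscBundleBuilder
from pythonosc.udp_client import UDPClient

def frame_bundle(frame_bytes: bytes, width: int, height: int):
    msg = OscMessageBuilder(address="/video/frame")   # made-up address
    msg.add_arg(width)
    msg.add_arg(height)
    msg.add_arg(frame_bytes)                 # bytes are encoded as an OSC blob
    bundle = OscBundleBuilder(time.time())   # timetag: now
    bundle.add_content(msg.build())
    return bundle.build()

if __name__ == "__main__":
    client = UDPClient("127.0.0.1", 7001)    # hypothetical receiver
    dummy = bytes(320 * 240 * 3)             # stand-in for one small RGB frame
    client.send(frame_bundle(dummy, 320, 240))

This is only an encapsulation, not a true translation between formats, which is exactly the distinction Adrian draws above.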


> -------- Original Message --------
> Subject: Re: documenting the discussions and exercises
> From: Sha Xin Wei <shaxinwei@gmail.com>
> Date: Thu, February 06, 2014 3:32 am

>
>
> [I’m widening this very thin technical thread to AME colleagues who may also have a stake in journaled monitored video + sensor data streams in OSC / Max-MSP-Jitter for research purposes.  - Xin Wei ]
>
> Thanks Julian.   Does qmetro introduce accumulating drift or “merely” local indeterminacy in time?   If it’s accumulating then we have a problem that may require ugly hacks like restart and stitching files.
>
> Just remember that for this work I do not care to have msec sync.     Very coarse sync between low-res video and the OSC stream is good enough for the sort of eyeballing I have in mind.   We need the OSC streams to stay in sync but the heavy media (video+video) stream is merely there for the human to imagine and talk about what was going on on the floor at that time.   Imagine scrolling through time (in Jamoma CueManager or autopattr interpolator) and always being able to see a small video showing what was going on at that time.
>
> If I had truly godlike power — like the authors of OSC and o.dot -- I’d like to not only build time tags  into the OSC standard, but some human-readable representation of the scene — i.e. a “photo / video" track.  So by default, in addition to the torrents of numbers, there’s always a way to see what the hell was going on in human-legible physical space at any OSC-moment.   This is not a rational engineering desire but an experimentalist’s desire :)
>
> Ideally the video would be the same as the input video feeds, so you can develop offline months later, and basically run the recorded input video and all sensor input against the modified code.   That way you can “reuse” the live activity in order to develop refined activity feature detectors and new synthesis instruments ourselves, without the expense of flying everyone together for technical work.   It’s a practical workflow matter of letting the technical development and the physical play interleave with less stuttering.
>
> IMHO, it is crucial for TML  that O4 engineers make instruments that collaborating dancers, paper artists, interior design students, stroke rehab experimenters all over the world can use without you guys babysitting at the console.
>
>
> Julian, if you run short on time, you might confer with local experts — students of Garth and Todd or Pavan — or John?
>
> Happy to see you all soon!
> Xin Wei
>
> On Feb 5, 2014, at 7:33 PM, Julian Stein <julian.stein@gmail.com> wrote:
>
> > sure, I'll see what I can do with this -- probably makes most sense to use the o.table object. I'll keep in mind Adrian's concerns with qmetro. Perhaps I can synchronize the osc data and the jitter material with the same clock?
> >
> > best,
> >
> > Julian
> >
> >
> > On Wed, Feb 5, 2014 at 7:49 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
> > Speaking of videos,
> >
> > A lot of the videos we shoot for “documentation” are inadequate for scientific / phenomenological work.
> > What we must do this time around is to simultaneously record
> >
> > VIDEO (low res for human eyeballing)
> > All OSC streams.
> >
> > Is this easy with Jamoma CueManager?
> >
> > Julian, do you have, or can you write, a tool that stores all OSC streams in sync with all input videos (which can be low resolution), consulting Navid?  It’d be good to test it with an ASU student before coming here — Chinmay perhaps.
> >
> > • We need to recruit a reliable team of documenters to
> > — babysit the video streams or time-lapse (it could even be 4 iPhone cameras running time-lapse still shots, or we could borrow some fancier gear)
> > — replay the buffer at the end of every session (or whenever asked for by the participants)
> > — take text notes at the Recap hours (12-1 and 4-5 every day; there’s an error in the calendar, which had the afternoon sessions too long, 2-6)
> >
> > I cc Chris as the coordinator.  "Freed, Inigo" <inigo.freed@prescott.edu>, Adrian’s son, offered to bring a video team down from Prescott, AZ to ASU for the workshop.   I’m not sure that will work because we need volunteer participants to run these cameras 4+ hours a day x 3 weeks.
> >
> > In fact, Adrian said, based on his experience, that to really capture a lot of subtleties we should roll cameras before anyone steps into the room.   I totally agree.   It introduces a lot of phenomenological framing to “set things up” and then check ___ then “roll camera”.   This model works for reframed art like cinema production, but does not work for studying the phenomenology of events.
> >
> > Q: Do we need moving video?  I think lo-res motion or time-lapse medium-res stills should suffice.  Remember this is merely to stimulate Recap discussion after every two hours.  If we run out of space we could throw away all but salient stills, which would need to be chosen on the spot and copied to a common drive…  etc. etc. etc.
> >
> > Anyway, the  Coordinators can plan this out in consultation with me or Adrian, and we need a team.
> >
> > Cheers,
> > Xin Wei
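
The offline workflow described in the quoted thread above (re-running recorded sensor input against modified code) can be approximated with a small replay loop. This Python sketch reads a timestamped log like the one produced by the logger sketch earlier in this thread and re-sends the messages with their original pacing; it uses python-osc, and the log format and port are assumptions rather than an existing Synthesis tool.

# osc_replay.py -- replay a timestamped OSC log with its original pacing.
import ast
import time
from pythonosc.udp_client import SimpleUDPClient

LOGFILE = "osc_log.tsv"
client = SimpleUDPClient("127.0.0.1", 7000)   # hypothetical receiving patch

def replay():
    with open(LOGFILE) as f:
        rows = [line.rstrip("\n").split("\t") for line in f if line.strip()]
    start_logged = float(rows[0][0])
    start_replay = time.time()
    for stamp, address, args in rows:
        # Wait until this message's original offset from the start of the recording.
        delay = (float(stamp) - start_logged) - (time.time() - start_replay)
        if delay > 0:
            time.sleep(delay)
        client.send_message(address, list(ast.literal_eval(args)))

if __name__ == "__main__":
    replay()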

Re: Lighting Ecology: Darkener and Lighteners game.

Byron's experience brings us back to Ozzie's caution.  Maybe we should not switch out the tower, since we know it works for demo purposes,
and we have no time to fix hardware, do LRR, and prep for WIP Dec 2 all in the same period.

We can still order a computer, but we don't have to change hw unless it is crucial for the LRR research Études ...

Xin Wei

Re: Lighting Ecology: Darkener and Lighteners game.

Thanks for the suggestion about borrowing a firewire generation Mac. I think that would work great for the Camera-Projector Robot system. 

Caroline, is it possible for me to borrow a firewire Mac for the remainder of this semester for this use?

For the iStage system, the current computer that is being used for video is a tower with firewire (as I recall), but we've talked about installing the new trashcan for this purpose. That would, in all likelihood, not work with the current cameras and reacTIVision. I can run multiple instances of reacTIVision simultaneously, so we should be able to use the existing composited video input system, assuming firewire inputs. The computer was not connected to the external network, so I was not able to download reacTIVision and test anything this evening (didn't want to change any hardware configurations). I'll bring it in on a flash drive tomorrow for a test. This will give us a sense of the feasibility of the approach in general, then we can consider camera (or software hacking) options if we swap out the computer. 

Best,
Byron

On Mon, Nov 10, 2014 at 12:32 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Maybe Ozzie and you can see with Caroline if you can borrow a Firewire Mac for this for at least the rest of Fall term, assuming all the Macs are imaged with Max?

Xin Wei


________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________




On Nov 10, 2014, at 11:20 AM, Byron Lahey <byron.lahey@asu.edu> wrote:

I thought I had ironed out all the issues with the ReacTIVision fiducial CV system last week when I discovered the new update of the software, but I've hit another snag. Maybe someone has an idea to solve or work around this problem. The problem is that the new computers all have Thunderbolt ports rather than native Firewire and ReacTIVision does not recognize the cameras once they are connected with an adapter. I didn't encounter this issue with my earlier testing because my computer is old enough to still have a Firewire port. Interestingly, Max has no problem seeing the Firewire camera when it is connected through the Thunderbolt adapter, so it is not just a power issue or complete incompatibility. The simplest solution I can think of is to purchase new cameras that are compatible with both the computer at a hardware level and with the ReacTIVision software. Tracking down cameras that we have a reasonable guarantee of working shouldn't be too difficult, but I don't know if we want to prioritize this as an investment or not. The key (at least for my camera-projector system) is to have a camera with a variable zoom so that the imaging sensing can be closely matched to the projected image. For an overhead tracking system, the key factors would be a wide angle lens, high resolution (to minimize the required size for the fiducials) and reasonably high speed (for stable image tracking). 

Another solution to this is to simply use older computers that have firewire ports so that we can use existing cameras. This would certainly be feasible for the camera-projector system. We would have to test this out for the overhead tracking system. The best thing to do for the overhead system is to try out the existing camera arrangement to see how well it works with the fiducials. I can try this out this afternoon (assuming the space is open for testing).

In general I think a fiducial based computer vision system is a good way to reliably track individuals and to afford conscious interaction with the system. 

Best,
Byron

On Sun, Nov 9, 2014 at 8:19 PM, Sha Xin Wei <Xinwei.Sha@asu.edu> wrote:

After the Monday tests of hardware and networking mapping to lights in iStage with Pete, Omar and I can code the actual logic the rest of the week with Garrett, and maybe with some scheduled consults with Evan and Julian for ideas.  Omar’s got code already working on the body lights.  I want to see it working via the overhead lights beaming onto the floor.  For this game we need to know who’s who.

Byron, Ozzie, anyone:  What is the easiest way in the iStage for us to track individual identities?   Not blob tracking, because the blobs will get confused.

Byron: if indeed you’ve got the fiducial code working, can you show this to Omar (and Garrett) so they can work on the Darkener and Lighteners ecology with me?   Can we have people carry fiducials visible to the camera from above?  Maybe they can place a fiducial next to them in order to draw light.   Can the fiducials be printed at a legible size that is not ludicrously large?   How about if we print a bunch of fiducials as “LIGHT COINS” of three different denominations — RGB — each good for a certain amount of elapsed time.   Exposing a fiducial to the camera charges it against your account (it can be used only once per session).  Maybe waving it can increase the intensity but shorten the time (reciprocal constraint).   People can then choose to band together or, in the +/- version, subtract, etc.
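
As a starting point for Omar and Garrett, here is a minimal Python sketch of the “LIGHT COINS” bookkeeping described above: RGB denominations with a fixed time budget, single use per session, and the wave trade-off of more intensity for less time. The numbers and names are placeholders; detecting the fiducials and driving the actual lights are assumed to come from reacTIVision and the existing lighting system.

# light_coins.py -- sketch of the "LIGHT COINS" bookkeeping for the Darkener and Lighteners game.
import time

# Each RGB denomination grants a base time budget (seconds) and intensity (0-1). Placeholder values.
DENOMINATIONS = {
    "R": {"seconds": 30, "intensity": 0.4},
    "G": {"seconds": 60, "intensity": 0.6},
    "B": {"seconds": 120, "intensity": 0.8},
}

class LightCoin:
    def __init__(self, fiducial_id: int, denomination: str):
        self.fiducial_id = fiducial_id
        self.denomination = denomination
        self.spent = False          # a coin can be used only once per session
        self.expires_at = 0.0
        self.intensity = DENOMINATIONS[denomination]["intensity"]

    def expose(self):
        """Called when the overhead camera first sees this fiducial: charge it against the account."""
        if not self.spent:
            self.spent = True
            self.expires_at = time.time() + DENOMINATIONS[self.denomination]["seconds"]

    def wave(self):
        """Waving trades remaining duration for intensity (the reciprocal constraint)."""
        if self.active():
            remaining = self.expires_at - time.time()
            self.expires_at = time.time() + remaining * 0.5
            self.intensity = min(1.0, self.intensity * 1.5)

    def active(self) -> bool:
        return self.spent and time.time() < self.expires_at

    def light_level(self) -> float:
        """This coin's current contribution to the light drawn at its location (0 when expired)."""
        return self.intensity if self.active() else 0.0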

Disappearing cats, shadow people, and now ghosts!

http://www.telegraph.co.uk/science/science-news/11214511/Ghosts-created-by-scientists-in-disturbing-lab-experiment.html

Very interesting what an asynchronous pseudo-self interaction can do. In this case, 500 ms seemed to be the delay that created the body dislocation and subsequent ghostly presences. It would be interesting to read the whole study to see more details on the timing in these experiments.

Byron
