Disappearing cats, shadow people, and now ghosts!

http://www.telegraph.co.uk/science/science-news/11214511/Ghosts-created-by-scientists-in-disturbing-lab-experiment.html

Very interesting what an asynchronous pseudo-self interaction can do. In this case, 500 ms seemed to be the delay that created the body dislocation and subsequent ghostly presences. It would be interesting to read the whole study to see more details on the timing in these experiments.

Byron

Re: Lighting Ecology: Darkener and Lighteners game.

Maybe you and Ozzie can check with Caroline about borrowing a FireWire Mac for this, for at least the rest of Fall term, assuming all the Macs are imaged with Max?

Xin Wei


________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________




On Nov 10, 2014, at 11:20 AM, Byron Lahey <byron.lahey@asu.edu> wrote:

I thought I had ironed out all the issues with the reacTIVision fiducial CV system last week when I discovered the new update of the software, but I've hit another snag. Maybe someone has an idea to solve or work around this problem. The problem is that the new computers all have Thunderbolt ports rather than native FireWire, and reacTIVision does not recognize the cameras once they are connected through an adapter. I didn't encounter this issue in my earlier testing because my computer is old enough to still have a FireWire port. Interestingly, Max has no problem seeing the FireWire camera when it is connected through the Thunderbolt adapter, so it is not just a power issue or complete incompatibility. The simplest solution I can think of is to purchase new cameras that are compatible both with the computers at a hardware level and with the reacTIVision software. Tracking down cameras that we have a reasonable guarantee of working shouldn't be too difficult, but I don't know if we want to prioritize this as an investment or not. The key (at least for my camera-projector system) is to have a camera with a variable zoom so that the image-sensing area can be closely matched to the projected image. For an overhead tracking system, the key factors would be a wide-angle lens, high resolution (to minimize the required size of the fiducials) and reasonably high speed (for stable image tracking). 

Another solution is to simply use older computers that have FireWire ports so that we can use the existing cameras. This would certainly be feasible for the camera-projector system. We would have to test it for the overhead tracking system. The best thing to do for the overhead system is to try out the existing camera arrangement and see how well it works with the fiducials. I can try that out this afternoon (assuming the space is open for testing). 

In general I think a fiducial based computer vision system is a good way to reliably track individuals and to afford conscious interaction with the system. 

Best,
Byron

On Sun, Nov 9, 2014 at 8:19 PM, Sha Xin Wei <Xinwei.Sha@asu.edu> wrote:

After the Monday tests of hardware and network mapping to lights in iStage with Pete, Omar and I can code the actual logic the rest of the week with Garrett, and maybe with some scheduled consults with Evan and Julian for ideas.  Omar’s got code already working on the body lights.  I want to see it working via the overhead lights beaming onto the floor.  For this game we need to know who’s who. 

Byron, Ozzie, anyone:  What is the easiest way in the iStage for us to track individual identities?  Not blob tracking, because the blobs will get confused.

Byron: If indeed you’ve got the fiducial code working, can you show this to Omar (and Garrett) so they can work on the Darkener and Lighteners ecology with me?  Can we have people carry fiducials visible to the camera from above?  Maybe they can place a fiducial next to them in order to draw light.  Can the fiducials be printed at a legible size that is not ludicrously large?  How about if we print a bunch of fiducials as “LIGHT COINS” of three different denominations — RGB — each good for a certain amount of elapsed time.  Exposing a fiducial to the camera charges it against your account (it can be used only once per session).  Maybe waving it can increase the intensity but shorten the time (reciprocal constraint).  People can then choose to band together or, in the +/- version, subtract, etc.
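To make the LIGHT COIN bookkeeping concrete, here is a minimal sketch of the game logic in Python. The class names, the RGB intensity/duration values, and the per-session accounting are placeholders of mine, not working iStage code; the real mapping from reacTIVision fiducial IDs to coins would come from whichever symbol set we print.

# Minimal sketch of the "LIGHT COIN" economy (hypothetical names and values).
# Assumes reacTIVision-style events arriving as (fiducial_id, wave_rate).

from dataclasses import dataclass, field

# denomination -> (intensity 0..1, base duration in seconds); placeholder values
DENOMINATIONS = {"R": (0.3, 120.0), "G": (0.6, 60.0), "B": (1.0, 30.0)}

@dataclass
class LightCoin:
    fiducial_id: int
    denomination: str          # "R", "G", or "B"
    spent: bool = False        # each coin is good for one use per session

@dataclass
class Session:
    coins: dict = field(default_factory=dict)   # fiducial_id -> LightCoin
    active: list = field(default_factory=list)  # (intensity, seconds_remaining)

    def expose(self, fiducial_id, wave_rate=0.0):
        """Charge a coin when its fiducial becomes visible to the overhead camera.
        Waving (wave_rate > 0) boosts intensity but shortens duration:
        the reciprocal constraint."""
        coin = self.coins.get(fiducial_id)
        if coin is None or coin.spent:
            return None
        coin.spent = True
        intensity, duration = DENOMINATIONS[coin.denomination]
        boost = 1.0 + wave_rate                  # crude stand-in for "waving harder"
        self.active.append((min(1.0, intensity * boost), duration / boost))
        return self.active[-1]

    def tick(self, dt):
        """Advance time; return the total light contribution of this player or group."""
        self.active = [(i, t - dt) for (i, t) in self.active if t - dt > 0]
        return min(1.0, sum(i for i, _ in self.active))

In the +/- variant, a Darkener's coins could simply contribute negative intensity before the final clamp.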


Vangelis Lympouridis: Greek notions relevant to Lighting & Rhythm Residency Workshop Nov 17-26

Thanks Vangelis @ USC for 
expanding on Adrian Freed's suggestive semblance typology of entrainment.


On Nov 10, 2014, at 12:22 AM, "Vangelis Lympouridis" <vl_artcode@yahoo.com> wrote:

Oh well, I did my best. I wish I could be more helpful but you may get something out of that...

Syn-tonic – From the Greek «Τόνος» (Tonos), which is the word for the accent mark you see often in Greek words (΄), so it can refer to the dominant frequency or emphasis. The Greek Τονικότητα (Eng. tonality) refers to the harmonic epicenter of a musical system (gamut). The closest to syntonic is συντονισμός, which is close to tuning (sync): same frequency, same phase. Makes sense as “together in frequency”.

Syn-chronic – From the Greek «Χρόνος» (Chronos) eng. Time. The Greek equivalent is συγχρονισμός (synchronization) meaning at the same exact time; maybe more appropriate for “together in time”.

Syn-tropic – From the Greek «Τρόπος» (Tropos). Τρόπος is the way something is happening, emerging, evolving. There is no word in Greek that combines these two, but there is a term introduced in 2006 at the EU called Comodalité that got translated into Greek as συν-τροπικ-ότητα (meaning every way possible). If I am right, orientation and direction are the same thing, except that direction implies movement. Tropic and tropical also refer to a cyclical direction, so in a way it can refer to the same direction.

Iso-tropic – There is a word ισοτροπία in Greek (Eng. isotropy), found mainly in physics and chemistry, which is the condition where the properties of something do not depend on the way they are measured, or where you get the same results regardless of the direction or dimension of the study. I think that “together in orientation” could be based on the other big family of the Παρα- prefix (para-, as in parallax), which means at the side of, facing the same direction. Two things can move in parallel or point in parallel ways... although I cannot think of something that sounds good. Parallelic sounds really bad...

Syn-tenic – Τείνω (verb), Ταινία; refers to length and trajectory, and we also call a movie «ταινία». A great fit for “along the same path”.

Syn-Plectic – From the Greek Πλέξη which means knitting, interlacing, braids. Συμπλέκτης (syn and plexis) is the car clutch. Very good for “plaited together”

Syn-gamic – From the Greek verb Γαμέω (eng. gametes) and refers to fertilization (conception). Syngamic might bring giggles as syn and gameo is like a threesome; gametes with companion!

Syn-detic – From the Greek verb Δένω (Eng. binding), which exists in many derivatives of things combined. Συν-δετικό, though, refers to something that joins two or more things together, while two things combined are συν-δεδεμένα. Syndemic sounds closer, although that word is already in use in the health context.

Syn-biotic – From the Greek Bios ; things living together. Very nice.



Vangelis Lympouridis, PhD
Visiting Scholar,
School of Cinematic Arts
University of Southern California

Senior Research Consultant,
Creative Media & Behavioral Health Center
University of Southern California
http://cmbhc.usc.edu

Whole Body Interaction Designer
www.inter-axions.com

vangelis@lympouridis.gr
Tel: +1 (415) 706-2638 


-----Original Message-----

getting rhythms into Ozone OSC via Julian's rhythm kit

I’d say the exercise would be to plug M Bateman’s drum kit — or any MIDI drum pad that AME may have in its DC pile — into a Mac set aside for rhythm work, as Garrett suggested.

M Bateman:  Run my scratch patch on the Mac that Ozzie (or Pete) assigns for rhythm work in iStage.  Tweak my dummy code to send the MIDI to some of Julian’s lighting control patch’s parameters.  You can ask Pete or any of the ultra-friendly AME / SC media artist-researchers for tips.  Or one of us (Pete or Garrett or ...) can do that for you.  Mapping MIDI note (or channel) to DMX channel and the scaled velocity to light intensity would be a good and very easy start.  (If the hardware is plugged into the Ozone lighting control, it should be a 5-10 minute exercise.)
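For anyone who wants the mapping spelled out in text rather than in a patch, here is a rough Python sketch using the mido and python-osc libraries. The /ozone/light address, the port, and the note-to-channel table are placeholders I made up; Julian's lighting control patch defines the real parameters.

# Rough sketch: MIDI drum pad -> light intensity over OSC.
# The OSC address pattern and port below are placeholders, not the real Ozone ones.

import mido
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)      # wherever the lighting patch listens

NOTE_TO_CHANNEL = {36: 1, 38: 2, 42: 3, 46: 4}   # kick/snare/hats -> example light channels

with mido.open_input() as port:                  # default MIDI input (the drum kit)
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            channel = NOTE_TO_CHANNEL.get(msg.note, msg.note % 8 + 1)
            intensity = msg.velocity / 127.0     # scale 0..127 velocity to 0..1
            client.send_message("/ozone/light", [channel, intensity])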


Then we can talk —

I’d take you through some simple ways to take a sequence and play it back —
with delay, reverse, echo, loop, etc.  Hint: [ zl group ] and some other [ zl ] objects for Max’s standard list manipulation, together with Max’s delay object, etc., could be a start ….
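For those who read text more easily than patch cords, the [zl]-style list manipulation amounts to something like the following Python sketch (my own analogy, not a translation of Julian's patches): record a list of (time, value) events, then play it back delayed, reversed, echoed, or looped.

# Plain-Python analogy of the list-manipulation idea:
# transform a recorded sequence of (time_in_seconds, value) events.

def delayed(events, dt):
    """Shift every event later by dt seconds."""
    return [(t + dt, v) for (t, v) in events]

def reversed_seq(events):
    """Play the sequence backwards over the same total duration."""
    if not events:
        return []
    end = events[-1][0]
    return sorted((end - t, v) for (t, v) in events)

def echoed(events, dt, gain=0.5):
    """Overlay a quieter copy of the sequence dt seconds later."""
    return sorted(events + [(t + dt, v * gain) for (t, v) in events])

def looped(events, period, repeats):
    """Repeat the sequence every `period` seconds, `repeats` times."""
    return [(t + k * period, v) for k in range(repeats) for (t, v) in events]

For example, looped(echoed(seq, 0.25), period=2.0, repeats=4) plays a quarter-second echo of the recorded sequence four times in a row.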

But I’m sure Julian’s got many many such logics already — so it may be most efficient to simply schedule a G+ with screenshare
for Bateman, Julian, and one local Max artist like Garrett or you.


Again the key is simply to do the easy step of connecting Bateman’s drum kit to iStage lights.  Everything else will follow in small easy steps with other folks doing the other mappings to sound and video etc.  with existing instruments. 

So we would have five ways of getting rhythms into the iStage network (Ozone)

1) video cam feed,
2) mic feed,
3) drums,
4) accelerometers from Chris Z’s iPods, maybe?
5) accelerometers from Mike K’s kit


Omar, can you and Julian make sure you can use everything in Julian’s latest shiny kit?  So you can bring Julian’s refinements to Phoenix?

I’d like to dedicate Monday morning to plugging cables and code together to see that we can get these different ways of getting rhythm into the iStage Ozone network, mapped to the lights.

Then try it out briefly with Mike K and dancers at 5:30, after Chris Z is finished.

This is just a test parallel to the Lighting Ecology: Darkener and Lighteners game. 

BTW, TouchDesigner would get in the way of getting rhythm exploration going.  And we don’t need it since we’re working collectively with other talents at AME + Synthesis + TML.

Onward!
Xin Wei


On Nov 9, 2014, at 6:33 PM, Peter Weisman <peter.weisman@asu.edu> wrote:

Hi All,
 
I will need a little guidance for this Thursday. I think I know how. But, I am open to any suggestions.
 
I will make arrangements to be available next Monday at 5:30pm.
 
I will be hanging the lights as soon as they get in this week. I will try to make myself available as much as I can.
 
Pete
AME Technical Director
(480)965-9041(O)
 
From: Xin Wei Sha 
Sent: Sunday, November 09, 2014 8:19 AM
To: Michael Krzyzaniak; Julie Akerly; Jessica Rajko; Varsha Iyengar (Student); Ian Shelanskey (Student); Michael Bateman; Peter Weisman; Garrett Johnson
Cc: synthesis-operations@googlegroups.com; Julian Stein; Assegid Kidane; Sylvia Arce
Subject: Adrian Freed: fuelling imagination for inventing entrainments
 
Garrett, Mike K, Julie, Jessica, Varsha, Ian, Bateman, Pete, 
 
Can we schedule time to create and try out some entrainment exercises during the LRR Nov 17 -26?
 
How about we make our first working session as a group on
 
Monday Nov 17 5:30 in the iStage.
 
I’ll work with people in smaller groups.
 
Tuesday afternoon Nov 12, I’ll putter around Brickyard,
or by appointment in the iStage.
 
Ian, Mike Bateman, Pete: let’s run through the lighting controls as Julian gives them to us
Let’s look at Julian’s Jitter interface together.
I asked Omar to prep a video to video mapper patch that he, you and I can 
modify during the workshop to implement different rhythm-mappings.
 
Pete can map Bateman's percussion midi into the lights (it’s a snap via our Max :)
Thursday 11/14, at 3:30  in iStage ?
 
Garrett, Mike K: can you work with Pete (or Ozzie re. video) to map the mic and video inputs in iStage into Julian's rhythm kit:
percussion midi
video camera on a tripod,
overhead video,
 
one or more audio microphones on tripods,
wireless lapel microphones. 
Julian has code that maps video into rhythm.
Can we try that out Wed 11/12 or Thurs 11/13 in the iStage?
 
 
Let’s find rich language about rhythm, entrainment, temporality, and processuality in movement and sound,
relevant to entrainment and co-ordinated rhythm, as the Lighting and Rhythm workshop looms.
 
On Nov 8, 2014, at 11:54 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
I would like to enrich our imagination before we lock too much into talking about “synchronization” 
and regular periodic clocks in the Lighting and Rhythm workshop.
 
 
On Nov 8, 2014, at 9:41 PM, Adrian Freed <Adrian.Freed@asu.edu> wrote:
 

Adrian Freed: fuelling imagination for inventing entrainments

Here’s a note relevant to entrainment and co-ordinated rhythm, as the Lighting and Rhythm workshop looms.

Adrian Freed’s collected a list of what he calls “semblance typology of entrainments.”
Notice that he does NOT say “types” but merely a typology of semblances, which helps us avoid reification errors.

Let’s think of this as a way to enrich our vocabulary for rhythm, entrainment, temporality, processuality in movement and sound.
Let’s not use this — or any other list of categories — as an absolute universal set of categories sans context. 
See the comments below.


On Nov 8, 2014, at 11:54 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
I would like to enrich our imagination before we lock too much into talking about “synchronization” 
and regular periodic clocks in the Lighting and Rhythm workshop.




On Nov 8, 2014, at 10:01 PM, Adrian Freed <Adrian.Freed@asu.edu> wrote:
I haven't thought about this in a while, but the move to organize the words using the term "together", which I did for the talk at Oxford, is interesting because it allows a formalization in mereotopology à la Whitehead. I would have to provide an interpretation of enclosure and overlap that involves correlation metrics in some structure, for example CPCA
(Correlational Principal Component Analysis): http://www.lv-nus.org/papers%5C2008%5C2008_J_6.pdf



On Nov 9, 2014, at 7:05 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:


__________________________________________________________________________________
Sha Xin Wei, Ph.D. • xinwei@mindspring.com • skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________



Thanks Adrian,

Then I wonder if rank statistics —  ordering vs cardinal metrics — could be a compromise way.
David Tinapple and Loren Olson here have invented a web system for peer-critique called CritViz
that has students rank each other’s projects.  It’s an attention orienting thing…
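(As a toy illustration of the ordering-vs-cardinal-metrics point, and nothing to do with CritViz's actual implementation: a rank statistic such as Spearman correlation only cares about order, so it is unchanged by any monotone rescaling of the underlying scores, whereas a cardinal metric such as Pearson correlation is not.)

# Toy contrast between a cardinal metric (Pearson) and a rank statistic (Spearman).
import numpy as np
from scipy.stats import pearsonr, spearmanr

scores_a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores_b = np.exp(scores_a)               # a monotone, nonlinear rescaling of the same ordering

print(pearsonr(scores_a, scores_b)[0])    # < 1: sensitive to the cardinal values
print(spearmanr(scores_a, scores_b)[0])   # == 1: only the ordering matters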

Of course there are all sorts of problems with CritViz — the most serious one being 
herding toward mediocrity or at best herding toward spectacle

and it is a bandage invented by the necessity of dealing with high student / teacher ratios in studio classes.

The theoretical question is: can we approximate a mereotopology on a space of 
Whiteheadian or Simondonian processes using rank ordering, 
which may do away with the requirement for coordinate loci?

The Axiom of Choice gives us a well-ordering on any set, so that’s a start, 
but there is no effective, decidable way to compute such an ordering for an arbitrary set.
I think that’s a good thing.  And it should be drilled into every engineer.
This means that the responsibility for ordering 
shifts to ensembles in milieu rather than to individual people or computers.

Hmmm, so where does that leave us?

We turn to our anthropologists, historians, 
and to the canaries in the cage — artists and poets…

There’s a group of faculty here including  Cynthia Selin, who are doing what they call scenario [ design | planning | imagining ]
as a way for people to deal with wicked messy situations like climate change, or developing economies.   They seem very prepared for 
narrative techniques applied to ensemble events but don’t know anything about theater or performance.  
It seems like a situation ripe for exploration, if we can get past the naive phase of slapping conventional narrative genres from
community theater or gamification or info-visualization onto this.

Very hard to talk about, so I want to build examples here.

Xin Wei

"correlation" experiments with Mike K et al ( & Xin Wei ) during Lighting and Rhythm Workshop

Dear Mike , Julie, Varsha, Pavan,

Everyone: Beginning to carry out some "correlation” experiments as core scientific research of the LRR is a very high priority for Synthesis.  So thanks to Mike for doing this.  (We’ll want to continue this through the year until we generate some publishable insights. :)

I would like to make sure that Pavan is in the loop here with his group.  The goal for the "correlation experiments” during the LRR Nov 17-26 is pretty concrete.  We want to discover some non-isomorphic relations between co-movements, movements by ensembles (or individuals), that those people themselves describe as

(1) “correlated”, mutually responding in some way, or
(2) anticipating or retrospecting activity,

either among the humans or the light fields.

(1) Julie, Varsha, and Mike can go through some movement “games” / studies responding to the questions laid out in the Lighting and Rhythm workshop.  See the Etudes Motivating Questions.

(2) Xin Wei, Pavan, and Pavan’s group can witness this work, talk with Julie, Mike, and Varsha,
review videos simultaneously with the streams of data and the correlations,
and look at Mike’s data from the accelerometers and his correlation computations.
Mike has recorded data streams concurrent with video of the activity.  (A sketch of one such correlation computation follows below.)

( Hint:  QT Screen-recording off of Jitter patches does the job: 
a lesson for TMLabbers who would like to do parallel research in this domain :)
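For the correlation computations themselves, here is the kind of minimal sketch I have in mind (mine, not Mike's code): a windowed Pearson correlation between two accelerometer magnitude streams, plus a brute-force search over lags, which already surfaces lead/lag between movers.

# Sketch: windowed correlation between two accelerometer streams (placeholder data shapes).
import numpy as np

def magnitude(xyz):
    """Collapse an (N, 3) accelerometer stream to a single magnitude signal."""
    return np.linalg.norm(xyz, axis=1)

def windowed_correlation(a, b, win=100, hop=25):
    """Pearson correlation of streams a and b over sliding windows."""
    out = []
    for start in range(0, min(len(a), len(b)) - win + 1, hop):
        wa, wb = a[start:start + win], b[start:start + win]
        out.append(np.corrcoef(wa, wb)[0, 1])
    return np.array(out)

def best_lag(a, b, max_lag=50):
    """Lag k (in samples) maximizing correlation between a(t) and b(t - k);
    positive k means b leads a. Assumes both streams are longer than 2 * max_lag."""
    lags = range(-max_lag, max_lag + 1)
    scores = [np.corrcoef(a[max_lag:-max_lag],
                          np.roll(b, k)[max_lag:-max_lag])[0, 1] for k in lags]
    return list(lags)[int(np.argmax(scores))]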


Chris R: I’ll make myself available after 5:30 or on the weekends too, to fit Julie, Mike, and Varsha’s availability.

Xin Wei




On Nov 7, 2014, at 3:29 PM, Julie Akerly <j.akerly@live.com> wrote:

Yes, I would really like to be a part of this, but things during the weekdays do not work for me.
Can you schedule some workshop times that are after 5:30 or on the weekends?

Julie Akerly
Co-Director of NueBox
Artistic Director of J.A.M.

The dancer's body is simply the luminous manifestation of the soul.
~Isadora Duncan




Date: Fri, 7 Nov 2014 15:12:52 -0700
Subject: Meetings with Xin Wei during Rhythm and Lighting Workshop
From: mkrzyzan@asu.edu
To: cmrober2@asu.edu; j.akerly@live.com; varshaiyengar@hotmail.com; shaxinwei@gmail.com

Chris,

Xin Wei wanted me to ask you if we can schedule time together in iStage during the Rhythm and Lighting workshop to work on dancer correlation. He suggested the Tuesday and Thursday afternoons (Nov 18, 20, 25, 27), and possibly on the afternoon of the 24th when Michael Montanaro is here.

Julie and Varsha, what are your schedules like at these times? Julie, I know you work during the day, but we can work around your schedules, both of you...

Mike 

-- 
Sine coffa, vita nihil est!!!

Re: Door Portals : prototypes delivered by Nov 15, bomb tested by Nov 21, and deployed by Nov 24 for WIP event

To be clear,
What happens inside the frame of the Door in terms of visual effect, beyond the DEFAULT behaviours
(0) Mirror
(1) Live streams: Stauffer Reception, iStage, Brickyard Commons

is really up to the ingenuity of the realtime video artists, starting with Prof. David Tinapple.
David can curate other realtime video “instruments” for Dec 2 or beyond.
All we need for Dec 2 is the realtime feeds with a blend between them.

Anything beyond that, such as David’s clever “gaze” tracking is icing on the cake.  That would be fantastic.

David (or Ozzie or Garrett):
Can you look at Ozzie’s code for the Stauffer display, and tweak it to serve as a robust SHELL program?
We need a good shell program that will switch among video instruments on a bang
(e.g. from a handclap, so we should put in audio input with calibration).  The video instruments will
be other students’ works.
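This is not a substitute for the Max shell, but as a sanity check on the switching logic itself, here is a tiny Python sketch: a calibrated RMS threshold with a debounce acts as the bang and cycles through a list of named instruments. The instrument names, threshold, and debounce are placeholders of mine.

# Sketch of the instrument-switching shell logic (the "bang" = a clap over threshold).
import numpy as np

class InstrumentSwitcher:
    def __init__(self, instruments, threshold=0.3, debounce_frames=20):
        self.instruments = instruments   # e.g. ["mirror", "stauffer_feed", "istage_feed"]
        self.threshold = threshold       # set during audio calibration
        self.debounce = debounce_frames  # ignore re-triggers for this many frames
        self.cooldown = 0
        self.index = 0

    def process_frame(self, audio_frame):
        """Feed one buffer of audio samples; return the currently selected instrument."""
        rms = float(np.sqrt(np.mean(np.square(audio_frame))))
        if self.cooldown > 0:
            self.cooldown -= 1
        elif rms > self.threshold:       # the bang: switch to the next instrument
            self.index = (self.index + 1) % len(self.instruments)
            self.cooldown = self.debounce
        return self.instruments[self.index]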

Xin Wei




On Nov 1, 2014, at 10:44 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Dear Pete, David, Nicole, (technical advisor Ozzie)

I would like Pete to be in charge of installations for WIP Dec 2 following my event design — (For the dramaturgy I’ll of course rely on experienced folks — Chris Z as well as Pete, Garth, Todd, Ed, David.)

For Dec 2, I would like monitors mounted in portrait on some walls in Stauffer and iStage: hence the name DOOR PORTALS, DoorPorts

BEHAVIOR

Each single-sided Door Portal has a camera facing out from its front.

In iStage: when you stand 20’ away from the Door Portal you see a live feed of the receptionist desk at Stauffer.  When there is no one present, the Door Portal should show past people, faintly.

As you occlude the Door Portal (background subtract, use tml.jit.Rokeby), it turns into a mirror.
   As you walk toward this Door Portal, at 15' the mirror fades to a live feed from another Door Portal B.
At 10’ that live feed fades to a live feed from Door Portal C.
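The distance behaviour reduces to a simple piecewise crossfade. Here is how I would sketch it; the occlusion test really comes from the Rokeby-style background subtraction, which I have folded into a single presence flag, and the feed names are placeholders.

# Sketch of the Door Portal blend logic: distance of the nearest person -> source mix.

def portal_mix(distance_ft, presence):
    """Return {source: weight} for the portal's display, weights summing to 1.

    no one present : faint traces of past people
    >= 20 ft       : live feed of the Stauffer reception desk
    20..15 ft      : Stauffer feed cross-fading to mirror as the person occludes the portal
    15..10 ft      : mirror fading to the live feed from Door Portal B
    <= 10 ft       : Door Portal B feed fading to the feed from Door Portal C
    """
    if not presence:
        return {"past_people_faint": 1.0}
    if distance_ft >= 20:
        return {"stauffer_live": 1.0}
    if distance_ft >= 15:
        t = (20 - distance_ft) / 5.0
        return {"stauffer_live": 1 - t, "mirror": t}
    if distance_ft >= 10:
        t = (15 - distance_ft) / 5.0
        return {"mirror": 1 - t, "portal_b_live": t}
    t = max(0.0, (10 - distance_ft) / 10.0)
    return {"portal_b_live": 1 - t, "portal_c_live": t}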


LOCATIONS, SIZES

(1) In iStage in the location that we discussed — next to the door to offices, flush with the floor.  Largest size.

(2) On a wall in Stauffer B, facing the reception desk (in addition to, or instead of, the display behind the receptionist’s head).
Cables must be hidden — we cannot have exposed cable in the receptionist area.  Let the Space committee take note.
Cables must be hidden — we cannot have exposed cable in the receptionist area.  Let the Space committee take note.

(3) Mounted on a wall in the dark Commons of the BY.

All these should be PORTRAIT — no landscape, please — let’s avoid knee-jerk screenic response!   Bordered images are boring.

NICOLE, your idea for prototyping is very smart.  If you want, please feel free to make paper mockups — just 
ask Ozzie or Pete for the dimensions of our monitors, cut paper to 1:1 scale, and bring some mock doors that we can pin to the walls in the BY and iStage.

Re. scale, I’m OK with the doors not being human height — 1:1 body size is too literal (= boring).  Plus I want children-sized apparatus in our office spaces.  Our office spaces are all designed for 5-6’ tall people, nominally adults — this is socially sterile.

NOTE: I think we do NOT need a big screen in the foyer — it is too expensive and a waste of a monitor (unless CSI buys one :)  
For that Brickyard foyer, if people — e.g. Chris R — want a porthole (≠ portal), then let’s put a pair of iPads, one on each side of that foyer sub-wall, and stream live video between them.


TIMETABLE (ideal)

Nov 15
prototypes delivered

Nov 21
bomb tested, 

Nov 24
Deployed for WIP event
(If this is documentation day for LRR, then this is when we should invite Sarah H to send her team to shoot the Door Portal along with the best of LRR for Herberger (gratis!))




Matt Briggs: lighting instrument experiments in everyday space (i.e. Brickyard)



I may have found a way to incorporate Byron's mechatronics to utilize natural light.
I have many laser-cut "icicles" of dichroic acrylic (picture attached).

These "icicles" do a great job of capturing light. I could hang many of these from a panel 
which either may be recessed in place of a ceiling panel or hung from a ceiling
panel. This panel could be made of a flexible material that already exists (see the attached reference image),
or out of a material that is laser cut to become flexible. The panel could be animated from
a number of control points using Byron's mechatronics. The combination of the two attached reference
images suggests the effect: a potential light installation.
We could then create an array of panels, or clusters of panels, which take in rhythm data
via xOSC and output it as movement of the "icicle" positions in space.
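A minimal sketch of what the receiving side of "rhythm data via xOSC" could look like, using the python-osc library; the /xosc/accel address, the port, and the four control points are assumptions of mine, and the real addresses depend on how the board is configured.

# Sketch: receive rhythm/accelerometer data over OSC and map it to panel control points.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

panel_angles = [90.0] * 4          # four hypothetical control points, resting at mid-range

def on_accel(address, x, y, z):
    """Map incoming acceleration magnitude to a gentle deflection of the panel."""
    magnitude = (x * x + y * y + z * z) ** 0.5
    deflection = max(0.0, min(1.0, magnitude / 2.0))           # crude normalization
    for i in range(len(panel_angles)):
        panel_angles[i] = 90.0 + 45.0 * deflection * (-1) ** i  # alternate directions
    # here the angles would be passed on to Byron's mechatronics (servo driver, etc.)

dispatcher = Dispatcher()
dispatcher.map("/xosc/accel", on_accel)   # placeholder address, not the board's default
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()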

It is important to have a certain amount of light in the space to create a visual equilibrium and
relieve tension in the eyes. These "icicles" will not introduce a significant amount of light into
the space. However, they will refract the existing light in a dynamic and non-intrusive way.
If this light is paired with Josh's light tube system or the already existing natural light,
I believe it could create a small amplification of that light, but more importantly a psychological effect
that may carry equal weight for the users of the space. I think it is important to have subtle but
aesthetically pleasing sources of light in the workspace, and this may function in that regard.

Any thoughts or criticisms?

I could fabricate these panels and install a small cluster by the workshop date.
Byron do you think it would be possible to animate them by this time?
I will leave the bag of "icicles" on your desk at synthesis for reference.

Side note: I leave Monday for the AR2U conference in Ames, Iowa.
So I won't be able to continue on this project until I get back on the 10th.

A follow up,

I just spoke with Byron and hypothesized some potential applications of the "icicles."

In the application above we could create movement based on the refraction of the
natural light, or interject movement based on an applied rhythm.

We also spoke of using this light refraction as a feedback loop, as a form of rhythm.
This could be applied to movement in forms beyond the panels, using
the natural light as a parameter.

A suggested form could be a desk lamp or a corner extension. The "icicles" 
could be stacked from top to bottom and rotated from the base to create a vertical
or horizontal deflection. The "icicles" could also be installed in a flexible panel in
a more rigid application to give direct control over their movement.

We discussed the fact that the application of the "icicles" needs to be architectural
rather than sculptural. I think this can be achieved in all of these applications,
but I would like some input on them. Unfortunately I cannot find
any examples of the other applications, but I can create sketches if it is unclear.

Byron also assured me that the mechatronics would work in these scenarios.

PS (I've attached the images in this email)

thanks,

-matthew briggs
 undergraduate researcher @synthesiscenter