[TML][Synthesis] Plant-Thinking Meeting/Seminar: discuss Marder, November 1 - Dec 19 ?

Hi Michael, everyone,

All great! 

I’ve been talking with Oana and most recently Omar about the vegetal studies research.
From Omar it seems that most of the interested folks are away or too busy in September, and in October there are other events planned related to Synthesis or TML (e.g. Listen(n) @ ASU, the lighting animation workshop, the Einstein's Dreams workshop).

It’s a good idea to do it on a weekly basis.  But instead of stretching over a whole semester, how about we concentrate the Marder-based part of the seminar into 1.5 months, during a period when people are prepared to really grapple with the Marder?

To take the reading of Marder seriously, I think it’d be necessary to do this in person, or as synchronously as our portals can deliver.  And we each need time to prepare by absorbing related works; I would strongly recommend some of the Aristotle and Goethe, to make the time we invest worth the investment.

So our suggestion (Oana’s and mine) is to prep readings and exchange references etc. in the vegetal studies stream now, and do the actual readings of Marder over seven weeks: November 3 through December 19.
We recommend:
Week 1: Chap. 1
Week 2: Chap. 2
Weeks 3-4: Chaps. 3 & 4
Weeks 5-6: Chaps. 4 & 5
Week 7: Papers and crits (double-long session)

It’d be great to aim to deliver some substantial multi-format responses, on the order of a paper, a short video, or sketches of experiments, that really synthesize the insights from the seminar.


Here are two key starting operating rules for this game:

• Avoid allegory: not the depiction of “what plants look like” but how plants grow and experience dynamical existence.

• Avoid anthropomorphizing as radically as possible.


Perhaps I can come mid-November and mid-December.
On the other hand my duties this Fall may well be so heavy that it’d be easier if Synthesis hosted this theoretical phase of the joint TML-Synthesis vegetal studies research stream in Phoenix.

Suggestions?
Xin Wei


__________________________________________________________________________________
Sha Xin Wei, Ph.D. • xinwei@mindspring.com • skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________

Bill Forsythe: Nowhere and Everywhere at the same time No. 2 (pendulums)

Two works by two choreographers, Dimitris Papaioannou and Bill Forsythe,
with very different and interesting approaches to causality and temporal texture…

- Xin Wei

On Jul 20, 2014, at 12:55 AM, Michael Montanaro <michael.montanaro@concordia.ca> wrote:

A beautiful choreographed work: NOWHERE (2009) / central scene / for Pina
from Dimitris Papaioannou



Begin forwarded message:

From: "Vangelis Lympouridis" <vl_artcode@yahoo.com>
Date: July 22, 2014 at 8:39:27 AM GMT+2
To: "Adrian Freed" <Adrian.Freed@asu.edu>, "'Sha Xin Wei'" <shaxinwei@gmail.com>, "'John MacCallum'" <john@cnmat.berkeley.edu>

When you have a second, please watch this 2-min video of Forsythe’s piece Nowhere and Everywhere at the Same Time No. 2.

I think it is SO to the core of what we are reasoning about…

Vangelis Lympouridis, PhD
Visiting Scholar, School of Cinematic Arts
University of Southern California

Senior Research Consultant,
Creative Media & Behavioral Health Center
University of Southern California

Whole Body Interaction Designer
Tel: +1 (415) 706-2638

Re: [Synthesis] Camera Obscura San Francisco, an analog way to bring the sky into the interior

Fyi, some interesting links to projects that seem relevant to our previous conversation.

1. Magic of Flying
- Bringing movements in the sky into the interior of the Brickyard

2. Inflating Trash Bags Rhythmically Mimic Ocean Waves

Much appreciated,
Cooper


On Tue, Jul 1, 2014 at 2:31 AM, Garth Paine <Garth.Paine@asu.edu> wrote:
Hi all

There is some interesting work on skylights and on amplifying light amplitude. Can I suggest someone take a look at that area?  Networked prisms might be an interesting approach, for instance; perhaps they could be servo/stepper motor controlled to redirect light streams into different parts of the space. Modeling this would be an interesting exercise (a rough sketch follows).
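
To make that modeling exercise concrete, here is a minimal geometric sketch in Python. It is only an illustration under assumptions of my own (a flat reflecting face on a two-servo pan/tilt mount, a room-fixed frame with z up, made-up sun and target vectors), and it treats the steered face as a simple mirror; a real prism would refract, which changes the math. It computes the surface normal needed to bounce an incoming ray toward a target, and the pan/tilt setpoints for that normal.

    import math

    def normalize(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    def mirror_normal(ray_dir, to_target):
        # For a flat mirror, the normal that reflects a ray travelling
        # along ray_dir into the direction to_target is the normalized
        # difference of the two unit vectors.
        d, t = normalize(ray_dir), normalize(to_target)
        return normalize(tuple(ti - di for di, ti in zip(d, t)))

    def pan_tilt(n):
        # Convert a unit normal to pan/tilt servo angles in degrees (z = up).
        return (math.degrees(math.atan2(n[1], n[0])),
                math.degrees(math.asin(n[2])))

    # Made-up example: afternoon sun slanting down from one side,
    # target patch on a wall at the prism's own height.
    n = mirror_normal(ray_dir=(0.5, 0.5, -0.7), to_target=(0.0, -1.0, 0.0))
    print(pan_tilt(n))  # re-run as the sun vector changes, to track it

A real installation would presumably drive the servos from this continuously, with the sun vector coming from a solar ephemeris for Phoenix.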

Cheers Garth
Sent on the Move

On 1 Jul 2014, at 10:22, "Sha Xin Wei" <shaxinwei@gmail.com> wrote:

I love the idea.  But we explored that in TML circa 2002-2004 in Atlanta, and circa 2005-2006 in Montreal.
I’m happy to review the techniques that we investigated, all of which involved more labor than was available without access to maquiladora hands.

It’s worth Matt or Kevin posting the architectural techniques that are in actual use for light guides.  Apparently the solutions cost nontrivial $ or labor.    James Turrell has apparently made some study of this ...

Xin Wei



__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________






On Jun 30, 2014, at 5:33 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

My favorite website, Alibaba.com (China's largest e-commerce site... they keep trying to buy Yahoo), has tons of plastic optical fibre for sale, which would be a great, energy-efficient way to move light around in a modular way. Perhaps one end could be coupled to the window and the other end put wherever light is needed?

http://www.alibaba.com/trade/search?fsb=y&IndexArea=product_en&CatId=&SearchText=Plastic+optical+fiber

Mike


On Sun, Jun 29, 2014 at 7:33 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:
I have heard that it is possible to purchase enormous fibre-optic cables for cheap. Some DC capstones were working on just this a few semesters ago, but I think they sadly gave it up for something else...

Mike


On Sun, Jun 29, 2014 at 8:24 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Relevant to the discussion of how to bring natural light into the interior of the Brickyard commons:

Camera Obscura San Francisco 

is an analog way to bring the sky into the interior of a building.
The building is specially built for that purpose.

But pinholes do work too, and presage quantum mechanics.  Hence this could be another turn in the endless spiral between art, techne, and science.

Xin Wei
__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________










--
Sine coffa, vita nihil est!!!

RE: [Synthesis] networked cameras; potential response to activity vs actual activity

Cooper, if one of those black GoPros is available, maybe we can find it useful under certain scenarios. Below are a few network cameras along with their pro and con features.

Pro: wired and wireless camera with IR for night vision, pan/tilt action, 2-way audio, 1280x800 WXGA
Con: no PoE

Pro: PoE, 2-way audio, WXGA
Con: no wireless, no optical zoom

Pro: high resolution, PoE, 2-way audio
Con: fixed focus and zoom

Pro: high and multiple resolutions, PoE, 2-way audio, focus and zoom, supports RTP/RTSP/UDP/TCP/HTTP protocols
Con: dome design, no pan/tilt


From: synthesis-research@googlegroups.com [synthesis-research@googlegroups.com] on behalf of Sha Xin Wei [shaxinwei@gmail.com]
Sent: Saturday, June 28, 2014 11:23 AM
To: synthesis-research@googlegroups.com
Cc: post@synthesis.posthaven.com; ozone@concordia.ca
Subject: [Synthesis] networked cameras; potential response to activity vs actual activity

Given Tain’s, Cooper’s and other folks’ findings, IMHO we should use wifi networked cameras but not GoPros.  (Can we borrow two from somewhere short term to get the video network going?)  Comments on networked cameras?
But let’s keep the focus on portals for sound and gesture / rhythm.

Keep in mind the point of the video portal is not to look AT an image in a rectangle, but to suture different regions of space together. 

Cooper’s diagram (see attached) offers a chance to make another distinction about Synthesis research strategy, carrying on from the TML, that is experientially and conceptually quite different from representationalist or telementationalist uses of signs.  

(Another key tactic: to eschew allegorical art.)

A main Synthesis research interest here is not about looking-at an out-of-reach image far from the bodies of the inhabitants of the media space, 
but how to either:
(1) use varying light fields to induce senses of rhythm and co-movement
(2) animate puppets (whether made of physical material, light, or light on matter).

The more fundamental research is not the actual image or sound, but the artful design of the potential responsivity of the activated media to action.
This goes for projected video, lighting, acoustics, synthesized sound, as well as kinetic objects.
All we have to do this summer is build out enough kits or aspects of the media system to demonstrate these ways of working, in the form of some attractive installations installed in the ordinary space.
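
As one concrete (and entirely hypothetical) illustration of responsivity-to-action as opposed to image display: a few lines of Python using the python-osc library, where a video-activity feature arriving over OSC is folded into a slowly breathing lamp level. The addresses, ports, and the /video/activity feature are placeholders of my own, not the actual Ozone conventions; only the pattern (feature stream in, slow envelope out) is the point.

    # pip install python-osc
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    lamp = SimpleUDPClient("127.0.0.1", 9001)  # e.g. a lighting-control Max patch
    level = 0.0

    def on_activity(address, value):
        # The lamp does not display the image; it follows a slow envelope
        # of the room's movement, so the media respond to action rather
        # than represent it.
        global level
        level = 0.95 * level + 0.05 * float(value)  # smooth the raw feature
        lamp.send_message("/lamp/1/level", level)   # 0.0-1.0 dimmer value

    dispatcher = Dispatcher()
    dispatcher.map("/video/activity", on_activity)  # feature from the camera pipeline
    BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()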

Re. the proto-research goals this month, we do NOT have to pre-think the design of the whole end-user experience, but rather build some kits that would allow us to demonstrate the promise of this approach to VIP visitors in July and students in August, and to work with in September when everyone’s back.

Cheers for the smart work to date,
Xin Wei



On Jun 28, 2014, at 1:58 AM, Cooper Sanghyun Yoo <cooperyoo@asu.edu> wrote:

Hi all, 

I would also like to add some thoughts on using GoPro.  Well, to begin with, I am a huge fan of GoPro cameras, and I was testing the GoPro Hero 3+ Black Edition (the latest model) last April.  It would be great if AME purchased new GoPro cameras, but maybe they are not the best choice for Brickyard projects.

[Image 1] Six GoPros on a special mount made from a 3D printer.  Didn't really work under Arizona weather.  Photos taken near Tempe.

As far as I remember, the idea of using GoPro cameras started because we wanted to locate cameras outside of the building without any wires connected to the computers.

GoPro cameras are small, lightweight, feature-loaded, and designed to be put under extreme conditions.  However, as they are not built for long-term use, they have battery, overheating, and wifi range issues.  The battery lasts less than an hour, and if we connect a source of electricity the camera will overheat and turn off once in a while.

I also found ways to stream full video into a web browser or VLC media player (https://www.youtube.com/watch?v=RhFzrYBXItc) and a Mac app for controlling the camera and doing low-resolution live previews (https://www.youtube.com/watch?v=au5Yi6y-LmE).  Unfortunately, most of these solutions are for low-resolution previews.  Trying GoPro cameras in Max/Jitter might still be a very interesting and challenging project, but we can also consider different options.

For example, wifi network cameras can be another good choice.  They are designed specifically for long-term live streaming without being connected to a computer.  As long as the camera is connected to a power source and the network, it doesn't even need to be located near the computers.  The image below shows my previous project that used two network cameras in an outdoor environment.
[Image 2] The network camera was located outside of the building.  All the machines controlling the LED ceiling were in a different building, so we couldn't use wires to connect the camera to a computer.

Please let me know your ideas and thoughts.

Much appreciated,
Cooper


On Fri, Jun 27, 2014 at 9:21 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Very likely standard httpd would be a bad way to send video, since it was designed for sending bytes of characters or a few embedded media objects.  But Max’s jit.* objects used to read RTSP video streams as well (via the rtsp:// URL syntax?).
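
A quick way to sanity-check a camera's RTSP stream outside of Max (a sketch, not the Max path itself): OpenCV's ffmpeg backend opens rtsp:// URLs directly. The URL below is a placeholder; substitute the camera's actual endpoint.

    # pip install opencv-python
    import cv2

    cap = cv2.VideoCapture("rtsp://192.168.1.50:554/stream1")  # placeholder URL
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break                              # stream dropped or ended
        cv2.imshow("rtsp test", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()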

Such long latencies would make GoPro impossible for synchronous action, but we should also think about how to intercalate activity based on durational rhythm, not point-sync’d instantaneous events.

Also, time-delaying the feed by 3 hours would be an interesting first step to intercalating Montreal and Phoenix.

Xin Wei

On Jun 26, 2014, at 9:41 PM, Garrett Johnson <gljohns6@asu.edu> wrote:

There is a browser object in Max. Not sure how it works with video, but it sounds like it's worth a shot. 

Garrett L. Johnson
MA musicology 
Synthesis Center research team 
Arizona State University


Am Jun 26, 2014 um 15:15 schrieb Tain Barzso <Tain.Barzso@asu.edu>:

Should have forwarded to the group initially:

Doing some initial research, which I will present to the group.  On the GoPro:

While the wireless feature is designed specifically for controlling the camera and doing low-resolution live previews in the GoPro app, there is a way to stream full video into a web browser via the camera's built-in web server.  If any application Synthesis is using can access this kind of stream (via http://), we should be able to grab video.  A caveat is that the live-streamed video has very significant lag (seems to be > 1 second).  This is likely dependent on resolution, and I would guess 720p video would have less lag than 1080p, but we'd have to try it to see.
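
One rough way to try it (a sketch under assumptions: the preview URL below is the one commonly reported for Hero3-era GoPros and may differ by model or firmware). The code only measures frame arrival rate; for true end-to-end lag, point the camera at an on-screen clock and compare it with the displayed frames.

    # pip install opencv-python
    import time
    import cv2

    cap = cv2.VideoCapture("http://10.5.5.9:8080/live/amba.m3u8")  # assumed GoPro preview URL
    frames, t0 = 0, time.time()
    while frames < 300:                    # sample ~300 frames
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        cv2.imshow("gopro preview", frame)
        cv2.waitKey(1)
    elapsed = max(time.time() - t0, 1e-6)
    print(f"{frames} frames in {elapsed:.1f}s = {frames / elapsed:.1f} fps")
    cap.release()
    cv2.destroyAllWindows()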

Also, it looks like there is a cable that allows us to both insert an external (3.5mm) microphone AND still utilize the camera's USB port as a power input as to avoid running off the battery.

Here is a video that a youtube user posted covering the video streaming accessibility:  https://www.youtube.com/watch?v=Y1XaBJZ8Xcg

Tain



streaming video from small cameras in a local net: Go pro fact finding research

There are many solutions for a suite of video cams.  It’s not bad to repurpose some commercial, cheaper solutions for a network of video+audio cameras.  Analog solutions have much less latency than these digital solutions (even after an A2D converter), and they may still be a lot cheaper at a given equivalent resolution.

Is there (still?) a greater variety of lens solutions available for analog cameras?

Be sure to get wide angle lenses for whatever camera solution you come up with.

Also, threaded lenses may make it easier to affix IR pass filters as necessary.

Cheers,
Xin Wei

__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Re: Brickyard media systems architecture; notes for Thursday BY checkin

Hey All, 

Julian and I are here. Are you guys on Skype or Google+?


(sent from a phone)

On Jun 26, 2014, at 5:42 PM, Byron Lahey <byron.lahey@asu.edu> wrote:



On Thu, Jun 26, 2014 at 10:51 AM, Katie Jung <katiejung1@gmail.com> wrote:
Hi All,
I will beam in online.
See you then!


On Thu, Jun 26, 2014 at 11:48 AM, Peter Weisman <peter.weisman@asu.edu> wrote:
> Yes. Let's all meet at Brickyard.
>
> Pete
>
>
> Sent from my Galaxy S®III
>
>
> -------- Original message --------
> From: Garrett Laroy Johnson
> Date:06/26/2014 8:41 AM (GMT-07:00)
> To: Tain Barzso
> Cc: Peter Weisman , Sha Xin Wei , Katie Jung , Julian Stein , "Byron Lahey
> (Student)" , Assegid Kidane , Garrett Laroy Johnson , Kevin Klawinski ,
> "Matthew Briggs (Student)" , Xin Wei Sha , post@synthesis.posthaven.com
> Subject: Re: Brickyard media systems architecture; notes for Thursday BY
> checkin
>
> 2.30 is good for me too. Meet at BY?
>
>
> Garrett
>
>
> On Jun 26, 2014, at 8:34 AM, Tain Barzso <Tain.Barzso@asu.edu> wrote:
>
> 2:30 works for me, as well.
>
> From: Peter Weisman <peter.weisman@asu.edu>
> Date: Wed, 25 Jun 2014 21:07:34 -0700
> To: Sha Xin Wei <shaxinwei@gmail.com>, Peter Weisman
> <peter.weisman@asu.edu>, Katie Jung <katiejung1@gmail.com>, Julian Stein
> <julian.stein@gmail.com>, "Byron Lahey (Student)" <Byron.Lahey@asu.edu>,
> Assegid Kidane <Assegid.Kidane@asu.edu>, Tain Barzso <Tain.Barzso@asu.edu>,
> Garrett Laroy Johnson <gljohns6@asu.edu>, Kevin Klawinski
> <kklawins@asu.edu>, "Matthew Briggs (Student)" <Matthew.Briggs@asu.edu>
> Cc: Xin Wei Sha <Xinwei.Sha@asu.edu>, "post@synthesis.posthaven.com"
> <post@synthesis.posthaven.com>
> Subject: RE: Brickyard media systems architecture; notes for Thursday BY
> checkin
>
> I am free at 2:30pm AZ time on Thursday.  Can we all meet then?
>
> Pete
>
>
> Sent from my Galaxy S®III
>
>
> -------- Original message --------
> From: Sha Xin Wei
> Date:06/25/2014 8:14 PM (GMT-07:00)
> To: Peter Weisman , Katie Jung , Julian Stein , "Byron Lahey (Student)" ,
> Assegid Kidane , Tain Barzso , Garrett Laroy Johnson , Kevin Klawinski ,
> "Matthew Briggs (Student)"
> Cc: Xin Wei Sha , post@synthesis.posthaven.com
> Subject: Brickyard media systems architecture; notes for Thursday BY checkin
>
> Hi Everyone,
>
> This note is primarily about media systems, but should connect with the
> physical interior design, furniture and lighting etc.
>
> Thanks Pete for the proposed media systems architecture summary.  I started
> to edit the doc but then bogged down because the architecture was
> computer-centric, and it’s clearer as a diagram.
>
> Instead of designing around the 2 Mac Minis and the Mac Pro computers I’d
> like to design the architecture around relatively autonomous i/o devices :
>
> Effectively a network of:
> Lamps (for now addressed via DMX dimmers), floor and desk lamps as needed to
> provide comfortable lighting
> 5 Wifi GoPro audio-video cameras (3 in BY, 1 Stauffer, 1 iStage)
> xOSC-connected sensors and stepper motors,  ask Byron
> 1 black Mac Pro on internet switch
> processing video streams, e.g. extracting video features emitted in OSC
> 1 Mac Mini on internet switch
> running lighting master control Max patch, ozone media state engine (later)
> 1 Mac Mini on internet switch
> processing audio, e.g. extracting audio features emitted in OSC
> 1 iPad running Ozone control panel under Mira
> 1 Internet switch in Synthesis incubator room for drop-in laptop processing
> 8 x transducers (or Genelecs — distinct from use in lab or iStage, I prefer
> not to have “visible” speaker boxes or projectors in BY — perhaps flown in
> corners or inserted inside Kevin’s boxes, but not on tables or the floor )
>
> The designer-inhabitants should be able to physically move the lamps and
> GoPros and stepper-motored objects around the room, replugging into a few
> cabled interface devices (dimmer boxes) as necessary.
>
> Do the GoPros have audio in?   Can we run a contact, boundary, or air mic
> into one?
>
> We could run ethernet from our 3 computers to the nearest wall ethernet
> port?
>
> Drop-in researchers should be able to receive via OSC whatever sensor data
> streams (e.g. video and audio feature streams cooked by the fixed black
> MacPro and audio Mini)
> and also emit audio / video / control OSC into this BY network.
>
> DITTO iStage.
>
> Can we organize an architecture session Thursday sometime?
> I’m sorry, but since I have to get my mother into post-accident care here in
> Washington DC, I don’t know when I’ll be free.
>
> Perhaps Thursday sometime between 2 and 5 PM Phoenix?  I will make myself
> available for this because I’d like to take up a more active role now thanks
> to your smart work.
>
> Looking forward to it!
> Xin Wei
>
>
> PS. Please let’s email our design-build to post@synthesis.posthaven.com  so
> we can capture the design process and build on accumulated knowledge in
> http://synthesis.posthaven.com
>
> __________________________________________________________________________________
> Sha Xin Wei
> Professor and Director • School of Arts, Media and Engineering • Herberger
> Institute for Design and the Arts / Director • Synthesis / ASU
> Founding Director, Topological Media Lab / topologicalmedialab.net/  /
> skype: shaxinwei / +1-650-815-9962
> __________________________________________________________________________________
>
>
>
>
>
>



--
***
Katie Jung
(514) 502 0205
