Re: Synthesis next week: demonstrating results, extended campfire

I teach during this time (12:00 - 3:00) on Tuesdays. Maybe we could demo lounge systems last?

Best,
Byron

On Wed, Nov 21, 2018 at 7:03 AM sxw asu <sxwasu@gmail.com> wrote:
Hi Everyone,

Let’s hold, as best we can, an extended campfire on Tuesday afternoon, Nov 27, 1:30 - 3:30, in the iStage / Lounge to demonstrate what’s been done this semester.

For example:

Stauffer
• Cafe / Leslie, Connor, Yanjun, Christy (walk to Stauffer Lounge)
• Stauffer Lounge tour / Byron
• ? Lamps / Alexia

iStage 
• Cafe / Leslie, Connor, Yanjun, Christy (walk to Stauffer Lounge)
• Lamps / Alexia
• Thermal economies; light games / Garrett, Andrew
• Modalys instruments; physical modeling /  Emiddio
• State engine workshop: Brandon (bring your own laptops + sc)

Please add to this what else you’d like to see / show from this Fall.
Since we don’t have Synthesis ops this week — American Thanksgiving! — please confirm by email.

Megan, can we document this?  Everyone, please also take your own pictures for your respective project pages…




--
Byron Lahey, PhD
Arizona State University
School of Arts, Media and Engineering
Assistant Professor, Digital Culture
Honors Faculty
byron.lahey@asu.edu

Re: network of motors for "Theater of Things" project

I'm happy to discuss this tomorrow. I share Xin Wei's enthusiasm for seeing physical motion as a fundamental component of our systems. The main thing we need to do is decide what features to prioritize; then we can select hardware for prototypes.

Cheers,
Byron

On Thu, Sep 17, 2015 at 7:33 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Hi Ozzie,

OK to follow up on our chat today.

It turns out that Zlaket’s Synthesis research time is already dedicated to working with Chris Ziegler on Ziegler's motorized lights for forest3.  You may have already been consulted on that :)

However, I think we must not wait for that project to finish before moving ahead with the "Place and Atmosphere: Space Design: Theater of Things” research project.

We can start with a provisional mechatronic foundation for the robotic animation of small lightweight objects like pieces of cardboard, ribbon, flowers, mirrors.   This would extend the Ozone platform to mechatronics that can be animated using logics from the media processing.

Would you be interested and available to do the first step: develop a small test network of 3 small motors that can be addressed from the Ozone system using an OSC protocol that Julian can decide (in consultation with you and Todd)?  I think it would be smart for us to consult with Byron because I know he’s already thinking about streamlining his hardware.

Later we can discuss good “default” or initial behaviors, some of which Byron already implemented two summers ago: imitation, delay, antisymmetry, repeat, reverse, etc.
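
(A minimal sketch of what addressing such a test network might look like, assuming a hypothetical /motor/<id>/angle OSC namespace and the python-osc library; the real address scheme, ports, and endpoints are exactly what Julian, Todd, and Byron would need to settle.)

```python
# Sketch only: a hypothetical OSC namespace for a 3-motor test network.
# Assumes the python-osc package; addresses, IP, and port are placeholders,
# not the actual Ozone configuration.
import time
from pythonosc.udp_client import SimpleUDPClient

BRIDGE_IP, BRIDGE_PORT = "192.168.1.50", 7400   # placeholder motor bridge
client = SimpleUDPClient(BRIDGE_IP, BRIDGE_PORT)

def set_angle(motor_id: int, degrees: float):
    """Send a target angle to one motor, e.g. /motor/1/angle 90.0"""
    client.send_message(f"/motor/{motor_id}/angle", float(degrees))

# Two of the "default" behaviors mentioned above: motor 2 imitates motor 1
# after a delay; motor 3 moves antisymmetrically (mirrored).
def gesture(angle: float, delay_s: float = 0.5):
    set_angle(1, angle)       # lead motor
    time.sleep(delay_s)
    set_angle(2, angle)       # imitation with delay
    set_angle(3, -angle)      # antisymmetry

if __name__ == "__main__":
    for a in (0, 45, 90, 45, 0):   # a simple repeat / reverse pattern
        gesture(a)
        time.sleep(1.0)
```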

I’m imagining this should not take very much of your time, but it is of foundational research importance, as a new element of our responsive environment toolkit.
Of course, this assumes that you have the time to take this on before the establishment of the new electronics labs downstairs in the former TV Studios.

We can touch base on the first goals, maybe with Byron, before the AME General on Friday or at a Tea...
Xin Wei


________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
Fellow: ASU-Santa Fe Center for Complex Biosocial Systems
Affiliate Professor: Future of Innovation in Society; Computer Science; English
Founding Director, Topological Media Lab
skype: shaxinwei • mobile: +1-650-815-9962
_________________________________________________________________________________________________

Re: Lighting Ecology: Darkener and Lighteners game.

Thanks for the suggestion about borrowing a firewire generation Mac. I think that would work great for the Camera-Projector Robot system. 

Caroline, is it possible for me to borrow a firewire Mac for the remainder of this semester for this use?

For the iStage system, the current computer that is being used for video is a tower with firewire (as I recall), but we've talked about installing the new trashcan for this purpose. That would, in all likelihood, not work with the current cameras and reacTIVision. I can run multiple instances of reacTIVision simultaneously, so we should be able to use the existing composited video input system, assuming firewire inputs. The computer was not connected to the external network, so I was not able to download reacTIVision and test anything this evening (didn't want to change any hardware configurations). I'll bring it in on a flash drive tomorrow for a test. This will give us a sense of the feasibility of the approach in general, then we can consider camera (or software hacking) options if we swap out the computer. 
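
(For reference, reacTIVision reports fiducials as TUIO messages over OSC, on port 3333 by default, and each additional instance can be pointed at its own port. A minimal python-osc listener sketch, using only the TUIO defaults rather than the actual iStage configuration:)

```python
# Minimal TUIO listener for reacTIVision output (default port 3333).
# A second reacTIVision instance could be configured to send to, say, 3334,
# with a second server started on that port. Not the actual iStage setup.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_2dobj(address, *args):
    # TUIO 1.1 /tuio/2Dobj messages begin with a command string:
    # "alive", "set", "fseq", or "source".
    if args and args[0] == "set":
        session_id, fiducial_id, x, y, angle = args[1:6]
        print(f"fiducial {fiducial_id}: x={x:.3f} y={y:.3f} angle={angle:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dobj", on_2dobj)

BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()
```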

Best,
Byron

On Mon, Nov 10, 2014 at 12:32 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Maybe you and Ozzie can check with Caroline whether you can borrow a FireWire Mac for this, for at least the rest of the Fall term, assuming all the Macs are imaged with Max?

Xin Wei


________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________




On Nov 10, 2014, at 11:20 AM, Byron Lahey <byron.lahey@asu.edu> wrote:

I thought I had ironed out all the issues with the reacTIVision fiducial CV system last week when I discovered the new update of the software, but I've hit another snag; maybe someone has an idea to solve or work around this problem. The new computers all have Thunderbolt ports rather than native FireWire, and reacTIVision does not recognize the cameras once they are connected through an adapter. I didn't encounter this issue in my earlier testing because my computer is old enough to still have a FireWire port. Interestingly, Max has no problem seeing the FireWire camera when it is connected through the Thunderbolt adapter, so it is not just a power issue or a complete incompatibility.

The simplest solution I can think of is to purchase new cameras that are compatible both with the computers at a hardware level and with the reacTIVision software. Tracking down cameras that we can reasonably guarantee will work shouldn't be too difficult, but I don't know whether we want to prioritize this as an investment. The key (at least for my camera-projector system) is to have a camera with a variable zoom so that the sensed image can be closely matched to the projected image. For an overhead tracking system, the key factors would be a wide-angle lens, high resolution (to minimize the required size of the fiducials), and reasonably high speed (for stable tracking).
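
(A back-of-the-envelope illustration of the resolution point: the physical size a fiducial needs grows with the floor width the camera covers and shrinks with its horizontal resolution. Every number below, including the pixels-per-fiducial budget, is an assumption for illustration, not a measurement of the iStage.)

```python
# Back-of-the-envelope fiducial sizing. All numbers are illustrative
# assumptions, not measurements of the iStage camera or floor.
def min_fiducial_size_cm(floor_width_m: float,
                         horizontal_pixels: int,
                         pixels_per_fiducial: int = 40) -> float:
    """Physical width a fiducial must have to span `pixels_per_fiducial`
    pixels when the camera's view covers `floor_width_m` of floor."""
    meters_per_pixel = floor_width_m / horizontal_pixels
    return pixels_per_fiducial * meters_per_pixel * 100.0

# A 10 m wide floor seen by a 640-pixel-wide camera vs. a 1920-pixel one:
print(min_fiducial_size_cm(10.0, 640))   # ~62 cm (impractically large)
print(min_fiducial_size_cm(10.0, 1920))  # ~21 cm (merely awkward)
```

Which is why resolution and lens choice would dominate the overhead-camera decision.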

Another solution is simply to use older computers that have FireWire ports so that we can keep the existing cameras. This would certainly be feasible for the camera-projector system; we would have to test it for the overhead tracking system. The best thing to do for the overhead system is to try the existing camera arrangement and see how well it works with the fiducials. I can try this out this afternoon (assuming the space is open for testing).

In general I think a fiducial-based computer vision system is a good way to reliably track individuals and to afford conscious interaction with the system.

Best,
Byron

On Sun, Nov 9, 2014 at 8:19 PM, Sha Xin Wei <Xinwei.Sha@asu.edu> wrote:

After the Monday tests of hardware and network mapping to lights in the iStage with Pete, Omar and I can code the actual logic the rest of the week with Garrett, maybe with some scheduled consults with Evan and Julian for ideas.  Omar’s got code already working on the body lights.  I want to see it working via the overhead lights beaming onto the floor.  For this game we need to know who’s who.

Byron, Ozzie, anyone: what is the easiest way in the iStage for us to track individual identities?  Not blob tracking, because blobs will get confused.

Byron: if indeed you’ve got the fiducial code working, can you show it to Omar (and Garrett) so they can work on the Darkener and Lighteners ecology with me?  Can we have people carry fiducials visible to the camera from above?  Maybe they can place a fiducial next to them in order to draw light.  Can the fiducials be printed at a legible size that is not ludicrously large?  How about if we print a bunch of fiducials as “LIGHT COINS” of three different denominations — RGB — each good for a certain amount of elapsed time.  Exposing a fiducial to the camera charges it against your account (it can be used only once per session).  Maybe waving it can increase the intensity but shorten the time (a reciprocal constraint).  People can then choose to band together, or, in the +/- version, subtract, etc.
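
(A toy sketch of the coin mechanics as described: one use per session, R/G/B denominations, and waving trading intensity against duration under the reciprocal constraint. The denominations, starting intensity, and wave factor are invented placeholders.)

```python
# Toy model of the "LIGHT COIN" rules sketched above. All constants are
# invented placeholders; the reciprocal constraint is budget = intensity * duration.
DENOMINATIONS = {"R": 30.0, "G": 60.0, "B": 120.0}   # light-seconds per coin

class LightCoin:
    def __init__(self, fiducial_id: int, color: str):
        self.fiducial_id = fiducial_id
        self.budget = DENOMINATIONS[color]   # fixed energy: intensity * duration
        self.intensity = 0.5                 # normalized 0..1
        self.spent = False

    def expose(self) -> bool:
        """Showing the fiducial to the camera charges it against your
        account; a coin is good only once per session."""
        if self.spent:
            return False
        self.spent = True
        return True

    def wave(self, factor: float = 1.5):
        """Waving raises intensity but shortens the time it buys."""
        self.intensity = min(1.0, self.intensity * factor)

    @property
    def duration(self) -> float:
        """Seconds of light this coin buys at its current intensity."""
        return self.budget / self.intensity

coin = LightCoin(fiducial_id=12, color="G")
coin.expose()
coin.wave()
print(coin.duration)   # 80.0 s at the brighter setting (120.0 s before waving)
```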

Disappearing cats, shadow people, and now ghosts!

http://www.telegraph.co.uk/science/science-news/11214511/Ghosts-created-by-scientists-in-disturbing-lab-experiment.html

Very interesting what an asynchronous pseudo-self interaction can do. In this case, 500 ms seemed to be the delay that created the body dislocation and subsequent ghostly presences. It would be interesting to read the whole study to see more details on the timing in these experiments.

Byron


Re: [Synthesis] lighting instrument experiments in everyday space (i.e. Brickyard)

Hi everyone,

I spent a few minutes playing (!) with the dichromatic shafts that Matt created. These have a lot of interesting potential. I attached a few photos and a couple of very rough videos (one-handed/no-handed capture on an old phone) just to give a few hints at that potential; it goes without saying that these perceptual experiences are far richer in person.

The static images show the effects on the wall of light reflected off a random pile of these pieces. This static configuration produces a projection that changes shape and color as the sun moves across the sky. While this is cool, it is certainly nothing new. The videos show a couple of quick experiments moving the shafts. Rotating them, one can get shafts of light that move like clock hands; imagine a large cloud of these on a wall, all rotating at their own (variable) frequencies. Most interesting of all (at least to me) are the effects created by bending the shafts, which lets the light be focused and spread in organic shapes. With several of them bent simultaneously (not well documented), the effect is quite striking, since they all bend slightly differently and distort in a broadly cohesive but individually independent way. This process could produce the "shafts of light through trees" effect that Xin Wei suggested. I would be very interested in seeing how this works with the light guide that Josh is designing.

Cheers,
Byron



On Thu, Oct 30, 2014 at 1:41 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Following Matt and Byron --

How about 

(0)  Shafts of sunlight beaming through the space as if through a forest canopy — How can we focus diffuse sunlight from Josh’s mylar tunnels?

(1)  Accordioned paper lanterns, floor-standing, the size of a body?
Motorized to turn its head slowly, tracking the sun like a sunflower, but spilling light…

(2) A layer of flat clouds of round curved “chips” about 5-10 cm wide, made of frosted diffuser material — or even better cut into intricate lace?
Each cloud could be pretty compact — no more than say 1m wide, so each array would be about 100 pieces at the very most.
Each piece could be suspended on a string.   Some strings are on motors.


Diverse images, none of which capture what I’m imagining.

Lenka Novakova

Suspended 1 m wide cast glass disks into which she projected video of rivers.

On Oct 29, 2014, at 5:56 PM, Matthew Briggs <matthewjbriggsis@gmail.com> wrote:

A follow up,

I just spoke with Byron and hypothesized some potential applications for the "icicles."

In the application above, we could create movement based on the refraction of the
natural light, or interject movement based on an applied rhythm.

We also spoke of using this light refraction as a feedback loop, as a form of rhythm.
This could be applied to movement in forms beyond the panels, using the
natural light as a parameter.

A suggested form could be a desk lamp or a corner extension. The "icicles" 
could be stacked from top to bottom and rotated from the base to create a vertical
or horizontal deflection. The "icicles" could also be installed in a flexible panel in
a more rigid application to give direct control over their movement.

We discussed the fact that the application of the "icicles" needs to be architectural
rather than sculptural. I think this can be achieved in all of these applications, but
I would like some input on them. Unfortunately I cannot find examples of the other
applications, but I can create sketches if anything is unclear.

Byron also assured me that the mechatronics would work in these scenarios.

PS: I've attached the images to this email.

thanks,

-matthew briggs
 undergraduate researcher @synthesiscenter

On Wed, Oct 29, 2014 at 4:43 PM, Matthew Briggs <matthewjbriggsis@gmail.com> wrote:
Hello all,

I may have found a way to incorporate Byron's mechatronics to make use of natural light.
I have many laser-cut "icicles" of dichromatic acrylic (picture attached).

These "icicles" do a great job of capturing light. I could hang many of them from a panel
that is either recessed in place of a ceiling panel or hung from one. The panel could be
made of an existing flexible material, or of a material that is laser-cut to become flexible.
It could be animated from a number of control points using Byron's mechatronics, creating
an effect similar to the combination of the attached reference images: a potential light
installation. We could then create an array of panels, or clusters of panels, that take in
rhythm data via xOSC and output it as movement of the "icicles" in space.
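
(A rough sketch of that mapping: rhythm data arriving over OSC drives a few control points on one panel. The /rhythm/phase address, the ports, and the motor-bridge endpoint are assumptions for illustration, not the actual xOSC or rhythm-kit message formats.)

```python
# Sketch: map an incoming rhythm phase (0..1) to displacements of a few
# panel control points. Addresses, ports, and the motor bridge IP are
# placeholders, not the real xOSC / Ozone namespace.
import math
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

N_POINTS = 4                                     # control points on one panel
motors = SimpleUDPClient("192.168.1.60", 7400)   # placeholder motor bridge

def on_phase(address, phase):
    # Offset each control point so the panel ripples rather than moving as one.
    for i in range(N_POINTS):
        displacement = math.sin(2 * math.pi * (float(phase) + i / N_POINTS))
        motors.send_message(f"/panel/0/point/{i}", displacement)

dispatcher = Dispatcher()
dispatcher.map("/rhythm/phase", on_phase)
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```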

It is important to have a certain amount of light in the space to create a visual equilibrium
and relieve tension in the eyes. These "icicles" will not provide a significant introduction of
light into the space. However, they will refract the existing light in a dynamic and
non-intrusive way. If this is paired with Josh's light tube system or the already existing
natural light, I believe it could create a small amplification of that light but, more
importantly, a psychological effect that may carry equal weight for the users of the space.
I think it is important to have subtle but aesthetically pleasing sources of light in the
workspace, and this may function in that regard.

Any thoughts or criticisms?

I could fabricate these panels and install a small cluster by the workshop date.
Byron, do you think it would be possible to animate them by then?
I will leave the bag of "icicles" on your desk at Synthesis for reference.

Side note: I leave Monday for the AR2U conference in Ames, Iowa.
So I won't be able to continue on this project until I get back on the 10th.

thanks,

-matthew briggs
 undergraduate researcher @synthesiscenter

On Sun, Oct 26, 2014 at 6:41 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Dear Matt, Kevin, and Josh,

I’d like to (re)introduce you and Omar Faleh, who is coming from the TML for the LRR in November.

In parallel to the iStage work, we should push forward on the animation of lighting in the Brickyard by
• Bringing natural sunlight into the BY commons (Josh)
• Modulating that natural light using Byron’s mechatronics
• Making motorized desk lamps using Byron’s mechatronics
• Designing rhythms (ideas from Omar, using Julian’s rhythm kit)

Can we see what you have come up with this coming week?

Meanwhile let me encourage you to communicate with Omar to see what you can create by the time of his arrival circa Nov 17.

To be clear, this is distinct from the LRR in the iStage, but it is just as strategically important in that it extends that work into everyday space, which is one of Synthesis’ main research extensions this year.

I’d like to write this work into a straw proposal for external sponsorship of your work… so it’d be great to have some pix soon!

Cheers,
Xin Wei

Re: Brickyard media systems architecture; notes for Thursday BY checkin



On Thu, Jun 26, 2014 at 10:51 AM, Katie Jung <katiejung1@gmail.com> wrote:
Hi All,
I will beam in online.
See you then!


On Thu, Jun 26, 2014 at 11:48 AM, Peter Weisman <peter.weisman@asu.edu> wrote:
> Yes. Lets all meet at Brickyard.
>
> Pete
>
>
> Sent from my Galaxy S®III
>
>
> -------- Original message --------
> From: Garrett Laroy Johnson
> Date:06/26/2014 8:41 AM (GMT-07:00)
> To: Tain Barzso
> Cc: Peter Weisman , Sha Xin Wei , Katie Jung , Julian Stein , "Byron Lahey
> (Student)" , Assegid Kidane , Garrett Laroy Johnson , Kevin Klawinski ,
> "Matthew Briggs (Student)" , Xin Wei Sha , post@synthesis.posthaven.com
> Subject: Re: Brickyard media systems architecture; notes for Thursday BY
> checkin
>
> 2.30 is good for me too. Meet at BY?
>
>
> Garrett
>
>
> On Jun 26, 2014, at 8:34 AM, Tain Barzso <Tain.Barzso@asu.edu> wrote:
>
> 2:30 works for me, as well.
>
> From: Peter Weisman <peter.weisman@asu.edu>
> Date: Wed, 25 Jun 2014 21:07:34 -0700
> To: Sha Xin Wei <shaxinwei@gmail.com>, Peter Weisman
> <peter.weisman@asu.edu>, Katie Jung <katiejung1@gmail.com>, Julian Stein
> <julian.stein@gmail.com>, "Byron Lahey (Student)" <Byron.Lahey@asu.edu>,
> Assegid Kidane <Assegid.Kidane@asu.edu>, Tain Barzso <Tain.Barzso@asu.edu>,
> Garrett Laroy Johnson <gljohns6@asu.edu>, Kevin Klawinski
> <kklawins@asu.edu>, "Matthew Briggs (Student)" <Matthew.Briggs@asu.edu>
> Cc: Xin Wei Sha <Xinwei.Sha@asu.edu>, "post@synthesis.posthaven.com"
> <post@synthesis.posthaven.com>
> Subject: RE: Brickyard media systems architecture; notes for Thursday BY
> checkin
>
> I am free at 2:30pm AZ time on Thursday.  Can we all meet then?
>
> Pete
>
>
> Sent from my Galaxy S®III
>
>
> -------- Original message --------
> From: Sha Xin Wei
> Date:06/25/2014 8:14 PM (GMT-07:00)
> To: Peter Weisman , Katie Jung , Julian Stein , "Byron Lahey (Student)" ,
> Assegid Kidane , Tain Barzso , Garrett Laroy Johnson , Kevin Klawinski ,
> "Matthew Briggs (Student)"
> Cc: Xin Wei Sha , post@synthesis.posthaven.com
> Subject: Brickyard media systems architecture; notes for Thursday BY checkin
>
> Hi Everyone,
>
> This note is primarily about media systems, but should connect with the
> physical interior design, furniture and lighting etc.
>
> Thanks Pete for the proposed media systems architecture summary.  I started
> to edit the doc but then got bogged down because the architecture was
> computer-centric, and it’s clearer as a diagram.
>
> Instead of designing around the 2 Mac Minis and the Mac Pro computers, I’d
> like to design the architecture around relatively autonomous i/o devices:
>
> Effectively a network of:
> • Lamps (for now addressed via DMX dimmers): floor and desk lamps as needed to provide comfortable lighting
> • 5 WiFi GoPro audio-video cameras (3 in BY, 1 Stauffer, 1 iStage)
> • xOSC-connected sensors and stepper motors (ask Byron)
> • 1 black Mac Pro on the internet switch, processing video streams, e.g. extracting video features emitted as OSC
> • 1 Mac Mini on the internet switch, running the lighting master control Max patch and (later) the Ozone media state engine
> • 1 Mac Mini on the internet switch, processing audio, e.g. extracting audio features emitted as OSC
> • 1 iPad running the Ozone control panel under Mira
> • 1 internet switch in the Synthesis incubator room for drop-in laptop processing
> • 8 x transducers (or Genelecs — distinct from their use in the lab or iStage; I prefer not to have “visible” speaker boxes or projectors in BY — perhaps flown in corners or inserted inside Kevin’s boxes, but not on tables or the floor)
>
> The designer-inhabitants should be able to physically move the lamps,
> GoPros, and stepper-motored objects around the room, replugging into the few
> cabled interface devices (dimmer boxes) as necessary.
>
> Do the GoPros have audio in?  Can we run a contact, boundary, or air mic
> into one?
>
> We could run ethernet from our 3 computers to the nearest wall ethernet
> port?
>
> Drop-in researchers should be able to receive via OSC whatever sensor data
> streams (e.g. video and audio feature streams cooked by the fixed black
> MacPro and audio Mini)
> and also emit audio / video / control OSC into this BY network.
>
> DITTO iStage.
>
> Can we organize an architecture session sometime Thursday?
> I’m sorry: since I have to get my mother into post-accident care here in
> Washington DC, I don’t know when I’ll be free.
>
> Perhaps Thursday sometime between 2 and 5 PM Phoenix?  I will make myself
> available for this because I’d like to take up a more active role now thanks
> to your smart work.
>
> Looking forward to it!
> Xin Wei
>
>
> PS. Please let’s email our design-build to post@synthesis.posthaven.com  so
> we can capture the design process and build on accumulated knowledge in
> http://synthesis.posthaven.com
>
> __________________________________________________________________________________
> Sha Xin Wei
> Professor and Director • School of Arts, Media and Engineering • Herberger
> Institute for Design and the Arts / Director • Synthesis / ASU
> Founding Director, Topological Media Lab / topologicalmedialab.net/  /
> skype: shaxinwei / +1-650-815-9962
> __________________________________________________________________________________
>



--
***
Katie Jung
(514) 502 0205