Wireless sensor networks

FYI, see Adrian's response below re McGill's SenseStage miniBee gadgets.

I know the folks who did the SenseStage work at McGill, from Marcelo's lab.
They were nice people, but not the strongest, and the research application was misguided.
The device was not useful for advancing movement / gesture research at the TML.

To check my own assessment, I asked Adrian.

If you want to buy them on your own research funds, chacun à son goût :)
But I would prefer not to throw general AME or Synthesis money at buying such things
unless there's a specific legacy need for a critical research project that will lead to a concrete outcome in the predictable future.

Otherwise, let me suggest that we track the state of the art with Adrian Freed <adrian@cnmat.berkeley.edu>
and Vangelis Lympouridis <vangelis@lympouridis.gr> at USC,
and get the best devices for the job, under cost and time constraints, just when we need them.

Cheers,
Xin Wei



Begin forwarded message:

Subject: RE: Fwd: Wireless sensor networks
Date: August 22, 2014 at 7:07:02 PM MST
To: "Sha Xin Wei" <shaxinwei@gmail.com>

I am sure they are good for something but I can't use them for various reasons.
They just aren't reliable enough unless the performers are out of reach of RF noise from the audience/ambient sources.

+ Slow, old ATmega CPU with too little memory
+ old accelerometer instead of a full IMU.

There are lots of smaller form factor things in the works like SparkCore and all the Bluetooth LE things coming out.
The problem is you have to look at the fully integrated size with battery, the additional sensors you actually want, the case, etc.
Small is 6 months away (BLE); small and fast enough for serious movement work is still a few years away.

Sixense is a company getting this right with STEM:
http://www.sixensestore.com/stemsystem-2.aspx

-------- Original Message --------
Subject: Fwd: Wireless sensor networks
From: Sha Xin Wei <shaxinwei@gmail.com>
Date: Fri, August 22, 2014 3:36 pm
To: Adrian Freed <adrian@adrianfreed.com>


Are these XBees any good?  Would these be superseded by other common wireless microprocessors …?

We (at Synthesis and AME) are happy with the xOSC boards,
though I do hope for a much smaller form factor.
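(For reference, a minimal sketch of how data from OSC-over-WiFi boards like these can be logged in Python; the port number and the catch-all handler are illustrative assumptions, not x-OSC defaults.)

# Listen for OSC messages arriving over UDP, the way WiFi sensor boards like the xOSC send them.
from pythonosc import dispatcher, osc_server

def on_message(address, *values):
    print(address, values)            # e.g. accelerometer or analog-input readings

d = dispatcher.Dispatcher()
d.set_default_handler(on_message)     # catch every incoming message
server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 9000), d)   # assumed port
server.serve_forever()                # blocks; Ctrl-C to stop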

...
Xin Wei

[Synthesis] [TML] Founding documents of an atelier for ethico-aesthetic play: What we do. How we do it. Why we do it the way we do.

Dear Chris, Dehlia, Kristi, and Tamara,
(Hi Katie, Omar and JA, who know this well!)

Here are some texts that describe more completely how I envision what Synthesis is about, and how I would like it to be a home for radically empirical, ethico-aesthetic play.  I have developed a progressively more nuanced notion of play over the past decade of institutional experiment, funded thanks to Canada's and Quebec's more generous attitude toward experimental cultural work.

(1) The opening chapter of Poiesis, Enchantment, and Topological Matter (MIT Press 2013) gives a sense of how I see philosophical inquiry (which is not the same as philosophy as practiced conventionally in the United States academy) coming out of and feeding back into poetic, speculative practice.

(2) The second part of Chapter 7 gives an analysis of the political economy of the amodern atelier that I established in 2001 at Georgia Tech (in the Graphics, Visualization and Usability Center, and the School of Literature, Communication and Culture), and then moved to Montreal in 2005 with a Chair in critical studies of media arts and sciences in Fine Arts and Computer Science.  My key meta-goal for the past 15 years has been to create an alternative ecology of practices based on collective, poetic knowledge practices.  I regard FoAM as the lovelier sister to the Topological Media Lab.

This predecessor version links to a set of colour plates.

With the Synthesis Center I want to extend both the TML's central research streams and its model for how to go about doing that sort of transdisciplinary work.  To be very clear, I came back to the States not to slip back into more conventional interpretations and practices of technology, art or humanities, but to harness Yankee enterprise to the much more radical work of ethico-aesthetic improvisation.  (I use radical both in its political sense and in the sense of William James' radical empiricism.)

(3) Here are two one-page letters written at the invitation of the President of Concordia University about art practice versus art research.  They are not the same.  Many confusions abound here at Herberger as well.

 

(4) Here's the Synthesis Center pitch.  I invite your help to polish it in the coming week, before sending it up to the Provost and the Engineering Dean.




(5) Working ethos


(6) And finally, a letter that I share with people who want to study or work with me. 

Hope this gives you a more complete and substantial understanding of what I would like us to do, and why I would like to do it in certain ways.

I sincerely hope that after mulling this over, and considering that this has already actually flourished in two contexts, you will feel inspired to help me realize a third and even more beautiful atelier here at ASU.

Looking forward to working with you!
Xin Wei


__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2146
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Re: wearable x-osc biometric prototype

Seriously, how do we think techniques of observation together with techniques of performance? I know it may be confusing to use that pair of terms -- observation and performance...

We need a better vocabulary, one that retains some of the mechanisms of entanglement from quantum mechanics, but not this dualism.

Xin Wei

[Synthesis] Portals needed

Hi!

We need portals supporting concurrent conversation via common spaces like tabletops + audio… (no video!), not talking heads.  It may be useful to have audio muffle as a feature: continuous streaming audio, but the default is to "content-filter" the speech.  (Research in the 1970s … showed which spectral filters to apply to speech to remove the "semantics" but keep enough affect…)
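(To make that concrete: a rough sketch of such a content filter in Python, assuming an offline WAV file and a simple low-pass filter.  The 400 Hz cutoff is my guess for where words blur but prosody survives, not the figure from the 1970s studies.)

# Muffle speech so the words are hard to make out while pitch, rhythm and loudness remain.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def muffle_speech(in_path, out_path, cutoff_hz=400.0, order=8):
    rate, samples = wavfile.read(in_path)
    samples = samples.astype(np.float32)
    if samples.ndim > 1:
        samples = samples.mean(axis=1)                  # mix to mono for simplicity
    sos = butter(order, cutoff_hz, btype="low", fs=rate, output="sos")
    muffled = sosfilt(sos, samples)
    muffled /= max(1e-9, float(np.abs(muffled).max()))  # normalize to avoid clipping
    wavfile.write(out_path, rate, (muffled * 32767).astype(np.int16))

# e.g. muffle_speech("portal_voice.wav", "portal_voice_muffled.wav")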

Maybe we can invite Omar to work with Garrett or Byron or Ozzie to install Evan’s version in the Brickyard and Stauffer and iStage as a side effect of the Animated spaces: Amorphous lighting network workshop with Chris Ziegler and Synthesis researchers.

BUT we should have portals running now, ideally on my desk and on a Brickyard surface.
And that workshop remains to be planned (October?).
And possibly running also on the two panel displays re-purposed from Il Y A, now moved to Stauffer...

Xin Wei


__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2146
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

[TML][Synthesis] Plant-Thinking Meeting/Seminar: discuss Marder, November 1 - Dec 19 ?

Hi Michael, everyone,

All great! 

I've been talking with Oana and, most recently, Omar about the vegetal studies research.
From Omar it seems that most of the interested folks are away or too busy in September.  And in October there are other events planned related to Synthesis or TML (e.g. Listen(n) @ ASU, the lighting animation workshop, the Einstein's Dreams workshop).

It’s a good idea to do it on a weekly basis.  But instead of stretching over a whole semester, how about we concentrate the Marder-based part of the seminar into 1.5 months during a period when people are prepared to really grapple with the Marder.

To take the reading of Marder seriously, I think it'd be necessary to do this in person, or as synchronously as our portals can deliver.  And we each need time to prepare by absorbing related works; I would strongly recommend some of the Aristotle and Goethe (to make the time we invest worth the investment).

So our (Oana's and my) suggestion is to prep readings and exchange references etc. in the vegetal studies stream now, and do the actual reading of Marder over seven weeks: November 3 through December 19.
We recommend:
Week 1: Chapter 1
Week 2: Chapter 2
Weeks 3-4: Chapters 3 & 4
Weeks 5-6: Chapters 4 & 5
Week 7: Papers and crits (double-long session)

It'd be great to aim to deliver some substantial multi-format responses (on the order of a paper, a short video, or sketches of experiments) that really synthesize the insights from the seminar.


Here are two key operating rules to start this game:

• Avoid allegory: not the depiction of "what plants look like" but how plants grow and experience dynamical existence.

• Avoid anthropomorphizing as radically as possible.


Perhaps I can come mid-November and mid-December.
On the other hand, my duties this Fall may well be so heavy that it'd be easier if Synthesis hosted this theoretical phase of the joint TML-Synthesis vegetal studies research stream in Phoenix.

Suggestions?
Xin Wei


__________________________________________________________________________________
Sha Xin Wei, Ph.D. • xinwei@mindspring.com • skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________

Bill Forsythe: Nowhere and Everywhere at the same time No. 2 (pendulums)

Two works by two choreographers, Dimitris Papaioannou and Bill Forsythe,
with very different and interesting approaches to causality and temporal texture…

- Xin Wei

On Jul 20, 2014, at 12:55 AM, Michael Montanaro <michael.montanaro@concordia.ca> wrote:

A beautifully choreographed work: NOWHERE (2009) / central scene / for Pina
from Dimitris Papaioannou



Begin forwarded message:

From: "Vangelis Lympouridis" <vl_artcode@yahoo.com>
Date: July 22, 2014 at 8:39:27 AM GMT+2
To: "Adrian Freed" <Adrian.Freed@asu.edu>, "'Sha Xin Wei'" <shaxinwei@gmail.com>, "'John MacCallum'" <john@cnmat.berkeley.edu>

When you have a second, please watch this 2-min video of Forsythe's piece Nowhere and Everywhere at the same time No. 2.

I think it is SO to the core of what we are reasoning about… :)

 
 
Vangelis Lympouridis, PhD
Visiting Scholar, School of Cinematic Arts, University of Southern California
Senior Research Consultant, Creative Media & Behavioral Health Center, University of Southern California
Whole Body Interaction Designer
Tel: +1 (415) 706-2638

Re: [Synthesis] Camera Obscura San Francisco, an analog way to bring the sky into the interior

FYI, some interesting links to projects that seem relevant to our previous conversation.

1. Magic of Flying
- Bringing movements in the sky into the interior of the Brickyard

2. Inflating Trash Bags Rhythmically Mimic Ocean Waves

Much appreciated,
Cooper


On Tue, Jul 1, 2014 at 2:31 AM, Garth Paine <Garth.Paine@asu.edu> wrote:
Hi all

There is some interesting work on skylights that amplify light.  Can I suggest someone take a look at that area?  Networked prisms might be an interesting approach, for instance.  Perhaps they could be servo/stepper motor controlled to redirect light streams into different parts of the space.  Modeling this would be an interesting exercise.

Cheers Garth
Sent on the Move

On 1 Jul 2014, at 10:22, "Sha Xin Wei" <shaxinwei@gmail.com> wrote:

I love the idea.  But we explored that in TML circa 2002-2004 in Atlanta, and circa 2005-2006 in Montreal.
I'm happy to review the techniques that we investigated, all of which involved more labor than was available without access to maquiladora hands.

It's worth Matt or Kevin posting the architectural techniques that are in actual use for light guides.  The solutions apparently cost nontrivial money or labor.  James Turrell has made some study of this ...

Xin Wei



__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________






On Jun 30, 2014, at 5:33 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

My favorite website, Alibaba.com (China's largest website... they keep trying to buy Yahoo), has tons of plastic optical fibre for sale, which would be a great, energy-efficient way to move light around in a modular way. Perhaps one end could be coupled to the window and the other end could be put wherever light is needed?

http://www.alibaba.com/trade/search?fsb=y&IndexArea=product_en&CatId=&SearchText=Plastic+optical+fiber

Mike


On Sun, Jun 29, 2014 at 7:33 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:
I have heard that it is possible to purchase enormous fibre-optic cables for cheap. Some DC capstones were working on just this a few semesters ago, but I think they sadly gave it up for something else...

Mike


On Sun, Jun 29, 2014 at 8:24 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Relevant to the discussion of how to bring natural light into the interior of the Brickyard commons:

Camera Obscura San Francisco 

is an analog way to bring the sky into the interior of a building.
The building is specially built for that purpose.

But pinholes do work too, and presage quantum mechanics.  Hence this could be another turn in the endless spiral between art, techne, and science.

Xin Wei
__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________










--
Sine coffa, vita nihil est!!!

RE: [Synthesis] networked cameras; potential response to activity vs actual activity

Cooper, if one of those black GoPros is available, maybe we can find it useful in certain scenarios. Below are a few network cameras along with their pro and con features.

Pro: wired and wireless camera with IR for night vision, pan/tilt action, 2-way audio, 1280x800 WXGA
Con: no PoE

Pro: PoE, 2-way audio, WXGA
Con: no wireless, no optical zoom

Pro: high resolution, PoE, 2-way audio
Con: fixed focus and zoom

Pro: high and multiple resolutions, PoE, 2-way audio, focus and zoom, supports RTP/RTSP/UDP/TCP/HTTP protocols
Con: dome design, no pan/tilt


From: synthesis-research@googlegroups.com [synthesis-research@googlegroups.com] on behalf of Sha Xin Wei [shaxinwei@gmail.com]
Sent: Saturday, June 28, 2014 11:23 AM
To: synthesis-research@googlegroups.com
Cc: post@synthesis.posthaven.com; ozone@concordia.ca
Subject: [Synthesis] networked cameras; potential response to activity vs actual activity

Given Tain, Cooper and other folks' findings, IMHO we should use wifi networked cameras but not GoPros.  (Can we borrow two from somewhere short term to get the video network going?)  Comments on networked cameras?
But let’s keep the focus on portals for sound and gesture / rhythm.

Keep in mind the point of the video portal is not to look AT an image in a rectangle, but to suture different regions of space together. 

Cooper’s diagram (see attached) offers a chance to make another distinction about Synthesis research strategy, carrying on from the TML, that is experientially and conceptually quite different from representationalist or telementationalist uses of signs.  

(Another key tactic: to eschew allegorical art.)

A main Synthesis research interest here is not about looking-at an out-of-reach image far from the bodies of the inhabitants of the media space,
but how to either:
(1) use varying light fields to induce senses of rhythm and co-movement, or
(2) animate puppets (whether made of physical material, light, or light on matter).

The more fundamental research is not the actual image or sound, but the artful design of the potential responsivity of the activated media to action.
This goes for projected video, lighting, acoustics, synthesized sound, as well as kinetic objects.
All we have to do this summer is build out enough kits or aspects of the media system to demonstrate these ways of working, in the format of some attractive installations in the ordinary space.

Re. the proto-research goals this month: we do NOT have to pre-think the design of the whole end-user experience, but rather build some kits that would allow us to demonstrate the promise of this approach to VIP visitors in July and students in August, and to work with in September when everyone's back.

Cheers for the smart work to date,
Xin Wei



On Jun 28, 2014, at 1:58 AM, Cooper Sanghyun Yoo <cooperyoo@asu.edu> wrote:

Hi all, 

I would also like to add some thoughts on using GoPro.  Well, to begin with, I am a huge fan of GoPro cameras, and I was testing the GoPro Hero 3+ Black Edition (the latest model) last April.  It would be great if AME purchased new GoPro cameras, but maybe they are not the best choice for the Brickyard projects.

[Image 1] Six GoPros on a special mount made with a 3D printer.  Didn't really work in the Arizona weather.  Photos taken near Tempe.

As far as I remember, the idea of using GoPro cameras started because we wanted to locate cameras outside of the building without any wires connected to the computers.

GoPro cameras are small, lightweight, feature-loaded, and designed to be put under extreme conditions.  However, as they are not built for long-term use, they have battery, overheating, and WiFi range issues.  The battery lasts less than an hour, and if we connect a power source the camera will overheat and turn off once in a while.

I also found ways to stream full video into a web browser or VLC media player (https://www.youtube.com/watch?v=RhFzrYBXItc) and a Mac app for controlling the camera and doing low-resolution live previews (https://www.youtube.com/watch?v=au5Yi6y-LmE).  Unfortunately, most of these solutions are for low-resolution previews.  Trying GoPro cameras in Max/Jitter might still be a very interesting and challenging project, but we can also consider different options.

For example, wifi network cameras can be another good choice.  They are designed specifically for long-term live streaming without being connected to a computer.  As long as the camera is connected to a power source and the network, it doesn't even need to be located near the computers.  The image below shows my previous project that used two network cameras in an outdoor environment.
[Image 2] The network camera was located outside of the building.  All the machines controlling the LED ceiling were in a different building, so we couldn't use wires to connect the camera to the computer.

Please let me know your ideas and thoughts.

Much appreciated,
Cooper


On Fri, Jun 27, 2014 at 9:21 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Very likely standard HTTP would be a bad way to send video, since it was designed for sending bytes of characters or a few embedded media objects.  But Max jit.* used to read RTSP video streams as well (in the syntax rtsp:// ?).
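(Outside Max/Jitter, here is a minimal sketch of checking a network camera's RTSP feed in Python with OpenCV; the URL below is a placeholder, not a real camera address.)

# Pull frames from a network camera's RTSP stream and display them.
import cv2

RTSP_URL = "rtsp://camera.local:554/stream1"   # placeholder address

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open RTSP stream: " + RTSP_URL)

while True:
    ok, frame = cap.read()                     # grab the next decoded frame
    if not ok:
        break                                  # stream dropped or ended
    cv2.imshow("portal feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()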

Such long latencies would make GoPro impossible for synchronous action, but we should also think about how to intercalate activity based on durational rhythm, not point-sync’d instantaneous events.

Also, time-delaying the feed by 3 hours would be an interesting first step toward intercalating Montreal and Phoenix.
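(A toy sketch of that delayed-feed idea: hold frames in a buffer and show each one a fixed interval after capture.  Ten seconds fits in memory; a 3-hour Montreal/Phoenix offset would need frames spooled to disk.  The camera index and delay value are illustrative assumptions.)

# Show a live camera feed delayed by DELAY seconds.
import time
from collections import deque
import cv2

DELAY = 10.0                          # seconds of delay (demo value)
cap = cv2.VideoCapture(0)             # webcam stand-in for the portal camera
buffer = deque()                      # (capture_time, frame) pairs

while True:
    ok, frame = cap.read()
    if ok:
        buffer.append((time.time(), frame))
    if buffer and time.time() - buffer[0][0] >= DELAY:
        cv2.imshow("delayed portal", buffer.popleft()[1])
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()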

Xin Wei

On Jun 26, 2014, at 9:41 PM, Garrett Johnson <gljohns6@asu.edu> wrote:

There is a browser object in max. Not sure how it works with video but it sounds like it's worth a shot. 

Garrett L. Johnson
MA musicology 
Synthesis Center research team 
Arizona State University


Am Jun 26, 2014 um 15:15 schrieb Tain Barzso <Tain.Barzso@asu.edu>:

Should have forwarded to the group initially:

Doing some initial research which I will present to the group.  On the GoPro:

While the wireless feature is designed specifically for controlling the camera and doing low-resolution live previews in the GoPro app, there is a way to stream full video into a web browser via the camera's built-in web server.  If any application that Synthesis is using can access this kind of stream (via http://), we should be able to grab video.  A caveat is that the live-streamed video has very significant lag (seems to be > 1 second).  This is likely dependent on resolution, and I would guess 720p video would have less lag than 1080p, but we'd have to try it to see.

Also, it looks like there is a cable that allows us to both insert an external (3.5mm) microphone AND still utilize the camera's USB port as a power input, so as to avoid running off the battery.

Here is a video that a youtube user posted covering the video streaming accessibility:  https://www.youtube.com/watch?v=Y1XaBJZ8Xcg

Tain
