Re: good comparison of IMU's and sensor fusion source

cool. thanks. Adrian suggested last year a ready-to-wear IMU that went for ~$200–$250. Can’t recall the make. Xin Wei

On Aug 22, 2014, at 7:11 PM, Vangelis Lympouridis <vl_artcode@yahoo.com> wrote:

That's great! Thanks a lot Adrian.

Vangelis Lympouridis, PhD Visiting Scholar, School of Cinematic Arts University of Southern California

Senior Research Consultant, Creative Media & Behavioral Health Center University of Southern California http://cmbhc.usc.edu

Whole Body Interaction Designer www.inter-axions.com

vangelis@lympouridis.gr Tel: +1 (415) 706-2638

-----Original Message----- From: Adrian Freed [mailto:adrian@cnmat.berkeley.edu] Sent: Friday, August 22, 2014 10:47 AM To: Xin Wei Sha; Vangelis L Cc: John MacCallum Subject: good comparison of IMU's and sensor fusion source

https://github.com/kriswiner/MPU-6050/wiki/Affordable-9-DoF-Sensor-Fusion

Wireless sensor networks

FYI see Adrian’s response re McGill's SenseStage miniBee gadgets.

I know the guys who did the SenseStage work at McGill from Marcelo’s lab.  
They were nice folks, but not the best, and the research application was misguided.
This device was not useful to advance movement / gesture research at the TML.

To check my own assessment, I asked Adrian.

If you want to buy them on your own research funds, chacun à son goût :)
But I would prefer not to throw general AME or Synthesis money at buying such things
unless there’s a specific legacy need for a critical research project that will lead to a concrete outcome in the predictable future.

Otherwise, let me suggest that we track the state of the art with Adrian Freed <adrian@cnmat.berkeley.edu>
and Vangelis Lympouridis <vangelis@lympouridis.gr> in USC
and get the best devices for the job, under cost and time constraints, just when we need them.

Cheers,
Xin Wei



Begin forwarded message:

Subject: RE: Fwd: Wireless sensor networks
Date: August 22, 2014 at 7:07:02 PM MST
To: "Sha Xin Wei" <shaxinwei@gmail.com>

I am sure they are good for something but I can't use them for various reasons.
They just aren't reliable enough unless the performers are out of reach of RF noise from the audience/ambient sources.

+ Slow, old ATmega CPU with too little memory,
+ old accelerometer instead of a full IMU.

There are lots of smaller form factor things in the works like SparkCore
and all the bluetooth LE things coming out.
The problem is you have to look at the fully integrated size with
battery, the additional sensors you actually want, the case
etc etc. Small is 6 months away (BLE), small and fast enough for serious
movement work is still a few years away.

Sixense is a company getting this right with STEM:
http://www.sixensestore.com/stemsystem-2.aspx

-------- Original Message --------
Subject: Fwd: Wireless sensor networks
From: Sha Xin Wei <shaxinwei@gmail.com>
Date: Fri, August 22, 2014 3:36 pm
To: Adrian Freed <adrian@adrianfreed.com>


Are these XBees any good?   Would these be superseded by other common wireless microprocessors …?

We (at Synthesis and AME) are happy with the xOSC boards,
though I do hope for a much smaller form factor.

...
Xin Wei

[Synthesis] [TML] Founding documents of an atelier for ethico-aesthetic play: What we do. How we do it. Why we do it the way we do.

Dear Chris, Dehlia, Kristi, and Tamara,
(Hi Katie, Omar and JA, who know this well!)

Here are some texts that describe more completely how I envision what Synthesis is about, and how I would like it to be a home for radically empirical, ethico-aesthetic play.  I have developed a progressively more nuanced notion of play over the past decade of institutional experiment, funded thanks to Canada's and Quebec's more generous attitude toward experimental cultural work.

(1) The opening chapter of Poiesis, Enchantment, and Topological Matter (MIT Press 2013) gives a sense of how I see philosophical inquiry (which is not the same as philosophy as practiced conventionally in the United States academy) coming out of and feeding back into poetic, speculative practice.

(2) The second part of Chapter 7 gives an analysis of the political economy of the amodern atelier that I established in 2001 at Georgia Tech (in the Graphics, Visualization and Usability Center, and the School of Literature, Communication and Culture), but then moved to Montreal in 2005 with a Chair in critical studies of media arts and sciences in Fine Arts and Computer Science.   My key meta-goal for the past 15 years has been to create an alternative ecology of practices based on collective, poetic knowledge practices.   I regard FoAM as the lovelier sister to the Topological Media Lab.

This predecessor version links to a set of colour plates.

With the Synthesis Center I want to extend both the TML's central research streams and its model for how to go about doing that sort of transdisciplinary work.   To be very clear I came back to the States not to slip back into more conventional interpretations and practices of technology, art or humanities, but to harness Yankee enterprise to the much more radical work of ethico-aesthetic improvisation.   (I use radical both in its political sense and in the sense of William James’  radical empiricism.)

(3) Here are two one-page letters, written at the invitation of the President of Concordia University, about art practice versus art research.  They are not the same; confusion abounds here at Herberger as well.

 

(4)  Here’s the Synthesis Center pitch.   I invite your help polishing it in the coming week, before sending it up to the Provost and Engineering Dean.




(5) Working ethos


(6) And finally, a letter that I share with people who want to study or work with me. 

Hope this gives you a more complete and substantial understanding of what I would like us to do, and why I would like to do it in certain ways.

I sincerely hope that after mulling this over, and considering that this has already actually flourished in two contexts, you will feel inspired to help me realize a third and even more beautiful atelier here at ASU.

Looking forward to working with you!
Xin Wei


__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2146
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Re: wearable x-osc biometric prototype

Seriously, how do we think techniques of observation together with techniques of performance? I know it may be confusing to use that pair of terms, observation and performance...

We need a better vocabulary, one that retains some of the mechanisms of entanglement from quantum mechanics but not this dualism.

Xin Wei

[Synthesis] Portals needed

Hi!

We need portals supporting concurrent conversation via common spaces like tabletops + audio (no video!),
not talking heads.     It may be useful to have audio muffle as a feature: continuous-stream audio, with the default being to “content-filter” the speech.   (Research in the 1970s showed which spectral filters to apply to speech to remove “semantics” but keep enough affect…)
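A rough sketch of the muffle idea (the cutoff and filter shape here are illustrative, not the actual filters from the 1970s research): a low-pass in the few-hundred-Hz range strips most consonant detail, and with it the “semantics”, while keeping the pitch contour and loudness envelope that carry much of the affect. A one-pole version in plain Python:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate):
    """One-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
    A cutoff around 300-500 Hz muffles speech while preserving
    prosody (pitch contour and loudness envelope)."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

# Illustrative use: muffle a 44.1 kHz speech buffer at 400 Hz.
# muffled = one_pole_lowpass(speech_samples, 400.0, 44100.0)
```

(In practice a steeper filter, e.g. cascaded biquads, would muffle more cleanly; this is just the shape of the idea.)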

Maybe we can invite Omar to work with Garrett or Byron or Ozzie to install Evan’s version in the Brickyard and Stauffer and iStage as a side effect of the Animated spaces: Amorphous lighting network workshop with Chris Ziegler and Synthesis researchers.

BUT we should have portals running now, ideally on my desk and on a Brickyard surface.
And that workshop remains to be planned (October??)
And possibly running also on the two panel displays repurposed from Il Y A, now moved to Stauffer...

Xin Wei



[TML][Synthesis] Plant-Thinking Meeting/Seminar: discuss Marder, November 1 - Dec 19 ?

Hi Michael, everyone,

All great! 

I’ve been talking with Oana and most recently Omar about the vegetal studies research.
From Omar it seems that most of the interested folks are away or too busy in September.  And in October there are other events planned related to Synthesis or TML (e.g. Listen(n) @ ASU, the lighting animation workshop, the Einstein's Dreams workshop).

It’s a good idea to do it on a weekly basis.  But instead of stretching over a whole semester, how about we concentrate the Marder-based part of the seminar into 1.5 months, during a period when people are prepared to really grapple with the Marder?

To take the reading of Marder seriously, I think it’d be necessary to do this in person, or as synchronously as our portals can deliver.   And we each need time to prepare by absorbing related works; I would strongly recommend some of the Aristotle and Goethe (to make the time we invest worth the investment).

So our (Oana's and my) suggestion is to prep readings and exchange references etc. in the vegetal studies stream
now, and do the actual reading of Marder over seven weeks: November 3 through December 19.
We recommend 
Week 1 Chap 1
Week 2 Chap 2
Week 3-4 Chap 3 & 4
Week 5-6 Chap 4 & 5
Week 7 Papers and Crits (double long session)

It’d be great to aim to deliver some substantial multi-format responses, on the order of a paper, short video, or sketches of experiments, that really synthesize the insights from the seminar.


Here are two key operating rules for this game:

• Avoid allegory: not the depiction of “what plants look like” but how plants grow and experience dynamical existence.

• Avoid anthropomorphizing as radically as possible.


Perhaps I can come in mid-November and mid-December.
On the other hand my duties this Fall may well be so heavy that it’d be easier if Synthesis hosted this theoretical phase of the joint TML-Synthesis vegetal studies research stream in Phoenix.

Suggestions?
Xin Wei


__________________________________________________________________________________
Sha Xin Wei, Ph.D. • xinwei@mindspring.com • skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________

Bill Forsythe: Nowhere and Everywhere at the same time No. 2 (pendulums)

Two works by two choreographers, Dimitris Papaioannou and Bill Forsythe,
with very different and interesting approaches to causality and temporal texture…

- Xin Wei

On Jul 20, 2014, at 12:55 AM, Michael Montanaro <michael.montanaro@concordia.ca> wrote:

A beautifully choreographed work: NOWHERE (2009) / central scene / for Pina
by Dimitris Papaioannou



Begin forwarded message:

From: "Vangelis Lympouridis" <vl_artcode@yahoo.com>
Date: July 22, 2014 at 8:39:27 AM GMT+2
To: "Adrian Freed" <Adrian.Freed@asu.edu>, "'Sha Xin Wei'" <shaxinwei@gmail.com>, "'John MacCallum'" <john@cnmat.berkeley.edu>

When you have a second please watch this 2 min video with Forsythe’s piece Nowhere and Everywhere at the same time No2.

I think it is SO to the core of what we are reasoning about…

Vangelis Lympouridis, PhD
Visiting Scholar, School of Cinematic Arts
University of Southern California

Senior Research Consultant,
Creative Media & Behavioral Health Center
University of Southern California

Whole Body Interaction Designer
Tel: +1 (415) 706-2638

[Synthesis] networked cameras; potential response to activity vs actual activity

Given Tain, Cooper and other folks’ findings, IMHO we should use wifi-networked cameras but not GoPros.  (Can we borrow two from somewhere short-term to get the video network going?)  Comments on networked cameras?
But let’s keep the focus on portals for sound and gesture / rhythm.

Keep in mind the point of the video portal is not to look AT an image in a rectangle, but to suture different regions of space together. 

Cooper’s diagram (see attached) offers a chance to make another distinction about Synthesis research strategy, carrying on from the TML, that is experientially and conceptually quite different from representationalist or telementationalist uses of signs.  

(Another key tactic: to eschew allegorical art.)

A main Synthesis research interest here is not looking-at an out-of-reach image far from the bodies of the inhabitants of the media space,
but how to either:
(1) use varying light fields to induce senses of rhythm and co-movement, or
(2) animate puppets (whether made of physical material, light, or light on matter).

The more fundamental research concerns not the actual image or sound, but the artful design of the potential responsivity of the activated media to action.
This goes for projected video, lighting, acoustics, synthesized sound, as well as kinetic objects.
All we have to do this summer is build out enough kits or aspects of the media system to demonstrate these ways of working, in the format of some attractive installations installed in the ordinary space.

Re. the proto-research goals this month: we do NOT have to pre-think the design of the whole end-user experience, but rather build some kits that would let us demonstrate the promise of this approach to VIP visitors in July and students in August, and to work with in September when everyone’s back.

Cheers for the smart work to date,
Xin Wei

Streaming video from small cameras in a local net: GoPro fact-finding research

There are many solutions for a suite of video cams.  It’s not bad to repurpose some commercial and cheaper solutions for a network of video+audio cameras. Analog solutions have much less latency than these digital solutions (even after an A2D converter), and they may still be a lot cheaper at a given equivalent resolution.

Is there (still?) more variety of lens solutions available for analog cameras?

Be sure to get wide angle lenses for whatever camera solution you come up with.

Also, threaded lenses may make it easier to affix IR pass filters as necessary.

Cheers,
Xin Wei

__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Brickyard media systems architecture; notes for Thursday BY checkin

Hi Everyone,

This note is primarily about media systems, but should connect with the physical interior design, furniture and lighting etc.

Thanks Pete for the proposed media systems architecture summary.  I started to edit the doc but then got bogged down, because the architecture was computer-centric and it’s clearer as a diagram.
 
Instead of designing around the 2 Mac Minis and the Mac Pro, I’d like to design the architecture around relatively autonomous I/O devices:

Effectively a network of:
• Lamps (for now addressed via DMX dimmers): floor and desk lamps as needed to provide comfortable lighting
• 5 WiFi GoPro audio-video cameras (3 in BY, 1 in Stauffer, 1 in iStage)
• xOSC-connected sensors and stepper motors (ask Byron)
• 1 black Mac Pro on the internet switch, processing video streams, e.g. extracting video features emitted as OSC
• 1 Mac Mini on the internet switch, running the lighting master control Max patch and (later) the Ozone media state engine
• 1 Mac Mini on the internet switch, processing audio, e.g. extracting audio features emitted as OSC
• 1 iPad running the Ozone control panel under Mira
• 1 internet switch in the Synthesis incubator room for drop-in laptop processing
• 8 transducers (or Genelecs; distinct from their use in the lab or iStage. I prefer not to have “visible” speaker boxes or projectors in BY: perhaps flown in corners or inserted inside Kevin’s boxes, but not on tables or the floor)
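For the DMX dimmers, one common way to keep them on the same network as everything else is Art-Net over UDP. A minimal sketch of an ArtDmx packet builder; the dimmer address and channel layout below are hypothetical, not our actual rig:

```python
import struct

def artnet_dmx_packet(universe: int, channels: bytes) -> bytes:
    """Build an Art-Net ArtDmx packet, the usual way to drive
    DMX dimmers over ordinary Ethernet/WiFi (UDP port 6454)."""
    if len(channels) % 2:                 # DMX payload length must be even
        channels += b"\x00"
    return (b"Art-Net\x00"                # packet ID
            + struct.pack("<H", 0x5000)   # OpDmx opcode, little-endian
            + struct.pack(">H", 14)       # protocol version, big-endian
            + bytes([0, 0])               # sequence, physical
            + struct.pack("<H", universe) # universe, little-endian
            + struct.pack(">H", len(channels))  # data length, big-endian
            + channels)

# Hypothetical use: set the first lamp of universe 0 to half brightness,
# then send with sock.sendto(pkt, ("192.168.1.50", 6454)) over UDP.
pkt = artnet_dmx_packet(0, bytes([128]) + bytes(511))
```

The point of the sketch is that "replugging a lamp" then reduces to changing a universe/channel number in the master control patch, not rewiring.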

The designer-inhabitants should be able to physically move the lamps, GoPros, and stepper-motored objects around the room, replugging into a few cabled interface devices (dimmer boxes) as necessary.

Do the GoPros have audio in?   Can we run a contact, boundary, or air mic into one?

Could we run Ethernet from our 3 computers to the nearest wall Ethernet port?

Drop-in researchers should be able to receive via OSC whatever sensor data streams are on offer (e.g. the video and audio feature streams cooked by the fixed black Mac Pro and the audio Mini),
and also emit audio / video / control OSC into this BY network.

DITTO iStage.
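For drop-in laptops, the OSC wire format itself is simple enough to sketch in pure Python; the /by/video/flow address below is made up for illustration (in practice one would use a library such as python-osc, or udpsend/udpreceive in Max):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    typetags = "," + "f" * len(floats)
    out = osc_pad(address.encode()) + osc_pad(typetags.encode())
    for f in floats:
        out += struct.pack(">f", f)   # arguments are big-endian float32
    return out

# A hypothetical video-feature message as it would travel over UDP:
# padded address, padded typetag string, one big-endian float32.
pkt = osc_message("/by/video/flow", 0.5)
```

Any laptop on the BY switch could then read such datagrams off a UDP socket and dispatch on the address pattern.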

Can we organize an architecture session sometime Thursday?
I’m sorry: since I have to get my mother into post-accident care here in Washington DC, I don’t know when I’ll be free.

Perhaps Thursday sometime between 2 and 5 PM Phoenix time? I will make myself available for this, because I’d like to take up a more active role now, thanks to your smart work.

Looking forward to it!
Xin Wei


PS. Please email our design-build notes to post@synthesis.posthaven.com so we can capture the design process and build on accumulated knowledge at http://synthesis.posthaven.com
