[Synthesis] signal correlation as a measure of synchronicity in dance

On the hardware wireless-sensor side of the story —

For the work reported in Ubicomp and ISWC 200, TML @ GaTech (Atlanta) pioneered the use of TinyOS platforms in two form factors (the size of an x-OSC, and of a US quarter!).
We had a crack team — Giovanni Iachello, Steven Dow (now prof at CMU, a friend to our incoming prof Stacey Kuznetsov :), and Yoichiro Serita (from Sony Labs).

There followed 10 more years of hackers in the movement art + tech world making their own naive solutions, 
and not even getting to either the interesting art or the significant problems to be solved in EE, which lay beyond their scientific judgment.*

The good news is that good-enough on-body IMUs are affordable to AME.  So Pavan and Ozzie are going to get us a set (soon!)

I am in a hurry though to form a crack team to tackle the 
actual research challenges of 

(1) Understanding non-sonic rhythm as an example of apperception, and

(2) Scaffolding different senses of resonating temporal texture, using for example spectral analogies that can generalize from classical DSP to higher-dimensional time-varying fields
(this is not as deep as it sounds — it should be amenable to smart engineering, of which ASU has an abundance.
I don’t know if it’s an area with lots of ready-made techniques.  Some expert needs to tell us.
But nothing should stop us from doing the first step ourselves:
implement
the Aylward–Paradiso analysis in Max/MSP
and run multiple time series through it from whatever has adequate fps,
from interestingly rich movement (Chris Ziegler + students; maybe with our Visiting Artist friends from Montreal and Copenhagen)
)
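As a warm-up before any Max/MSP port, the Aylward–Paradiso-style analysis can be sketched in a few lines of plain Python: over an analysis window, find the lag at which two sensor streams have maximum normalized cross-covariance. The window length, the synthetic sine signals, and the function names below are illustrative assumptions, not taken from the paper:

```python
# Windowed cross-covariance between two sensor streams, in the spirit of
# Aylward & Paradiso's dance-ensemble analysis: for one analysis window,
# find the lag at which the two signals are most strongly correlated.
# Pure-Python sketch for prototyping; all names and data are illustrative.
import math

def cross_covariance(x, y, max_lag):
    """Return {lag: normalized cross-covariance} for lags in [-max_lag, max_lag]."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        acc = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:  # only where the shifted window overlaps
                acc += (x[i] - mx) * (y[j] - my)
        out[lag] = acc / (sx * sy) if sx and sy else 0.0
    return out

def best_lag(x, y, max_lag):
    """Lag (in samples) at which y best matches x within this window."""
    cc = cross_covariance(x, y, max_lag)
    return max(cc, key=lambda k: cc[k])

# Toy example: dancer B repeats dancer A's movement 5 samples later.
a = [math.sin(2 * math.pi * i / 20) for i in range(100)]
b = [0.0] * 5 + a[:-5]
print(best_lag(a, b, 10))  # → 5
```

Running `best_lag` per window, per sensor pair, gives exactly the dancer-to-dancer time-lag measure Mike's reference computes in realtime; a control-rate Max implementation would just slide the window.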

Thanks for the great reference, Mike!
(Can someone show me that MSP external connected with Adrian and John’s oriented normal odot kit, so we can advance the temporality research?)


Political-Epistemological Rant:
AME and its cousins CIDSE, ECEE, SEMTE can get ahead of the shallow uses that artists and engineers have made of each other to date.

Political-Economic Rant:
The foundations in the EU and Quebec that funded us lucky bastards gave both too much and not enough funding:
so much $ that artists could hire their own students with just enough EE / CS knowledge to hack naive solutions;
not enough $$ to fund cohorts of grad students that could make it worthwhile for an EE / CS professor to dedicate 2-3 MA students over 6+ years of continuous trial-and-error projects in daily studio + bench work with movement artists who could be subsidized to NOT produce productions for a significant % of that time.


On Oct 4, 2014, at 4:18 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Yes, this is a tool I’ve wanted since 2003, in the oz/math/ section of Ozone.


We need all these conditions:
robust hardware with a lightweight battery (ASU has some good battery guys),
high sensor-ensemble fps,
low-latency transmission,
some maths like Aylward–Paradiso to play with in our Max toolkit

What we would do in place of Paradiso’s naive notion of music is to map to electroacoustic synthesis, etc.

In fact, if Julian or Mike or ... could point us to the Max external that implements cross-correlation (not auto-correlation), we could play with it right away on acoustic input
and think about how to handle control-rate data…  I think there is one already in McGill’s or IRCAM’s vector-processing toolkits.

If someone is interested, I’d be happy to work with him/her to implement this and map it more directly to organized sound (with the help of our sound artists) for rich feedback.

On Oct 4, 2014, at 3:50 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Hi,

Do you guys know this paper? They put gyroscopes on dancers and used realtime (windowed) cross-covariance to measure time lag between several dancers. I believe this is similar to what Xin Wei has in mind as part of studying temporality.

Mike

Notes for Lighting and Rhythm Residency Nov (13)17-26

Chris, Garrett, Julian, Omar, Chris Z, Evan, Byron, Prashan, Althea, Mike B, Ian S, Aniket, et al.

This is a preliminary note to sound out who is down for what for the coming residency on Lighting and Rhythm (LRR).  

The goal is to continue work on temporality from the IER last Feb-March, and this time really seriously experimentally mucking with your sense of time by modulating lighting or your vision as you physically move.  First-person experience, NOT designing for the spectator.

We need to identify a more rigorous scientific direction for this residency.  I’ve been asking people for ideas — I’ll go ahead and decide soon!

Please think carefully about:


The idea is to invite Chris and his students to work on site in the iStage and have those of us who are hacking time via lighting play in parallel with Chris.   Pavan & students and interested scientists/engineers should be explicitly invited to kibitz.


Lighting and Rhythm 
The way things are shaping up — we are gathering some gadgets to prepare.

Equipment requested (some already installed thanks to Pete, Ozzie, and TML)
Ozone media system in iStage
Chris Ziegler’s Wald Forest system (MUST be able to lift off out of the way as necessary within minutes — can an inexpensive motorized solution be installed?)
3 x 6 ? grid of light fixtures with RGB gels, beaming onto floor
IR illuminators and IR-pass camera for tracking
Robe Robin MiniMe Moving Light/ Projector 
Hazer (?)
Strobe + diffuser (bounce?)
+ Oculus DK1 (Mike K knows)
+ Google Glass (Chris R can ask Cooper, Ruth @ CSI)

We need to make sure we have a few rich instruments (NOT one-off hacked tableaux!) coded up ahead of time -- hence the call to Max-literate students who would like to try out what we have in order to adapt them for playing in the LRR by November.

Note 1:
Let’s be sure to enable multiplex of iStage to permit two other groups:
• Video portal - windows: Prashan, Althea Pergakis, Jen Weiler
• Shadow puppetry: Prashan working with Byron

Note 2:
Garth’s Singing Bowls are there.  Think about how to integrate such field effects.
Mike, can you provide a Max patch to control them — ideally via OSC -- but at least to fade up/down without having to physically touch any of the SB hardware?
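For what it's worth, a bare OSC control message is simple enough to assemble by hand while we wait for a proper patch. Here is a minimal stdlib-only Python sketch; the address "/bowl/1/gain" and port are hypothetical placeholders (the actual Max patch for the Singing Bowls would define its own namespace):

```python
# Minimal OSC 1.0 message encoding over UDP, stdlib only.
# An OSC message = padded address string + padded type-tag string
# + big-endian arguments, everything aligned to 4 bytes.
import socket
import struct

def osc_pad(b):
    """NUL-pad bytes to a multiple of 4 (strings always get >= 1 NUL)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *floats):
    """Encode an OSC message whose arguments are all float32."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())  # type tags, e.g. ",f"
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg

def send_gain(gain, host="127.0.0.1", port=7400):
    """Send one gain value; a real fade would ramp this over time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message("/bowl/1/gain", gain), (host, port))
    sock.close()
```

A `udpreceive` + `route` pair in the Max patch would pick this up directly, so fades could be scripted without touching the SB hardware.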

Note 3:
This info should go on the lightingrhythm.weebly.com experiment website that the LRR leads should create Monday, unless someone has a better solution — it must be editable by the researchers and experiment leads themselves.  Clone from http://improvisationalenvironments.weebly.com !

Xin Wei

On Sep 4, 2014, at 8:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

I am afraid I can't be very helpful here. I don't do MIR work myself. The field for the most part does
offline analyses of large data sets using musicologically naive Western musical concepts of pitch and rhythm.

One exception to the realtime/offline choice is from our most recent graduate student to work on the
beat tracking problem, Eric Battenburg. Here is his dissertation: http://escholarship.org/uc/item/6jf2g52n#page-3
There is interesting machine learning going on in that work but it presumes that one can make a reliable
onset detector which is a reasonable (but narrow) assumption for certain percussion sounds and drumming practice.

The questions of phase and "in sync" raised below interest me greatly. There is no ground truth to the beat
(up or down or on the "beat"). I remember being shocked recently to discover that a bunch of research on dance/music entrainment relied as a reference on hand-labeled visual beat markings from "expert listeners in the computer music lab next door". Various concepts such as "perceptual onset time" have been developed to sufficiently complicate this question and explain the difficulty people have observing consensus on musical event timing and relating a particular beat measurement to features of the acoustic signals.
Even a "simple" case, bass and drums, is extremely difficult to unravel. The bass being a low frequency instrument complicates the question of "onset" or moment of the beat. The issue of who in this pair is determining the tempo
is challenging, and the usual handwaving that the tempo is an emergent coproduction of the performers is not very helpful in itself in elaborating the process or identifying which features of the action and sound are relevant to the entrainment. My guess is that we will find models like the co-orbital arrangement of Saturn's moons Epimetheus and Janus.
What are the system identification tools to reveal these sorts of entrainment structures? Can this be done from the sound
alone, or do we have to model the embodied motions that produce the sounds?


NOTE from Adrian, XW, and Mike Krzyzaniak on the Percival–Tzanetakis Tempo Estimator:

On Sep 3, 2014, at 6:38 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Phase: I’m interested in both the convention of syncing on peaks
but also in the larger range of temporal entrainment phenomena that Adrian has identified with suggestive terminology.
In practice, I would apply several different measures in parallel.

Yes, it would be great to have a different measure.  For example, one that detects when a moderate number (dozens to 100) of irregular rhythms have a larger number of simultaneous peaks.  This is a weaker criterion than being in phase, and does not require periodicity.
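That weaker criterion — peak coincidence without any assumption of periodicity — can be sketched directly: at each time step, count how many streams have a local maximum within a small tolerance window. The naive peak test, the tolerance, and the toy data below are all illustrative assumptions:

```python
# Peak-coincidence measure across many irregular rhythm streams:
# how many streams peak "at the same time", up to a tolerance,
# without requiring the streams to be periodic or phase-locked.

def peak_times(signal):
    """Indices of strict local maxima (a deliberately naive peak test)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] > signal[i + 1]]

def coincidence_count(streams, t, tol):
    """How many streams have at least one peak within tol samples of time t."""
    return sum(any(abs(p - t) <= tol for p in peak_times(s)) for s in streams)

def coincidence_series(streams, tol=2):
    """Per time step: the fraction of streams peaking near that step."""
    n = len(streams[0])
    return [coincidence_count(streams, t, tol) / len(streams) for t in range(n)]

# Toy demo: four streams with single peaks at t = 10, 10, 11, 15.
streams = []
for k in (10, 10, 11, 15):
    s = [0.0] * 20
    s[k] = 1.0
    streams.append(s)
print(coincidence_count(streams, 10, tol=1))  # → 3
```

Thresholding `coincidence_series` (say, flagging moments when more than half the streams peak together) would give exactly the kind of event detector described above, scalable to dozens of bodies.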

Xin Wei

Puppetry++ Cluster



Here is a thread about a possible cluster for “improvisational environments”: a mobile platform for chamber- and building-scale shadow puppetry, combining:
mechatronics +
lighting
a sensing and media choreography system +
“radio”

Here’s just one example of a series of projects that such a Puppetry++ Cluster could mount,
building on relationships and friendships developed over recent years with some exciting artists and scholars, with high impact work.

Possible Phases:
• Recruit interested volunteers with a good blend of aesthetic judgment, engineering chops, and elegant whimsy
• Build a kit for improvising such puppetry live in the iStage
• Try out in Spring with experimental puppeteers
• Crit with experienced artists
Oana Suteu, master filmmaker and animation editor Montreal/Paris
Laura Heit, Matchbox Series
• Document and publish first phase works
• Apply for external funding
• Residency Workshop(s) with
Manual Cinema
Mick Taussig, Sun and Sea project



On Thu, Sep 11, 2014 at 6:47 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Would any of you be able to recommend some faculty and/or students experienced with urban design, public lighting design, or projection art, who might be interested in teaming with the Synthesis Center and the Topological Media Lab to come up with pitches for some urban art / design projects in Europe?   (See “Connecting Cities” and “Ambience Network” for examples of work.)

Cheers,
Xin Wei



On Fri, Sep 12, 2014 at 8:14 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi,

What I have in mind is a mobile platform for chamber and building-scale shadow puppetry, driven by mechatronics + our Max/MSP/Jitter sensing and media choreography systems, + “radio”.

In terms of content and style — just to let you know where we're coming from, here are some inspirations:

• BluBlu MUTO, https://vimeo.com/993998


• Royal de Luxe, The Little Girl Giant - The Sultans Elephant, https://vimeo.com/11107046

Omar Faleh, TML-Synthesis’ senior grad researcher in this urban media initiative, visited Royal de Luxe’s home city Nantes.

<rant>I’d like to stay away from all the "tech-art" geek discourses from the 90’s that never played outside the Empire of new media.</rant>

Not knowing the economies here in Phoenix and in SoCal, one of my questions is what sources of funding exist for such work?

Xin Wei






On Sep 12, 2014, at 12:54 PM, Byron Lahey <Byron.Lahey@asu.edu> wrote:

Hi All,

It goes without saying that I'm interested in this project. Kinetic sculpture, puppetry, and light and shadow systems are all important parts of my artistic palette. It also goes without saying that my time is largely spoken for by my dissertation work, so I would like to stay connected with this project, but will have to limit my direct involvement. I would certainly offer my camera/projector system as a tool for this work if it fits in any way (though I would likely have to hand off programming and maintenance duties to another willing person or persons).

Another artist who might be interested in collaborating on this work is Al Price. He is an ASU MFA graduate and has done numerous large-scale public art projects around and outside the Phoenix area. This is a project of his that I find particularly enchanting and which seems to resonate directly with the examples Xin Wei shared: http://www.alpricestudio.com/new-gallery-5/

Best,
Byron

CC Newsletter September 2014

Connecting Cities Events 2014

Here's a quick overview of the Summer/Autumn Connecting Cities Events: 

Event # 6 11 – 14 September: Connecting Cities: Urban Reflections, Public Art Lab @ Berlin
Event # 7 12 – 14 September: Connecting Cities Event, Riga 2014 @ Riga
Event # 8 25 – 27 September: Medialab-Prado @ Madrid
 

Connecting Cities Event #6 @ Berlin 
Connecting Cities: Urban Reflections, Public Art Lab

Connecting Cities: Urban Reflections
11 - 14 Sep 2014, Light Parcours, Brunnenstr. 64-72
11 - 13 Sep 2014, Symposium, SUPERMARKT Brunnenstr. 64
11 - 13 Sep 2014, Workshops, SUPERMARKT Brunnenstr. 64


No plans for tonight? Mark your calendar! Connecting Cities: Urban Reflections
Join us for the Light Parcours, the Symposium and the Workshops!

From 11 – 14 September a rich programme will take place, with workshops, the Connecting Cities: Urban Reflections Symposium (curated by Public Art Lab in cooperation with SUPERMARKT) and the Connecting Cities: Urban Reflections Light Parcours, including audiovisual works and an urban picnic.

Connecting Cities: Urban Reflections – Symposium and Workshops
From 11 – 13 September artists' workshops by Suse Miessner, Charlotte Gould & Paul Sermon, Moritz Behrens & Nina Valkanova, Dr. Alexander Wiethoff & Marius Hoggenmüller, as well as keynotes from Mark Shepard, Paul Sermon, Moritz Behrens, and the speakers Nicole Srock-Stanley, Dr. Alexander Wiethoff, Dr. Eva Hornecker, Dr. Bastian Lange, B_tours and many more will be offered…
Find all the Panels here!

Connecting Cities: Urban Reflections – Light Parcours 
During ‘Connecting Cities: Urban Reflections’ Public Art Lab will transform the shop windows in the Brunnenstrasse into interactive light and projection windows. Connecting Cities will make the neighborhood shine! The Connecting Cities projects Occupy the Screen, Smart Citizen Sentiment Dashboard, Urban Alphabets and Human Beeing will be shown, as well as works from Lichtpiraten, a performance by Scott Sinclair vs. ESOC, a cooperation project of Public Art Lab and Quartiersmanagement Brunnenstraße with the students of Leuphana University Lüneburg and the Bauhaus-Universität Weimar, and a video programme by Screen City and Videospread.

Find the programme here or download the programme leaflet here!


Picture: © Public Art Lab

Event # 7 Staro Riga 2014 @ Riga

12 – 13 September 2014, Esplanāde 2014 Cultural Chalet, Riga, Latvia
 
Two days and two interactive projects will unite Riga and Berlin.

Suse Miessner's project ‘Urban Alphabets’ is an interactive neighbourhood art project that invites people to create their own urban alphabet by capturing letters with a specially developed application while walking around the city. After the letters are collected in public space, the screens and projections will give access to the database and invite the participants to write personalized postcards to the other connected cities.

Charlotte Gould and Paul Sermon's project ‘Occupy the Screen’ is an interactive telepresent public video installation designed for site-specific impromptu performance and user interaction. It will connect two cities and the people will meet on the screen as a third space. 
 
Picture: © Paul Sermon

Connecting Cities Event #8 @ Medialab-Prado, Madrid

 26 – 27 September: Medialab-Prado, Madrid, Plaza de las Letras

Bees, plants, urban alphabets and many other elements of the city come to life on the digital facade of the Medialab-Prado. From 26 – 27 September the facade located at the Plaza de las Letras becomes an interactive canvas!

The projects Urban Alphabets (Suse Miessner), Human Beeing (The Constitute), Telepuppet.tv (Ali Momeni & Nima Dehghani), Organic Cinema (World Wilder Lab) and n'UNDO (n’UNDO organización) will be shown.
More Information here!

Picture: © n’UNDO organización

Connecting Cities Event #9 Nuit Blanche@ iMAL Brussels

10 October 2014, Rue Marché aux Herbes & Rue Saint-Pierre, Brussels, Belgium
 
In the framework of the Participatory City 2014, the Quinzaine Numérique and during the Nuit Blanche Bruxelles, iMAL presents telepuppet.tv by Ali Momeni & Nima Dehghani (USA/Iran).
 
Join the Nuit Blanche Brussels! Telepuppet.tv is a crowd-sourced storytelling platform that combines augmented puppetry with urban projection performance. Traveling through the streets of the city centre, artists Ali Momeni & Nima Dehghani (USA/Iran) will project videos filmed this summer, in Iran, by augmented puppets. A way to share experiences of immigration across time and space on our planet.
 
In the framework of Connecting Cities 2014: Participatory City, Telepuppet.tv will also be presented in Madrid (Medialab-Prado, 25 - 26.09) and Liverpool (FACT)!

Picture: © Ali Momeni & Nima Dehghani 

CALL FOR PROPOSAL FOR THE VISIBLE CITY 2015

Deadline: 31 October 2014

BERLIN - BRUSSELS - HELSINKI - LINZ -LIVERPOOL - MADRID - MARSEILLE - MONTREAL - SAO PAULO - ZAGREB

Our modern cities are hybrid structures in which technology is invisibly interwoven into the perceptual layers of our everyday lives. With the curatorial theme of InVISIBLE and VISIBLE Cities we want to develop an awareness of the changes that are hardly visible to the eye yet underlie today’s cities.

Please submit your project proposal by 31 October 2014 here.

Picture: © Public Art Lab

RECAP Connecting Cities Event #5 @ Linz

C … what it takes to change, Ars Electronica
4 - 8 September 2014, Linz, Austria
8 September, Connecting Cities Workshop


From 4 - 8 September the Connecting Cities artists Moritz Behrens & Nina Valkanova showed their project Smart Citizen Sentiment Dashboard (SCSD) on the facade of the Ars Electronica building. Visitors and citizens could vote on how happy they are with, for example, the mobility in the city of Linz. Ars Electronica Futurelab presented their project Entangled Sparks with a series of workshops and presentations on the Ars Electronica building, where every visitor could access one pixel of the facade.
Have a look at the photos here!

Picture: © Public Art Lab

RECAP: Connecting Cities Workshop: Design Fiction and Narrative Prototyping @ 403 Art Center, Wuhan, China

31 July - 09 August 2014, Summer Festival - Back to the Future
31 July - 09 August 2014 Workshop

 
DESIGN FICTION & NARRATIVE PROTOTYPING
Connecting Cities with the tools of the future

In July the first Connecting Cities workshop in China, organized by Marc Piesbergen (CC manager China) and the 403 Art Center Wuhan, was held by Christian Zöllner (The Constitute) and Julian Adenauer (Sonice Development).

Nearly 40 students from Huazhong University of Science and Technology, Hubei Institute of Fine Arts and Wuhan University of Science and Technology participated. The 9 student groups each designed their own device for the future sci-fi world.
The results of the intense two-week Connecting Cities workshop will be published soon on www.connectingcities.net.

Picture: © The Constitute

Call for outstanding Media Architecture

Are you a student? Would you like to receive a travel scholarship for the Media Architecture Biennale 2014?

Then sign up for the MAB24H student design competition. Commencing on October 1 at 14:00 (CET), teams will have 24 hours to create a design, write a paper, maintain a blog and produce a short video. The winning team is invited to present their conceptual work at the biennale and will receive a travel grant to the amount of EUR 1,000.

Connecting Cities is a European and worldwide expanding network aiming to build up a connected infrastructure of media facades, urban screens, projection sites and mobile units to circulate artistic and social content. More information on www.connectingcities.net

wearable x-osc biometric prototype; observer / observed problem, and characteristic time

[ Synthesis:  This recent thread recaps a discussion about what sensors are sensing, the observer / observed problem, and characteristic time.  - Xin Wei ]

From: Adrian Freed [mailto:adrian@adrianfreed.com]
Sent: Friday, August 15, 2014 8:28 AM
To: Vangelis L; marientina.gotsis@gmail.com; John MacCallum; Sha Xin Wei

hi, Vangelis, Marientina

John and Teoma may bring this box of goodies down to show you. It is a quick prototype for them to experiment with, to help them figure out what they need for their IRCAM project. It has an x-OSC with IMU, an Analog Devices 2-lead EKG chip, and inputs for a handmade respiration sensor based on Eeonyx fabrics and an ear-clip pulse sensor (https://www.sparkfun.com/products/11574).

Obviously I would build something more substantial for regular use but this should suffice for building the signal processing and evaluating the sensors.

Incidentally, Marientina, it occurs to me that an ear-lobe pulse sensor has a lot of potential for the large-scale walking meditation experiments you discussed. It gives a muscle-noise-free pulse signal, and somebody must have created a BLE earring by now? Intel is building this kind of sensor into earbuds: http://www.sfgate.com/technology/article/50-Cent-Intel-team-on-heart-beat-headphones-5690650.php


==================================================

On Aug 18, 2014, at 3:33 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Motivated by the same “curiously skeptical” judgment, I decided from the get-go that the TML would avoid “physiological” sensors.
The practical ethical problems far exceeded any artistic or scientific or pedagogical contribution I could imagine us making.   In fact, the main pedagogical contribution was and is Adrian’s observation.

On a different, related matter:

As for CO2 or other gas sensing — I advised a former colleague who was just getting into that in one of his installation pieces that there are a lot of molecules out there in a typical room, and the room’s CO2 levels don’t change all that fast due to breathing bodies, even if you stuff a bunch of them onto pallets in a room and make them watch pseudo-mystic videos.  The characteristic time of changes from such aggregate sensing is much longer than the characteristic time of a human twiddling thumbs waiting for something to happen.   (I think the statement is true whether thumbs are twiddled mentally or with physical tendons.)    And in fact, it was so.
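The claim is easy to sanity-check with arithmetic. The room volume and per-person exhalation rate below are order-of-magnitude assumptions, not measurements:

```python
# Back-of-envelope check of the characteristic-time claim above.
# Rough assumptions: a resting adult exhales ~15 L of CO2 per hour,
# and a mid-size studio room holds ~150 m^3 of air.
room_m3 = 150.0
people = 10
co2_L_per_person_hour = 15.0

# With no ventilation, the CO2 concentration rises roughly linearly:
# ppm = (litres of CO2) / (litres of air) * 1e6, and 1 m^3 = 1000 L.
rise_ppm_per_hour = people * co2_L_per_person_hour * 1e6 / (room_m3 * 1000)
rise_ppm_per_minute = rise_ppm_per_hour / 60

print(rise_ppm_per_hour)    # → 1000.0 ppm per hour
print(rise_ppm_per_minute)  # ≈ 16.7 ppm per minute
```

So in the seconds a visitor is willing to wait, the level shifts by a few ppm, around the noise floor of typical NDIR CO2 sensors: the characteristic time of the room is minutes-to-hours, not seconds.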

How can young artists get a feel for material experiment?

Xin Wei


==================================================

On Aug 18, 2014, at 5:05 AM, Adrian Freed <adrian@adrianfreed.com> wrote:


On Aug 18, 2014, at 3:33 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

The characteristic time of changes from such aggregate sensing is much longer than the characteristic time of a human twiddling thumbs waiting for something to happen.   (I think the statement is true whether thumbs are twiddled mentally or with physical tendons.)    And in fact, it was so.

Yes, this notion of characteristic time is a good one and raises some interesting questions philosophers have looked at such as whether there are kinds of perception and knowings (consciousness) that people (and larger things such as biospheres and the universe) can have that operate over much longer or much shorter time frames
than we are commonly familiar with.

How can young artists get a feel for material experiment?

I believe John and Teoma are planning to do this by producing events that experimentally coarticulate the materialities of performer bodies (dance, musicians)
and the materialities of sound (yes, I am rejecting important arguments for the immateriality of sound and music). The challenge for them is how
to notice interesting results through the blizzard of ungrounded hermeneutic noise, wearing the seductive rose glasses of technique? This noise is what the use of biosensors produces. Technique frames everything by its regimes of discipline and control. Material agency is thus able to slip unnoticed out of the scene
and go down the road for a good drink at the local pub.


==================================================

On Aug 18, 2014, at 5:53 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Seriously, how do we think techniques of observation together
with techniques of performance?  I know it may be confusing to use that pair of terms -- observation and performance...

We need a better vocabulary that retains some of the mechanisms of entanglement from quantum mechanics, but not this dualism.

Xin Wei


==================================================

On Aug 18, 2014, at 7:00 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

Teoma has done a nice job in this piece we just finished recording ("X") of expressively challenging that dualism with numerous
ambiguous framings and reframings of the gaze - a productive place to confront the problems of observation/performance.

How do we move past this stage of exploring and celebrating this difficulty?

All I have to offer so far on the "better vocabulary" front is to account for the fragilities of intersubjectivity
(the process that coproduces performance and observation) using the metaphor of a contract. I haven't unpacked this much
in solid writings yet.


==================================================

On Aug 23, 2014, at 11:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

I would add that the observation/performance pair problems are connected to the problems of signal/noise - both dependent
on POV and preschema. Another tactic I have started to explore is the material agency of "lenses" (or filters as lenses are framed in the signal processing literature). This points to bringing in the material aspects of intersubjectivity - one of the key conundrums of quantum theory that has had to invoke a lot of magic around the macroscopic and microscopic properties
of "apparatus" to keep the rest of the theory coherent.


==================================================

On Aug 23, 2014, at 12:02 PM, Adrian Freed <adrian@cnmat.berkeley.edu> wrote:

Vangelis has been tracking the ready-to-wear IMU space more carefully than I.
I am hoping IMUs are a temporary bootstrap and that we will have less encumbering techniques with
absolute position measurements, such as the upcoming Sixense STEM system.

My fear is that we will be surrounded by even cheaper, slower, uncalibratable IMUs before the situation
improves substantially.

Keep an eye out for the next-gen x-OSC with a built-in charger and better IMU.

==================================================

Jonathan Sterne on materiality

Hi

Dehlia — As a tangent from your tangent about the material turn, here’s Jonathan Sterne’s essay, “What Do We Want?” “Materiality!” “When Do We Want It?” “Now!”  ( http://sterneworks.org/Sterne--Materiality.pdf )

Maybe we could use it as a survey in a course on technology someday...

Ed and David — in the same essay, as an aside, Jonathan makes some comments about forms of writing (e.g. the book, the journal article) and events (e.g. the seminar, or the music lesson):

"Geoff Bowker’s materialist and somewhat scary analysis of our own situation in the production of knowledge is quite telling. As he surveys the ever-growing glut of journal articles, each of which has a smaller and smaller audience, he sees: “We are clearly not creating a species of knowledge-power appropriate to the issues that we face. We are producing knowledge that is predicated on and replicates mass production and mass consumption. Our information infrastructure, willy-nilly, is the fold in the Moebius strip that permits the world to seem as society writ large” (chapter 5, this book). The declining relevance of the journal article as the materialization of scholarly knowledge, and the uncertain struggle to find alternatives, demands a certain patience, since if there is a new form of knowledge coming, it hasn’t yet arrived. Bowker finds some hope in massive collaborations and new database logics. For my part, I retain some confidence in the resiliency of both the essay form and the codex, which have thrived for hundreds of years. Meanwhile, the journal article seems to undergo transformations every two or three decades.

Boczkowski and Siles turn more hopefully to pedagogy as a solution, getting students to work across disciplinary categories. If I still believe in the book and the essay, I still believe in the seminar even more. I am experimenting with disallowing rehearsals of “technological vs. cultural determinism” arguments in my classes and exams. It’s harder than it sounds, especially when the rhetoric of techno-utopianism is alive and well in the commercial world and still operates in the truth spaces of journalism and online discussion. It’s also difficult given how much this comes up in cultural analyses of technology of whatever stripe. But if we want to get beyond the argument, our students stand a better chance of succeeding than we do, so it’s up to us to stop trying to reproduce it, even as a historical curiosity. At the graduate level, my seminar on the historiography of new media in winter 2013 takes Boczkowski’s approach to the extreme, though my model is less the social scientific diagram (with its quadrants) than the record collection with its eclecticism. Students will select the topic of their semester’s research at the beginning of the term and each week retrieve a primary source relevant to it. Each week, they will also read a distinctive work of media historiography (mostly books, since that is still the core traffic in the field). They will then write about their artifact in the style of the author, which requires them to determine what the important stylistic aspects of the work really are. At the end of the term, the students can then revise these short papers into something longer, synthesized into something approaching their own authorial style.
The approach is meant to encourage openness to other ways of writing and thinking, to free students of the pressure to take positions as their own against the positions of others, and to challenge them to reverse-engineer the work of other scholars so that they get a better sense of what’s actually involved in the interface between writing and thought. The pedagogy imposes some strict limits and demands for imitation (at first) to encourage creativity by freeing students of the demand for creativity in the places we usually look for it (choice of object, originality of voice, etc). It is drawn from how musicians learn their instruments: when I wanted to learn to play a good bass line, my teachers had me learn to imitate what the best bassists did. I either succeeded and incorporated their techniques with my own, or failed and came up with something original-sounding in the process.”

pp 126-127


__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2846
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Fwd: wearable x-osc biometric prototype (Was: Wireless sensor networks)

Hey everyone,

I would like to invite John MacCallum and Teoma Naccarato to come workshop their project at iStage,
bringing the wearable  x-osc biometric prototype that Adrian built in Berkeley. 

When?  I don’t know.  It will be up to the local host here at ASU to determine if and when John and Teoma can come.    Ideally the host could be a combination of Dehlia, Chris Roberts, and Kristi … Or ?

Chris Ziegler will be busy with the lighting workshop — which I think should be scheduled for late October because that’s when Omar is available, and that gives us more time to prep for the ASU-wide event focused on AME and Synthesis, assisted by the Deans and OKED.

Xin Wei

PS Dehlia, Chris, Kristi: I’m going to start feeding you the research chatter for this OTHER and older stream of research at SYNTHESIS: movement, improvisation and responsive environments. This will expose cutting-edge work on movement by a very accomplished set of collaborators in TML Montreal, CNMAT Berkeley, USC Los Angeles, IRCAM Paris, and AME. There’s a lot to absorb. I suggest that you, and everyone, cc chatter worth re-reading by teammates in subsequent years to post@synthesis.posthaven.com






Begin forwarded message:

From: Adrian Freed <adrian@adrianfreed.com>
Subject: wearable x-osc biometric prototype
Date: August 15, 2014 at 8:28:02 AM MST

hi, Vangelis, Marientina

John and Teoma may bring this box of goodies down to show you. It is a quick prototype for them to experiment with, to help them figure out what they need for their IRCAM project. It has an x-OSC with IMU, an Analog Devices 2-lead EKG chip, and inputs for a handmade respiration sensor based on Eeonyx fabrics and an ear-clip pulse sensor (https://www.sparkfun.com/products/11574).

Obviously I would build something more substantial for regular use but this should suffice for building the signal processing and evaluating the sensors.
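For "building the signal processing," step zero is just getting the sensor stream into software. The x-OSC sends its readings as OSC messages over UDP; as a rough sketch of receiving them with nothing but the standard library (the address pattern `/imu` and port 9000 below are placeholders I chose for illustration, not the device's actual defaults), a minimal parser might look like:

```python
import socket
import struct

def _pad4(i):
    # OSC strings are null-terminated and padded out to a 4-byte boundary;
    # because fields are 4-byte aligned, this works on absolute offsets too.
    return (i + 4) & ~3

def parse_osc(packet):
    """Parse a single OSC message into (address, [arguments]).

    Handles only float32 ('f') and int32 ('i') arguments, which is
    enough for a typical sensor stream.
    """
    end = packet.index(b"\x00")
    address = packet[:end].decode("ascii")
    i = _pad4(end)
    end = packet.index(b"\x00", i)
    typetags = packet[i:end].decode("ascii")   # e.g. ",fff"
    i = _pad4(end)
    args = []
    for tag in typetags.lstrip(","):
        if tag == "f":
            args.append(struct.unpack(">f", packet[i:i + 4])[0])
            i += 4
        elif tag == "i":
            args.append(struct.unpack(">i", packet[i:i + 4])[0])
            i += 4
    return address, args

def listen(port=9000):
    # Bind a UDP socket and print each decoded message as it arrives.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        address, args = parse_osc(sock.recv(1024))
        print(address, args)
```

In practice one would use an existing OSC library (or Max/MSP's `udpreceive`); the point is only that the wire format is simple enough that nothing stands between the sensors and whatever analysis we want to run.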

Incidentally, Marientina, it occurs to me that an ear-lobe pulse sensor has a lot of potential for the large-scale walking meditation experiments you discussed. It gives a muscle-noise-free pulse signal, and somebody must have created a BLE earring by now? Intel is building this kind of sensor into earbuds: http://www.sfgate.com/technology/article/50-Cent-Intel-team-on-heart-beat-headphones-5690650.php
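A side note on what "muscle-noise free" buys you: with a clean pulse waveform, heart rate falls out of simple threshold-crossing beat detection. A minimal illustrative sketch, not any particular sensor's algorithm; the threshold fraction and refractory period here are guesses one would tune against real data:

```python
def heart_rate_bpm(samples, fs, refractory_s=0.35):
    """Estimate heart rate by counting upward threshold crossings.

    samples: raw pulse waveform values; fs: sampling rate in Hz.
    A refractory period suppresses double-counting within one beat.
    Returns beats per minute, or None if too few beats were found.
    """
    lo, hi = min(samples), max(samples)
    thresh = lo + 0.6 * (hi - lo)           # adaptive threshold on signal range
    refractory = int(refractory_s * fs)
    beats, last = [], -refractory
    for i in range(1, len(samples)):
        if samples[i - 1] < thresh <= samples[i] and i - last >= refractory:
            beats.append(i)
            last = i
    if len(beats) < 2:
        return None
    # Mean inter-beat interval in seconds, averaged across all detected beats.
    mean_ibi = (beats[-1] - beats[0]) / (len(beats) - 1) / fs
    return 60.0 / mean_ibi
```

On a noisy chest or wrist signal this naive detector would misfire; the appeal of the ear-clip is precisely that something this simple becomes adequate.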
                                         

Synthesis lighting research cluster / responsive environments

Dear Chris, Omar,

In the responsive environments research area:

Let’s start gathering our notes into a Posthaven — for now use 

Kristi can help summarize once a fortnight or so...









Re: good comparison of IMU's and sensor fusion source

Cool, thanks. Last year Adrian suggested a ready-to-wear IMU that went for ~$200–$250. Can’t recall the make. Xin Wei

On Aug 22, 2014, at 7:11 PM, Vangelis Lympouridis <vl_artcode@yahoo.com> wrote:

That's great! Thanks a lot Adrian.

Vangelis Lympouridis, PhD Visiting Scholar, School of Cinematic Arts University of Southern California

Senior Research Consultant, Creative Media & Behavioral Health Center University of Southern California http://cmbhc.usc.edu

Whole Body Interaction Designer www.inter-axions.com

vangelis@lympouridis.gr Tel: +1 (415) 706-2638

-----Original Message-----
From: Adrian Freed [mailto:adrian@cnmat.berkeley.edu]
Sent: Friday, August 22, 2014 10:47 AM
To: Xin Wei Sha; Vangelis L
Cc: John MacCallum
Subject: good comparison of IMU's and sensor fusion source

https://github.com/kriswiner/MPU-6050/wiki/Affordable-9-DoF-Sensor-Fusion
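For anyone new to the topic: "sensor fusion" here means combining the gyro's fast-but-drifting angle estimate with the accelerometer's noisy-but-absolute one. The linked wiki deals with full 9-DoF quaternion filters; the one-axis toy version is a complementary filter, sketched below to show the idea (axis and sign conventions vary with how the IMU is mounted, and alpha would be tuned per device):

```python
import math

def complementary_pitch(accel, gyro, dt, alpha=0.98):
    """Fuse accelerometer and gyro streams into a pitch-angle estimate.

    accel: sequence of (ax, ay, az) readings in g.
    gyro:  sequence of pitch rates in deg/s, one per accel sample.
    dt:    sample period in seconds.

    The gyro integral tracks fast motion; the accelerometer tilt angle
    slowly corrects the integral's drift. alpha sets the crossover.
    Returns the list of pitch estimates in degrees, one per sample.
    """
    pitch = 0.0
    out = []
    for (ax, ay, az), rate in zip(accel, gyro):
        # Tilt implied by gravity alone (valid when the sensor is not accelerating).
        accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
        # Blend the integrated gyro rate with the accelerometer reference.
        pitch = alpha * (pitch + rate * dt) + (1.0 - alpha) * accel_pitch
        out.append(pitch)
    return out
```

Madgwick- and Mahony-style filters generalize this blend to full 3D orientation with magnetometer correction, which is why the 9-DoF comparisons in the wiki matter for serious movement work.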

Wireless sensor networks

FYI see Adrian’s response re McGill's Sensestage miniBee gadgets.

I know the guys who did the SenseStage work at McGill from Marcelo’s lab.  
They were nice folks, but not the best, and the research application was misguided.
This device was not useful to advance movement / gesture research at the TML.

To check my own assessment, I asked Adrian.

If you want to buy them on your own research funds, chacun à son goût :)
But I would prefer not to throw general AME or Synthesis money at buying such things
unless there’s a specific legacy need for a critical research project that will lead to a concrete outcome in the predictable future.

Otherwise, let me suggest that we track the state of the art with Adrian Freed <adrian@cnmat.berkeley.edu>
and Vangelis Lympouridis <vangelis@lympouridis.gr> in USC
and get the best devices for the job, under cost and time constraints, just when we need them.

Cheers,
Xin Wei



Begin forwarded message:

Subject: RE: Fwd: Wireless sensor networks
Date: August 22, 2014 at 7:07:02 PM MST
To: "Sha Xin Wei" <shaxinwei@gmail.com>

I am sure they are good for something but I can't use them for various reasons.
They just aren't reliable enough unless the performers are out of reach of RF noise from the audience/ambient sources.

+ Slow, old ATmega CPU with too little memory
+ Old accelerometer instead of a full IMU

There are lots of smaller form-factor things in the works, like SparkCore and all the Bluetooth LE things coming out.
The problem is you have to look at the fully integrated size with battery, the additional sensors you actually want, the case, etc.
Small is 6 months away (BLE); small and fast enough for serious movement work is still a few years away.

Sixense is a company getting this right with stem:
http://www.sixensestore.com/stemsystem-2.aspx

-------- Original Message --------
Subject: Fwd: Wireless sensor networks
From: Sha Xin Wei <shaxinwei@gmail.com>
Date: Fri, August 22, 2014 3:36 pm
To: Adrian Freed <adrian@adrianfreed.com>


Are these XBees any good?   Would they be superseded by other common wireless microprocessors …?

We (at Synthesis and AME) are happy with the xOSC boards,
tho I do hope for a much smaller form factor.   

...
Xin Wei