On the hw wireless sensor side of the story —
For the work reported in Ubicomp and ISWC 2003, TML @ GaTech (Atlanta) pioneered use of TinyOS platforms in two form factors (the size of an xosc, and of a US quarter!).
We had a crack team — Giovanni Iachello, Steven Dow (now prof at CMU, a friend to our incoming prof Stacey Kuznetsov :), and Yoichiro Serita (from Sony Labs).
There followed 10 more years of hackers in the movement art + tech world building their own naive solutions,
reaching neither the interesting art nor the significant problems to be solved in EE, which lay beyond their scientific judgment.*
The good news is that good-enough on-body IMUs are affordable to AME. So Pavan and Ozzie are going to get us a set (soon!)
I am in a hurry though to form a crack team to tackle the actual research challenges of:
(1) understanding non-sonic rhythm as an example of apperception, and
(2) scaffolding different senses of resonating temporal texture, using for example spectral analogies that can generalize from classical DSP to higher-dimensional time-varying fields.
(This is not as deep as it sounds; it should be amenable to smart engineering, of which ASU has an abundance. I don't know if it's an area with lots of ready-made techniques; some expert needs to tell us. But nothing should stop us from doing the first step ourselves: implement the Aylward-Paradiso analysis in Max/MSP and run multiple time series through it, from whatever sensors give adequate fps, capturing interestingly rich movement (Chris Ziegler + students; maybe with our Visiting Artist friends from Montreal and Copenhagen). See the sketch below for the core computation.)
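For concreteness, here is a minimal sketch of that windowed cross-covariance step, in Python rather than Max/MSP just so it fits in an email. The window, hop, and max-lag sizes are placeholders to tune against our sensor fps, and the function name is mine, not from the paper:

import numpy as np

def windowed_xcov(a, b, win=128, hop=32, max_lag=64):
    """Sliding-window cross-covariance of two equally sampled streams.

    Zero-means each window, cross-correlates, and reports the lag
    (in samples) with the largest |cross-covariance|, plus a roughly
    normalized peak value as a crude coupling strength.
    """
    lags, strengths = [], []
    for start in range(0, len(a) - win + 1, hop):
        wa = a[start:start + win] - a[start:start + win].mean()
        wb = b[start:start + win] - b[start:start + win].mean()
        xc = np.correlate(wa, wb, mode="full")     # lags -(win-1) .. win-1
        mid = win - 1                              # index of zero lag
        xc = xc[mid - max_lag:mid + max_lag + 1]   # keep lags within +/- max_lag
        k = int(np.abs(xc).argmax())
        lags.append(k - max_lag)
        strengths.append(xc[k] / (win * wa.std() * wb.std() + 1e-12))
    return np.array(lags), np.array(strengths)

# toy check: b is a noisy copy of a, delayed by 15 samples
rng = np.random.default_rng(0)
a = rng.standard_normal(2000)
b = np.roll(a, 15) + 0.05 * rng.standard_normal(2000)
lags, strengths = windowed_xcov(a, b)
print(lags[:8], strengths[:8])   # lags ~ -15 under numpy's correlate convention

The same loop generalizes to several dancers by running it over every pair of streams, which is how the time lags between performers would fall out.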
Thanks for the great reference, Mike!
(Can someone show me that MSP external connected with Adrian and John's oriented normal odot kit, so we can advance the temporality research?)
Political-Epistemological Rant:
AME and its cousins CIDSE, ECEE, SEMTE can get ahead of the shallow uses that artists and engineers have made of each other to date.
Political-Economic Rant:
The foundations in the EU and Quebec that funded us lucky bastards gave both too much and not enough funding:
so much $ that artists could hire their own students with just enough EE / CS knowledge to hack naive solutions;
not enough $$ to fund cohorts of grad students that could make it worthwhile for an EE / CS professor to dedicate 2-3 MA students over 6+ years of continuous trial-and-error projects in daily studio + bench work with movement artists who could be subsidized
to NOT produce productions for a significant % of that time.
On Oct 4, 2014, at 4:18 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Yes, this is a tool I have wanted since 2003, in the oz/math/ section of Ozone:
- with Y. Serita, J. Fantauzza, S. Dow, G. Iachello, V. Fiano, J. Berzowska, Y. Caravia, D. Nain, W. Reitberger, J. Fistre, "Demonstrations of expressive softwear and ambient media," Ubicomp 2003 (short PDF, video, long PDF). http://topologicalmedialab.net/xinwei/papers/texts/ubicomp/Sha_long_11.pdf
- with G. Iachello, S. Dow, Y. Serita, T. St. Julien, J. Fistre, "Continuous sensing of gesture for control of audio-visual media," ISWC (International Symposium on Wearable Computers), 2003. (PDF) http://www.gvu.gatech.edu/people/sha.xinwei/topologicalmedia/papers/ISWC03_full.pdf
We need all these conditions:
- robust hardware with lightweight battery (ASU has some good battery guys),
- high sensor-ensemble fps,
- low latency transmission,
- some maths like Aylward-Paradiso to play with in our Max toolkit.

What we would do in place of Paradiso's naive notion of music is to map to electroacoustic synthesis etc. In fact, if Julian or Mike or ... could point us to the Max external that implements cross-correlation (not auto-correlation), we could play with it right away on acoustic input and think about how to handle control rate data. I think there is one already in McGill or IRCAM's vector processing toolkits. If someone is interested, I'd be happy to work with him/her to implement this and map it more directly to organized sound (with the help of our sound artists) for rich feedback.

On Oct 4, 2014, at 3:50 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Hi,

Do you guys know this paper? They put gyroscopes on dancers and used realtime (windowed) cross-covariance to measure time lag between several dancers. I believe this is similar to what Xin Wei has in mind as part of studying temporality.

Mike
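To make Mike's reference concrete while we hunt for the external: below is a rough sketch of the full cross-correlation such an external has to compute, in Python rather than Max purely for legibility. The FFT formulation is standard DSP and is what keeps per-window cross-correlation cheap at audio rate; at control rate (IMU streams at roughly 100 fps), direct np.correlate on short windows is already fast enough. The function name and buffer sizes are placeholders, not from any McGill or IRCAM toolkit.

import numpy as np

def xcorr_fft(x, y):
    """Linear cross-correlation of two real 1-D signals via FFT.

    Equivalent to np.correlate(x, y, mode='full') but O(N log N),
    which is what matters for audio-rate windows. Output index i
    maps to lag i - (len(y) - 1); zero lag sits at index len(y) - 1.
    """
    n = len(x) + len(y) - 1
    nfft = 1 << (n - 1).bit_length()        # next power of two >= n
    X = np.fft.rfft(x, nfft)
    Y = np.fft.rfft(y, nfft)
    c = np.fft.irfft(X * np.conj(Y), nfft)  # circular cross-correlation
    # unwrap circular lags into linear order: -(len(y)-1) .. len(x)-1
    return np.concatenate([c[nfft - (len(y) - 1):], c[:len(x)]])

# sanity check against the direct method, and a lag read-off
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)               # stand-in for one audio window
y = np.roll(x, 100)                         # same signal, delayed 100 samples
c = xcorr_fft(x, y)
assert np.allclose(c, np.correlate(x, y, mode="full"))
lag = int(np.argmax(c)) - (len(y) - 1)      # ~ -100: y trails x
print(lag)

The lag read-off at the end is exactly the quantity the dancers' paper tracks per window; mapping that lag and peak strength to synthesis parameters is where our sound artists would take over.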