HMM in Max

On Fri, Apr 24, 2015 at 5:12 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Where can we get the best publicly available HMM external for Max, as a general-purpose HMM package?

Should we extend or modify gf (which we have via an IRCAM license), and can we use it easily for non-audio data? People claim to have tried it on video.
It seems that the real work is the preliminary feature extraction, where a lot of the interpretation happens.
What are examples of code that do this in interesting ways?
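
To make the feature-extraction point concrete, here is a minimal sketch of the two-stage pipeline in Python rather than Max, using hmmlearn (an assumption; any general-purpose HMM library would do). The HMM itself is generic; the interpretive work is in choosing the feature vectors fed to it.

import numpy as np
from hmmlearn import hmm

def extract_features(positions, dt=1.0 / 30.0):
    """Turn a raw (T, 2) trace of x,y positions (e.g. from video tracking)
    into interpreted features: per-frame speed and turning rate."""
    vel = np.diff(positions, axis=0) / dt        # (T-1, 2) velocities
    speed = np.linalg.norm(vel, axis=1)          # scalar speed per frame
    heading = np.arctan2(vel[:, 1], vel[:, 0])   # direction of motion
    turn = np.diff(heading, prepend=heading[0])  # turning rate
    return np.column_stack([speed, turn])        # (T-1, 2) feature frames

# Train a 3-state Gaussian HMM on one recorded trace (a random walk stands
# in for real tracker data here)...
positions = np.cumsum(np.random.randn(300, 2), axis=0)
X = extract_features(positions)
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(X)

# ...then decode: per-frame hidden states, plus a log-likelihood that can
# be compared across models (one model per gesture class).
states = model.predict(X)
print(model.score(X), states[:10])

In practice each gesture class gets its own model, and a new trace is classified by whichever model scores it highest.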

Xin Wei

Navid Navab wrote:

While FTM is somewhat discontinued, all of this is being moved to IRCAM's free MuBu package:
download the package and quickly check some of their example patches.

It contains optimized algorithms building on gf, FTM, CataRT, PiPo, etc. While MuBu is audio-centric, it is not necessarily audio-specific: MuBu buffers can hold multiple data modalities and use a variety of correlation methods to move between those layers. This makes for a fairly complete platform, without the need to move back and forth between gf, FTM, concatenative synthesis instruments, multimodal data handling, analysis tools, and so on.

As with most current IRCAM releases, it is highly under-documented. Besides gf, which is distributed with the package, the mubu.hhmm object might be a good place to start for what you are looking for:


also their xmm object might be of interest:
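
Both mubu.hhmm and xmm are built for real-time following rather than offline classification: they update state probabilities frame by frame as the gesture unfolds. Here is a minimal sketch of that idea in Python, independent of the IRCAM code; the transition matrix and emission function are invented for illustration.

import numpy as np

class HMMFollower:
    """Frame-by-frame ('following') decoder: keeps normalized forward
    probabilities, so the likeliest state is available at every frame,
    not only after the gesture ends."""
    def __init__(self, trans, emit_fn):
        self.A = trans        # (N, N) state-transition matrix
        self.emit = emit_fn   # emit_fn(obs) -> (N,) observation likelihoods
        self.alpha = np.full(len(trans), 1.0 / len(trans))

    def step(self, obs):
        self.alpha = self.emit(obs) * (self.alpha @ self.A)
        self.alpha /= self.alpha.sum()   # renormalize to avoid underflow
        return int(np.argmax(self.alpha))

# Mostly left-to-right transitions, as in gesture following; the emission
# model is a toy Gaussian around per-state target values.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
follower = HMMFollower(A, lambda o: np.exp(-(o - np.array([0.0, 1.0, 2.0])) ** 2))
for frame in [0.1, 0.2, 0.9, 1.1, 1.9]:
    print(follower.step(frame))   # likeliest state, updated per frame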

Vienna symposium: "Textural Care and Natality: the Stuff of Publics," in Public Life - Towards a politics of care: Bodies, Place, Matter


________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________

Graduate Certificate in Critical Theory launched!

On behalf of Gregory Castle, Mark Lussier, and me:
Please join us to celebrate the launch of the Graduate Certificate in Critical Theory. This will be a chance to learn a bit about the certificate, to meet other theory-leaning faculty and grad students, and to find out what we've planned for the future. Please circulate this info to other faculty and grad students and/or relevant listservs. There are a lot of folks campus-wide working in this area; we want to extend theoretical and material hospitality to all.

Ron

Ron Broglio
Director of Graduate Studies
Department of English

Senior Scholar at the Global Institute of Sustainability
Provost Humanities Fellow

http://www.public.asu.edu/~rbroglio/



P.S. AME has a course in this certificate.

How to develop ECM (experiential climate model kit) to stage iMonsoon using Ozone in iStage. AKA, standing on the shoulders of giants.

Garrett, Connor, Pete, as well as Ozzie, Prashanth, Varsha: 

The iStage is waking up and growing exponentially more expressive, thanks to a LOT of work by a very special crew of people.

As strongly as possible, I am urging everyone here in or near the Synthesis core dev group to fully learn and use the TML codebase rather than hack your own solutions for any particular “demo”. This learning, more than any particular installation, is the most valuable point of the TML + Synthesis workshops. (This regards Synthesis research projects, not class or personal projects.)

For the core research projects, such as building the ECM (experiential climate model kit) to stage iMonsoon using Ozone in iStage, do not hack your own special-case solutions. Newbie media artists make too many special-case demos that only work once. We cannot beat the MIT Media Lab at that game, nor will we gain any respect from our research colleagues by playing it.

Now that you’re growing beyond your classroom training, a bit of advice on how researchers gain leverage in the real world: use prior generations' best work and give them credit. Given the toolkits that are out there, any realtime audio-video instrument or effect you imagine coding up has probably already been built. Not only is there probably a patch that does what you want already in the (extended) Ozone codebase or the wider world of OSC-enabled apps, but there is a well-crafted ensemble of utilities that fit together to achieve not only that function but a whole spectrum of rich effects.

At least please come talk with me before you go down a path that takes more than a couple of days to complete, and I’ll let you know if your sin’s original, as Tom Lehrer used to say. If not, I’ll see if I can point you to the archive. This is not a general offer; it is specially for core dev folks, so please take advantage of it. :)

Re. strategic learning (intellectual capacity building, as Chris and I would say), for example:

It would be great if Garrett could learn to exploit the ambisonic spatialization code to create his own atmospheric movement. (Might Mike K be a source of knowledge about ambisonics?)
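
For orientation, a minimal sketch of the first-order (B-format) math underneath any ambisonic toolkit, in Python; the actual spatialization code in the kit will be much richer than this.

import numpy as np

def encode_b_format(mono, azimuth, elevation=0.0):
    """First-order (FuMa) B-format encoding of a mono signal arriving from
    a given direction: one omni channel W plus three figure-of-eight
    channels X, Y, Z."""
    w = mono / np.sqrt(2.0)
    x = mono * np.cos(azimuth) * np.cos(elevation)
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    return w, x, y, z

def rotate_sound_field(x, y, angle):
    """Rotate the whole encoded field about the vertical axis. Sweeping
    `angle` slowly over time moves every source at once, which is one
    cheap route to atmospheric movement."""
    xr = x * np.cos(angle) - y * np.sin(angle)
    yr = x * np.sin(angle) + y * np.cos(angle)
    return xr, yr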

And it would be great for Connor to learn to exploit Michael Fortin's particle systems by (1) driving them with radically different force fields, e.g. from the sound field or from optical flow, and (2) doing some chemistry: interspecies reactions like combustion, or epoxy-like phase changes dependent on relative densities. (Dehlia Hannah also had this idea of phase change.)
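
A minimal sketch of idea (1), assuming OpenCV for the dense optical flow; everything else is plain NumPy and invented for illustration.

import numpy as np
import cv2  # OpenCV, for dense optical flow

def flow_forces(prev_gray, next_gray, positions, gain=1.0):
    """Dense Farneback optical flow between two grayscale camera frames,
    sampled at particle positions and used as a force field."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    xi = np.clip(positions[:, 0].astype(int), 0, flow.shape[1] - 1)
    yi = np.clip(positions[:, 1].astype(int), 0, flow.shape[0] - 1)
    return gain * flow[yi, xi]   # (N, 2): one 2-D force per particle

def step_particles(positions, velocities, forces, dt=1.0 / 30.0, drag=0.98):
    """Plain Euler integration with a little drag."""
    velocities = drag * velocities + forces * dt
    return positions + velocities * dt, velocities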

"If I have seen further it is by standing on the shoulders of giants.” — Isaac Newton (1676)

Looking forward to working with you!

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________

computational matter (Was: Evaporation in iStage?)

Great idea. I take this opportunity to engage the core dev team and apprentices.

That’s why for 15 years the TML has been researching poetic uses of computational physics.

M. Fortin, Interactive Simulation of Fluid Flow, Master's thesis (2011)
S. Stepney, "The neglected pillar of material computation," Physica D 237 (2008) 1157–1164

Some of this is incorporated into the Ozone kit but not yet exploited.

For years I've been trying to get the media artist-programmers to exploit the foundations for material computation built into Ozone and begin to create computational matter.

One of the first nontrivial phenomena I asked for was 
PHASE TRANSITION

Garth also raised this last fall for our first Synthesis residency, Improvisational Environments.

Another is
RESONANCE

A third is
MORTALITY

A fourth:
CHEMISTRY
(That’s why Michael Fortin implemented 4 species of particles in our particle kit,
as groundwork for chemical (not merely geometrical or kinematic) interaction between species,
defined magically, according to custom rules designed by the composer of the media environment; see the sketch below.)
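
A minimal sketch of such composer-defined chemistry over particle species, in Python; the rule table and threshold are invented for illustration, and a real kit would do the pairwise test on a spatial grid rather than in O(n^2).

import numpy as np

# Composer-defined rule table (all rules invented for illustration): when
# two particles of the right species come within `radius`, both change.
RULES = {(0, 1): 2,   # e.g. species 0 + species 1 -> species 2 ('combustion')
         (2, 3): 3}   # species 2 + species 3 -> species 3 ('extinguished')

def react(positions, species, radius=0.05):
    """O(n^2) pairwise check, mutating `species` in place."""
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < radius:
                pair = (int(species[i]), int(species[j]))
                product = RULES.get(pair, RULES.get(pair[::-1]))
                if product is not None:
                    species[i] = species[j] = product
    return species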

Our media programmers first have to reach a certain degree of mastery of the media just to achieve a decent look and feel. This work is virtuosic, meaning something they can tackle after that mastery, in order to go beyond using the software to make spectacular effects (images and sounds).

This is worth a summer project
(i.e. programming the AL-chemical behavior in Jitter + C, not merely faking it by making an image of the process).

On Apr 8, 2015, at 12:56 PM, Dehlia Hannah <dhannah1@asu.edu> wrote:

It would be great if we could model (or suggest) evaporation/condensation/evapotranspiration.... Maybe Chris Z's installation idea could be relevant here.

Ozone LECE instruments post BuB review

It was fun to do the Atmosphere panel yesterday.   Let’s review the instruments…   

My main comment is that instead of creating custom code for a specific state/scenario that displaces the other instruments, we need to make the suite of Ozone instruments available in parallel as well. The way to do this is to keep learning the suite of Ozone instruments, a fairly small number (5 or so), all resident in memory on the trashcan, where each can be varied widely (and wildly) in look and feel by parameterization and presets. Rather than code from scratch, let's see what you can do by modifying parameters of the rich suite of code that already exists.
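
To illustrate what "varied widely by parameterization and presets" can mean operationally, here is a minimal sketch that pushes one named preset into a running patch over OSC, using python-osc; every address and value is hypothetical, not the real Ozone namespace.

from pythonosc.udp_client import SimpleUDPClient  # python-osc

# Hypothetical preset: one OSC address per parameter. The real Ozone
# parameter namespace will differ; only the pattern is the point.
MONSOON_PRESET = {
    "/ozone/particles/count": 20000,
    "/ozone/particles/gravity": -0.2,
    "/ozone/navier/viscosity": 0.001,
    "/ozone/timespace/delay": 0.75,
}

# A Max patch listening on this port (e.g. via [udpreceive 7400]) would
# route each address to the matching instrument parameter.
client = SimpleUDPClient("127.0.0.1", 7400)
for address, value in MONSOON_PRESET.items():
    client.send_message(address, value)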

We’ll need the full suite for rehearsals this week, because the Dean has invited the President and Provost for April 7-9. A visit may or may not happen that week, but that is the earliest date by which we need to be able to demo the full spectrum of Ozone++, including timespace and elastic.

Ideally the instruments would be addressable under Mira interface tabs on multiple iPads, say 3 (ask Sylvia for the iPad mini so we can install code on that one for me), so that different people can walk the floor simultaneously, varying different layers.

Visuals:
The visuals from yesterday were OK, but far too sparse, simple, and specialized to be adequate for the VIP shows to come. We need the visuals to be cross-fadable with the older (and visually richer) instruments:
timespace, 
particles, 
Navier-Stokes, 
elastic time,
live portal feed,
canned feeds (in a jit.submatrix + jit.rota),
vectorfield

We must be able to layer in the other instruments. Connor: this was already written in Ozone, so I’d like you to integrate your particular Jitter instruments into the framework that Evan left behind.

Project line art. Most of the textures do not work very well for demo purposes, and they never appear as clearly as line art. (Connor: I’ll send the simple vectorfield patch separately so you can incorporate it into our kit of video instruments as a basic utility. It would be worthwhile making a particle system that’s driven by the vector field without Navier-Stokes, so that the particles flow unbound; see the sketch below.)
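
A minimal sketch of that unbound flow in Python, with an invented field: particles follow a hand-designed vector field directly, with no fluid solve and no boundary.

import numpy as np

def vortex_field(p):
    """A hand-designed vector field: a vortex plus a slow drift. No fluid
    solve, hence no grid and no boundary."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([-y + 0.1, x])

def advect(positions, dt=1.0 / 30.0):
    """Particles simply follow the field each frame, so they flow unbound
    instead of recirculating inside a solver grid."""
    return positions + vortex_field(positions) * dt

positions = np.random.rand(1000, 2)   # stand-in initial positions
for _ in range(100):
    positions = advect(positions)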

Sound:
Ditto. This is in a better state of playability. The main needs are:
(1) True spatialization. Julian and Navid had this, so it must be exposed. Don’t be shy about asking Mike, or the Concordia, CNMAT, or IRCAM folks, for help. The rich ambisonics recordings are pretty hard to appreciate mixed down as they are.

(2) Much clearer relation to movement (and location). For ideation purposes, let’s put in a sonic reticulation; i.e.,
Julian’s jitgrid2snd,
Navid’s wind,
Garrett's tuned scales
are the most successful. Ask Todd for the percussive code, or write it based on Julian’s rhythm.

Garrett has a great idea: writing a separate Mira tab for rhythm flowing from instrument to instrument, cross-modally:
lights
fans
video
sound
Each of these instruments should be able to register an i/o rhythm_type, which is an FTM matrix à la Julian’s rhythm kit; a sketch of the wire-level idea follows.
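
A minimal sketch of broadcasting one rhythm matrix to all four modalities over OSC, using python-osc; the flattened row format, addresses, and ports are all invented for illustration (the FTM representation in Max is richer).

from pythonosc.udp_client import SimpleUDPClient  # python-osc

# One rhythm 'matrix', rows = (onset time, duration, weight), flattened
# into a list for the OSC message.
rhythm = [0.00, 0.25, 1.0,
          0.50, 0.25, 0.6,
          0.75, 0.25, 0.8]

# Hypothetical endpoints: each modality registers the same rhythm input.
ENDPOINTS = {"lights": 7401, "fans": 7402, "video": 7403, "sound": 7404}

for name, port in ENDPOINTS.items():
    client = SimpleUDPClient("127.0.0.1", port)
    client.send_message("/" + name + "/rhythm", rhythm)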

Connor: you should run the default sound instrument patches directly on your own visual computer, so that they run with sound even if they have to play out through your own local computer speakers. All video instruments must make (local) sound by default, if only as an experiential design aid. This means that the video computer should have its own audio out to the sound mixer, etc.
Lighting:
All lights need to be addressed under a uniform interface.
Right now the floor lamps, overhead LEDs, and fans seem separate. Even if the device-level interfaces are distinct, they should be addressable by a uniform channel map; see the sketch below.
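
A minimal sketch of such a channel map in Python; the device names and writer functions are stand-ins (the real drivers would be DMX, relay boards, etc.), and only the indirection is the point.

def set_dmx(universe, channel, value):
    """Stand-in for a real DMX driver call."""
    print(f"DMX u{universe} ch{channel} <- {value}")

def set_relay(board, pin, on):
    """Stand-in for the fan relay board."""
    print(f"relay {board}/{pin} <- {on}")

# Logical channels on the left, device-specific writers on the right; the
# caller always sends 0.0-1.0, whatever the hardware needs underneath.
CHANNEL_MAP = {
    "floor_lamp_1": lambda v: set_dmx(0, 1, int(v * 255)),
    "overhead_led_3": lambda v: set_dmx(0, 12, int(v * 255)),
    "fan_north": lambda v: set_relay(0, 2, v > 0.5),
}

def set_channel(name, value):
    """One interface for every device, whatever its transport."""
    CHANNEL_MAP[name](value)

set_channel("fan_north", 1.0)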

I’m not sure how to treat the iCue and Byron’s motorized mount.  I’m inclined to treat them as a new class of Ozone instruments: bodies.
For now these bodies are merely objects made of light. They should take a parameter of type rhythm to introduce warbles in time (i.e. speed) and space (displacement).

The fans can be part of yet another new class of Ozone instruments : field.

Nikolaos Chandolias: choreographer Dimitris Papaioannou, Still Life

From Nikolaos Chandolias <nikos.chandolias@gmail.com>:

I just thought of these projects, which are not necessarily concerned with climate change, but whose techniques could be re-appropriated in an artistic context to help us think about climate change.

Angela Morelli is an Italian information and graphic designer based in London. She became known through her TEDxOslo talk on The Global Water Footprint of Humanity, where she shared her research into the water crisis and eloquently illustrated the value of information design for communicating global issues: http://www.angelamorelli.com/water/

The choreographer Dimitris Papaioannou, in his performance titled STILL LIFE, had a nylon sheet hanging from the ceiling that could be raised or lowered in the performance space, which was filled with haze, creating the sense of a cloud. The performers were able to manipulate it with different props or their hands. I was thinking that a motorised movable nylon ceiling with some fans and some haze could be very suitable for the iMonsoon event. Please find attached some stunning images:




Finally, the water-activated art by Peregrine Church. He decorates public spaces with graffiti that only appears in the rain. The trick is achieved through copious use of superhydrophobic materials: in effect, the painted parts of the sidewalk remain dry. The material can be bought in Home Depot stores, is called NeverWet, and is fairly cheap. This could also be an analog prop, used in relation to the digital media in the space to create interesting explorations in the theme of climate.


Cheers,
Nikos :)