Ozone GitHub history: O4_ASU ≠ tgvu-code

Synthesis and TML should have a common archival repository of code,
distinct from the reduced and cleaned-up set, Synthesis/O4_ASU/, that runs the iStage and Lounge.

Unfortunately, I’m not sure of the name of the most complete archival version.

MaxLibraries_ASU/tgvu-code/
does not have TML work by key Ozone authors:
Michael Fortin
JS Rousseau
Tim Sutton

Evan was most recently the Ozone master; before him, Julian.

TML Ozone authors started publishing their own specialist branches, by medium × author, instead of making a synthesis architecture.
Julian and then Evan had overall systems-architecture synthesis as part of their brief. Ozone programmers focused on specific instrument kits but did not rework all the old functionality (e.g. tgvu and JS Rousseau’s extensive toolkit)
as needed to prep the codebase for use by the next generation of researchers.

Evan kindly uploaded the older tgvu-code codebase “as-is” — no warranty, implicit or explicit.
There are useful and clever instruments and externals, especially in Michael Fortin’s work, that need a programmer trained in C-level programming to unpack for us. Examples (see the hypothetical sketch below):
• Michael Fortin’s Python hook for custom forces in the state engine
• hooks for implementing chemistry between particles
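To make the first example concrete: the real hook API lives inside Fortin’s externals and is exactly what needs that C-literate programmer to dig out. Purely as a hypothetical illustration of the idea, a custom-force callback might look like this (every name below is an assumption, not the actual interface):

```python
# Hypothetical sketch only: the real hook API lives in Michael Fortin's externals.
# The idea is that the particle engine calls a user-supplied Python function and
# adds the returned force to each particle's integration step.

def custom_force(position, velocity, species, t):
    """Return an (fx, fy, fz) force for one particle.

    position, velocity: (x, y, z) tuples in world units
    species: integer species index (0-3 in the 4-species kit)
    t: current simulation time in seconds
    """
    x, y, z = position
    # Example: a gentle vortex around the vertical axis plus per-species buoyancy
    fx, fy, fz = -0.5 * y, 0.5 * x, 0.1 * (species - 1.5)
    return (fx, fy, fz)

# The engine would register the callback roughly like:
#   engine.set_force_hook(custom_force)   # hypothetical registration call
```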

History:

tgvu dates from the TML @ Georgia Institute of Technology GVU (formerly Graphics, Visualization and Usability): the principal author was Yoichiro Serita, with strong contributions from Erik Conrad, Delphine Nain, et al. (see TML papers from 2001-2005). Many of the functionality wheels have been re-invented and made a little more robust in later layers of Max, MXR, and now Jamoma.

However, there are key utilities for, e.g., the state engine, affine video remapping, sensor-sensitivity conditioning, and scatter computations. Julian and Evan were the last TMLabbers I know who spelunked through the archives.

The “t”  in “tgvu” comes from TGarden, the founding play space for the TML.

And “T” refers multiply to
transformation
topology
time
tea


On May 5, 2015, at 11:51 AM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

Greetings,

Can one of you give me a brief tutorial or point me to material that describes how the .mxt files in MaxLibraries_ASU/tgvu-code/visuals on GitHub are used? What does 'tgvu' stand for? There seem to be frequent changes to files in the tgvu-code directory.

Assegid Kidané
Engineer
School of Arts, Media and Engineering

Ozone video in Matthews Fishbowl: Timespace+Elastic time

Connor — or ideally a Jitter understudy — Janelle or one of the grads?

It would be good to run Timespace as well as Caroline’s loop in the Fishbowl.

Maybe via IL Y A logic: 
 Run Caroline’s loop on the glass until we enter the camera view, then fade to Timespace (in proportion to presence inside the room).
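A minimal sketch of that control logic, assuming presence is estimated from the room camera by frame differencing. The actual crossfade would live in the Jitter patch; the camera index and the threshold below are assumptions:

```python
# Compute a 0..1 fade weight from camera "presence" (frame differencing).
# 0 -> Caroline's loop on the glass, 1 -> Timespace.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # room camera (device index is an assumption)
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is None:
        prev = gray
        continue
    # Presence = fraction of pixels that changed noticeably since the last frame
    diff = cv2.absdiff(gray, prev)
    presence = float(np.mean(diff > 25))
    prev = gray
    # Map presence to a fade weight; 0.05 = "clearly inside the room" (assumed threshold)
    fade = float(np.clip(presence / 0.05, 0.0, 1.0))
    # send `fade` to the patch (e.g. over OSC) to drive the crossfade
```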

Xin Wei

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab 
_________________________________________________________________________________________________

Ozone video in Matthews iStage: other "naturalistic" video to beam on the floor

Janelle, and Connor :

Can you please prep hi-res videos for us?

I’d like to beam two different "naturalistic" videos on the floor (as well as on a vertical surface, like the one that I hope Megan (Pete) will be able to hang).

(1) The first one is natural terrain: the Ohio river with flowing river texture clipped to the sinuous contour of the map. Let’s try to make it as high res as possible.

(2) The second is urban patterns, from overhead POV:

Here are some examples — but find better ones if you can.

2.1 Chicago aerial view, day (daylight PREFERRED): http://www.shutterstock.com/video/clip-4247351-stock-footage-aerial-sunset-cityscape-view-chicago-skyline-chicago-illinois-usa-shot-on-red-epic.html?src=recommended/4246655:8

2.2 Chicago aerial view night http://www.shutterstock.com/video/clip-4246655-stock-footage-aerial-vertical-illuminated-night-view-chicago-river-trump-tower-downtown-skyscrapers-chicago.html&download_comp=1

2.3 An absolute classic: The Social Life of Small Urban Spaces by William H. Whyte. The Vimeo is poor quality — we should get a high-res archival version, maybe with Library help?!

Pay special attention to the first 90 seconds and 6:52-11:10. https://vimeo.com/111488563

Fascinating observations.

Notice that Ozone:Urban aims to present not an abstract or god’s-eye view but the lived experience, e.g. 10:02 - 10:28. So we can switch fluidly between:
• Data view — vector fields
• God’s-eye view — overhead video (Chicago from an airplane)
• Live POV — a city walk (like a better-quality version of Whyte 10:02 - 10:28) projected onto a vertical screen suspended from the grid.

Don’t worry about watermarks — let’s just grab the highest res we can and project it on the floor to try it out.

Cheers! Xin Wei

PS I will ask the Ozone core team, those who can come, to check in on Monday at 4:30 in the iStage. At minimum I’d like to project a variety of videos on the floor and on the walls.

HMM in Max

On Fri, Apr 24, 2015 at 5:12 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Where can we get the best publicly available HMM external for Max,
as a general-purpose HMM package?

Should we extend / modify gf (which we have via an IRCAM license),
and can we use it easily for non-audio data? People claim to have tried it on video.
It seems that the real work is the preliminary feature extraction, where a lot of interpretation happens.
What are examples of code that do this in interesting ways?

Xin Wei
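As a rough illustration of that "feature extraction, then HMM" pipeline outside Max, here is a sketch using OpenCV optical flow for the features and the hmmlearn Python package as a stand-in for gf / mubu.hhmm (both libraries are assumptions for illustration, not part of the Ozone toolchain):

```python
# Minimal stand-in for "feature extraction, then HMM" on video.
import cv2
import numpy as np
from hmmlearn import hmm

def flow_features(frames):
    """Reduce each consecutive frame pair to a 2-D feature: mean flow magnitude and angle."""
    feats = []
    for prev, nxt in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        feats.append([mag.mean(), ang.mean()])
    return np.array(feats)

# frames: list of grayscale numpy arrays from video
# X = flow_features(frames)
# model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
# model.fit(X)                 # learn 3 hidden "activity states"
# states = model.predict(X)    # per-frame state labels
```

Most of the interpretive work is in choosing what flow_features computes; the HMM fit itself is a few lines.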

Navid Navab wrote:

While FTM is somewhat discontinued, all of this is being moved to IRCAM’s free MuBu package:
download the package and quickly check some of their example patches.

It contains optimized algorithms building on gf, FTM, cataRT, pipo, etc. While MuBu is audio-centric, it is not necessarily audio-specific. MuBu buffers can work with multiple data modalities and use a variety of correlation methods to move between these layers... This makes up a fairly comprehensive platform without the need to move back and forth between gf, FTM, concatenative synthesis instruments, multimodal data handling, analysis, etc.

As with most current IRCAM releases, it is highly under-documented. Besides gf, which is distributed with their package, the mubu.hhmm object might be a good place to start for what you are looking for:


Also, their xmm object might be of interest:

Vienna symposium: "Textural Care and Natality: the Stuff of Publics," in Public Life - Towards a politics of care: Bodies, Place, Matter


________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________

Graduate Certificate in Critical Theory launched!

On behalf of Gregory Castle, Mark Lussier, and me:
Please join us to celebrate the launch of the Graduate Certificate in Critical Theory. This will be a chance to learn a bit about the certificate, to meet other theory-leaning faculty and grad students, and to find out what we've planned for the future. Please circulate this info to other faculty and grad students and/or relevant listservs. There are a lot of folks campus-wide working in this area; we want to extend theoretical and material hospitality to all.

Ron

Ron Broglio
Director of Graduate Studies
Department of English

Senior Scholar at the Global Institute of Sustainability
Provost Humanities Fellow

http://www.public.asu.edu/~rbroglio/



PS AME has a course in this certificate.

How to develop ECM (experiential climate model kit) to stage iMonsoon using Ozone in iStage. AKA, standing on the shoulders of giants.

Garrett, Connor, Pete, as well as Ozzie, Prashanth, Varsha: 

The iStage is waking up and growing exponentially more expressive, thanks to a LOT of work by a very special crew of people.

As strongly as possible, I am urging everyone here in or near the Synthesis core dev group to fully learn and use the TML codebase rather than hack your own solutions for any particular “demo”. This learning, more than the particular installation, is the most valuable point of the TML + Synthesis workshops. (This regards Synthesis research projects, not class or personal projects.)

For the core research projects — such as building ECM (experiential climate model kit) to stage iMonsoon using Ozone in the iStage — do not hack your own special-case solutions. Newbie media artists make too many special-case demos that only work once. We cannot beat the MIT Media Lab at that game, nor will we gain any respect from our research colleagues by doing that.

Now that you’re growing beyond your classroom training, a bit of advice on how researchers gain leverage in the real world: use prior generations’ best work and give them credit. Given the toolkits that are out there, any realtime audio-video instrument or effect that you imagine coding up has probably already been done before. Not only is there probably a patch that does what you want already in the (extended) Ozone codebase or the wider world of OSC-enabled apps, but there is a well-crafted ensemble of utilities that fit well together to achieve not only that function but also a whole spectrum of rich effects.

At the very least, please come talk with me before you go down a path that takes more than a couple of days to complete, and I’ll let you know if your sin is original, as Tom Lehrer used to say. If not, I’ll see if I can point you to the archive. This is not a general offer; it is specially for core dev folks, so please take advantage of it. :)

Re: strategic learning (intellectual capacity building, as Chris and I would say). For example:

It would be great if Garrett could learn to exploit the ambisonic spatialization code to create his own atmospheric movement. (Could Mike K be a source of knowledge about ambisonics?)
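For orientation, the core math of first-order (B-format) ambisonic encoding is small. A Python sketch of the standard equations (this illustrates the math, not the particular externals in the Ozone kit):

```python
# First-order ambisonic (B-format) encoding of a mono source.
import numpy as np

def encode_b_format(mono, azimuth, elevation):
    """Encode a mono signal at (azimuth, elevation), in radians, into W/X/Y/Z channels."""
    w = mono * (1.0 / np.sqrt(2.0))              # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    return np.stack([w, x, y, z])

# Slowly sweeping the azimuth over time already gives "atmospheric" circulation:
# t = np.linspace(0, 10, 48000 * 10)
# b = encode_b_format(signal, azimuth=0.2 * t, elevation=np.zeros_like(t))
```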

And it would be great for Connor to learn to exploit Michael Fortin's particle systems by (1) driving them with radically different force fields, e.g. from the sound field or from optical flow, and (2) doing some chemistry: interspecies reactions like combustion, or epoxy-like phase changes dependent on relative densities. (Dehlia Hannah also had this idea of phase change.)
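As a sketch of idea (1): one way a camera’s optical flow could be turned into a per-pixel force field and sampled per particle. The function names and the coupling to Fortin’s engine are assumptions for illustration:

```python
# Turn dense optical flow into a 2-D force field and sample it at particle positions.
import cv2
import numpy as np

def flow_force_field(prev_gray, next_gray, gain=1.0):
    """Return an (H, W, 2) array of per-pixel forces from dense optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return gain * flow

def sample_force(field, x, y):
    """Nearest-pixel lookup of the force at a particle's (x, y) in image coordinates."""
    h, w, _ = field.shape
    i = int(np.clip(y, 0, h - 1))
    j = int(np.clip(x, 0, w - 1))
    return field[i, j]   # (fx, fy) applied to the particle this frame
```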

"If I have seen further it is by standing on the shoulders of giants.” — Isaac Newton (1676)

Looking forward to working with you!

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________

computational matter (Was: Evaporation in iStage?)

Great idea. I take this opportunity to engage the core dev team and apprentices.

That’s why for 15 years the TML has been researching poetic uses of computational physics.

M. Fortin, Interactive Simulation of Fluid Flow, Master’s thesis (2011)
Susan Stepney, “The neglected pillar of material computation,” Physica D 237 (2008) 1157–1164

Some of this is incorporated into the Ozone kit but not exploited.

For years I've been trying to get the media artist-programmers to exploit the foundations for material computation
built into Ozone, to begin to create computational matter.

One of the first nontrivial phenomena I asked for was 
PHASE TRANSITION

Garth also raised this last fall for our first Synthesis Residency, Improvisational Environments.

Another is
RESONANCE

A third is
MORTALITY

A fourth:
CHEMISTRY
(That’s why Michael Fortin implemented 4 species of particles in our particle kit,
as groundwork for chemical (not merely geometrical or kinematic) interaction between species,
defined magically, according to custom rules designed by the composer of the media environment.)
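Purely as a hypothetical illustration of what "custom rules designed by the composer" could mean in code (the rule format and species indices below are made up; Fortin’s kit has its own interface):

```python
# Composer-declared reaction rules: which species react, into what, and how readily.
import random

# (species_a, species_b) -> (product_species, reaction_probability)
RULES = {
    (0, 1): (2, 0.8),   # e.g. "fuel" + "oxidizer" -> "flame": combustion-like
    (2, 3): (3, 0.1),   # "flame" + "water" -> "water": slow quenching
}

def react(species_a, species_b, local_density_a, local_density_b):
    """Return the product species for a colliding pair, or None if no reaction.

    Reaction probability is scaled by relative densities, so phase-change-like
    behavior only kicks in where a species is concentrated enough.
    """
    rule = RULES.get((species_a, species_b)) or RULES.get((species_b, species_a))
    if rule is None:
        return None
    product, p = rule
    density_factor = min(1.0, local_density_a * local_density_b)
    return product if random.random() < p * density_factor else None
```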

Our media programmers first have to reach a certain degree of mastery of the media just to achieve
a decent look and feel. This work is virtuosic, meaning something they can tackle after that mastery,
going beyond using the software to make spectacular effects — images and sounds.

This is worth a summer project
(i.e. programming the AL-chemical behavior in Jitter + C, not merely faking it by making an image of the process).

On Apr 8, 2015, at 12:56 PM, Dehlia Hannah <dhannah1@asu.edu> wrote:

It would be great if we could model (or suggest) evaporation/condensation/evapotranspiration.... Maybe Chris Z's installation idea could be relevant here.