o4.track
records video concurrent with OSC data streams
now in the TML-Synthesis GitHub repository.
Note the happy ending to this thread!
Let’s make sure we use this during LRR group work this coming week!
Xin Wei
On Nov 10, 2014, at 1:07 AM, Julian Stein <julian.stein@gmail.com> wrote:
Hello all,

I just pushed some new things to the O4.synthesis GitHub repository. This features a major overhaul of the rhythm tools, which has been in the works for several months now. The system as a whole is both more stable and more flexible, and should be better suited to the research coming up this month and beyond. Several examples are included within the bundle, and more documentation will come soon (hopefully this week). As discussed with a few of you, I'll try to make myself as available as I can to help get these things up and running from afar. :)

Also included in O4.rhyth_abs is a folder labeled o4.track. This features a simple system for recording and playing video with a synchronized OSC data stream.

best,
Julian
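As a rough illustration of the o4.track idea (the class below is a hypothetical sketch, not the actual Max implementation): log each incoming OSC message with a timestamp relative to the moment video recording started, so the data stream can later be scrubbed in sync with the video file.

```python
import bisect
import time

class OSCTrackRecorder:
    """Hypothetical sketch: store incoming OSC messages with timestamps
    relative to the start of a video recording, so they can be replayed
    or queried frame by frame alongside the video."""

    def __init__(self):
        self.start = None
        self.events = []  # (seconds_since_start, address, args), in arrival order

    def begin(self, start_time=None):
        # Call this at the same moment video recording starts.
        self.start = start_time if start_time is not None else time.monotonic()

    def log(self, address, args, now=None):
        # Record one OSC message with its offset from recording start.
        t = (now if now is not None else time.monotonic()) - self.start
        self.events.append((t, address, args))

    def events_between(self, t0, t1):
        """All events with t0 <= t < t1, e.g. one video frame's worth."""
        times = [e[0] for e in self.events]
        lo = bisect.bisect_left(times, t0)
        hi = bisect.bisect_left(times, t1)
        return self.events[lo:hi]
```

Passing explicit `start_time`/`now` values keeps the sketch testable; in a live patch both would come from one shared clock.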
On Oct 29, 2014, at 3:36 PM, Evan Montpellier <evan.montpellier@gmail.com> wrote:
Jinx, Julian! Please send that patch my way as well.

Evan

On Wed, Oct 29, 2014 at 6:35 PM, Evan Montpellier <evan.montpellier@gmail.com> wrote:

Hello all,

Mike, forwarding a conversation from February on a related topic (see below); apologies if you've seen it before. A decent video recording patch (never mind one that also records and syncs accompanying data) is a glaring absence from the current ASU/TML repertoire, at least as far as I'm aware. In terms of the video aspect, I'd like to build something around the Hap codec developed by Vidvox (Max implementation here), since it seems like the best solution for high-efficiency playback (Hap clips are read directly onto the GPU), although I welcome alternate suggestions!
Evan

On Thu, Feb 6, 2014 at 4:24 PM, <adrian@adrianfreed.com> wrote:

qmetro just adds jitter and latency, unless the machine is loaded, in which case an indeterminate number of frames will be dropped. In fact, the whole notion of frame rate is corrupted in Jitter because, when times get tough, it just grabs a frame from a buffer that is being overwritten. I vaguely remember that there is a flag in one of the objects that tries to suppress this (jit.grab or jit.record or something like that).
As for Xin Wei's wish, I can say that there is nothing but silly technical silos in the way of recording video streams along with the OSC context with o.record. Transcoding to OSC blobs is the quick way to do this, and is in fact why the BLOB type is part of OSC.
Jeff Lubow here at CNMAT is interested in this sort of thing. We should
add a function to the "o." library that transcodes a jitter matrix
as a BLOB. It is more glamorous to truly translate between the formats
but for Xin Wei's purpose simply viewing OSC bundles as an
encapsulation format is sufficient.
We have another project that will soon need this so I will try to scare
up some cycles to move it along.
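Adrian's suggestion of viewing OSC bundles as an encapsulation format hinges on the OSC 1.0 blob type: a big-endian int32 byte count, the raw bytes, then zero padding to a 4-byte boundary. The sketch below packs a raw frame/matrix buffer as a single-blob OSC message; the function names are hypothetical, not part of the o. library.

```python
import struct

def osc_string(s: str) -> bytes:
    """OSC string: ASCII bytes, null-terminated, zero-padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((-len(b)) % 4)

def osc_blob(data: bytes) -> bytes:
    """OSC blob: big-endian int32 size, the bytes, zero-padded to a 4-byte boundary."""
    return struct.pack(">i", len(data)) + data + b"\x00" * ((-len(data)) % 4)

def matrix_as_osc_message(address: str, matrix_bytes: bytes) -> bytes:
    """Wrap a raw matrix/frame buffer as one OSC message with a ',b' type tag."""
    return osc_string(address) + osc_string(",b") + osc_blob(matrix_bytes)
```

A true Jitter-matrix translation would also need to carry the matrix header (dimensions, planes, type) in the bundle, but as Adrian says, simple encapsulation is enough for the journaling use case.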
> -------- Original Message --------
> Subject: Re: documenting the discussions and exercises
> From: Sha Xin Wei <shaxinwei@gmail.com>
> Date: Thu, February 06, 2014 3:32 am
>
> [I’m widening this very thin technical thread to AME colleagues who may also have a stake in journaled monitored video + sensor data streams in OSC / Max-MSP-Jitter for research purposes. - Xin Wei ]
>
> Thanks Julian. Does qmetro introduce accumulating drift, or “merely” local indeterminacy in time? If it's accumulating, then we have a problem that may require ugly hacks like restarting and stitching files.
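The distinction in the question above can be made concrete with a toy simulation (all numbers illustrative): relative, qmetro-style scheduling lets each tick's timing error feed into the next tick, so error accumulates as drift, while scheduling against an absolute master clock keeps each tick's error local.

```python
def relative_ticks(period, errors):
    """Schedule each tick `period` after the previous *actual* tick
    (qmetro-style relative timing): per-tick error accumulates as drift."""
    t, out = 0.0, []
    for e in errors:
        t += period + e
        out.append(t)
    return out

def absolute_ticks(period, errors):
    """Schedule tick n at start + n*period against a master clock:
    per-tick error stays local and does not accumulate."""
    return [(n + 1) * period + e for n, e in enumerate(errors)]
```

With a constant 1 ms error per tick at a 100 ms period, the relative scheduler is 10 ms late after ten ticks while the absolute one is only 1 ms late, which is why Julian's later idea of driving both streams from one clock matters.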
>
> Just remember that for this work I do not care to have msec sync. Very coarse sync between low-res video and the OSC stream is good enough for the sort of eyeballing I have in mind. We need the OSC streams to stay in sync, but the heavy media (video+video) stream is merely there for the human to imagine and talk about what was going on on the floor at that time. Imagine scrolling through time (in Jamoma CueManager or the autopattr interpolator) and always being able to see a small video showing what was going on at that moment.
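The "scrolling through time" workflow described above only needs a nearest-timestamp lookup over the recorded frame times, which is cheap on a sorted list. A minimal sketch (function name hypothetical):

```python
import bisect

def nearest_frame(frame_times, t):
    """Return the index of the recorded frame whose timestamp is closest
    to time t. frame_times must be sorted ascending, one entry per frame."""
    if not frame_times:
        raise ValueError("no frames recorded")
    i = bisect.bisect_left(frame_times, t)
    if i == 0:
        return 0
    if i == len(frame_times):
        return len(frame_times) - 1
    # Pick whichever neighbour is closer; ties go to the earlier frame.
    return i if frame_times[i] - t < t - frame_times[i - 1] else i - 1
```

For the coarse eyeballing described here, even time-lapse stills indexed this way would support the scrubbing interaction.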
>
> If I had truly godlike power — like the authors of OSC and o.dot — I'd like to not only build time tags into the OSC standard, but also some human-readable representation of the scene, i.e. a “photo / video” track. So by default, in addition to the torrents of numbers, there's always a way to see what the hell was going on in human-legible physical space at any OSC-moment. This is not a rational engineering desire but an experimentalist's desire :)
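For what it's worth, OSC 1.0 bundles do already carry 64-bit NTP-format time tags (32-bit seconds since 1900-01-01, then a 32-bit binary fraction of a second), which is the hook that journaling tools like o.record can exploit. A minimal encoder, as a sketch:

```python
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_UNIX_OFFSET = 2208988800

def osc_timetag(unix_time: float) -> bytes:
    """Encode a Unix timestamp as an OSC/NTP 64-bit time tag:
    big-endian 32-bit seconds since 1900, then a 32-bit fraction of a second."""
    seconds = int(unix_time) + NTP_UNIX_OFFSET
    fraction = int((unix_time - int(unix_time)) * (1 << 32))
    return struct.pack(">II", seconds, fraction)
```

So a "photo at every OSC-moment" would not need a new standard, only an agreed address for the image blob inside the time-tagged bundle.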
>
> Ideally the video would be the same as the input video feeds, so you can develop offline months later, basically running the recorded input video and all sensor input against the modified code. That way you can “reuse” the live activity in order to develop refined activity-feature detectors and new synthesis instruments ourselves, without the expense of flying everyone together for technical work. It's a practical workflow matter of letting the technical development and the physical play interleave with less stuttering.
>
> IMHO, it is crucial for TML that O4 engineers make instruments that collaborating dancers, paper artists, interior design students, stroke rehab experimenters all over the world can use without you guys babysitting at the console.
>
>
> Julian, if you run short on time, you might confer with local experts — students of Garth and Todd or Pavan — or John?
>
> Happy to see you all soon!
> Xin Wei
>
> On Feb 5, 2014, at 7:33 PM, Julian Stein <julian.stein@gmail.com> wrote:
>
> > sure, I'll see what I can do with this -- it probably makes the most sense to use the o.table object. I'll keep in mind Adrian's concerns with qmetro. Perhaps I can synchronize the OSC data and the Jitter material with the same clock?
> >
> > best,
> >
> > Julian
> >
> >
> > On Wed, Feb 5, 2014 at 7:49 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
> > Speaking of videos,
> >
> > A lot of the videos we shoot for “documentation” are inadequate for scientific / phenomenological work.
> > What we must do this time around is simultaneously record
> >
> > VIDEO (low res for human eyeballing)
> > All OSC streams.
> >
> > Is this easy with Jamoma CueManager?
> >
> > Julian, do you have, or can you write, a tool that stores all OSC streams in sync with all input videos (which can be low resolution), consulting Navid? It'd be good to test it with an ASU student before coming here — Chinmay perhaps.
> >
> > • We need to recruit a reliable team of documenters to
> > — babysit the video streams or time-lapse (it could even be 4 iPhone cameras running time-lapse still shots, or we could borrow some fancier gear)
> > — replay the buffer at the end of every session (or whenever asked for by the participants)
> > — take text notes at the Recap hours (12-1 and 4-5 every day; there's an error in the calendar, which had the afternoon sessions too long, 2-6)
> >
> > I cc Chris as the coordinator. "Freed, Inigo" <inigo.freed@prescott.edu>, Adrian's son, offered to bring a video team down from Prescott, AZ to ASU for the workshop. I'm not sure that will work, because we need volunteer participants to run these cameras 4+ hours a day x 3 weeks.
> >
> > In fact, Adrian said, based on his experience, that to really capture a lot of subtleties we should roll cameras before anyone steps into the room. I totally agree. It introduces a lot of phenomenological framing to “set things up,” then check ___, then “roll camera.” That model works for framed art like cinema production, but does not work for studying the phenomenology of the event.
> >
> > Q: Do we need moving video? I think lo-res motion or time-lapse medium-res stills should suffice. Remember, this is merely to stimulate Recap discussion after every two hours. If we run out of space we could throw away all but the salient stills, which would need to be chosen on the spot and copied to a common drive… etc. etc. etc.
> >
> > Anyway, the Coordinators can plan this out in consultation with me or Adrian, and we need a team.
> >
> > Cheers,
> > Xin Wei