Connor and others who are considering using machine learning.
Rebecca Fiebrink, a computer scientist formerly at Princeton and now at Goldsmiths, has created a very popular machine learning engine designed specifically for media artists: Wekinator, which can be used as a black box from Max or any other client via OSC. I think Lauren Hayes and several AME students have tried it out. (Tell us what you think!)
Treat Wekinator’s machine learning engine like a power tool — it’s very useful for what it's designed for, but it can hurt you (in mind and soul, if not body ;) if you try to use it against its design. Take appropriate courses from Pavan (or Suren, Robert…) to learn when and why to use these tools.
Wekinator is not a research tool for those interested in inventing new methods in machine perception or signal processing, but Fiebrink has taken a great deal of care to user-test, design, and implement the best of known methods to:
• Create mappings between gesture and computer sound.
• Create gesturally controlled animations and games.
• Control interactive visual environments created in Processing, OpenFrameworks, or Quartz Composer, or game engines like Unity, using gestures sensed from a webcam, Kinect, Arduino, etc.
• Create systems for gesture analysis and feedback.
• Build classifiers to detect which gesture a user is performing, and use the identified gesture to control the computer or to tell the user how they’re doing.
• Detect the instrument, genre, pitch, rhythm, etc. of audio coming into the mic, and use this to control computer audio, visuals, etc.
• Create other interactive systems in which the computer responds in real time to some action performed by a human user (or users).
• Anything that can output OSC can be used as a controller
• Anything that can be controlled by OSC can be controlled by Wekinator
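To make the OSC point concrete, here is a minimal Python sketch of hand-encoding an OSC message over UDP, with no OSC library required. Port 6448 and the address /wek/inputs are Wekinator's documented defaults for incoming feature vectors, but treat them as assumptions and check your own configuration; the feature values in the usage comment are of course made up.

```python
import socket
import struct

def osc_message(address: str, *values: float) -> bytes:
    """Encode a basic OSC message whose arguments are all 32-bit floats."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes.
        b += b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    addr = pad(address.encode("ascii"))
    typetags = pad(b"," + b"f" * len(values))          # e.g. ",ff" for two floats
    args = b"".join(struct.pack(">f", v) for v in values)  # big-endian floats
    return addr + typetags + args

def send_inputs(values, host="127.0.0.1", port=6448):
    """Send a feature vector to Wekinator's default input port and address."""
    msg = osc_message("/wek/inputs", *values)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, (host, port))
    sock.close()

# e.g. send two control features (say, the normalized x/y of a tracked hand):
# send_inputs([0.42, 0.87])
```

The same encoding works in the other direction: anything listening for OSC (a Max patch, a Processing sketch) can receive Wekinator's model outputs and treat them as control signals.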
Having said that, these methods are also a trap in a fundamental sense: they can only recognize/categorize the given, and cannot produce novel, artful, living gesture. (The non-prestatability of potential is the central thrust of the ontogenetics group.) That’s why we make human-in-the-loop systems, technical ensembles.
On the third hand, it could be quite fun to implement two ETUDES leveraging Wekinator’s power:
Sha’s Follower-to-Leader game: the follow-spot tries at some point to anticipate and lead the human walker.
Rawls’ N-1 game: train on n walkers, remove one person, and direct the light spot to move with the remaining n-1 walkers.
I’m pretty sure the Follower-to-Leader game would do something interesting. I don’t see how to constrain the (repetition of the) walking to make the N-1 game work.