Cooper, if one of those black GoPros is available, maybe we can find it useful in certain scenarios. Below are a few network cameras along with their pro and con features.
Camera 1
Pro: wired and wireless, IR for night vision, pan/tilt action, 2-way audio, 1280x800 WXGA
Con: no PoE

Camera 2
Pro: PoE, 2-way audio, WXGA
Con: no wireless, no optical zoom

Camera 3
Pro: high resolution, PoE, 2-way audio
Con: fixed focus and zoom

Camera 4
Pro: high and multiple resolutions, PoE, 2-way audio, adjustable focus and zoom, supports RTP/RTSP/UDP/TCP/HTTP protocols (see the capture sketch below)
Con: dome design, no pan/tilt
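For reference, pulling the fourth camera's RTSP feed into software might look like the minimal sketch below, using Python and OpenCV as a stand-in for whatever Max/Jitter patch we end up with. The URL, port, and credentials are hypothetical placeholders; the real values depend on the camera model.

```python
# Minimal sketch: pull frames from a network camera's RTSP stream with OpenCV.
# The URL below is a placeholder; the actual path, port, and credentials
# depend on the camera model.
import cv2

RTSP_URL = "rtsp://user:password@192.168.1.64:554/stream1"  # hypothetical address

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise RuntimeError("could not open RTSP stream")

while True:
    ok, frame = cap.read()          # blocks until the next frame arrives
    if not ok:
        break                       # stream dropped; a real setup would reconnect
    cv2.imshow("network camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```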
From: synthesis-research@googlegroups.com [synthesis-research@googlegroups.com] on behalf of Sha Xin Wei [shaxinwei@gmail.com]
Sent: Saturday, June 28, 2014 11:23 AM
To: synthesis-research@googlegroups.com
Cc: post@synthesis.posthaven.com; ozone@concordia.ca
Subject: [Synthesis] networked cameras; potential response to activity vs actual activity
Given Tain's, Cooper's, and other folks’ findings, IMHO we should use wifi networked cameras but not GoPros. (Can we borrow two from somewhere short term to get the video network going?) Comments on networked cameras?
But let’s keep the focus on portals for sound and gesture / rhythm.
Keep in mind the point of the video portal is not to look AT an image in a rectangle, but to suture different regions of space together.
Cooper’s diagram (see attached) offers a chance to make another distinction about Synthesis research strategy, carrying on from the TML, that is experientially and conceptually quite different from representationalist or telementationalist uses of signs.
(Another key tactic: to eschew allegorical art.)
A main Synthesis research interest here is not about looking-at an out-of-reach image far from the bodies of the inhabitants of the media space,
but how to either:
(1) use varying light fields to induce senses of rhythm and co-movement
(2) animate puppets (whether made of physical material, light, or light on matter).
The more fundamental research is not the actual image or sound, but the artful design of potential responsivity of the activated media to action.
This goes for projected video, lighting, acoustics, synthesized sound, as well as kinetic objects.
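To make "potential responsivity" concrete, here is a minimal sketch, assuming an OpenCV-readable camera feed and a single 0..1 light-intensity parameter: gross motion energy is estimated from frame differences, smoothed toward a rhythm-scale response, and mapped to light level. The smoothing constant, gain, and the set_light_level() sink are all hypothetical; a real system would send DMX, OSC, or similar.

```python
# Minimal sketch of "responsivity to action": estimate gross motion energy
# from successive camera frames and map it to a 0..1 light-intensity value.
# The camera source, smoothing constant, gain, and set_light_level() sink
# are hypothetical placeholders for whatever the media system exposes.
import cv2
import numpy as np

def set_light_level(level: float) -> None:
    # Placeholder: in practice this would send DMX, OSC, etc.
    print(f"light level: {level:.2f}")

cap = cv2.VideoCapture(0)            # any OpenCV-readable feed
prev = None
smoothed = 0.0
ALPHA = 0.1                          # smoothing: slow, rhythm-scale response

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is not None:
        # mean absolute frame difference as a crude activity measure
        energy = float(np.mean(cv2.absdiff(gray, prev))) / 255.0
        smoothed = (1 - ALPHA) * smoothed + ALPHA * energy
        set_light_level(min(1.0, smoothed * 10))   # gain is arbitrary
    prev = gray

cap.release()
```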
All we have to do this summer is build out enough kits or aspects of the media system to demonstrate these ways of working, in the form of some attractive installations in the ordinary space.
Re. the proto-research goals this month, we do NOT have to pre-think the design of the whole end-user experience, but to build some kits that would allow us to demonstrate the promise of this approach to VIP visitors in July, students in August,
and to work with in September when everyone’s back.
Cheers for the smart work to date,
Xin Wei
On Jun 28, 2014, at 1:58 AM, Cooper Sanghyun Yoo <cooperyoo@asu.edu> wrote:
Hi all,

I would also like to add some thoughts on using GoPro. Well, to begin with, I am a huge fan of GoPro cameras, and I was testing the GoPro Hero 3+ Black Edition (the latest model) last April. It would be great if AME purchased new GoPro cameras, but maybe it is not the best choice for Brickyard projects.
[Image 1] Six GoPros on a special mount made with a 3D printer. Didn't really work in the Arizona weather. Photos taken near Tempe.
As far as I remember, the idea of using GoPro cameras started because we wanted to locate cameras outside of the building without any wires connected to the computers. GoPro cameras are small, lightweight, feature-loaded, and designed to be put under extreme conditions. However, as they are not built for long-term use, they have battery, overheating, and wifi-range issues. The battery lasts less than an hour, and if we connect a power source the camera will overheat and turn off once in a while.

I also found ways to stream full video into a web browser or VLC media player (https://www.youtube.com/watch?v=RhFzrYBXItc) and a Mac app for controlling the camera and doing low-resolution live previews (https://www.youtube.com/watch?v=au5Yi6y-LmE). Unfortunately, most of these solutions are for low-resolution previews. Trying GoPro cameras in Max/Jitter might still be a very interesting and challenging project, but we can also consider different options.

For example, wifi network cameras can be another good choice. They are designed specifically for long-term live streaming without being connected to the computer. As long as the camera is connected to a power source and the network, it doesn't even need to be located near the computers. The image below shows my previous project that used two network cameras in an outdoor environment.
[Image 2] The network camera was located outside of the building. All the machines controlling the LED ceiling were in a different building, so we couldn't use wires to connect the camera to the computer.
Please let me know your ideas and thoughts.
Much appreciated,
Cooper

On Fri, Jun 27, 2014 at 9:21 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Very likely standard httpd would be a bad way to send video, since it was designed for sending bytes of characters or a few embedded media objects. But Max jit.* used to read rtsp video streams as well (in the syntax rtsp:// ?).

Such long latencies would make GoPro impossible for synchronous action, but we should also think about how to intercalate activity based on durational rhythm, not point-sync’d instantaneous events.

Also, time-delaying the feed by 3 hours would be an interesting first step to intercalating Montreal and Phoenix (sketched at the end of this thread).

Xin Wei

On Jun 26, 2014, at 9:41 PM, Garrett Johnson <gljohns6@asu.edu> wrote:

There is a browser object in Max. Not sure how it works with video, but it sounds like it's worth a shot.
Garrett L. Johnson
MA musicology
Synthesis Center research team
Arizona State University

Should have forwarded to the group initially:

Doing some initial research which I will present to the group. On the GoPro: While the wireless feature is designed specifically for controlling the camera and doing low-resolution live previews in the GoPro app, there is a way to stream full video into a web browser via the camera's built-in web server. If any applications that Synthesis is using can access this kind of stream (via http://), we should be able to grab video. A caveat is that the live-streamed video has very significant (seems to be > 1 second) lag. This is likely dependent on resolution, and I would guess 720p video would have less lag than 1080p, but we'd have to try it to see.

Also, it looks like there is a cable that allows us to both insert an external (3.5mm) microphone AND still utilize the camera's USB port as a power input, so as to avoid running off the battery.

Here is a video that a YouTube user posted covering the video streaming accessibility: https://www.youtube.com/watch?v=Y1XaBJZ8Xcg

Tain
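Tying Tain's stream-grabbing note to Xin Wei's delayed-feed idea, here is a minimal sketch that buffers incoming frames and replays them a fixed number of seconds later. STREAM_URL is a hypothetical placeholder (an RTSP camera address, or whatever URL the GoPro's built-in web server exposes; the thread doesn't record it), and the in-memory buffer is only practical for short delays; a 3-hour Montreal-to-Phoenix offset would need disk-backed storage.

```python
# Minimal sketch of a delayed feed: buffer incoming frames with timestamps
# and display each one DELAY_SECONDS after it arrived. The stream URL is a
# hypothetical placeholder; an in-memory deque is only practical for short
# delays (memory grows with delay x frame rate).
import time
from collections import deque

import cv2

STREAM_URL = "rtsp://192.168.1.64:554/stream1"  # hypothetical address
DELAY_SECONDS = 5.0

cap = cv2.VideoCapture(STREAM_URL)
buffer = deque()                     # (timestamp, frame) pairs

while True:
    ok, frame = cap.read()
    if not ok:
        break
    now = time.time()
    buffer.append((now, frame))
    # show frames only once they are DELAY_SECONDS old
    while buffer and now - buffer[0][0] >= DELAY_SECONDS:
        _, old = buffer.popleft()
        cv2.imshow("delayed feed", old)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```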