Francesco Pavani, Paola Rigo, Giovanni Galfano
http://www.sciencedirect.com/science/article/pii/S1053810014001202
From: Xin Wei Sha <Xinwei.Sha@asu.edu>
Subject: Re: [Synthesis] Notes for Lighting and Rhythm Residency Nov (13)17-26
Date: October 4, 2014 at 2:31:10 PM MST
Please please please, before we dive into more gear specs: what are the experiential provocations being proposed? For example, Omar, everyone, can you please write into some common space more example micro-studies similar to Adrian’s examples? (See the movement exercises that MM has drawn up for past experiments for more examples.) Here at Synthesis, I must insist on this practice, prior to buying gear, so that we have a much greater ratio of propositions : gadgets.

Thank you, let’s play.
Xin Wei
_________________________________________________________________________________________________
On Oct 4, 2014, at 12:53 PM, Omar Faleh <omar@morscad.com> wrote:

I got the chance lately to work with the Philips Nitro strobes, which are intensely stronger than the Atomic 3000, for example. It is an LED strobe, so you can pulse, flicker, and keep it on for quite a while without having to worry about discharge and re-charge; and being an all-LED strobe, it isn't as voltage-hungry as the Atomic. The LED surface is split into 6 sub-rectangles that you can address individually or animate with preset effects, which allows for a nice play of shadows with only one light (all DMX-controlled), and there is an RGB version of it too, so no need for gels and colour changers.

I am also looking into some individually-addressable RGB LED strips. Placing the order today, so I will hopefully be able to test and report the findings soon.
_________________________________________________________________________________________________
On 2014-10-04, at 3:30 PM, Adrian Freed <adrian@adrianfreed.com> wrote:

Sounds like a fun event!
Does the gear support simple temporal displacement modulations, e.g., delaying one's shadow or a projected image of oneself? This is rather easy to do with the right gear.
I would like to see something more ambitious attempted, along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory sound of a footfall, which messes with the neat and tidy notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?
It would also be interesting to modulate a scrambling of oneself and connect its intensity to movement intensity. Navid has done similar things with sound. The experience was rather predictable but might well be different visually.
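Adrian's delayed-shadow idea is, at bottom, a ring buffer of frames played back a fixed number of steps late. A minimal Python sketch of just the buffering logic (the integers stand in for camera images; the class and names are illustrative, not from any existing system):

```python
from collections import deque

class FrameDelay:
    """Ring buffer that re-emits each frame `delay` steps after it arrives."""
    def __init__(self, delay):
        self.buf = deque(maxlen=delay + 1)

    def process(self, frame):
        self.buf.append(frame)
        # Until the buffer fills, hold the oldest frame (a frozen shadow).
        return self.buf[0]

d = FrameDelay(delay=3)
out = [d.process(t) for t in range(8)]
print(out)  # each "frame" re-emerges 3 steps late once the buffer fills
```

In practice the frames would come from a camera and go to a projector; the same structure works for delaying a tracked silhouette.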
_________________________________________________________________________________________________
On Oct 4, 2014, at 11:53 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Chris, Garrett, Julian, Omar, Chris Z, Evan, Byron, Prashan, Althea, Mike B, Ian S, Aniket, et al.
This is a preliminary note to sound out who is down for what for the coming residency on Lighting and Rhythm (LRR).
The goal is to continue the work on temporality from the IER last Feb/March, and this time really seriously, experimentally muck with your sense of time by modulating lighting or your vision as you physically move. First-person experience, NOT designing for spectators.
We need to identify a more rigorous scientific direction for this residency. I’ve been asking people for ideas — I’ll go ahead and decide soon!
Please think carefully about:
Core Questions to extend: http://improvisationalenvironments.weebly.com/about.html
Playing around with lights: https://vimeo.com/tml/videos/search:light/sort:date
Key Background: http://textures.posthaven.com

The idea is to invite Chris and his students to work [richly] on site in the iStage and have those of us who are hacking time via lighting play in parallel with Chris. Pavan & students and interested scientists/engineers should be explicitly invited to kibitz.
• Lighting and Rhythm
The way things are shaping up, we are gathering some gadgets to prepare for the residency.
Equipment requested (some already installed thanks to Pete, Ozzie, and TML)
Ozone media system in iStage
Chris Ziegler’s Wald Forest system (MUST be able to lift out of the way as necessary within minutes — can an inexpensive motorized solution be installed?)
3 x 6 (?) grid of light fixtures with RGB gels, beaming onto the floor
IR illuminators and IR-pass camera for tracking
Robe Robin MiniMe Moving Light/ Projector
Hazer (?)
Strobe + diffuser (bounce?)
+ Oculus DK1 (Mike K knows)
+ Google Glass (Chris R can ask Cooper, Ruth @ CSI)
We need to make sure we have a few rich instruments (NOT one-off hacked tableaux!) coded up ahead of time -- hence the call to Max-literate students who would like to try out what we have in order to adapt them for playing in the LRR by November.
Note 1:
Let’s be sure to enable multiplexing of the iStage to permit two other groups:
• Video portals / windows: Prashan, Althea Pergakis, Jen Weiler
• Shadow puppetry: Prashan working with Byron
Note 2:
Garth’s Singing Bowls are there. Think about how to integrate such field effects.
Mike, can you provide a Max patch to control them — ideally via OSC, but at least to fade up/down without having to physically touch any of the SB hardware?
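For reference, an OSC message is just a null-padded address string, a typetag string, and big-endian arguments, so even before a Max patch exists the bowls could be scripted over UDP. A minimal Python sketch (the /bowl/1/fade address and port are hypothetical placeholders, not the actual Singing Bowls interface):

```python
import socket
import struct

def osc_message(address, value):
    """Encode a minimal OSC message carrying a single float32 argument."""
    def pad(b):
        # OSC strings are null-terminated and padded to 4-byte boundaries.
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Hypothetical address/port for a bowl amplitude fade.
msg = osc_message("/bowl/1/fade", 0.5)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(msg, ("127.0.0.1", 7400))  # uncomment with a real OSC receiver
print(len(msg))
```

Any OSC-speaking host (Max's udpreceive, for instance) can parse messages built this way.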
Note 3:
This info should go on the lightingrhythm.weebly.com experiment website, which the LRR leads should create Monday unless someone has a better solution; it must be editable by the researchers and experiment leads themselves. Clone from http://improvisationalenvironments.weebly.com !
Xin Wei
_________________________________________________________________________________________________
On Sep 4, 2014, at 8:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:
I am afraid I can't be very helpful here. I don't do MIR work myself. The field for the most part does
offline analyses of large data sets using musicologically naive Western musical concepts of pitch and rhythm.
One exception to the realtime/offline divide is from our most recent graduate student to work on the
beat tracking problem, Eric Battenberg. Here is his dissertation: http://escholarship.org/uc/item/6jf2g52n#page-3
There is interesting machine learning going on in that work, but it presumes that one can make a reliable
onset detector, which is a reasonable (but narrow) assumption for certain percussion sounds and drumming practice.
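For a sense of what such an onset detector involves, the simplest common approach is half-wave-rectified spectral flux with peak picking; a toy numpy sketch (frame and hop sizes are arbitrary illustration values, not taken from Battenberg's work):

```python
import numpy as np

def spectral_flux(x, frame=256, hop=128):
    """Half-wave-rectified spectral flux: rises sharply at note onsets."""
    window = np.hanning(frame)
    frames = [x[i:i + frame] * window for i in range(0, len(x) - frame, hop)]
    mags = np.array([np.abs(np.fft.rfft(f)) for f in frames])
    diff = np.diff(mags, axis=0)          # frame-to-frame magnitude change
    return np.maximum(diff, 0).sum(axis=1)  # keep only energy increases

# Synthetic test: silence, a burst of noise, then silence again.
rng = np.random.default_rng(0)
x = np.zeros(4096)
x[2000:2300] = rng.standard_normal(300)
flux = spectral_flux(x)
onset_frame = int(np.argmax(flux))  # frame index where the burst begins
```

Adrian's caveat applies directly: this works for percussive attacks but degrades for soft-onset instruments like bowed bass.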
The questions of phase and "in sync" raised below interest me greatly. There is no ground truth to the beat
(up or down or on the "beat"). I remember being shocked recently to discover that a bunch of research on dance/music entrainment relied, as a reference, on hand-labeled visual beat markings from "expert listeners in the computer music lab next door". Various concepts such as "perceptual onset time" have been developed to sufficiently complicate this question and explain the difficulty people have in reaching consensus on musical event timing and in relating a particular beat measurement to features of the acoustic signals.
Even a "simple" case, bass and drums, is extremely difficult to unravel. The bass, being a low-frequency instrument, complicates the question of "onset" or the moment of the beat. The issue of who in this pair is determining the tempo
is challenging, and the usual handwaving that the tempo is an emergent coproduction of the performers is not very helpful in itself in elaborating the process or identifying which features of the action and sound are relevant to the entrainment. My guess is that we will find models like the co-orbital arrangement of Saturn's moons Epimetheus and Janus.
What are the system identification tools to reveal these sorts of entrainment structures? Can this be done from the sound
alone or do we have to model embodied motions that produce the sounds?
NOTE from Adrian, XW, and Mike Krzyzaniak on the Percival-Tzanetakis tempo estimator:
_________________________________________________________________________________________________
On Sep 3, 2014, at 6:38 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Phase: I’m interested in both the convention of syncing on peaks
but also in the larger range of temporal entrainment phenomena that Adrian has identified with suggestive terminology.
In practice, I would apply several different measures in parallel.
Yes, it would be great to have a different measure. For example, one that detects when a moderate number (dozens to 100) of irregular rhythms have a large number of simultaneous peaks. This is a weaker criterion than being in phase, and does not require periodicity.
Xin Wei
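That weaker criterion can be prototyped directly: given per-stream peak times, count how many streams have a peak inside a sliding tolerance window, with no periodicity assumed. A Python sketch (the window width and the all-streams threshold are free parameters, not settled choices):

```python
import numpy as np

def coincidence(peak_times, t_grid, tol=0.05):
    """For each time in t_grid, count how many streams have a peak
    within +/- tol seconds. Streams may be entirely irregular."""
    counts = np.zeros(len(t_grid), dtype=int)
    for times in peak_times:
        times = np.asarray(times)
        for i, t in enumerate(t_grid):
            if np.any(np.abs(times - t) <= tol):
                counts[i] += 1
    return counts

# Three irregular streams that happen to share a peak near t = 1.0 s.
streams = [[0.2, 1.00, 1.7], [0.5, 0.98, 2.1], [0.9, 1.03, 1.5]]
t_grid = np.arange(0.0, 2.5, 0.01)
counts = coincidence(streams, t_grid)
moments = t_grid[counts == 3]  # times when all three streams peak together
```

Scaling the count (or its time-density) by the number of streams gives a graded "simultaneity" signal that could drive lighting directly.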
_________________________________________________________________________________________________
On Sep 2, 2014, at 5:07 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Of course we could reduce the 6-second lag by reducing the window sizes and increasing the hop sizes, at the expense of resolution. Also, rather than using the OSS calculation provided, perhaps we could just use a standard amplitude follower that sums the absolute value of the signal with the absolute value of the Hilbert transform of the signal and filters the result. This would save us from decimating the signal on input and reduce the amount of time needed to gather enough samples for autocorrelation (at the expense of accuracy, particularly for slow tempi).
What are you ultimately using this algorithm for? Percival-Tzanetakis also doesn't keep track of phase. If you plan on using it to take some measure of metaphorical rhythm between, say, humans as they interact with each other or the environment, then it seems like phase would be highly important. Are we in sync or syncopated? Am I on your upbeats or do we together make a flam on the downbeats?
Mike
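The amplitude follower Mike describes is closely related to the standard analytic-signal envelope |x + jH(x)|, where H is the Hilbert transform; a numpy sketch of that variant, computing H with an FFT (an illustration of the idea, not Mike's exact sum-and-filter formulation):

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal: |x + j*H(x)|.
    The Hilbert transform is applied in the frequency domain by
    zeroing negative frequencies and doubling positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

t = np.arange(2048) / 2048.0
x = np.cos(2 * np.pi * 40 * t)  # unit-amplitude carrier, 40 whole cycles
env = envelope(x)               # should sit flat at ~1.0
```

A realtime version would apply this per window (or use an FIR Hilbert approximation) and low-pass the result, as Mike suggests.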
_________________________________________________________________________________________________
On Tue, Sep 2, 2014 at 4:09 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi Adrian,
Mike pointed out what for me is a serious constraint in the Percival-Tzanetakis tempo estimator : it is not realtime.
I wonder if you have any suggestion on how to modify the algorithm to run more “realtime” with less buffering if that’s the right word for it…
Anyway I’d trust Mike to talk with you since this is more your than my competence. cc me for my edification and interest!
Xin Wei
_________________________________________________________________________________________________
On Sep 2, 2014, at 12:06 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Hi Xin Wei,
I read the paper last night and downloaded the Marsyas source, but only the MATLAB implementation is there. I can work on getting the C++ version and porting it, but the algorithm has some serious caveats that I want to run by you before I get my hands too dirty.
The main caveat is that it was not intended to run in real-time. The implementations they provide take an audio file, process the whole thing, and spit back one number representing the overall tempo.
"our algorithm is more accurate when these estimates are accumulated for an entire audio track"
It could be adapted to run in sort-of real time, but at 44.1k the tempo estimation will always lag by 6 seconds, and at a control rate of 30 ms (i.e., the rate touchOSC uses to send accelerometer data from an iPhone) the algorithm as described would have to gather data for over 2 hours to make an initial tempo estimate and would only update once every few minutes.
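Mike's figures can be sanity-checked from the algorithm's two framing stages, assuming 2048-sample windows with hop 128 at both the onset-strength and autocorrelation stages (my reading of the Percival-Tzanetakis paper, worth re-verifying; under these assumptions the control-rate update period comes out on the order of 8 minutes):

```python
def pt_latency(input_rate, frame=2048, hop=128):
    """Initial lag and update period of a two-stage 2048/128 framing
    pipeline fed with samples at `input_rate` Hz."""
    oss_rate = input_rate / hop        # onset-strength samples per second
    first_estimate = frame / oss_rate  # must gather 2048 OSS samples first
    update_period = hop / oss_rate     # new estimate every 128 OSS samples
    return first_estimate, update_period

lag_audio, upd_audio = pt_latency(44100.0)   # audio rate: ~5.9 s initial lag
lag_ctrl, upd_ctrl = pt_latency(1 / 0.030)   # 30 ms control rate: ~2.2 hours
print(lag_audio, lag_ctrl / 3600)
```

So the 6-second audio lag and the multi-hour control-rate lag both fall straight out of the fixed window sizes, which is why shrinking the windows (at the cost of resolution) is the obvious lever.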
Once I get the C++ source I can give an estimate of how difficult it might be to adapt (in the worst-case scenario it would be time-consuming but not terribly difficult to re-implement the whole thing in your language of choice).
If you would still like me to proceed let me know and I will contact the authors about the source.
Mike
________________________________________________________________________________________________
On Mon, Sep 1, 2014 at 3:45 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
beat~ hasn't worked well for our research purposes so I'm looking for a better instrument.
I'm no expert but P & T carefully analyze the extant techniques.
the keyword is 'streamlined'
Read the paper. Ask Adrian and John.
Xin Wei
From: Adrian Freed <adrian@cnmat.berkeley.edu>
Subject: Re: good comparison of IMU's and sensor fusion source
Date: August 23, 2014 at 12:02:17 PM MST
To: Sha Xin Wei <shaxinwei@gmail.com>
Cc: Vangelis Lympouridis <vl_artcode@yahoo.com>, John MacCallum <john@cnmat.berkeley.edu>, Garth Paine <Garth.Paine@asu.edu>, Todd Ingalls <TestCase@asu.edu>, Assegid Kidane <Assegid.Kidane@asu.edu>, post@synthesis.posthaven.com, Teoma Naccarato <teomajn@gmail.com>
Vangelis has been tracking the ready-to-wear IMU space more carefully than I.
I am hoping IMU's are a temporary bootstrap and that we will have less encumbering techniques with
absolute position measurements such as the upcoming Sixense Stem system.
My fear is that we will be surrounded by even cheaper, slower, uncalibratable IMU's before the situation
improves substantially.
Keep an eye out for the next-gen x-OSC with a built-in charger and better IMU.
On Aug 23, 2014, at 6:50 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Cool, thanks. Adrian suggested last year a ready-to-wear IMU that went for ~$200-$250.
Can’t recall the make.
Xin Wei
On Aug 22, 2014, at 7:11 PM, Vangelis Lympouridis <vl_artcode@yahoo.com> wrote:

That's great! Thanks a lot Adrian.
Vangelis Lympouridis, PhD
Visiting Scholar,
School of Cinematic Arts
University of Southern California
Senior Research Consultant,
Creative Media & Behavioral Health Center
University of Southern California
http://cmbhc.usc.edu
Whole Body Interaction Designer
www.inter-axions.com
vangelis@lympouridis.gr
Tel: +1 (415) 706-2638
-----Original Message-----
From: Adrian Freed [mailto:adrian@cnmat.berkeley.edu]
Sent: Friday, August 22, 2014 10:47 AM
To: Xin Wei Sha; Vangelis L
Cc: John MacCallum
Subject: good comparison of IMU's and sensor fusion source
https://github.com/kriswiner/MPU-6050/wiki/Affordable-9-DoF-Sensor-Fusion
I would add that the observation/performance pair problems are connected to the problems of signal/noise - both dependent
on POV and preschema. Another tactic I have started to explore is the material agency of "lenses" (or filters as lenses are framed in the signal processing literature). This points to bringing in the material aspects of intersubjectivity - one of the key conundrums of quantum theory that has had to invoke a lot of magic around the macroscopic and microscopic properties
of "apparatus" to keep the rest of the theory coherent.
Yes, this is a tool I have wanted since 2003, in the oz/math/ section of Ozone:
- with Y. Serita, J. Fantauzza, S. Dow, G. Iachello, V. Fiano, J. Berzowska, Y. Caravia, D. Nain, W. Reitberger, J. Fistre, "Demonstrations of expressive softwear and ambient media," Ubicomp 2003 (short PDF, video, long PDF). http://topologicalmedialab.net/xinwei/papers/texts/ubicomp/Sha_long_11.pdf
- with G. Iachello, S. Dow, Y. Serita, T. St. Julien, J. Fistre, "Continuous sensing of gesture for control of audio-visual media," ISWC International Society for Wearable Computing, 2003. (PDF) http://www.gvu.gatech.edu/people/sha.xinwei/topologicalmedia/papers/ISWC03_full.pdf
We need all these conditions:
• robust hardware with a lightweight battery (ASU has some good battery guys)
• high sensor-ensemble fps
• low-latency transmission
• some maths like Aylward-Paradiso to play with in our Max toolkit

What we would do in place of Paradiso’s naive notion of music is to map to electroacoustic synthesis etc. In fact, if Julian or Mike or … could point us to the Max external that implements cross-correlation (not auto-correlation), we could play with it right away on acoustic input and think about how to handle control-rate data… I think there is one already in McGill’s or IRCAM’s vector processing toolkits. If someone is interested, I’d be happy to work with him/her to implement this and map it more directly to organized sound (with the help of our sound artists) for rich feedback.
_________________________________________________________________________________________________
On Oct 4, 2014, at 3:50 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Hi, do you guys know this paper? They put gyroscopes on dancers and used realtime (windowed) cross-covariance to measure time lag between several dancers. I believe this is similar to what Xin Wei has in mind as part of studying temporality.
Mike
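Pending a dedicated Max external, windowed cross-correlation of two zero-meaned streams (i.e., cross-covariance, as in the paper Mike cites) is a few lines of numpy; the lag at the correlation peak estimates the delay between the streams. A sketch with illustrative window parameters:

```python
import numpy as np

def window_lag(a, b, max_lag):
    """Lag (in samples) of stream b relative to a within one window,
    via zero-meaned cross-correlation (cross-covariance)."""
    a = a - a.mean()
    b = b - b.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.sum(a[max(0, -k):len(a) - max(0, k)] *
                   b[max(0, k):len(b) - max(0, -k)]) for k in lags]
    return int(lags[int(np.argmax(corr))])

# Synthetic window: stream b trails stream a by 7 samples.
rng = np.random.default_rng(1)
a = rng.standard_normal(500)
b = np.roll(a, 7)
lag = window_lag(a, b, max_lag=20)
```

Sliding this over successive windows of two dancers' accelerometer streams gives a time-varying leader/follower lag, which is essentially what the cited study measured.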
Wednesday, 9/10/2014 Meeting Notes
Lighting Workshop (Fall 2014) Planning Session
(Attending: Xin Wei, Chris, Pete, Ozzie, Byron, Mike K., 3 students)
Preparing a presentation and a performance
It’s about rhythm, sounds and frequencies. XW will work with Chris and Omar on lighting. Also, some grads should talk to Omar about what they want to do with the lights and what they want to achieve experientially.
Corpus that emits light through movement in a black box (front/backwards recognition, body architecture, synchronicity, shadows, reflection, casting) – the question of self and other: “when is this my body?” It’s not so clear anymore… delays in time… “is the shadow mine, or someone else’s?”
Temporal space
Kant – symmetry; left, right… the construction of identity. David Morris wrote an essay about phenomenology. Refer to that essay.
Auditory and optical space in time. If we have zero agency shadows….
Julian Stein’s Rhythm Toolkit
Record (Jitter code); temporal displacement (the shadow of a flickering light)
Chris: shadow and perception – silhouettes
We can get extremely precise in a small, 3-day exercise: easy-to-accomplish things with our kit. By the 17th or 18th we can get our hands on the GitHub repo and laptops, with access to really run the code in the black box, and work with the systems to modify cues.
September goals
Everyone should know the TML code, the video stuff, and the GitHub repo.
Everyone should know Julian Stein (first point of contact – he’s the one-stop shop, the one responsible for the kit), Evan Montpellier and Navid Navab.
Have some creative exercises happening in the iStage… movement games, with tracking and non-tracking lights, etc. Then Xin Wei will talk with Pavan and his grad students.
(On Wednesdays from today on out, we’ll talk about the research questions.)
Synchronicity and “Entrainment”
Construct algorithms to make detectors (Pavan) – phenomenological, listen to the students (Nov) – make a rhythm toolkit. The first version is in the GitHub.
In March, work with Pavan and Byron to make those instruments; when John and Fiona come in March (“Heartbeat”), we’ll have the instruments.
Adrian Freed has a rich description of different kinds of synchronicity and temporal textures (CNMAT); co-mingling. We’ll try to get him here to do a workshop.
1). Rhythm is not based in symmetric regularity – being in sync didn’t require it at all. We’re trying to build computational algorithms…
Or it might be energy-transfer….
2). Rhythm doesn’t have to be one-dimensional. What does it mean? Take a video on the floor, put it in space.
Set up on 13-14 Nov; start the workshop on the 17th (Lighting Workshop).
Would any of you be able to recommend some faculty and/or students experienced with urban design, public lighting design, or projection art, who might be interested in teaming with the Synthesis Center and the Topological Media Lab to come up with pitches for some urban art/design projects in Europe? (See “Connecting Cities” and “Ambience Network” for examples of work.)

Cheers,
Xin Wei
Hi,

What I have in mind is a mobile platform for chamber- and building-scale shadow puppetry, driven by mechatronics + our Max/MSP/Jitter sensing and media choreography systems, + “radio”. In terms of content and style, just to let you know where we’re coming from, here are some inspirations:
• Blu, MUTO, https://vimeo.com/993998
• Manual Cinema, Lula del Ray, https://vimeo.com/52391900
• Royal de Luxe, The Little Girl Giant - The Sultan’s Elephant, https://vimeo.com/11107046
Omar Faleh, TML-Synthesis’ senior grad researcher in this urban media initiative, visited Royal de Luxe’s home city, Nantes.

<rant>I’d like to stay away from all the "tech-art" geek discourses from the 90’s that never played outside the Empire of new media.</rant>

Not knowing the economies here in Phoenix and in SoCal, one of my questions is: what sources of funding exist for such work?

Xin Wei
On Sep 12, 2014, at 12:54 PM, Byron Lahey <Byron.Lahey@asu.edu> wrote:

Hi All,

It goes without saying that I'm interested in this project. Kinetic sculpture, puppetry, and light-and-shadow systems are all important parts of my artistic palette. It also goes without saying that my time is largely spoken for by my dissertation work, so I would like to stay connected with this project but will have to limit my direct involvement. I would certainly offer my camera/projector system as a tool for this work if it fits in any way (though I would likely have to hand off programming and maintenance duties to another willing person or persons).

Another artist who might be interested in collaborating on this work is Al Price. He is an ASU MFA graduate and has done numerous large-scale public art projects around and outside the Phoenix area. This is a project of his that I find particularly enchanting, and which seems to resonate directly with the examples Xin Wei shared: http://www.alpricestudio.com/new-gallery-5/

Best,
Byron
CC Newsletter September 2014
Connecting Cities Events 2014
Here's a quick overview of the Summer/Autumn Connecting Cities Events:
Event # 6 11 – 14 September: Connecting Cities: Urban Reflections, Public Art Lab @ Berlin
Event # 7 12 – 14 September: Connecting Cities Event, Riga 2014 @ Riga
Event # 8 25 – 27 September: Medialab-Prado @ Madrid
Connecting Cities Event #6 @ Berlin
Connecting Cities: Urban Reflections, Public Art Lab
Connecting Cities: Urban Reflections
11 - 14 Sep 2014, Light Parcours, Brunnenstr. 64-72
11 - 13 Sep 2014, Symposium, SUPERMARKT Brunnenstr. 64
11 - 13 Sep 2014, Workshops, SUPERMARKT Brunnenstr. 64
No Plans for tonight? Mark your calendar! Connecting Cities: Urban Reflections
Join us for the Light Parcours, the Symposium and the Workshops!
From 11 – 14 September a rich programme will take place, with workshops, the Connecting Cities: Urban Reflections – Symposium (curated by Public Art Lab in cooperation with SUPERMARKT) and the Connecting Cities: Urban Reflections – Light Parcours, including audiovisual works and an urban picnic.
Connecting Cities: Urban Reflections – Symposium and Workshops
From 11 – 13 September, artists' workshops by Suse Miessner, Charlotte Gould & Paul Sermon, Moritz Behrens & Nina Valkanova, and Dr. Alexander Wiethoff & Marius Hoggenmüller, as well as keynotes from Mark Shepard, Paul Sermon, Moritz Behrens and the speakers Nicole Srock-Stanley, Dr. Alexander Wiethoff, Dr. Eva Hornecker, Dr. Bastian Lange, B_tours and many more will be offered…
Find all the Panels here!
Connecting Cities: Urban Reflections – Light Parcours
During ‘Connecting Cities: Urban Reflections’, Public Art Lab will transform the shop windows in the Brunnenstrasse into interactive light and projection windows. Connecting Cities will make the neighborhood shine! The Connecting Cities projects Occupy the Screen, Smart Citizen Sentiment Dashboard, Urban Alphabets and Human Beeing will be shown, as well as works from Lichtpiraten, a performance by Scott Sinclair vs. ESOC, a cooperation project of Public Art Lab and Quartiersmanagement Brunnenstraße with the students of Leuphana University Lüneburg and the Bauhaus-Universität Weimar, and a video programme by Screen City and Videospread.
Find the programme here or download the programme leaflet here!
Picture: © Public Art Lab
Event # 7 Staro Riga 2014 @ Riga 2014, Riga
12 – 13 September 2014, Esplanāde 2014 Cultural Chalet, Riga, Latvia
Two days and two interactive projects will unite Riga and Berlin.
Suse Miessner's project ‘Urban Alphabets’ is an interactive neighbourhood art project that invites people to create their own urban alphabet by capturing letters with a specially developed application while walking around the city. After the letters are collected in public space, the screens and projections will give access to the database and invite participants to write personalized postcards to the other connected cities.
Charlotte Gould and Paul Sermon's project ‘Occupy the Screen’ is an interactive telepresent public video installation designed for site-specific impromptu performance and user interaction. It will connect two cities and the people will meet on the screen as a third space.
Picture: © Paul Sermon
Connecting Cities Event #8 @ Medialab-Prado, Madrid
26 – 27 September: Medialab-Prado, Madrid, Plaza de las Letras
Bees, plants, urban alphabets and many other elements of the city come to life on the digital facade of the Medialab-Prado. From the 26 – 27 the facade located at the Plaza de las Letras becomes an interactive canvas!
The projects Urban Alphabets (Suse Miessner), Human Beeing (The Constitute), Telepuppet.tv (Ali Momeni & Nima Dehghani), Organic Cinema (World Wilder Lab) and n’UNDO (n’UNDO organización) will be shown.
More Information here!
Picture: © n’UNDO organización
Connecting Cities Event #9 Nuit Blanche@ iMAL Brussels
10 October 2014, Rue Marché aux Herbes & Rue Saint-Pierre, Brussels, Belgium
In the framework of the Participatory City 2014 and the Quinzaine Numérique, and during the Nuit Blanche Bruxelles, iMAL presents Telepuppet.tv by Ali Momeni & Nima Dehghani (USA/Iran).
Join the Nuit Blanche Brussels! Telepuppet.tv is a crowd-sourced storytelling platform that combines augmented puppetry with urban projection performance. Traveling through the streets of the city centre, artists Ali Momeni & Nima Dehghani (USA/Iran) will project videos filmed this summer in Iran by augmented puppets. A way to share experiences of immigration across time and space on our planet.
In the framework of Connecting Cities 2014: Participatory City, Telepuppet.tv will also be presented in Madrid (Medialab-Prado, 25 - 26.09) and Liverpool (FACT)!
Picture: © Ali Momeni & Nima Dehghani
CALL FOR PROPOSAL FOR THE VISIBLE CITY 2015
Deadline: 31 October 2014
BERLIN - BRUSSELS - HELSINKI - LINZ -LIVERPOOL - MADRID - MARSEILLE - MONTREAL - SAO PAULO - ZAGREB
Our modern cities are hybrid structures in which technology is invisibly interwoven into the perceptual layers of our everyday lives. With the curatorial theme of InVISIBLE and VISIBLE Cities, we want to develop an awareness of the changes that are hardly visible to the eye yet underlie today's cities.
Please submit your project proposal by 31 October 2014 here.
Picture: © Public Art Lab
RECAP Connecting Cities Event #5 @ Linz
C … what it takes to change, Ars Electronica
4 - 8 September 2014, Linz, Austria
8 September, Connecting Cities Workshop
From 4 - 8 September, the Connecting Cities artists Moritz Behrens & Nina Valkanova showed their project Smart Citizen Sentiment Dashboard (SCSD) on the facade of the Ars Electronica building. Visitors and citizens could vote on how happy they are with, for example, the mobility of the city of Linz. Ars Electronica Futurelab presented their project Entangled Sparks with a series of workshops and presentations on the Ars Electronica building, where every visitor could access one pixel of the facade.
Have a look at the photos here!
Picture: © Public Art Lab
RECAP: Connecting Cities Workshop: Design Fiction and Narrative Prototyping @ 403 Art Center, Wuhan, China
31 July - 9 August 2014, Summer Festival - Back to the Future
31 July - 09 August 2014 Workshop
DESIGN FICTION & NARRATIVE PROTOTYPING
Connecting Cities with the tools of the future
In July, the first Connecting Cities workshop in China, organized by Marc Piesbergen (CC manager China) and the 403 Art Center Wuhan, was held by Christian Zöllner (The Constitute) and Julian Adenauer (Sonice Development).
Nearly 40 students from Huazhong University of Science and Technology, Hubei Institute of Fine Arts and Wuhan University of Science and Technology participated. The 9 student groups each designed their own device for the future sci-fi world.
The results of the intense two-week Connecting Cities workshop will be published soon on www.connectingcities.net.
Picture: © The Constitute
Call for outstanding Media Architecture
Are you a student? Would you like to receive a travel scholarship for the Media Architecture Biennale 2014?
Then sign up for the MAB24H student design competition. Commencing on October 1 at 14:00 (CET), teams will have 24 hours to create a design, write a paper, maintain a blog and produce a short video. The winning team is invited to present their conceptual work at the biennale and will receive a travel grant of EUR 1,000.
Website
Connecting Cities is a European and worldwide expanding network aiming to build up a connected infrastructure of media facades, urban screens, projection sites and mobile units to circulate artistic and social content. More information on www.connectingcities.net
Copyright © 2014 Public Art Lab, All rights reserved.