Interactive Futures – 2007

Interactive Futures 2007 (November 15–17), Victoria, British Columbia, Canada


Stefan Müller Arisona

Semi-Automatic Content Editing Techniques in the Context of Multimedia Live Performance

Recent years have seen enormous growth in the volume of media content: emerging internet-based applications (often associated with the term “Web 2.0”) are media-rich, emphasise content sharing, and attract huge communities. Examples are video sharing applications such as YouTube or Google Video, the photo management and sharing application Flickr, and community-building services such as MySpace. In addition, many professional content providers supply the media industry with high-quality stock content (e.g., gettyimages). As a requirement for efficient retrieval, media collections typically comprise not only the media data itself but also associated semantic metadata, which may be annotated manually or through automated analysis. As the volume of available media content continues to grow, we see an increasing need not only to annotate and retrieve content, but also to edit it and to create new content out of existing material by applying computational generative methods. We can therefore foresee enormous efforts by the content production and entertainment industries to improve existing computational content-editing techniques and to design and implement novel ones. The corresponding emerging research field is sometimes referred to as “computational media aesthetics.”

This paper discusses the application of semi-automatic methods to multimedia live performance: the evolution of novel editing techniques has a major influence on the way we deal with digital media, and in particular on how we compose multimedia artwork, possibly in real time during performance. One example from the visual domain is the real-time non-linear editing (NLE) component of the Soundium multimedia platform: the system employs a computerised video mixer that relieves the artist of manual editing tasks. It provides interactive access to high-level editing parameters such as cutting rate, fade types, colour preferences, or editing style. In addition, semi-automatic reverse-editing tools are provided to pre-process media material, e.g., computer-vision-based shot and scene detection. In the audio domain, an example is Ableton Live, whose semi-automatic beat analysis and matching modes allow the performer to concentrate on macroscopic musical entities instead of microscopic details such as beat alignment.
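Shot and scene detection of the kind mentioned above is commonly implemented by comparing colour histograms of consecutive frames and flagging large jumps as cuts. The sketch below illustrates the general technique only; the function names, the histogram resolution, and the threshold are illustrative assumptions, not Soundium’s actual API, and frames are modelled as flat lists of 8-bit grey values rather than decoded video.

```python
def histogram(frame, bins=16):
    """Coarse grey-level histogram of one frame (list of 0-255 values)."""
    h = [0] * bins
    for v in frame:
        h[v * bins // 256] += 1
    return h

def shot_boundaries(frames, threshold=0.5):
    """Return indices where consecutive frames differ strongly (likely cuts).

    The distance is the L1 difference between histograms, normalised by
    frame size, so it lies in [0, 2]; the threshold is a tunable guess.
    """
    cuts = []
    for i in range(1, len(frames)):
        a, b = histogram(frames[i - 1]), histogram(frames[i])
        d = sum(abs(x - y) for x, y in zip(a, b)) / len(frames[i])
        if d > threshold:
            cuts.append(i)
    return cuts

# Two synthetic "shots": dark frames, then bright frames -> one cut at index 3
dark = [10] * 64
bright = [240] * 64
print(shot_boundaries([dark, dark, dark, bright, bright]))  # [3]
```

A production detector would work on full colour histograms (or edge features) and adapt the threshold to the material, but the structure — per-frame features, pairwise distance, thresholded boundary decision — is the same.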

Currently, more artists are becoming interested in mixing different media types, for instance in order to achieve truly synaesthetic live performance. However, there is little work on computational editing techniques for multiple media types. We therefore present our approach towards a general theory of media editing, based on formal and informal theories of composition methods in individual domains (e.g., music, film): in a generalised manner, the concepts behind Soundium’s NLE component are encapsulated in composition templates, which can be applied to arbitrary media content during performance.
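One way to read “composition template” is as a media-agnostic bundle of high-level parameters (cutting rate, transition style) that can be expanded into a concrete edit schedule for whatever clips are at hand. The sketch below is a hypothetical illustration of that idea, assuming nothing about Soundium’s internals; the class name, fields, and `schedule` method are all invented for this example.

```python
import random
from dataclasses import dataclass

@dataclass
class CompositionTemplate:
    """Hypothetical media-agnostic template: high-level editing intent only."""
    cutting_rate: float  # cuts per minute
    fade: str            # e.g. "hard" or "crossfade"

    def schedule(self, clips, duration):
        """Expand the template into an edit decision list for `duration` seconds.

        Works on any clip list (video, audio, stills) because the template
        never inspects media content, only high-level timing parameters.
        """
        n = max(1, int(self.cutting_rate * duration / 60))
        times = sorted(random.uniform(0, duration) for _ in range(n))
        return [(t, random.choice(clips), self.fade) for t in times]

# A fast-cutting template applied to two arbitrary clips over one minute
template = CompositionTemplate(cutting_rate=12, fade="crossfade")
edl = template.schedule(["clip_a", "clip_b"], duration=60)
print(len(edl))  # 12 scheduled cuts
```

The point of the abstraction is that the same template instance could drive a video mixer and an audio engine simultaneously, since only the timing and transition intent is specified, not the media type.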

Finally, experience with past projects has shown that methods arising from artistic experiments with novel content-authoring systems can be successfully employed in entertainment scenarios. We are convinced that ongoing artistic devotion will eventually lead to next-generation entertainment applications with broad impact, such as successors to the currently emerging IPTV.

Bio

Stefan Müller Arisona is a lecturer and post-doctoral researcher at the Computer Systems Institute of ETH Zurich, Switzerland, and scientific chair of ETH’s Digital Art Weeks. His main interests lie at the intersection of art and technology, in particular in the domain of live digital art. His research focuses on novel real-time multimedia systems and on live media composition and performance techniques. He is a founding member of Corebounce and co-author of the multimedia authoring software Soundium, which is frequently used for digital art installations and live performances by himself and his collaborators. A recent work, the Digital Marionette, is currently installed in the Ars Electronica Center’s permanent exhibition. Stefan was recently granted a two-year research fellowship by the Swiss National Science Foundation (SNF) and will be a researcher at Media Arts and Technology (MAT) at the University of California, Santa Barbara from October 2007.

Associates

Pascal Müller
Computer Vision Laboratory
ETH Zurich
http://www.vision.ee.ethz.ch/~pmueller

Simon Schubiger-Banz
Corebounce Association
[see] IF07 paper: Large Screen Interaction in Public Space: TowerTalk and NVOA

Matthias Specht
Anthropological Institute
University of Zurich
http://www.corebounce.org/specht

Links

Homepage: http://www.jg.inf.ethz.ch/wiki/SMA/Front
Corebounce Association: http://www.corebounce.org
Digital Art Weeks (an IF07 parallel event): http://www.digitalartweeks.ethz.ch

Selected Media


Interactive Production of a Music Video Clip
