{"id":20,"date":"2007-09-12T12:09:04","date_gmt":"2007-09-12T19:09:04","guid":{"rendered":"http:\/\/if2007.dev.ecuad.ca\/?page_id=20"},"modified":"2007-09-18T10:31:19","modified_gmt":"2007-09-18T17:31:19","slug":"stefan-muller-arisona","status":"publish","type":"page","link":"http:\/\/if2007.ecuad.ca\/?page_id=20","title":{"rendered":"Stefan M\u00fcller Arisona"},"content":{"rendered":"
Semi-Automatic Content Editing Techniques in the Context of Multimedia Live Performance

Recent years have seen an enormous growth in the volume of media content: emerging internet-based applications (often associated with the term "Web 2.0") are media-rich, emphasise content sharing, and attract huge communities. Examples are video-sharing applications such as YouTube or Google Video; the photo management and sharing application Flickr; or community-building services such as MySpace. In addition, many professional content providers supply the media industry with high-quality stock content (e.g., gettyimages). As a requirement for efficient retrieval, media collections typically comprise not only the media data itself but also associated semantic metadata, annotated either manually or through automated analysis. As the volume of available media content continues to grow, we see an increasing need not only to annotate and retrieve content, but also to edit it and to create new content from existing material by applying computational generative methods. We can therefore foresee enormous efforts by the content production and entertainment industries to improve existing computational content-editing techniques and to design and implement novel ones. The corresponding emerging research field is sometimes referred to as "computational media aesthetics."

This paper discusses the application of semi-automatic methods to multimedia live performance: the evolution of novel editing techniques has a major influence on the way we deal with digital media, and in particular on how we compose multimedia artwork, possibly in real time during performance. One example from the visual domain is the real-time non-linear editing (NLE) component of the Soundium multimedia platform: the system employs a computerised video mixer that releases the artist from manual editing tasks. It provides interactive access to high-level editing parameters such as cutting rate, fade types, colour preferences, or editing style. In addition, semi-automatic reverse-editing tools are provided to pre-process media material, e.g., computer-vision-based shot and scene detection. In the audio domain, an example is Ableton Live, whose semi-automatic beat analysis and matching modes allow the performer to concentrate on macroscopic musical entities instead of microscopic details such as beat alignment.
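To make the reverse-editing idea concrete, the following is a minimal sketch of the kind of computer-vision-based shot detection mentioned above: it flags a cut wherever the grey-level histograms of successive frames differ sharply. This is an illustrative reconstruction under our own assumptions, not Soundium's implementation; the function name and threshold value are hypothetical, and OpenCV and NumPy are assumed to be available.

    # Illustrative histogram-based shot-boundary detector.
    # Hypothetical stand-in for the pre-processing described above;
    # not the actual Soundium implementation.
    import cv2
    import numpy as np

    def shot_boundaries(path, threshold=0.4, bins=64):
        """Return frame indices where a hard cut likely occurs.

        A cut is flagged when the L1 distance between successive
        normalised grey-level histograms exceeds `threshold` (a value
        chosen here for illustration; real systems tune it per source).
        """
        cap = cv2.VideoCapture(path)
        prev_hist = None
        cuts = []
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hist, _ = np.histogram(grey, bins=bins, range=(0, 256))
            hist = hist / hist.sum()  # normalise so distance is size-independent
            if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
                cuts.append(index)
            prev_hist = hist
            index += 1
        cap.release()
        return cuts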
Currently, more artists are becoming interested in mixing different media types, for instance in order to achieve truly synaesthetic live performance. However, there is little work on computational editing techniques that span multiple media types. We therefore present our approach towards a general theory of media editing, based on formal and informal theories of composition methods in individual domains (e.g., music, film): in a generalised manner, the concepts behind Soundium's NLE component are encapsulated in composition templates, which can be applied to arbitrary media content during performance.
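To illustrate what such a media-agnostic template might look like, here is a minimal sketch under our own assumptions; the class, field, and method names are hypothetical, since the abstract does not specify Soundium's actual interfaces. The point is that high-level editing parameters are bundled once and can then be applied to any timed media sequence.

    # Illustrative sketch of a media-agnostic "composition template".
    # All names are hypothetical; the abstract does not define Soundium's API.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Clip:
        source: str      # identifier of the underlying media (video, audio, ...)
        duration: float  # seconds

    @dataclass
    class CompositionTemplate:
        cutting_rate: float   # desired cuts per minute
        fade_type: str        # e.g. "hard", "crossfade"
        fade_duration: float  # seconds; ignored for hard cuts

        def apply(self, clips: List[Clip]) -> List[Tuple[str, float, float, str]]:
            """Lay out clips on a timeline as (source, start, length, transition).

            Each clip is trimmed to the segment length implied by the
            cutting rate, so the same template yields the same editing
            "feel" regardless of the media type behind each clip.
            """
            segment = 60.0 / self.cutting_rate
            timeline, t = [], 0.0
            for clip in clips:
                length = min(clip.duration, segment)
                timeline.append((clip.source, t, length, self.fade_type))
                overlap = self.fade_duration if self.fade_type != "hard" else 0.0
                t += length - overlap
            return timeline

    # Example: a fast-cutting template applied to mixed media.
    template = CompositionTemplate(cutting_rate=20.0, fade_type="hard", fade_duration=0.0)
    timeline = template.apply([Clip("video:a", 10.0), Clip("audio:b", 8.0)])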
Finally, experience with past projects has shown that methods arising from artistic experiments with novel content-authoring systems can be successfully employed in entertainment scenarios. We are convinced that ongoing artistic devotion will eventually lead to next-generation entertainment applications with large impact, such as successors to the currently emerging IPTV.

Bio

Stefan Müller Arisona is a lecturer and post-doctoral researcher at the Computer Systems Institute of ETH Zurich, Switzerland, and scientific chair of ETH's Digital Art Weeks. His main interests lie at the intersection of art and technology, in particular in the domain of live digital art. His research focuses on novel real-time multimedia systems and on live media composition and performance techniques. He is a founding member of Corebounce and co-author of the multimedia authoring software Soundium, which is frequently used for digital art installations and live performances by himself and his collaborators. A recent work, the Digital Marionette, is currently installed in the Ars Electronica Center's permanent exhibition. Stefan was recently granted a two-year research fellowship by the Swiss National Science Foundation (SNF) and will be a researcher at Media Arts and Technology (MAT) of the University of California, Santa Barbara from October 2007.

Associates

Pascal Müller
Computer Vision Laboratory
ETH Zurich
http://www.vision.ee.ethz.ch/~pmueller
Simon Schubiger-Banz
Corebounce Association
(see the IF07 paper: Large Screen Interaction in Public Space: TowerTalk and NVOA)
Matthias Specht
Anthropological Institute
University of Zurich
http://www.corebounce.org/specht
Links

Homepage: http://www.jg.inf.ethz.ch/wiki/SMA/Front
Corebounce Association: http://www.corebounce.org
Digital Art Weeks (an IF07 parallel event): http://www.digitalartweeks.ethz.ch