Proceedings Archive
This page contains a list of all publications from the NIME conferences.
Peer review: All papers have been peer-reviewed (most often by three international experts). See the list of reviewers. Only papers that were presented at the conferences (as a presentation, poster, or demo) are included.
Open access: NIME papers are open access (gold), and the copyright remains with the author(s). The NIME archive uses the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Public domain: The bibliographic information for NIME, including all BibTeX information and abstracts, is public domain. The list below is generated from a collection of BibTeX files hosted on GitHub using Jekyll Scholar.
PDFs: Individual papers are linked for each entry below. All PDFs are archived separately in Zenodo, where there are also Zip files for each year. If you just want to download everything quickly, you can find the Zip files here as well.
ISSN for the proceedings series: ISSN 2220-4806. Each year's ISBN is included in the BibTeX files and is also listed here.
Impact factor: Academic work should always be considered in its own right (cf. the DORA declaration). That said, the NIME proceedings are generally ranked highly in, for example, the Google Scholar ranking.
Ethics: Please take a look at NIME's Publication ethics and malpractice statement.
Contact: If you find any errors in the database, please feel free to fork and modify it on GitHub, or add an issue in the tracker.
NIME publications (in reverse chronological order)
2021
Stefano Fasciani and Jackson Goode. 2021. 20 NIMEs: Twenty Years of New Interfaces for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.b368bcd5
Abstract
This paper provides figures and metrics over twenty years of New Interfaces for Musical Expression conferences, which are derived by analyzing the publicly available paper proceedings. Besides presenting statistical information and a bibliometric study, we aim at identifying trends and patterns. The analysis shows the growth and heterogeneity of the NIME demographic, as well as the increase in research output. The data presented in this paper allows the community to reflect on several issues such as diversity and sustainability, and it provides insights to address challenges and set future directions.
@inproceedings{NIME21_1, author = {Fasciani, Stefano and Goode, Jackson}, title = {20 NIMEs: Twenty Years of New Interfaces for Musical Expression}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {1}, doi = {10.21428/92fbeb44.b368bcd5}, url = {https://nime.pubpub.org/pub/20nimes}, presentation-video = {https://youtu.be/44W7dB7lzQg} }
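As a rough illustration of the kind of tally underlying a bibliometric study of the proceedings, per-year publication counts can be derived directly from the BibTeX collection described above. The sketch below is an editorial illustration, not the authors' pipeline; the folder name bib/ is a placeholder for wherever the BibTeX files are checked out.

import glob
import re
from collections import Counter

# Count NIME papers per year from a local checkout of the proceedings BibTeX
# files ("bib/" is a hypothetical path). Each @inproceedings entry carries a
# "year = {....}" field, which is all this rough count needs.
counts = Counter()
for path in glob.glob("bib/*.bib"):
    with open(path, encoding="utf-8") as f:
        counts.update(re.findall(r"year\s*=\s*\{(\d{4})\}", f.read()))

for year in sorted(counts):
    print(year, counts[year])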
Raul Masu, Nuno N. Correia, and Teresa Romao. 2021. NIME Scores: a Systematic Review of How Scores Have Shaped Performance Ecologies in NIME. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.3ffad95a
Abstract
This paper investigates how the concept of score has been used in the NIME community. To this end, we performed a systematic literature review of the NIME proceedings, analyzing papers in which scores play a central role. We analyzed the score not as an object per se but in relation to the users and the interactive system(s). In other words, we primarily looked at the role that scores play in the performance ecology. For this reason, to analyze the papers, we relied on ARCAA, a recent framework created to investigate artifact ecologies in computer music performances. Using the framework, we created a scheme for each paper and clustered the papers according to similarities. Our analysis produced five main categories that we present and discuss in relation to literature about musical scores.
@inproceedings{NIME21_10, author = {Masu, Raul and Correia, Nuno N. and Romao, Teresa}, title = {NIME Scores: a Systematic Review of How Scores Have Shaped Performance Ecologies in NIME}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {10}, doi = {10.21428/92fbeb44.3ffad95a}, url = {https://nime.pubpub.org/pub/41cj1pyt}, presentation-video = {https://youtu.be/j7XmQvDdUPk} }
Christian Frisson, Mathias Bredholt, Joseph Malloch, and Marcelo M. Wanderley. 2021. MapLooper: Live-looping of distributed gesture-to-sound mappings. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.47175201
Abstract
This paper presents the development of MapLooper: a live-looping system for gesture-to-sound mappings. We first reviewed loop-based Digital Musical Instruments (DMIs). We then developed a connectivity infrastructure for wireless embedded musical instruments with distributed mapping and synchronization. We evaluated our infrastructure in the context of the real-time constraints of music performance. We measured a round-trip latency of 4.81 ms when mapping signals at 100 Hz with embedded libmapper and an average inter-onset delay of 3.03 ms for synchronizing with Ableton Link. On top of this infrastructure, we developed MapLooper: a live-looping tool with 2 example musical applications: a harp synthesizer with SuperCollider and embedded source-filter synthesis with FAUST on ESP32. Our system is based on a novel approach to mapping, extrapolating from using FIR and IIR filters on gestural data to using delay-lines as part of the mapping of DMIs. Our system features rhythmic time quantization and a flexible loop manipulation system for creative musical exploration. We open-source all of our components.
@inproceedings{NIME21_11, author = {Frisson, Christian and Bredholt, Mathias and Malloch, Joseph and Wanderley, Marcelo M.}, title = {MapLooper: Live-looping of distributed gesture-to-sound mappings}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {11}, doi = {10.21428/92fbeb44.47175201}, url = {https://nime.pubpub.org/pub/2pqbusk7}, presentation-video = {https://youtu.be/9r0zDJA8qbs} }
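The core idea of the paper above, treating a delay line as part of the mapping so that control data (rather than audio) is looped, can be sketched in a few lines. This is a generic illustration only, assuming a fixed control rate and a simple live/loop mix; it is not the MapLooper, libmapper, or FAUST implementation.

from collections import deque

class ControlLoop:
    """Loop gestural control data by routing it through a delay line."""
    def __init__(self, loop_frames, mix=0.5):
        # loop_frames: length of one loop in control frames (e.g. 100 at 100 Hz = 1 s)
        self.buffer = deque([0.0] * loop_frames, maxlen=loop_frames)
        self.mix = mix  # 0 = live gesture only, 1 = looped gesture only

    def process(self, live_value):
        looped_value = self.buffer[0]                      # value from one loop ago
        out = (1 - self.mix) * live_value + self.mix * looped_value
        self.buffer.append(out)                            # feed the output back into the loop
        return out                                         # mapped value sent on to synthesis

loop = ControlLoop(loop_frames=100)
for frame, gesture in enumerate([0.2, 0.4, 0.8, 0.5]):
    print(frame, loop.process(gesture))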
P. J. Charles Reimer and Marcelo M. Wanderley. 2021. Embracing Less Common Evaluation Strategies for Studying User Experience in NIME. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.807a000f
Abstract
Assessment of user experience (UX) is increasingly important in music interaction evaluation, as witnessed in previous NIME reviews describing varied and idiosyncratic evaluation strategies. This paper focuses on evaluations conducted in the last four years of NIME (2017 to 2020), compares results to previous research, and classifies evaluation types to describe how researchers approach and study UX in NIME. While results of this review confirm patterns such as the prominence of short-term, performer perspective evaluations, and the variety of evaluation strategies used, they also show that UX-focused evaluations are typically exploratory and limited to novice performers. Overall, these patterns indicate that current UX evaluation strategies do not address dynamic factors such as skill development, the evolution of the performer-instrument relationship, and hedonic and cognitive aspects of UX. To address such limitations, we discuss a number of less common tools developed within and outside of NIME that focus on dynamic aspects of UX, potentially leading to more informative and meaningful evaluation insights.
@inproceedings{NIME21_12, author = {Reimer, P. J. Charles and Wanderley, Marcelo M.}, title = {Embracing Less Common Evaluation Strategies for Studying User Experience in NIME}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {12}, doi = {10.21428/92fbeb44.807a000f}, url = {https://nime.pubpub.org/pub/fidgs435}, presentation-video = {https://youtu.be/WTaee8NVtPg} }
Takuto Fukuda, Eduardo Meneses, Travis West, and Marcelo M. Wanderley. 2021. The T-Stick Music Creation Project: An approach to building a creative community around a DMI. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.26f33210
Abstract
To tackle digital musical instrument (DMI) longevity and the problem of the second performer, we proposed the T-Stick Music Creation Project, a series of musical commissions along with workshops, mentorship, and technical support, meant to foment composition and performance using the T-Stick and provide an opportunity to improve technical and pedagogical support for the instrument. Based on the project’s outcomes, we describe three main contributions: our approach; the artistic works produced; and analysis of these works demonstrating the T-Stick as actuator, modulator, and data provider.
@inproceedings{NIME21_13, author = {Fukuda, Takuto and Meneses, Eduardo and West, Travis and Wanderley, Marcelo M.}, title = {The T-Stick Music Creation Project: An approach to building a creative community around a DMI}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {13}, doi = {10.21428/92fbeb44.26f33210}, url = {https://nime.pubpub.org/pub/7c4qdj4u}, presentation-video = {https://youtu.be/tfOUMr3p4b4} }
Doga Cavdir, Chris Clarke, Patrick Chiu, Laurent Denoue, and Don Kimber. 2021. Reactive Video: Movement Sonification for Learning Physical Activity with Adaptive Video Playback. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.eef53755
Abstract
This paper presents initial efforts in developing and evaluating a real-time movement sonification framework for physical activity practice and learning. Reactive Video provides an interactive, vision-based, adaptive video playback with auditory feedback on users’ performance to better support users when learning and practicing new physical skills. We implement the sonification for auditory feedback design by extending the Web Audio API framework. The current application focuses on Tai Chi performance and provides two main audio cues to users for several Tai Chi exercises. We provide our design approach, implementation, and sound generation and mapping, specifically for interactive systems with direct video manipulation. Our observations reveal the relationship between the movement-to-sound mapping and characteristics of the physical activity.
@inproceedings{NIME21_14, author = {Cavdir, Doga and Clarke, Chris and Chiu, Patrick and Denoue, Laurent and Kimber, Don}, title = {Reactive Video: Movement Sonification for Learning Physical Activity with Adaptive Video Playback}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {14}, doi = {10.21428/92fbeb44.eef53755}, url = {https://nime.pubpub.org/pub/dzlsifz6}, presentation-video = {https://youtu.be/pbvZI80XgEU} }
Daniel Chin, Ian Zhang, and Gus Xia. 2021. Hyper-hybrid Flute: Simulating and Augmenting How Breath Affects Octave and Microtone. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c09d91be
Abstract
We present hyper-hybrid flute, a new interface which can be toggled between its electronic mode and its acoustic mode. In its acoustic mode, the interface is identical to the regular six-hole recorder. In its electronic mode, the interface detects the player’s fingering and breath velocity and translates them to MIDI messages. Specifically, it maps higher breath velocity to higher octaves, with the modulo remainder controlling the microtonal pitch bend. This novel mapping reproduces a highly realistic flute-playing experience. Furthermore, changing the parameters easily augments the interface into a hyperinstrument that allows the player to control microtones more expressively via breathing techniques.
@inproceedings{NIME21_15, author = {Chin, Daniel and Zhang, Ian and Xia, Gus}, title = {Hyper-hybrid Flute: Simulating and Augmenting How Breath Affects Octave and Microtone}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {15}, doi = {10.21428/92fbeb44.c09d91be}, url = {https://nime.pubpub.org/pub/eshr}, presentation-video = {https://youtu.be/UIqsYK9F4xo} }
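The breath-to-pitch mapping described in the abstract above can be illustrated with a small sketch: breath velocity selects the octave band, and the remainder within the band becomes a microtonal pitch bend. The band width, base note, and bend range here are assumed values for illustration, not the authors' implementation.

def breath_to_note_and_bend(breath_velocity, band_width=0.25, base_note=60):
    """Map a normalised breath velocity (0..1) to a MIDI note and pitch-bend value."""
    octave = int(breath_velocity // band_width)               # which octave band we are in
    remainder = (breath_velocity % band_width) / band_width   # position within the band, 0..1
    note = base_note + 12 * octave                            # one octave up per band
    bend = int(8192 + (remainder - 0.5) * 8192)               # 8192 = centre (no bend), 14-bit range
    return note, bend

for v in (0.05, 0.30, 0.60, 0.95):
    print(v, breath_to_note_and_bend(v))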
Beat Rossmy and Alexander Wiethoff. 2021. Musical Grid Interfaces: Past, Present, and Future Directions. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.6a2451e6
Abstract
This paper examines grid interfaces which are currently used in many musical devices and instruments. This type of interface concept has been rooted in the NIME community since the early 2000s. We provide an overview of research projects and commercial products and conducted an expert interview as well as an online survey. In summary this work shares: (1) an overview on grid controller research, (2) a set of three usability issues deduced by a multi method approach, and (3) an evaluation of user perceptions regarding persistent usability issues and common reasons for the use of grid interfaces.
@inproceedings{NIME21_16, author = {Rossmy, Beat and Wiethoff, Alexander}, title = {Musical Grid Interfaces: Past, Present, and Future Directions}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {16}, doi = {10.21428/92fbeb44.6a2451e6}, url = {https://nime.pubpub.org/pub/grid-past-present-future}, presentation-video = {https://youtu.be/GuPIz2boJwA} }
Beat Rossmy, Sebastian Unger, and Alexander Wiethoff. 2021. TouchGrid – Combining Touch Interaction with Musical Grid Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.303223db
Abstract
Musical grid interfaces such as the monome grid have developed into standard interfaces for musical equipment over the last 15 years. However, the types of possible interactions have more or less remained the same, only expanding grid capabilities by external IO elements. Therefore, we propose to transfer capacitive touch technology to grid devices to expand their input capabilities by combining tangible and capacitive-touch based interaction paradigms. This makes it possible to keep the generic nature of grid interfaces, which is a key feature for many users. In this paper we present the TouchGrid concept and share our proof-of-concept implementation as well as an expert evaluation regarding the general concept of touch interaction used on grid devices. TouchGrid provides swipe and bezel interaction derived from smartphone interfaces to allow navigation between applications and access to menu systems in a familiar way.
@inproceedings{NIME21_17, author = {Rossmy, Beat and Unger, Sebastian and Wiethoff, Alexander}, title = {TouchGrid – Combining Touch Interaction with Musical Grid Interfaces}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {17}, doi = {10.21428/92fbeb44.303223db}, url = {https://nime.pubpub.org/pub/touchgrid}, presentation-video = {https://youtu.be/ti2h_WK5NeU} }
Corey Ford, Nick Bryan-Kinns, and Chris Nash. 2021. Creativity in Children’s Digital Music Composition. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e83deee9
Abstract
Composing is a neglected area of music education. To increase participation, many technologies provide open-ended interfaces to motivate child autodidactic use, drawing influence from Papert’s LOGO philosophy to support children’s learning through play. This paper presents a case study examining which interactions with Codetta, a LOGO-inspired, block-based music platform, supports children’s creativity in music composition. Interaction logs were collected from 20 children and correlated against socially-validated creativity scores. To conclude, we recommend that the transition between low-level edits and high-level processes should be carefully scaffolded.
@inproceedings{NIME21_18, author = {Ford, Corey and Bryan-Kinns, Nick and Nash, Chris}, title = {Creativity in Children's Digital Music Composition}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {18}, doi = {10.21428/92fbeb44.e83deee9}, url = {https://nime.pubpub.org/pub/ker5w948}, presentation-video = {https://youtu.be/XpMiDWrxXMU} }
Yinmiao Li, Ziyue Piao, and Gus Xia. 2021. A Wearable Haptic Interface for Breath Guidance in Vocal Training. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.6d342615
Abstract
Various studies have shown that haptic interfaces could enhance learning efficiency in music learning, but most existing studies focus on training motor skills of instrument playing such as finger motions. In this paper, we present a wearable haptic device to guide diaphragmatic breathing, which can be used in vocal training as well as the learning of wind instruments. The device is a wearable strap vest, consisting of a spinal exoskeleton on the back for inhalation and an elastic belt around the waist for exhalation. We first conducted case studies to assess how convenient and comfortable the device is to wear, and then evaluated its effectiveness in guiding rhythm and breath. Results show users’ acceptance of the haptic interface and the potential of haptic guidance in vocal training.
@inproceedings{NIME21_19, author = {Li, Yinmiao and Piao, Ziyue and Xia, Gus}, title = {A Wearable Haptic Interface for Breath Guidance in Vocal Training}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {19}, doi = {10.21428/92fbeb44.6d342615}, url = {https://nime.pubpub.org/pub/cgi7t0ta}, presentation-video = {https://youtu.be/-t-u0V-27ng} }
Lior Arbel. 2021. Aeolis: A Virtual Instrument Producing Pitched Tones With Soundscape Timbres. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.64f66047
Abstract
Ambient sounds such as breaking waves or rustling leaves are sometimes used in music recording, composition and performance. However, as these sounds lack a precise pitch, they cannot be used melodically. This work describes Aeolis, a virtual instrument producing pitched tones from a real-time ambient sound input using subtractive synthesis. The produced tones retain the identifiable timbres of the ambient sounds. Tones generated using input sounds from various environments, such as sea waves, rustling leaves and traffic noise, are analyzed. A configuration for a live in-situ performance is described, consisting of live streaming the produced sounds. In this configuration, the environment itself acts as a ‘performer’ of sorts, alongside the Aeolis player, providing both real-time input signals and complementary visual cues.
@inproceedings{NIME21_2, author = {Arbel, Lior}, title = {Aeolis: A Virtual Instrument Producing Pitched Tones With Soundscape Timbres}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {2}, doi = {10.21428/92fbeb44.64f66047}, url = {https://nime.pubpub.org/pub/c3w33wya}, presentation-video = {https://youtu.be/C0WEeaYy0tQ} }
Florent Berthaut. 2021. Musical Exploration of Volumetric Textures in Mixed and Virtual Reality. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.6607d04f
Abstract
The development of technologies for acquisition and display gives access to a large variety of volumetric (3D) textures, either synthetic or obtained through tomography. They constitute extremely rich data which is usually explored for informative purposes, in medical or engineering contexts. We believe that this exploration has a strong potential for musical expression. To that end, we propose a design space for the musical exploration of volumetric textures. We describe the challenges for its implementation in Virtual and Mixed Reality and we present a case study with an instrument called the Volume Sequencer, which we analyse using our design space. Finally, we evaluate the impact on expressive exploration of two dimensions, namely the amount of visual feedback and the selection variability.
@inproceedings{NIME21_20, author = {Berthaut, Florent}, title = {Musical Exploration of Volumetric Textures in Mixed and Virtual Reality}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {20}, doi = {10.21428/92fbeb44.6607d04f}, url = {https://nime.pubpub.org/pub/sqceyucq}, presentation-video = {https://youtu.be/C9EiA3TSUag} }
Abby Aresty and Rachel Gibson. 2021. Changing GEAR: The Girls Electronic Arts Retreat’s Teaching Interfaces for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.25757aca
Abstract
The Girls Electronic Arts Retreat (GEAR) is a STEAM summer camp for ages 8 - 11. In this paper, we compare and contrast lessons from the first two iterations of GEAR, including one in-person and one remote session. We introduce our Teaching Interfaces for Musical Expression (TIME) framework and use our analyses to compose a list of best practices in TIME development and implementation.
@inproceedings{NIME21_21, author = {Aresty, Abby and Gibson, Rachel}, title = {Changing GEAR: The Girls Electronic Arts Retreat's Teaching Interfaces for Musical Expression}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {21}, doi = {10.21428/92fbeb44.25757aca}, url = {https://nime.pubpub.org/pub/8lop0zj4}, presentation-video = {https://youtu.be/8qeFjNGaEHc} }
Anne Sophie Andersen and Derek Kwan. 2021. Grisey’s ’Talea’: Musical Representation As An Interactive 3D Map. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.27d09832
Abstract
The praxis of using detailed visual models to illustrate complex ideas is widespread in the sciences but less so in music theory. Taking the composer’s notes as a starting point, we have developed a complete interactive 3D model of Grisey’s Talea (1986). Our model presents a novel approach to music education and theory by making understanding of complex musical structures accessible to students and non-musicians, particularly those who struggle with traditional means of learning or whose mode of learning is predominantly visual. The model builds on the foundations of 1) the historical associations between visual and musical arts, and those concerning spectralists in particular, and 2) evidence of recurring cross-modal associations in the general population and consistent associations for individual synesthetes. Research into educational uses of the model is a topic for future exploration.
@inproceedings{NIME21_22, author = {Andersen, Anne Sophie and Kwan, Derek}, title = {Grisey’s 'Talea': Musical Representation As An Interactive 3D Map}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {22}, doi = {10.21428/92fbeb44.27d09832}, url = {https://nime.pubpub.org/pub/oiwz8bb7}, presentation-video = {https://youtu.be/PGYOkFjyrek} }
Enrique Tomás, Thomas Gorbach, Hilda Tellioğlu, and Martin Kaltenbrunner. 2021. Embodied Gestures: Sculpting Energy-Motion Models into Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.ce8139a8
Abstract
In this paper we discuss the beneficial aspects of incorporating energy-motion models as a design pattern in musical interface design. These models can be understood as archetypes of motion trajectories which are commonly applied in the analysis and composition of acousmatic music. With the aim of exploring a new possible paradigm for interface design, our research builds on the parallel investigation of embodied music cognition theory and the praxis of acousmatic music. After having run a large study for understanding a listener’s spontaneous rendering of form and movement, we built a number of digital instruments especially designed to emphasise a particular energy-motion profile. The evaluation through composition and performance indicates that this design paradigm can foster musical inventiveness and expression in the processes of composition and performance of gestural electronic music.
@inproceedings{NIME21_23, author = {Tomás, Enrique and Gorbach, Thomas and Tellioğlu, Hilda and Kaltenbrunner, Martin}, title = {Embodied Gestures: Sculpting Energy-Motion Models into Musical Interfaces}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {23}, doi = {10.21428/92fbeb44.ce8139a8}, url = {https://nime.pubpub.org/pub/gsx1wqt5}, presentation-video = {https://youtu.be/QDjCEnGYSC4} }
Raul Masu, Adam Pultz Melbye, John Sullivan, and Alexander Refsum Jensenius. 2021. NIME and the Environment: Toward a More Sustainable NIME Practice. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.5725ad8f
Abstract
This paper addresses environmental issues around NIME research and practice. We discuss the formulation of an environmental statement for the conference as well as the initiation of a NIME Eco Wiki containing information on environmental concerns related to the creation of new musical instruments. We outline a number of these concerns and, by systematically reviewing the proceedings of all previous NIME conferences, identify a general lack of reflection on the environmental impact of the research undertaken. Finally, we propose a framework for addressing the making, testing, using, and disposal of NIMEs in the hope that sustainability may become a central concern to researchers.
@inproceedings{NIME21_24, author = {Masu, Raul and Melbye, Adam Pultz and Sullivan, John and Jensenius, Alexander Refsum}, title = {NIME and the Environment: Toward a More Sustainable NIME Practice}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {24}, doi = {10.21428/92fbeb44.5725ad8f}, url = {https://nime.pubpub.org/pub/4bbl5lod}, presentation-video = {https://youtu.be/JE6YqYsV5Oo} }
Randall Harlow, Mattias Petersson, Robert Ek, Federico Visi, and Stefan Östersjö. 2021. Global Hyperorgan: a platform for telematic musicking and research. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.d4146b2d
Abstract
The Global Hyperorgan is an intercontinental, creative space for acoustic musicking. Existing pipe organs around the world are networked for real-time, geographically-distant performance, with performers utilizing instruments and other input devices to collaborate musically through the voices of the pipes in each location. A pilot study was carried out in January 2021, connecting two large pipe organs in Piteå, Sweden, and Amsterdam, the Netherlands. A quartet of performers tested the Global Hyperorgan’s capacities for telematic musicking through a series of pieces. The concept of modularity is useful when considering the artistic challenges and possibilities of the Global Hyperorgan. We observe how the modular system utilized in the pilot study afforded multiple experiences of shared instrumentality from which new, synthetic voices emerge. As a long-term technological, artistic and social research project, the Global Hyperorgan offers a platform for exploring technology, agency, voice, and intersubjectivity in hyper-acoustic telematic musicking.
@inproceedings{NIME21_25, author = {Harlow, Randall and Petersson, Mattias and Ek, Robert and Visi, Federico and Östersjö, Stefan}, title = {Global Hyperorgan: a platform for telematic musicking and research}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {25}, doi = {10.21428/92fbeb44.d4146b2d}, url = {https://nime.pubpub.org/pub/a626cbqh}, presentation-video = {https://youtu.be/t88aIXdqBWQ} }
Luis Zayas-Garin, Jacob Harrison, Robert Jack, and Andrew McPherson. 2021. DMI Apprenticeship: Sharing and Replicating Musical Artefacts. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.87f1d63e
Abstract
The nature of digital musical instruments (DMIs), often bespoke artefacts devised by single or small groups of technologists, requires thought about how they are shared and archived so that others can replicate or adapt designs. The ability for replication contributes to an instrument’s longevity and creates opportunities for both DMI designers and researchers. Research papers often omit necessary knowledge for replicating research artefacts, but we argue that mitigating this situation is not just about including design materials and documentation. Our way of approaching this issue is by drawing on an age-old method as a way of disseminating knowledge, the apprenticeship. We propose the DMI apprenticeship as a way of exploring the procedural obstacles of replicating DMIs, while highlighting for both apprentice and designer the elements of knowledge that are a challenge to communicate in conventional documentation. Our own engagement with the DMI apprenticeship led to successfully replicating an instrument, Strummi. Framing this process as an apprenticeship highlighted the non-obvious areas of the documentation and manufacturing process that are crucial in the successful replication of a DMI.
@inproceedings{NIME21_26, author = {Zayas-Garin, Luis and Harrison, Jacob and Jack, Robert and McPherson, Andrew}, title = {DMI Apprenticeship: Sharing and Replicating Musical Artefacts}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {26}, doi = {10.21428/92fbeb44.87f1d63e}, url = {https://nime.pubpub.org/pub/dmiapprenticeship}, presentation-video = {https://youtu.be/zTMaubJjlzA} }
Kelsey Cotton, Pedro Sanches, Vasiliki Tsaknaki, and Pavel Karpashevich. 2021. The Body Electric: A NIME designed through and with the somatic experience of singing. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.ec9f8fdd
Abstract
This paper presents the soma design process of creating Body Electric: a novel interface for the capture and use of biofeedback signals and physiological changes generated in the body by breathing, during singing. This NIME design is grounded in the performer’s experience of, and relationship to, their body and their voice. We show that NIME design using principles from soma design can offer creative opportunities in developing novel sensing mechanisms, which can in turn inform composition and further elicit curious engagements between performer and artefact, disrupting notions of performer-led control. As contributions, this work 1) offers an example of NIME design for situated living, feeling, performing bodies, and 2) presents the rich potential of soma design as a path for designing in this context.
@inproceedings{NIME21_27, author = {Cotton, Kelsey and Sanches, Pedro and Tsaknaki, Vasiliki and Karpashevich, Pavel}, title = {The Body Electric: A NIME designed through and with the somatic experience of singing}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {27}, doi = {10.21428/92fbeb44.ec9f8fdd}, url = {https://nime.pubpub.org/pub/ntm5kbux}, presentation-video = {https://youtu.be/zwzCgG8MXNA} }
Emma Frid and Alon Ilsar. 2021. Reimagining (Accessible) Digital Musical Instruments: A Survey on Electronic Music-Making Tools. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c37a2370
Abstract
This paper discusses findings from a survey on interfaces for making electronic music. We invited electronic music makers of varying experience to reflect on their practice and setup and to imagine and describe their ideal interface for music-making. We also asked them to reflect on the state of gestural controllers, machine learning, and artificial intelligence in their practice. A total of 118 people responded to the survey, with 40.68% being professional musicians and 10.17% identifying as living with a disability or access requirement. Results highlight limitations of music-making setups as perceived by electronic music makers, reflections on how imagined novel interfaces could address such limitations, and positive attitudes towards ML and AI in general.
@inproceedings{NIME21_28, author = {Frid, Emma and Ilsar, Alon}, title = {Reimagining (Accessible) Digital Musical Instruments: A Survey on Electronic Music-Making Tools}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {28}, doi = {10.21428/92fbeb44.c37a2370}, url = {https://nime.pubpub.org/pub/reimaginingadmis}, presentation-video = {https://youtu.be/vX8B7fQki_w} }
Jonathan Pitkin. 2021. SoftMRP: a Software Emulation of the Magnetic Resonator Piano. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.9e7da18f
Abstract
The Magnetic Resonator Piano (MRP) is a relatively well-established DMI which significantly expands the capabilities of the acoustic piano. This paper presents SoftMRP, a Max/MSP patch designed to emulate the physical MRP and thereby to allow rehearsal of MRP repertoire and performance techniques using any MIDI keyboard and expression pedal; it is hoped that the development of such a tool will encourage even more widespread adoption of the original instrument amongst composers and performers. This paper explains SoftMRP’s features and limitations, discussing the challenges of approximating responses which rely upon the MRP’s continuous sensing of key position, and considering ways in which the development of the emulation might feed back into the development of the original instrument, both specifically and more broadly: since it was designed by a composer, based on his experience of writing for the instrument, it offers the MRP’s designers an insight into how the instrument is conceptualised and understood by the musicians who use it.
@inproceedings{NIME21_29, author = {Pitkin, Jonathan}, title = {SoftMRP: a Software Emulation of the Magnetic Resonator Piano}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {29}, doi = {10.21428/92fbeb44.9e7da18f}, url = {https://nime.pubpub.org/pub/m9nhdm0p}, presentation-video = {https://youtu.be/Fw43nHVyGUg} }
Andreas Förster and Mathias Komesker. 2021. LoopBlocks: Design and Preliminary Evaluation of an Accessible Tangible Musical Step Sequencer. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.f45e1caf
Abstract
This paper presents the design and preliminary evaluation of an Accessible Digital Musical Instrument (ADMI) in the form of a tangible wooden step sequencer that uses photoresistors and wooden blocks to trigger musical events. Furthermore, the paper presents a short overview of design criteria for ADMIs based on literature and first insights of an ongoing qualitative interview study with German Special Educational Needs (SEN) teachers conducted by the first author. The preliminary evaluation is realized by a reflection on the mentioned criteria. The instrument was designed as a starting point for a participatory design process in music education settings. The software is programmed in Pure Data and running on a Raspberry Pi computer that fits inside the body of the instrument. While most similar developments focus on professional performance and complex interactions, LoopBlocks focuses on accessibility and Special Educational Needs settings. The main goal is to reduce the cognitive load needed to play music by providing a clear and constrained interaction, thus reducing intellectual and technical barriers to active music making.
@inproceedings{NIME21_3, author = {Förster, Andreas and Komesker, Mathias}, title = {LoopBlocks: Design and Preliminary Evaluation of an Accessible Tangible Musical Step Sequencer}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {3}, doi = {10.21428/92fbeb44.f45e1caf}, url = {https://nime.pubpub.org/pub/bj2w1gdx}, presentation-video = {https://youtu.be/u5o0gmB3MX8} }
Kyriakos Tsoukalas and Ivica Bukvic. 2021. Music Computing and Computational Thinking: A Case Study. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.1eeb3ada
Abstract
The NIME community has proposed a variety of interfaces that connect making music and education. This paper reviews current literature, proposes a method for developing educational NIMEs, and reflects on a way to manifest computational thinking through music computing. A case study is presented and discussed in which a programmable mechatronics educational NIME and a virtual simulation of the NIME offered as a web application were developed.
@inproceedings{NIME21_30, author = {Tsoukalas, Kyriakos and Bukvic, Ivica}, title = {Music Computing and Computational Thinking: A Case Study}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {30}, doi = {10.21428/92fbeb44.1eeb3ada}, url = {https://nime.pubpub.org/pub/t94aq9rf}, presentation-video = {https://youtu.be/pdsfZX_kJBo} }
Travis West, Baptiste Caramiaux, Stéphane Huot, and Marcelo M. Wanderley. 2021. Making Mappings: Design Criteria for Live Performance. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.04f0fc35
Abstract
We present new results combining data from a previously published study of the mapping design process and a new replication of the same method with a group of participants having different background expertise. Our thematic analysis of participants’ interview responses reveals some design criteria common to both groups of participants: mappings must manage the balance of control between the instrument and the player, and they should be easy to understand for the player and audience. We also consider several criteria that distinguish the two groups’ evaluation strategies. We conclude with a discussion of the mapping designer’s perspective, performance with gestural controllers, and the difficulties of evaluating mapping designs and musical instruments in general.
@inproceedings{NIME21_31, author = {West, Travis and Caramiaux, Baptiste and Huot, Stéphane and Wanderley, Marcelo M.}, title = {Making Mappings: Design Criteria for Live Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {31}, doi = {10.21428/92fbeb44.04f0fc35}, url = {https://nime.pubpub.org/pub/f1ueovwv}, presentation-video = {https://youtu.be/3hM531E_vlg} }
Andrea Martelloni, Andrew McPherson, and Mathieu Barthet. 2021. Guitar augmentation for Percussive Fingerstyle: Combining self-reflexive practice and user-centred design. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.2f6db6e6
Abstract
What is the relationship between a musician-designer’s auditory imagery for a musical piece, a design idea for an augmented instrument to support the realisation of that piece, and the aspiration to introduce the resulting instrument to a community of like-minded performers? We explore this NIME topic in the context of building the first iteration of an augmented acoustic guitar prototype for percussive fingerstyle guitarists. The first author, himself a percussive fingerstyle player, started the project of an augmented guitar with expectations and assumptions made around his own playing style, and in particular around the arrangement of one song. This input was complemented by the outcome of an interview study, in which percussive guitarists highlighted functional and creative requirements to suit their needs. We ran a pilot study to assess the resulting prototype, involving two other players. We present their feedback on two configurations of the prototype, one equalising the signal of surface sensors and the other based on sample triggering. The equalisation-based setting was better received, however both participants provided useful suggestions to improve the sample-triggering model following their own auditory imagery.
@inproceedings{NIME21_32, author = {Martelloni, Andrea and McPherson, Andrew and Barthet, Mathieu}, title = {Guitar augmentation for Percussive Fingerstyle: Combining self-reflexive practice and user-centred design}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {32}, doi = {10.21428/92fbeb44.2f6db6e6}, url = {https://nime.pubpub.org/pub/zgj85mzv}, presentation-video = {https://youtu.be/qeX6dUrJURY} }
Thomas Nuttall, Behzad Haki, and Sergi Jorda. 2021. Transformer Neural Networks for Automated Rhythm Generation. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.fe9a0d82
Abstract
Recent applications of Transformer neural networks in the field of music have demonstrated their ability to effectively capture and emulate long-term dependencies characteristic of human notions of musicality and creative merit. We propose a novel approach to automated symbolic rhythm generation, where a Transformer-XL model trained on the Magenta Groove MIDI Dataset is used for the tasks of sequence generation and continuation. Hundreds of generations are evaluated using blind-listening tests to determine the extent to which the aspects of rhythm we understand to be valuable are learnt and reproduced. Our model is able to achieve a standard of rhythmic production comparable to human playing across arbitrarily long time periods and multiple playing styles.
@inproceedings{NIME21_33, author = {Nuttall, Thomas and Haki, Behzad and Jorda, Sergi}, title = {Transformer Neural Networks for Automated Rhythm Generation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {33}, doi = {10.21428/92fbeb44.fe9a0d82}, url = {https://nime.pubpub.org/pub/8947fhly}, presentation-video = {https://youtu.be/Ul9s8qSMUgU} }
Derek Holzer, Henrik Frisk, and Andre Holzapfel. 2021. Sounds of Futures Passed: Media Archaeology and Design Fiction as NIME Methodologies. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.2723647f
Abstract
This paper provides a study of a workshop which invited composers, musicians, and sound designers to explore instruments from the history of electronic sound in Sweden. The workshop participants applied media archaeology methods towards analyzing one particular instrument from the past, the Dataton System 3000. They then applied design fiction methods towards imagining several speculative instruments of the future. Each stage of the workshop revealed very specific utopian ideas surrounding the design of sound instruments. After introducing the background and methods of the workshop, the authors present an overview and thematic analysis of the workshop’s outcomes. The paper concludes with some reflections on the use of this method-in-progress for investigating the ethics and affordances of historical electronic sound instruments. It also suggests the significance of ethics and affordances for the design of contemporary instruments.
@inproceedings{NIME21_34, author = {Holzer, Derek and Frisk, Henrik and Holzapfel, Andre}, title = {Sounds of Futures Passed: Media Archaeology and Design Fiction as NIME Methodologies}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {34}, doi = {10.21428/92fbeb44.2723647f}, url = {https://nime.pubpub.org/pub/200fpd5a}, presentation-video = {https://youtu.be/qBapYX7IOHA} }
Juliette Regimbal and Marcelo M. Wanderley. 2021. Interpolating Audio and Haptic Control Spaces. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.1084cb07
Abstract
Audio and haptic sensations have previously been linked in the development of NIMEs and in other domains like human-computer interaction. Most efforts to work with these modalities together tend to either treat haptics as secondary to audio, or conversely, audio as secondary to haptics, and design sensations in each modality separately. In this paper, we investigate the possibility of designing audio and vibrotactile effects simultaneously by interpolating audio-haptic control spaces. An inverse radial basis function method is used to dynamically create a mapping from a two-dimensional space to a many-dimensional control space for multimodal effects based on user-specified control points. Two proofs of concept were developed focusing on modifying the same structure across modalities and parallel structures.
@inproceedings{NIME21_35, author = {Regimbal, Juliette and Wanderley, Marcelo M.}, title = {Interpolating Audio and Haptic Control Spaces}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {35}, doi = {10.21428/92fbeb44.1084cb07}, url = {https://nime.pubpub.org/pub/zd2z1evu}, presentation-video = {https://youtu.be/eH3mn1Ad5BE} }
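The paper above describes an inverse radial basis function method for building a mapping from a two-dimensional space to a many-dimensional control space from user-specified control points. The sketch below shows the general idea using a standard Gaussian RBF interpolation rather than the paper's inverse formulation; the control points and parameter values are made up for illustration.

import numpy as np

def make_rbf_map(points_2d, params, sigma=0.3):
    """points_2d: (n, 2) control positions; params: (n, d) audio-haptic parameter sets."""
    points_2d = np.asarray(points_2d, dtype=float)
    params = np.asarray(params, dtype=float)

    def kernel(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2))           # Gaussian radial basis function

    weights = np.linalg.solve(kernel(points_2d, points_2d), params)  # fit one weight row per control point

    def interpolate(xy):
        xy = np.atleast_2d(xy).astype(float)
        return kernel(xy, points_2d) @ weights           # (m, d) interpolated parameter vectors

    return interpolate

# Three control points on a 2-D pad, each tied to a 4-D multimodal parameter set.
mapping = make_rbf_map([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]],
                       [[0.1, 0.9, 0.2, 0.0], [0.8, 0.1, 0.5, 1.0], [0.4, 0.4, 0.9, 0.5]])
print(mapping([0.5, 0.5]))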
Shelly Knotts. 2021. Algorithmic Power Ballads. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.548cca2b
Abstract
Algorithmic Power Ballads is a performance for saxophone and autonomous improvisor, with an optional third performer who can use the web interface to hand-write note sequences and adjust synthesis parameters. The performance system explores shifting power dynamics between acoustic, algorithmic and autonomous performers through modifying the amount of control and agency they have over the sound over the duration of the performance. A higher-level algorithm determines how strongly the machine listening algorithms, which analyse the saxophone input, influence the rhythmic and melodic patterns generated by the system. The autonomous improvisor is trained on power ballad melodies prior to the performance and, in lieu of influence from the saxophonist and live coder, strays towards melodic phrases from this musical style. The piece is written in JavaScript and the Web Audio API and uses MMLL, a browser-based machine listening library.
@inproceedings{NIME21_36, author = {Knotts, Shelly}, title = {Algorithmic Power Ballads}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {36}, doi = {10.21428/92fbeb44.548cca2b}, url = {https://nime.pubpub.org/pub/w2ubqkv4} }
Myungin Lee. 2021. Entangled: A Multi-Modal, Multi-User Interactive Instrument in Virtual 3D Space Using the Smartphone for Gesture Control. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.eae7c23f
Abstract
In this paper, Entangled, a multi-modal instrument in virtual 3D space with sound, graphics, and a smartphone-based gestural interface for multiple users, is introduced. Within the same network, the players can use their smartphone as the controller by entering a specific URL into their smartphone’s browser. After joining the network, by actuating the smartphone’s accelerometer, the players apply gravitational force to a swarm of particles in the virtual space. Machine learning-based gesture pattern recognition is used in parallel to increase the functionality of the gestural commands. Through this interface, the player can achieve intuitive control of gravitation in virtual reality (VR) space. The gravitation becomes the medium of the system involving physics, graphics, and sonification, which composes a multimodal compositional language with cross-modal correspondence. Entangled is built on AlloLib, which is a cross-platform suite of C++ components for building interactive multimedia tools and applications. Throughout the paper, the reason for each decision is elaborated, arguing for the importance of cross-modal correspondence in the design procedure.
@inproceedings{NIME21_37, author = {Lee, Myungin}, title = {Entangled: A Multi-Modal, Multi-User Interactive Instrument in Virtual 3D Space Using the Smartphone for Gesture Control}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {37}, doi = {10.21428/92fbeb44.eae7c23f}, url = {https://nime.pubpub.org/pub/4gt8wiy0}, presentation-video = {https://youtu.be/NjpXFYDvuZw} }
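The central interaction above, an accelerometer reading treated as a gravitational force acting on a particle swarm, can be sketched generically as follows. This is purely illustrative and not the AlloLib-based implementation; the damping, time step, and particle count are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(-1.0, 1.0, size=(64, 3))   # 64 particles in a 3-D virtual space
velocities = np.zeros_like(positions)

def step(accel, dt=0.02, damping=0.98):
    """Advance the swarm one frame; accel is a (3,) accelerometer vector used as gravity."""
    global positions, velocities
    velocities = damping * (velocities + np.asarray(accel, dtype=float) * dt)
    positions += velocities * dt

for _ in range(3):
    step([0.0, -9.8, 0.0])          # tilting the phone pulls the swarm "down"
print(positions.mean(axis=0))       # the swarm centroid could drive sonification parameters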
Notto J. W. Thelle and Philippe Pasquier. 2021. Spire Muse: A Virtual Musical Partner for Creative Brainstorming. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.84c0b364
Abstract
We present Spire Muse, a co-creative musical agent that engages in different kinds of interactive behaviors. The software utilizes corpora of solo instrumental performances encoded as self-organized maps and outputs slices of the corpora as concatenated, remodeled audio sequences. Transitions between behaviors can be automated, and the interface enables the negotiation of these transitions through feedback buttons that signal approval, force reversions to previous behaviors, or request change. Musical responses are embedded in a pre-trained latent space, emergent in the interaction, and influenced through the weighting of rhythmic, spectral, harmonic, and melodic features. The training and run-time modules utilize a modified version of the MASOM agent architecture. Our model stimulates spontaneous creativity and reduces the need for the user to sustain analytical mind frames, thereby optimizing flow. The agent traverses a system autonomy axis ranging from reactive to proactive, which includes the behaviors of shadowing, mirroring, and coupling. A fourth behavior—negotiation—is emergent from the interface between agent and user. The synergy of corpora, interactive modes, and influences induces musical responses along a musical similarity axis from converging to diverging. We share preliminary observations from experiments with the agent and discuss design challenges and future prospects.
@inproceedings{NIME21_38, author = {Thelle, Notto J. W. and Pasquier, Philippe}, title = {Spire Muse: A Virtual Musical Partner for Creative Brainstorming}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {38}, doi = {10.21428/92fbeb44.84c0b364}, url = {https://nime.pubpub.org/pub/wcj8sjee}, presentation-video = {https://youtu.be/4QMQNyoGfOs} }
Hans Leeuw. 2021. Virtuoso mapping for the Electrumpet, a hyperinstrument strategy. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.a8e0cceb
Abstract
This paper introduces a new Electrumpet control system that affords quick and easy access to all of its electro-acoustic features. The new implementation uses virtuosic gestures learned on the acoustic trumpet for quick electronic control, showing its effectiveness by controlling an innovative interactive harmoniser. Seamless transition from the smooth but rigid, often uncommunicative sound of the harmoniser to a more noisy, open and chaotic sound world required the addition of extra features and scenarios. This prepares the instrument for multiple musical environments, including free improvised settings with large sonic diversity. The system should particularly interest virtuoso improvising electroacoustic musicians and hyperinstrument players/developers who combine many musical styles in their art and who look to existing virtuosity as inspiration for electronic control.
@inproceedings{NIME21_39, author = {Leeuw, Hans}, title = {Virtuoso mapping for the Electrumpet, a hyperinstrument strategy}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {39}, doi = {10.21428/92fbeb44.a8e0cceb}, url = {https://nime.pubpub.org/pub/fxe52ym6}, presentation-video = {https://youtu.be/oHM_WfHOGUo} }
Filipe Calegario, João Tragtenberg, Christian Frisson, et al. 2021. Documentation and Replicability in the NIME Community. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.dc50e34d
Abstract
In this paper, we discuss the importance of replicability in Digital Musical Instrument (DMI) design and the NIME community. Replication enables us to: create new artifacts based on existing ones, experiment with DMIs in different contexts and cultures, and validate results obtained from evaluations. We investigate how the papers present artifact documentation and source code by analyzing the NIME proceedings from 2018, 2019, and 2020. We argue that the presence and the quality of documentation are good indicators of replicability and can be beneficial for the NIME community. Finally, we discuss the importance of documentation for replication, propose a call to action towards more replicable projects, and present a practical guide informing future steps toward replicability in the NIME community.
@inproceedings{NIME21_4, author = {Calegario, Filipe and Tragtenberg, João and Frisson, Christian and Meneses, Eduardo and Malloch, Joseph and Cusson, Vincent and Wanderley, Marcelo M.}, title = {Documentation and Replicability in the NIME Community}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {4}, doi = {10.21428/92fbeb44.dc50e34d}, url = {https://nime.pubpub.org/pub/czq0nt9i}, presentation-video = {https://youtu.be/ySh5SueLMAA} }
Anna Xambó, Gerard Roma, Sam Roig, and Eduard Solaz. 2021. Live Coding with the Cloud and a Virtual Agent. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.64c9f217
Abstract
Download PDF DOI
The use of crowdsourced sounds in live coding can be seen as an example of asynchronous collaboration. It is not uncommon for crowdsourced databases to return unexpected results to the queries submitted by a user. In such a situation, a live coder is likely to require some degree of additional filtering to adapt the results to her/his musical intentions. We refer to these context-dependent decisions as situated musical actions. Here, we present directions for designing a customisable virtual companion to help live coders in their practice. In particular, we introduce a machine learning (ML) model that, based on a set of examples provided by the live coder, filters the crowdsourced sounds retrieved from the Freesound online database at performance time. We evaluated a first illustrative model using objective and subjective measures. We tested a more generic live coding framework in two performances and two workshops, where several ML models were trained and used. We discuss the promising results for ML in education, live coding practices and the design of future NIMEs.
@inproceedings{NIME21_40, author = {Xambó, Anna and Roma, Gerard and Roig, Sam and Solaz, Eduard}, title = {Live Coding with the Cloud and a Virtual Agent}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {40}, doi = {10.21428/92fbeb44.64c9f217}, url = {https://nime.pubpub.org/pub/zpdgg2fg}, presentation-video = {https://youtu.be/F4UoH1hRMoU} }
Yixiao Zhang, Gus Xia, Mark Levy, and Simon Dixon. 2021. COSMIC: A Conversational Interface for Human-AI Music Co-Creation. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.110a7a32
Abstract
Download PDF DOI
In this paper, we propose COSMIC, a COnverSational Interface for Human-AI MusIc Co-Creation. It is a chatbot with a two-fold design philosophy: to understand human creative intent and to help humans in their creation. The core Natural Language Processing (NLP) module is responsible for three functions: 1) understanding human needs in chat, 2) cross-modal interaction between natural language understanding and music generation models, and 3) mixing and coordinating multiple algorithms to complete the composition.
@inproceedings{NIME21_41, author = {Zhang, Yixiao and Xia, Gus and Levy, Mark and Dixon, Simon}, title = {COSMIC: A Conversational Interface for Human-AI Music Co-Creation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {41}, doi = {10.21428/92fbeb44.110a7a32}, url = {https://nime.pubpub.org/pub/in6wsc9t}, presentation-video = {https://youtu.be/o5YO0ni7sng} }
Gershon Dublon and Xin Liu. 2021. Living Sounds: Live Nature Sound as Online Performance Space. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.b90e0fcb
Abstract
Download PDF DOI
This paper presents Living Sounds, an internet radio station and online venue hosted by nature. The virtual space is animated by live sound from a restored wetland wildlife sanctuary, spatially mixed from dozens of 24/7 streaming microphones across the landscape. The station’s guests are invited artists and others whose performances are responsive to and contingent upon the ever-changing environmental sound. Subtle, sound-active drawings by different visual designers anchor the one-page website. Using low-latency, high-fidelity WebRTC, our system allows guests to mix themselves in, remix the raw nature streams, or run our multichannel sources fully through their own processors. Created in early 2020 in response to the locked-down conditions of the COVID-19 pandemic, the site became a virtual oasis, with usage data showing long-duration visits. In collaboration with several festivals that went online in 2020, programmed live content included music, storytelling, and guided meditation. One festival commissioned a local microphone installation, resulting in a second nature source for the station: five channels of sound from a small Maine island. Catalyzed by recent events, when many have been separated from environments of inspiration and restoration, we propose Living Sounds as both a virtual nature space for cohabitation and a new kind of contingent online venue.
@inproceedings{NIME21_42, author = {Dublon, Gershon and Liu, Xin}, title = {Living Sounds: Live Nature Sound as Online Performance Space}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {42}, doi = {10.21428/92fbeb44.b90e0fcb}, url = {https://nime.pubpub.org/pub/46by9xxn}, presentation-video = {https://youtu.be/tE4YMDf-bQE} }
Nathan Villicaña-Shaw, Dale A. Carnegie, Jim Murphy, and Mo Zareei. 2021. Speculātor: visual soundscape augmentation of natural environments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e521c5a4
Abstract
Download PDF DOI
Speculātor is presented as a fist-sized, battery-powered, environmentally aware soundscape augmentation artifact that listens to the sonic environment and provides real-time illuminated visual feedback in reaction to what it hears. The visual soundscape augmentations these units offer allow for creating sonic art installations whose artistic subject is the unaltered in-situ sonic environment. Speculātor is designed to be quickly installed in exposed outdoor environments without power infrastructure to allow maximum flexibility when selecting exhibition locations. Data from light, temperature, and humidity sensors guide behavior to maximize soundscape augmentation effectiveness and protect artifacts from operating under dangerous environmental conditions. To highlight the music-like qualities of cicada vocalizations, installations conducted between October 2019 and March 2020, in which multiple Speculātor units were installed in outdoor natural locations, are presented as an initial case study.
@inproceedings{NIME21_43, author = {Villicaña-Shaw, Nathan and Carnegie, Dale A. and Murphy, Jim and Zareei, Mo}, title = {Speculātor: visual soundscape augmentation of natural environments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {43}, doi = {10.21428/92fbeb44.e521c5a4}, url = {https://nime.pubpub.org/pub/pxr0grnk}, presentation-video = {https://youtu.be/kP3fDzAHXDw} }
William Thompson and Edgar Berdahl. 2021. An Infinitely Sustaining Piano Achieved Through a Soundboard-Mounted Shaker . Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.2c4879f5
Abstract
Download PDF DOI
This paper outlines a demonstration of an acoustic piano augmentation that allows for infinite sustain of one or many notes. The result is a natural-sounding piano sustain that lasts for an unnatural period of time. Using a tactile shaker, a contact microphone and an amplitude-activated FFT-freeze Max patch, this system is easily assembled and creates an infinitely sustaining piano.
@inproceedings{NIME21_44, author = {Thompson, William and Berdahl, Edgar}, title = {An Infinitely Sustaining Piano Achieved Through a Soundboard-Mounted Shaker }, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {44}, doi = {10.21428/92fbeb44.2c4879f5}, url = {https://nime.pubpub.org/pub/cde9r70r}, presentation-video = {https://youtu.be/YRby0VdL8Nk} }
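The entry above describes an amplitude-activated FFT-freeze patch driving a soundboard shaker. As a very rough, offline illustration of that spectral-freeze idea (not the authors' Max patch; the frame size, threshold and phase randomisation below are assumptions), a small Python sketch:

import numpy as np

SR = 44100
FRAME = 2048
THRESHOLD = 0.1   # assumed RMS level that triggers the freeze

def freeze_tail(signal, seconds=2.0):
    """Find the first frame whose RMS exceeds THRESHOLD and synthesise a
    sustained tail by repeating its magnitude spectrum with random phase."""
    frozen_mag = None
    for start in range(0, len(signal) - FRAME, FRAME):
        frame = signal[start:start + FRAME]
        if np.sqrt(np.mean(frame ** 2)) > THRESHOLD:
            frozen_mag = np.abs(np.fft.rfft(frame * np.hanning(FRAME)))
            break
    if frozen_mag is None:
        return np.zeros(0)
    hop = FRAME // 2
    n_frames = int(seconds * SR / hop)
    out = np.zeros(n_frames * hop + FRAME)
    for i in range(n_frames):
        phase = np.random.uniform(-np.pi, np.pi, len(frozen_mag))
        grain = np.fft.irfft(frozen_mag * np.exp(1j * phase), FRAME)
        out[i * hop:i * hop + FRAME] += grain * np.hanning(FRAME)
    return out

# Toy input: a decaying 220 Hz tone standing in for a struck piano note.
t = np.arange(SR) / SR
note = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)
print("sustained tail length (s):", len(freeze_tail(note)) / SR)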
Michael Quigley and William Payne. 2021. Toneblocks: Block-based musical programming. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.46c0f6ef
Abstract
Download PDF DOI
Block-based coding environments enable novices to write code that bypasses the syntactic complexities of text. However, we see a lack of effective block-based tools that balance programming with expressive music making. We introduce Toneblocks, a prototype web application intended to be intuitive and engaging for novice users with interests in computer programming and music. Toneblocks is designed to lower the barrier to entry while increasing the ceiling of expression for advanced users. In Toneblocks, users produce musical loops ranging from static sequences to generative systems, and can manipulate their properties live. Pilot usability tests conducted with two participants provide evidence that the current prototype is easy to use and can produce complex musical output. An evaluation suggests potential future improvements, including user-defined variables and functions, and rhythmic variability.
@inproceedings{NIME21_45, author = {Quigley, Michael and Payne, William}, title = {Toneblocks: Block-based musical programming}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {45}, doi = {10.21428/92fbeb44.46c0f6ef}, url = {https://nime.pubpub.org/pub/qn6lqnzx}, presentation-video = {https://youtu.be/c64l1hK3QiY} }
Yi Wu and Jason Freeman. 2021. Ripples: An Auditory Augmented Reality iOS Application for the Atlanta Botanical Garden. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.b8e82252
Abstract
Download PDF DOI
This paper introduces “Ripples”, an iOS application for the Atlanta Botanical Garden that uses auditory augmented reality to provide an intuitive music guide by seamlessly integrating information about the garden into the visiting experience. For each point of interest nearby, “Ripples” generates music in real time, representing a location through data collected from users’ smartphones. The music is then overlaid onto the physical environment, and binaural spatialization indicates the real-world coordinates of the represented places. By taking advantage of the human auditory sense’s innate spatial sound source localization and source separation capabilities, “Ripples” makes navigation intuitive and information easy to understand.
@inproceedings{NIME21_46, author = {Wu, Yi and Freeman, Jason}, title = {Ripples: An Auditory Augmented Reality iOS Application for the Atlanta Botanical Garden}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {46}, doi = {10.21428/92fbeb44.b8e82252}, url = {https://nime.pubpub.org/pub/n1o19efr}, presentation-video = {https://youtu.be/T7EJVACX3QI} }
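A minimal sketch of the geometry such a location-driven guide could rely on, assuming GPS coordinates for the listener and each point of interest: derive a bearing and distance, then map them to a stereo pan and gain. The real app performs binaural rendering on iOS; the constant-power pan and the 1/d attenuation law below are illustrative stand-ins.

import math

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Great-circle initial bearing (degrees) and distance (metres) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    a = math.sin((phi2 - phi1) / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    distance = 2 * 6371000.0 * math.asin(math.sqrt(a))
    return bearing, distance

def pan_and_gain(bearing, heading, distance):
    """Constant-power stereo pan from the azimuth relative to the listener's heading, plus 1/d attenuation."""
    azimuth = math.radians((bearing - heading + 180.0) % 360.0 - 180.0)  # -pi..pi
    pan = math.sin(azimuth)                                              # -1 (left) .. +1 (right)
    left = math.cos((pan + 1.0) * math.pi / 4.0)
    right = math.sin((pan + 1.0) * math.pi / 4.0)
    gain = min(1.0, 10.0 / max(distance, 1.0))                           # assumed attenuation law
    return left * gain, right * gain

# Hypothetical listener facing north with a point of interest roughly to the east.
b, d = bearing_and_distance(33.790, -84.373, 33.790, -84.372)
print("pan/gain:", pan_and_gain(b, heading=0.0, distance=d))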
Thomas Lucas, Christophe d’Alessandro, and Serge de Laubier. 2021. Mono-Replay: a software tool for digitized sound animation. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.7b843efe
Abstract
Download PDF DOI
This article describes Mono-Replay, a software environment designed for sound animation. "Sound animation" in this context means musical performance based on various modes of replay and transformation of all kinds of recorded music samples. Sound animation using Mono-Replay is a two-step process, comprising an off-line analysis phase and an on-line performance or synthesis phase. The analysis phase proceeds with time segmentation and the setup of anchor points corresponding to temporal musical discourse parameters (notes, pulses, events). This allows, in the performance phase, for control of timing, playback position, playback speed, and a variety of spectral effects, with the help of gesture interfaces. Animation principles and software features of Mono-Replay are described. Two examples of sound animation based on beat tracking and transient detection algorithms are presented (a multi-track recording of Superstition by Stevie Wonder and Jeff Beck, and Accidents/Harmoniques, an electroacoustic piece by Bernard Parmegiani). With the help of these two contrasting examples, the fundamental principles of “sound animation” are reviewed: parameters of musical discourse, audio file segmentation, gestural control and interaction for animation at the performance stage.
@inproceedings{NIME21_47, author = {LUCAS, Thomas and d'Alessandro, Christophe and Laubier, Serge de}, title = {Mono-Replay : a software tool for digitized sound animation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {47}, doi = {10.21428/92fbeb44.7b843efe}, url = {https://nime.pubpub.org/pub/8lqitvvq}, presentation-video = {https://youtu.be/Ck79wRgqXfU} }
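An illustrative two-step pipeline in the spirit of the analysis/performance split described above, using librosa's onset detector as a stand-in for the paper's own segmentation (a substitution, not the authors' tool): an offline pass finds anchor points, and a performance-time function maps a normalised gesture value onto the audio region between two anchors.

import librosa
import numpy as np

def analyse(path):
    """Offline phase: load audio and return (samples, sample rate, anchor times in seconds)."""
    y, sr = librosa.load(path, sr=None, mono=True)
    anchors = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    return y, sr, np.concatenate([[0.0], anchors, [len(y) / sr]])

def segment_for_gesture(anchors, gesture):
    """Performance phase: map a gesture value in [0, 1] to the pair of anchor times
    bounding the playback region (position control; speed would scale how fast the
    gesture traverses this range)."""
    idx = int(np.clip(gesture, 0.0, 1.0) * (len(anchors) - 2))
    return anchors[idx], anchors[idx + 1]

if __name__ == "__main__":
    # librosa.example() fetches a small bundled audio file for demonstration.
    y, sr, anchors = analyse(librosa.example("trumpet"))
    print("anchor count:", len(anchors))
    print("region for gesture 0.5:", segment_for_gesture(anchors, 0.5))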
Ward J. Slager. 2021. Designing and performing with Pandora’s Box: transforming feedback physically and with algorithms. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.61b13baf
Abstract
Download PDF DOI
This paper discusses Pandora’s Box, a novel idiosyncratic electroacoustic instrument and performance utilizing feedback as its sound-generation principle. The instrument’s signal path consists of a closed loop through custom DSP algorithms and a spring. Pandora’s Box is played by tactile interaction with the spring and a control panel with faders and switches. The design and implementation are described, and the performance rituals are explained with reference to a video recording of a concert.
@inproceedings{NIME21_48, author = {Slager, Ward J.}, title = {Designing and performing with Pandora’s Box: transforming feedback physically and with algorithms}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {48}, doi = {10.21428/92fbeb44.61b13baf}, url = {https://nime.pubpub.org/pub/kx6d0553}, presentation-video = {https://youtu.be/s89Ycd0QkDI} }
Chris Chronopoulos. 2021. Quadrant: A Multichannel, Time-of-Flight Based Hand Tracking Interface for Computer Music. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.761367fd
Abstract
Download PDF DOI
Quadrant is a new human-computer interface based on an array of distance sensors. The hardware consists of 4 time-of-flight detectors and is designed to detect the position, velocity, and orientation of the user’s hand in free space. Signal processing is used to recognize gestures and other events, which we map to a variety of musical parameters to demonstrate possible applications. We have developed Quadrant as an open-hardware circuit board, which acts as a USB controller to a host computer.
@inproceedings{NIME21_49, author = {Chronopoulos, Chris}, title = {Quadrant: A Multichannel, Time-of-Flight Based Hand Tracking Interface for Computer Music}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {49}, doi = {10.21428/92fbeb44.761367fd}, url = {https://nime.pubpub.org/pub/quadrant}, presentation-video = {https://youtu.be/p8flHKv17Y8} }
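A back-of-the-envelope sketch of how four time-of-flight readings arranged under the hand could yield height, tilt and velocity, and how those could be scaled to MIDI-style control values. The sensor layout, the lack of smoothing and the particular mappings are assumptions for illustration; the open-hardware firmware will differ.

import numpy as np

def hand_state(distances, prev_height=None, dt=0.02):
    """distances: four readings in metres, ordered [front-left, front-right, back-left, back-right].
    Returns (height, roll, pitch, vertical velocity)."""
    fl, fr, bl, br = distances
    height = float(np.mean(distances))
    roll = (fl + bl) - (fr + br)      # left/right imbalance
    pitch = (fl + fr) - (bl + br)     # front/back imbalance
    velocity = 0.0 if prev_height is None else (height - prev_height) / dt
    return height, roll, pitch, velocity

def to_cc(value, lo, hi):
    """Clamp and scale a value into the 0..127 MIDI CC range."""
    return int(np.clip((value - lo) / (hi - lo), 0.0, 1.0) * 127)

h, roll, pitch, vel = hand_state([0.21, 0.19, 0.23, 0.20], prev_height=0.25)
print("filter cutoff CC:", to_cc(h, 0.05, 0.40))
print("pan CC:", to_cc(roll, -0.1, 0.1))
print("vibrato depth CC:", to_cc(abs(vel), 0.0, 2.0))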
Marinos Koutsomichalis. 2021. A Yellow Box with a Key Switch and a 1/4" TRS Balanced Audio Output. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.765a94a7
Abstract
Download PDF DOI
This short article presents a reductionist infra-instrument. It concerns a yellow die-cast aluminium box featuring only a key switch and a 1/4” TRS balanced audio output as its UI. On the turn of the key, the device performs a certain poem in Morse code via very-low-frequency acoustic pulses; in this way, it transforms poetry into bursts of intense acoustic energy that may resonate a hosting architecture and any human bodies therein. It is argued that the instrument functions at one and the same time as a critical/speculative electronic object, as an ad-hoc performance instrument, and as a piece of (conceptual) art in its own right.
@inproceedings{NIME21_5, author = {Koutsomichalis, Marinos}, title = {A Yellow Box with a Key Switch and a 1/4" TRS Balanced Audio Output}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {5}, doi = {10.21428/92fbeb44.765a94a7}, url = {https://nime.pubpub.org/pub/n69uznd4}, presentation-video = {https://youtu.be/_IUT0tbtkBI} }
Lisa Andersson López, Thelma Svenns, and Andre Holzapfel. 2021. Sensitiv – Designing a Sonic Co-play Tool for Interactive Dance. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.18c3fc2b
Abstract
Download PDF DOI
In the present study, a musician and a dancer explore the co-play between them through sensor technology. The main questions concern the placement and processing of motion sensors, and the choice of sound parameters that a dancer can manipulate. Results indicate that the sound parameters of delay and pitch altered the dancers’ experience most positively, and that placing sensors on each wrist and ankle, with a diagonal mapping of the sound parameters, was the most suitable.
@inproceedings{NIME21_50, author = {Andersson López, Lisa and Svenns, Thelma and Holzapfel, Andre}, title = {Sensitiv – Designing a Sonic Co-play Tool for Interactive Dance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {50}, doi = {10.21428/92fbeb44.18c3fc2b}, url = {https://nime.pubpub.org/pub/y1y5jolp}, presentation-video = {https://youtu.be/Mo8mVJJrqx8} }
Geise Santos, Johnty Wang, Carolina Brum, Marcelo M. Wanderley, Tiago Tavares, and Anderson Rocha. 2021. Comparative Latency Analysis of Optical and Inertial Motion Capture Systems for Gestural Analysis and Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.51b1c3a1
Abstract
Download PDF DOI
Wireless sensor-based technologies are becoming increasingly accessible and widely explored in interactive musical performance due to their ubiquity and low cost, which makes it necessary to understand the capabilities and limitations of these sensors. This is usually approached by using a reference system, such as an optical motion capture system, to assess the signals’ properties. However, this process raises the issue of synchronizing the signal and the reference data streams, as each sensor is subject to different latencies, time drifts, reference clocks and initialization timings. This paper presents an empirical quantification of latency across the communication stages in a setup consisting of a Qualisys optical motion capture (mocap) system and a wireless microcontroller-based sensor device. We performed event-to-end tests on the critical components of the hybrid setup to determine their suitability for synchronization. Overall, further synchronization is viable because both the mocap system and the wireless sensor interface exhibit similar average latencies of around 25 ms.
@inproceedings{NIME21_51, author = {Santos, Geise and Wang, Johnty and Brum, Carolina and Wanderley, Marcelo M. and Tavares, Tiago and Rocha, Anderson}, title = {Comparative Latency Analysis of Optical and Inertial Motion Capture Systems for Gestural Analysis and Musical Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {51}, doi = {10.21428/92fbeb44.51b1c3a1}, url = {https://nime.pubpub.org/pub/wmcqkvw1}, presentation-video = {https://youtu.be/a1TVvr9F7hE} }
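One simple way to estimate the offset between two streams that both capture the same sharp event (for example, a tap seen by a mocap marker and by the wireless accelerometer) is to cross-correlate them and read off the lag of the peak. The paper's event-to-end protocol is more involved; the synthetic signals and the 5 ms offset below are purely illustrative.

import numpy as np

def estimate_offset(ref, test, rate):
    """Return the lag in seconds by which `test` trails `ref` (peak of the cross-correlation)."""
    ref = ref - np.mean(ref)
    test = test - np.mean(test)
    corr = np.correlate(test, ref, mode="full")
    lag_samples = np.argmax(corr) - (len(ref) - 1)
    return lag_samples / rate

rate = 1000.0                                            # two 1 kHz data streams
t = np.arange(0, 1.0, 1 / rate)
tap = np.exp(-((t - 0.3) ** 2) / (2 * 0.005 ** 2))       # impulse-like event at 0.3 s
mocap = tap + 0.01 * np.random.randn(len(t))
sensor = np.roll(tap, 5) + 0.01 * np.random.randn(len(t))  # same event, 5 ms later
print("estimated offset: %.1f ms" % (estimate_offset(mocap, sensor, rate) * 1000))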
Henrique Portovedo, Paulo Ferreira Lopes, Ricardo Mendes, and Tiago Gala. 2021. HASGS: Five Years of Reduced Augmented Evolution. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.643abd8c
Abstract
Download PDF DOI
The work presented here is based on the Hybrid Augmented Saxophone of Gestural Symbioses (HASGS) system, with a focus on its evolution over the last five years and an emphasis on its functional structure and repertoire. The HASGS system was intended to retain focus on the performance of the acoustic instrument, keeping gestures centralised within the habitual practice of the instrument and reducing the use of external devices to control electronic parameters in mixed music. Taking a reduced approach, the technology chosen to prototype HASGS was developed in order to serve the aesthetic intentions of the pieces being written for it. This strategy proved to avoid an overload of solutions that could bring artefacts and superficial use of the augmentation processes, which sometimes occur in augmented instruments, especially those prototyped for improvisational intentionality. Here, we discuss how the repertoire, hardware, and software of the system can be mutually affected by this approach. We understand this project as an empirically based study that can both serve as a model for analysis and provide composers and performers with pathways and creative strategies for the development of augmentation processes.
@inproceedings{NIME21_52, author = {Portovedo, Henrique and Lopes, Paulo Ferreira and Mendes, Ricardo and Gala, Tiago}, title = {HASGS: Five Years of Reduced Augmented Evolution}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {52}, doi = {10.21428/92fbeb44.643abd8c}, url = {https://nime.pubpub.org/pub/1293exfw}, presentation-video = {https://youtu.be/wRygkMgx2Oc} }
Valérian Fraisse, Catherine Guastavino, and Marcelo M. Wanderley. 2021. A Visualization Tool to Explore Interactive Sound Installations. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.4fd9089c
Abstract
Download PDF DOI
This paper presents a theoretical framework for describing interactive sound installations, along with an interactive database, presented as a web application, for visualizing various features of sound installations. A corpus of 195 interactive sound installations was reviewed to derive a taxonomy describing them across three perspectives: Artistic Intention, Interaction and System Design. A web application is provided to dynamically visualize and explore the corpus of sound installations using interactive charts (https://isi-database.herokuapp.com/). Our contribution is twofold: we provide a theoretical framework to characterize interactive sound installations as well as a tool to inform sound artists and designers about up-to-date practices in interactive sound installation design.
@inproceedings{NIME21_53, author = {Fraisse, Valérian and Guastavino, Catherine and Wanderley, Marcelo M.}, title = {A Visualization Tool to Explore Interactive Sound Installations}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {53}, doi = {10.21428/92fbeb44.4fd9089c}, url = {https://nime.pubpub.org/pub/i1rx1t2e}, presentation-video = {https://youtu.be/MtIVB7P3bs4} }
Alice Eldridge, Chris Kiefer, Dan Overholt, and Halldor Ulfarsson. 2021. Self-resonating Vibrotactile Feedback Instruments ||: Making, Playing, Conceptualising :||. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.1f29a09e
Abstract
Download PDF DOI
Self-resonating vibrotactile instruments (SRIs) are hybrid feedback instruments, characterised by an electro-mechanical feedback loop that is both the means of sound production and the expressive interface. Through the lens of contemporary SRIs, we reflect on how they are characterised, designed, and played. By considering reports from designers and players of this species of instrument-performance system, we explore the experience of playing them. With a view to supporting future research and practice in the field, we illustrate the value of conceptualising SRIs in cybernetic and systems-theoretic terms and suggest that this offers an intuitive yet powerful basis for future performance, analysis and making. In doing so, we close the loop in the making, playing and conceptualisation of SRIs, with the aim of nourishing the evolution of theory and of creative and technical practice in this field.
@inproceedings{NIME21_54, author = {Eldridge, Alice and Kiefer, Chris and Overholt, Dan and Ulfarsson, Halldor}, title = {Self-resonating Vibrotactile Feedback Instruments {\textbar}{\textbar}: Making, Playing, Conceptualising :{\textbar}{\textbar}}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {54}, doi = {10.21428/92fbeb44.1f29a09e}, url = {https://nime.pubpub.org/pub/6mhrjiqt}, presentation-video = {https://youtu.be/EP1G4vCVm_E} }
Vincent Reynaert, Florent Berthaut, Yosra Rekik, and laurent grisoni. 2021. The Effect of Control-Display Ratio on User Experience in Immersive Virtual Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c47be986
Abstract
Download PDF DOI
Virtual reality (VR) offers novel possibilities of design choices for Digital Musical Instruments in terms of shapes, sizes, sounds or colours, removing many constraints inherent to physical interfaces. In particular, the size and position of the interface components of Immersive Virtual Musical Instruments (IVMIs) can be freely chosen to elicit large or small hand gestures. In addition, VR allows for the manipulation of what users visually perceive of their actual physical actions, through redirections and changes in Control-Display Ratio (CDR). Visual and gestural amplitudes can therefore be defined separately, potentially affecting the user experience in new ways. In this paper, we investigate the use of CDR to enrich the design with a control over the user perceived fatigue, sense of presence and musical expression. Our findings suggest that the CDR has an impact on the sense of presence, on the perceived difficulty of controlling the sound and on the distance covered by the hand. From these results, we derive a set of insights and guidelines for the design of IVMIs.
@inproceedings{NIME21_55, author = {Reynaert, Vincent and Berthaut, Florent and Rekik, Yosra and grisoni, laurent}, title = {The Effect of Control-Display Ratio on User Experience in Immersive Virtual Musical Instruments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {55}, doi = {10.21428/92fbeb44.c47be986}, url = {https://nime.pubpub.org/pub/8n8br4cc}, presentation-video = {https://youtu.be/d1DthYt8EUw} }
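The quantity studied above can be sketched in a few lines: with a control-display ratio (CDR), the virtual hand's displacement is the physical displacement scaled by the CDR about some anchor point, so visual and gestural amplitudes can be set independently. The anchor-based formulation below is a generic sketch, not the study's implementation.

import numpy as np

def displayed_position(physical_pos, anchor, cdr):
    """Virtual position = anchor + CDR * (physical position - anchor)."""
    return np.asarray(anchor) + cdr * (np.asarray(physical_pos) - np.asarray(anchor))

anchor = np.array([0.0, 1.2, 0.4])   # where the interaction started, in metres
hand = np.array([0.10, 1.2, 0.4])    # the physical hand moved 10 cm to the right
for cdr in (0.5, 1.0, 2.0):
    # CDR < 1 shrinks the visible gesture; CDR > 1 amplifies it relative to the physical one.
    print(cdr, displayed_position(hand, anchor, cdr))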
Alex Lucas, Jacob Harrison, Franziska Schroeder, and Miguel Ortiz. 2021. Cross-Pollinating Ecological Perspectives in ADMI Design and Evaluation. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.ff09de34
Abstract
Download PDF DOI
This paper explores ecological perspectives of human activity in the use of digital musical instruments and assistive technology. While such perspectives are relatively nascent in DMI design and evaluation, ecological frameworks have a long-standing foundation in occupational therapy and the design of assistive technology products and services. Informed by two case studies, the authors critique, compare and marry concepts from each domain to guide future research into accessible music technology. The authors discover that ecological frameworks used by occupational therapists are helpful in describing the nature of individual impairment, disability and situated context. However, such frameworks seemingly flounder when attempting to describe the personal value of music-making.
@inproceedings{NIME21_56, author = {Lucas, Alex and Harrison, Jacob and Schroeder, Franziska and Ortiz, Miguel}, title = {Cross-Pollinating Ecological Perspectives in ADMI Design and Evaluation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {56}, doi = {10.21428/92fbeb44.ff09de34}, url = {https://nime.pubpub.org/pub/d72sylsq}, presentation-video = {https://youtu.be/Khk05vKMrao} }
Matthew Skarha, Vincent Cusson, Christian Frisson, and Marcelo M. Wanderley. 2021. Le Bâton: A Digital Musical Instrument Based on the Chaotic Triple Pendulum. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.09ecc54d
Abstract
Download PDF DOI
This paper describes Le Bâton, a new digital musical instrument based on the nonlinear dynamics of the triple pendulum. The triple pendulum is a simple physical system constructed by attaching three pendulums vertically such that each joint can swing freely. When subjected to large oscillations, its motion is chaotic and is often described as unexpectedly mesmerizing. Le Bâton uses wireless inertial measurement units (IMUs) embedded in each pendulum arm to send real-time motion data to Max/MSP. Additionally, we implemented a control mechanism, allowing a user to remotely interact with it by setting the initial release angle. Here, we explain the motivation and design of Le Bâton and describe mapping strategies. To conclude, we discuss how the nature of its user interaction complicates its status as a digital musical instrument.
@inproceedings{NIME21_57, author = {Skarha, Matthew and Cusson, Vincent and Frisson, Christian and Wanderley, Marcelo M.}, title = {Le Bâton: A Digital Musical Instrument Based on the Chaotic Triple Pendulum}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {57}, doi = {10.21428/92fbeb44.09ecc54d}, url = {https://nime.pubpub.org/pub/uh1zfz1f}, presentation-video = {https://youtu.be/bLx5b9aqwgI} }
Claire Pelofi, Michal Goldstein, Dana Bevilacqua, Michael McPhee, Ellie Abrams, and Pablo Ripollés. 2021. CHILLER: a Computer Human Interface for the Live Labeling of Emotional Responses. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.5da1ca0b
Abstract
Download PDF DOI
The CHILLER (a Computer-Human Interface for the Live Labeling of Emotional Responses) is a prototype of an affordable and easy-to-use wearable sensor for the real-time detection and visualization of one of the most accurate biomarkers of musical emotional processing: the piloerection of the skin (i.e., the goosebumps) that accompany musical chills (also known as musical frissons or shivers down the spine). In controlled laboratory experiments, electrodermal activity (EDA) has been traditionally used to measure fluctuations of musical emotion. EDA is, however, ill-suited for real-world settings (e.g., live concerts) because of its sensitivity to movement, electronic noise and variations in the contact between the skin and the recording electrodes. The CHILLER, based on the Raspberry Pi architecture, overcomes these limitations by using a well-known algorithm capable of detecting goosebumps from a video recording of a patch of skin. The CHILLER has potential applications in both academia and industry and could be used as a tool to broaden participation in STEM, as it brings together concepts from experimental psychology, neuroscience, physiology and computer science in an inexpensive, do-it-yourself device well-suited for educational purposes.
@inproceedings{NIME21_58, author = {Pelofi, Claire and Goldstein, Michal and Bevilacqua, Dana and McPhee, Michael and Abrams, Ellie and Ripollés, Pablo}, title = {CHILLER: a Computer Human Interface for the Live Labeling of Emotional Responses}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {58}, doi = {10.21428/92fbeb44.5da1ca0b}, url = {https://nime.pubpub.org/pub/kdahf9fq}, presentation-video = {https://youtu.be/JujnpqoSdR4} }
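A very rough stand-in for the video-based goosebump detection mentioned above: piloerection adds fine texture to an otherwise smooth patch of skin, so one crude proxy is the high-frequency (Laplacian) energy of each greyscale frame. The CHILLER uses an established detection algorithm on a Raspberry Pi; the texture measure, threshold factor and synthetic frames below are illustrative assumptions only.

import numpy as np

def texture_energy(frame):
    """Mean squared discrete Laplacian of a 2-D greyscale frame (values in 0..1)."""
    lap = (-4.0 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    return float(np.mean(lap ** 2))

def chill_detected(frame, baseline, factor=3.0):
    """Flag a frame whose fine-texture energy rises well above the smooth-skin baseline."""
    return texture_energy(frame) > factor * baseline

rng = np.random.default_rng(0)
smooth = 0.5 + 0.01 * rng.standard_normal((120, 160))   # smooth skin patch
bumpy = smooth + 0.1 * np.sin(np.arange(160) * 1.5)     # same patch with fine ripples added
baseline = texture_energy(smooth)
print("smooth frame chill:", chill_detected(smooth, baseline))
print("bumpy frame chill:", chill_detected(bumpy, baseline))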
Jeffrey A. T. Lupker. 2021. Score-Transformer: A Deep Learning Aid for Music Composition. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.21d4fd1f
Abstract
Download PDF DOI
Creating an artificially intelligent (AI) aid for music composers requires a practical and modular approach, one that allows the composer to manipulate the technology when needed in the search for new sounds. Many existing approaches fail to capture the interest of composers as they are limited beyond their demonstrative purposes, allow for only minimal interaction from the composer, or require GPU access to generate samples quickly. This paper introduces Score-Transformer (ST), a practical integration of deep learning technology to aid in the creation of new music, which works seamlessly alongside any popular notation software (Finale, Sibelius, etc.). Score-Transformer is built upon a variant of the powerful transformer model, currently used in state-of-the-art natural language models. Owing to hierarchical and sequential similarities between music and language, the transformer model can learn to write polyphonic MIDI music based on any styles, genres, or composers it is trained upon. This paper briefly outlines how the model learns and later notates music based upon any prompt given to it by the user. Furthermore, ST can be updated at any time on additional MIDI recordings, minimizing the risk of the software becoming outdated or impractical for continued use.
@inproceedings{NIME21_59, author = {Lupker, Jeffrey A. T.}, title = {Score-Transformer: A Deep Learning Aid for Music Composition}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {59}, doi = {10.21428/92fbeb44.21d4fd1f}, url = {https://nime.pubpub.org/pub/7a6ij1ak}, presentation-video = {https://youtu.be/CZO8nj6YzVI} }
Jon Gillick and David Bamman. 2021. What to Play and How to Play it: Guiding Generative Music Models with Multiple Demonstrations. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.06e2d5f4
Abstract
Download PDF DOI
We propose and evaluate an approach to incorporating multiple user-provided inputs, each demonstrating a complementary set of musical characteristics, to guide the output of a generative model for synthesizing short music performances or loops. We focus on user inputs that describe both “what to play” (via scores in MIDI format) and “how to play it” (via rhythmic inputs to specify expressive timing and dynamics). Through experiments, we demonstrate that our method can facilitate human-AI co-creation of drum loops with diverse and customizable outputs. In the process, we argue for the interaction paradigm of mapping by demonstration as a promising approach to working with deep learning models that are capable of generating complex and realistic musical parts.
@inproceedings{NIME21_6, author = {Gillick, Jon and Bamman, David}, title = {What to Play and How to Play it: Guiding Generative Music Models with Multiple Demonstrations}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {6}, doi = {10.21428/92fbeb44.06e2d5f4}, url = {https://nime.pubpub.org/pub/s3x60926}, presentation-video = {https://youtu.be/Q2M_smiN6oo} }
Romain Michon, Catinca Dumitrascu, Sandrine Chudet, Yann Orlarey, Stéphane Letz, and Dominique Fober. 2021. Amstramgrame: Making Scientific Concepts More Tangible Through Music Technology at School. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.a84edd3f
Abstract
Download PDF DOI
Amstramgrame is a music technology STEAM (Science, Technology, Engineering, Arts and Mathematics) project aiming to make abstract scientific concepts more tangible through the programming of a Digital Musical Instrument (DMI): the Gramophone. Various custom tools, ranging from online programming environments to the Gramophone itself, have been developed as part of this project. An innovative method anchored in the reality of the field, as well as a wide range of turnkey pedagogical scenarios, are also part of the Amstramgrame toolkit. This article presents the tools and the method of Amstramgrame as well as the results of its pilot phase. Future directions, along with some insights on the implementation of this kind of project, are provided as well.
@inproceedings{NIME21_60, author = {Michon, Romain and Dumitrascu, Catinca and Chudet, Sandrine and Orlarey, Yann and Letz, Stéphane and Fober, Dominique}, title = {Amstramgrame: Making Scientific Concepts More Tangible Through Music Technology at School}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {60}, doi = {10.21428/92fbeb44.a84edd3f}, url = {https://nime.pubpub.org/pub/3zeala6v}, presentation-video = {https://youtu.be/KTgl4suQ_Ks} }
Vivian Reuter and Lorenz Schwarz. 2021. Wireless Sound Modules. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.07c72a46
Abstract
Download PDF DOI
We study the question of how wireless, self-contained CMOS-synthesizers with built-in speakers can be used to achieve low-threshold operability of multichannel sound fields. We deliberately use low-tech and DIY approaches to build simple sound modules for music interaction and education in order to ensure accessibility of the technology. The modules are operated by wireless power transfer (WPT). A multichannel sound field can be easily generated and modulated by placing several sound objects in proximity to the induction coils. Alterations in sound are caused by repositioning, moving or grouping the sound modules. Although not physically linked to each other, the objects start interacting electro-acoustically when they share the same magnetic field. Because they are equipped with electronic sound generators and transducers, the sound modules can work independently from a sound studio situation.
@inproceedings{NIME21_61, author = {Reuter, Vivian and Schwarz, Lorenz}, title = {Wireless Sound Modules}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {61}, doi = {10.21428/92fbeb44.07c72a46}, url = {https://nime.pubpub.org/pub/muvvx0y5}, presentation-video = {https://youtu.be/08kfv74Z880} }
Joshua Ryan Lam and Charalampos Saitis. 2021. The Timbre Explorer: A Synthesizer Interface for Educational Purposes and Perceptual Studies. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.92a95683
Abstract
Download PDF DOI
When two sounds are played at the same loudness, pitch, and duration, what sets them apart are their timbres. This study documents the design and implementation of the Timbre Explorer, a synthesizer interface based on efforts to dimensionalize this perceptual concept. The resulting prototype controls four perceptually salient dimensions of timbre in real time: attack time, brightness, spectral flux, and spectral density. A graphical user interface supports user understanding with live visualizations of the effects of each dimension. The applications of this interface are threefold: further perceptual timbre studies, use as a practical shortcut for synthesizers, and educating users about the frequency domain, sound synthesis, and the concept of timbre. The project has since been expanded to a standalone version independent of a computer and a purely online web-audio version.
@inproceedings{NIME21_62, author = {Lam, Joshua Ryan and Saitis, Charalampos}, title = {The Timbre Explorer: A Synthesizer Interface for Educational Purposes and Perceptual Studies}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {62}, doi = {10.21428/92fbeb44.92a95683}, url = {https://nime.pubpub.org/pub/q5oc20wg}, presentation-video = {https://youtu.be/EJ0ZAhOdBTw} }
Maria Svahn, Josefine Hölling, Fanny Curtsson, and Nina Nokelainen. 2021. The Rullen Band. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e795c9b5
Abstract
Download PDF DOI
Music education is an important part of the school curriculum; it teaches children to be creative and to collaborate with others. Music gives individuals another medium to communicate through, which is especially important for individuals with cognitive or physical disabilities. Teachers of children with severe disabilities have expressed a lack of musical instruments adapted for these children, which leads to an incomplete music education for this group. This study aims at designing and evaluating a set of collaborative musical instruments for children with cognitive and physical disabilities, and the research was done together with the special education school Rullen in Stockholm, Sweden. The process was divided into three main parts: a pre-study, building and designing, and finally a user study. Based on findings from previous research, together with input received from teachers at Rullen during the pre-study, the resulting design consists of four musical instruments that are connected to a central hub. The results show that the instruments functioned as intended and that the design makes musical learning accessible in a way traditional instruments do not, as well as creates a good basis for a collaborative musical experience. However, fully evaluating the effect of playing together requires more time for the children to get comfortable with the instruments, and also for the experiment leaders to test different setups to optimize the conditions for good interplay.
@inproceedings{NIME21_63, author = {Svahn, Maria and Hölling, Josefine and Curtsson, Fanny and Nokelainen, Nina}, title = {The Rullen Band}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {63}, doi = {10.21428/92fbeb44.e795c9b5}, url = {https://nime.pubpub.org/pub/pvd6davm}, presentation-video = {https://youtu.be/2cD9f493oJM} }
Stefan Püst, Lena Gieseke, and Angela Brennecke. 2021. Interaction Taxonomy for Sequencer-Based Music Performances. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.0d5ab18d
Abstract
Download PDF DOI
Sequencer-based live performances of electronic music require a variety of interactions. These interactions depend strongly on the affordances and constraints of the instrument used, and musicians may perceive the interactions offered by their instrument as limiting. To further the development of instruments for live performance and expand the interaction possibilities, a systematic overview of interactions in current sequencer-based music performance is first needed. To that end, we propose a taxonomy of interactions in sequencer-based performances of electronic music. We identify two performance modes, sequencing and sound design, and four interaction classes: creation, modification, selection, and evaluation. Furthermore, we discuss the influence of the different interaction classes on both musicians and the audience, and use the proposed taxonomy to analyze six commercially available hardware devices.
@inproceedings{NIME21_64, author = {Püst, Stefan and Gieseke, Lena and Brennecke, Angela}, title = {Interaction Taxonomy for Sequencer-Based Music Performances}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {64}, doi = {10.21428/92fbeb44.0d5ab18d}, url = {https://nime.pubpub.org/pub/gq2ukghi}, presentation-video = {https://youtu.be/c4MUKWpneg0} }
Isabela Corintha and Giordano Cabral. 2021. Improvised Sound-Making within Musical Apprenticeship and Enactivism: An Intersection between the 4E's Model and DMIs. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.56a01d33
Abstract
Download PDF DOI
From an epistemological perspective, this work presents a discussion of how the paradigm of enactive music cognition relates to improvisation in the context of the skills and needs of 21st-century music learners. Improvisation in music education is addressed from the perspective of an alternative but increasingly influential enactive approach to mind (Varela et al., 1993), followed by the four theories known as the 4Es of cognition (embedded, embodied, enactive and extended), which naturally have characteristics in common that led them to be grouped in this way. I discuss the “autopoietic” nature of the embodied musical mind, that is, its character as a self-maintaining system that reproduces itself over time based on its own set of internal rules. To conclude, an overview of the enactivist approach within DMI design is outlined, in order to provide a better understanding of the experiences and benefits of using new technologies in musical learning contexts.
@inproceedings{NIME21_65, author = {Corintha, Isabela and Cabral, Giordano}, title = {Improvised Sound-Making within Musical Apprenticeship and Enactivism: An Intersection between the 4E`s Model and DMIs}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {65}, doi = {10.21428/92fbeb44.56a01d33}, url = {https://nime.pubpub.org/pub/e4lsrn6c}, presentation-video = {https://youtu.be/dGb5tl_tA58} }
Tim Murray-Browne and Panagiotis Tigas. 2021. Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.9d4bcd4b
Abstract
Download PDF DOI
In many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program it in a signal-processing patch. Tools like Wekinator [1] and MIMIC [2] allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm, such as a Variational Autoencoder, trained on a corpus of unlabelled gestural data from the creator. We illustrate it with Sonified Body, a system mapping full-body movement to sound, which we explore in a residency with three dancers.
@inproceedings{NIME21_66, author = {Murray-Browne, Tim and Tigas, Panagiotis}, title = {Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {66}, doi = {10.21428/92fbeb44.9d4bcd4b}, url = {https://nime.pubpub.org/pub/latent-mappings}, presentation-video = {https://youtu.be/zBOHWyIGaYc} }
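A condensed sketch of the latent-mapping idea, assuming PyTorch: a small variational autoencoder is trained on unlabelled gesture features, and its latent coordinates are then used directly as synthesis controls. The network sizes, the random stand-in corpus and the cutoff/grain-rate mapping are assumptions, not the Sonified Body implementation.

import torch
from torch import nn

class GestureVAE(nn.Module):
    def __init__(self, n_features=30, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.mu = nn.Linear(64, n_latent)
        self.logvar = nn.Linear(64, n_latent)
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + 0.01 * kld

corpus = torch.randn(512, 30)          # stand-in for recorded pose/gesture features
model = GestureVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    recon, mu, logvar = model(corpus)
    loss = vae_loss(recon, corpus, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At performance time: encode the live frame and use the latent coordinates as controls.
with torch.no_grad():
    z = model.mu(model.encoder(corpus[:1]))
cutoff_hz = 200.0 + 4000.0 * torch.sigmoid(z[0, 0]).item()   # assumed mapping to a filter cutoff
grain_rate = 1.0 + 30.0 * torch.sigmoid(z[0, 1]).item()      # assumed mapping to a grain rate
print(cutoff_hz, grain_rate)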
Graham Wakefield. 2021. A streamlined workflow from Max/gen~ to modular hardware. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e32fde90
Abstract
Download PDF DOI
This paper describes Oopsy, which provides a streamlined process for editing digital signal processing algorithms for precise and sample-accurate sound generation, transformation and modulation, and placing them in the context of embedded hardware and modular synthesizers. This pipeline gives digital instrument designers the development flexibility of established software with the deployment benefits of working on hardware. Specifically, algorithm design takes place in the flexible context of gen~ in Max, and Oopsy automatically and fluently translates this and uploads it to the open-ended Daisy embedded hardware. The paper locates this work in the context of related software/hardware workflows, and provides detail of its contributions in design, implementation, and use.
@inproceedings{NIME21_67, author = {Wakefield, Graham}, title = {A streamlined workflow from Max/gen{\textasciitilde} to modular hardware}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {67}, doi = {10.21428/92fbeb44.e32fde90}, url = {https://nime.pubpub.org/pub/0u3ruj23}, presentation-video = {https://youtu.be/xJwI9F9Spbo} }
Roger B. Dannenberg. 2021. Canons for Conlon: Composing and Performing Multiple Tempi on the Web. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.a41fe2c5
Abstract
Download PDF DOI
In response to the 2020 pandemic, a new work was composed inspired by the limitations and challenges of performing over the network. Since synchronization is one of the big challenges, or perhaps something to be avoided due to network latency, this work explicitly calls for desynchronization in a controlled way, using metronomes running at different rates to take performers in and out of approximate synchronization. A special editor was developed to visualize the music because conventional editors do not support multiple continuously varying tempi.
@inproceedings{NIME21_68, author = {Dannenberg, Roger B.}, title = {Canons for Conlon: Composing and Performing Multiple Tempi on the Web}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {68}, doi = {10.21428/92fbeb44.a41fe2c5}, url = {https://nime.pubpub.org/pub/jxo0v8r7}, presentation-video = {https://youtu.be/MhcZyE2SCck} }
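A tiny numerical sketch of the controlled desynchronisation described above: metronomes at different tempi drift apart and periodically realign. The tempi and the 20 ms coincidence window are arbitrary illustrative choices, not values from the piece.

import numpy as np

def beat_times(bpm, duration):
    """Beat onset times (seconds) for a metronome at `bpm` over `duration` seconds."""
    period = 60.0 / bpm
    return np.arange(0.0, duration, period)

def near_coincidences(times_a, times_b, window=0.02):
    """Moments where a beat of metronome A falls within `window` seconds of a beat of B."""
    hits = []
    for t in times_a:
        if np.min(np.abs(times_b - t)) < window:
            hits.append(round(float(t), 3))
    return hits

a = beat_times(60, 30)   # performer at 60 BPM
b = beat_times(66, 30)   # performer at 66 BPM
print("approximate alignments in 30 s:", near_coincidences(a, b))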
Artemi-Maria Gioti. 2021. A Compositional Exploration of Computational Aesthetic Evaluation and AI Bias. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.de74b046
Abstract
Download PDF DOI
This paper describes a subversive compositional approach to machine learning, focused on the exploration of AI bias and computational aesthetic evaluation. In Bias, for bass clarinet and Interactive Music System, a computer music system using two Neural Networks trained to develop “aesthetic bias” interacts with the musician by evaluating the sound input based on its “subjective” aesthetic judgments. The composition problematizes the discrepancies between the concepts of error and accuracy, associated with supervised machine learning, and aesthetic judgments as inherently subjective and intangible. The methods used in the compositional process are discussed with respect to the objective of balancing the trade-off between musical authorship and interpretative freedom in interactive musical works.
@inproceedings{NIME21_69, author = {Gioti, Artemi-Maria}, title = {A Compositional Exploration of Computational Aesthetic Evaluation and AI Bias.}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {69}, doi = {10.21428/92fbeb44.de74b046}, url = {https://nime.pubpub.org/pub/zpvgmv74}, presentation-video = {https://youtu.be/9l8NeGmvpDU} }
Paul Dunham, Mo H. Zareei, Dale Carnegie, and Dugal McKinnon. 2021. Click::RAND#2. An Indeterminate Sound Sculpture. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.5cc6d157
Abstract
Download PDF DOI
Can random digit data be transformed and utilized as a sound installation that provides a referential connection between a book and the electromechanical computer? What happens when the text of A Million Random Digits with 100,000 Normal Deviates is ‘vocalized’ by an electro-mechanical object? Using a media archaeological research approach, Click::RAND#2, an indeterminate sound sculpture utilising relays as sound objects, is an audio-visual reinterpretation and representation of an historical relationship between a book of random digits and the electromechanical relay. Developed by the first author, Click::RAND#2 is the physical re-presentation of random digit data sets as compositional elements to complement the physical presence of the work through spatialized sound patterns framed within the context of Henri Lefebvre’s rhythmanalysis and experienced as synchronous, syncopated or discordant rhythms.
@inproceedings{NIME21_7, author = {Dunham, Paul and Zareei, Dr. Mo H. and Carnegie, Prof. Dale and McKinnon, Dr. Dugal}, title = {Click::RAND#2. An Indeterminate Sound Sculpture}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {7}, doi = {10.21428/92fbeb44.5cc6d157}, url = {https://nime.pubpub.org/pub/lac4s48h}, presentation-video = {https://youtu.be/vJynbs8txuA} }
Raghavasimhan Sankaranarayanan and Gil Weinberg. 2021. Design of Hathaani - A Robotic Violinist for Carnatic Music. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.0ad83109
Abstract
Download PDF DOI
We present a novel robotic violinist that is designed to play Carnatic music, a music system popular in the southern part of India. The robot plays the D string and uses a single-finger mechanism inspired by the Chitravina, a fretless Indian lute. A fingerboard traversal system with a dynamic fingertip apparatus enables the robot to play gamakas, pitch-based embellishments between notes, which are at the core of Carnatic music. A double-roller design is used for bowing, which reduces space, produces a tone that resembles the tone of a conventional violin bow, and facilitates superhuman playing techniques such as infinite bowing. The design also enables the user to change the bow hair tightness to help capture a variety of performing techniques in different musical styles. Objective assessments and subjective listening tests were conducted to evaluate our design, indicating that the robot can play gamakas in a realistic manner and thus can perform Carnatic music.
@inproceedings{NIME21_70, author = {Sankaranarayanan, Raghavasimhan and Weinberg, Gil}, title = {Design of Hathaani - A Robotic Violinist for Carnatic Music}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {70}, doi = {10.21428/92fbeb44.0ad83109}, url = {https://nime.pubpub.org/pub/225tmviw}, presentation-video = {https://youtu.be/4vNZm2Zewqs} }
Damian Mills, Franziska Schroeder, and John D’Arcy. 2021. GIVME: Guided Interactions in Virtual Musical Environments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.5443652c
Abstract
Download PDF DOI
The current generation of commercial hardware and software for virtual reality and immersive environments presents possibilities for a wealth of creative solutions for new musical expression and interaction. This paper explores the affordances of virtual musical environments with the disabled music-making community of Drake Music Project Northern Ireland. Recent collaborations have investigated strategies for Guided Interactions in Virtual Musical Environments (GIVME), a novel concept the authors introduce here. This paper gives some background on disabled music-making with digital musical instruments before sharing recent research projects that facilitate disabled music performance in virtual reality immersive environments. We expand on the premise of GIVME as a potential guideline for musical interaction design for disabled musicians in VR, and take an explorative look at the possibilities and constraints for instrument design for disabled musicians as virtual worlds integrate ever more closely with the real.
@inproceedings{NIME21_71, author = {Mills, Damian and Schroeder, Franziska and D'Arcy, John}, title = {GIVME: Guided Interactions in Virtual Musical Environments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {71}, doi = {10.21428/92fbeb44.5443652c}, url = {https://nime.pubpub.org/pub/h14o4oit}, presentation-video = {https://youtu.be/sI0K9sMYc80} }
Anne Hege, Camille Noufi, Elena Georgieva, and Ge Wang. 2021. Instrument Design for The Furies: A LaptOpera. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.dde5029a
Abstract
Download PDF DOI
In this article, we discuss the creation of The Furies: A LaptOpera, a new opera for laptop orchestra and live vocal soloists that tells the story of the Greek tragedy Electra. We outline the principles that guided our instrument design with the aim of forging direct and visceral connections between the music, the narrative, and the relationship between characters in ways we can simultaneously hear, see, and feel. Through detailed case studies of three instruments—The Rope and BeatPlayer, the tether chorus, and the autonomous speaker orchestra—this paper offers tools and reflections to guide instrument-building in service of narrative-based works through a unified multimedia art form.
@inproceedings{NIME21_72, author = {Hege, Anne and Noufi, Camille and Georgieva, Elena and Wang, Ge}, title = {Instrument Design for The Furies: A LaptOpera}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {72}, doi = {10.21428/92fbeb44.dde5029a}, url = {https://nime.pubpub.org/pub/gx6klqui}, presentation-video = {https://youtu.be/QC_-h4cVVog} }
Staas de Jong. 2021. Human noise at the fingertip: Positional (non)control under varying haptic × musical conditions. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.9765f11d
Abstract
Download PDF DOI
As technologies and interfaces for the instrumental control of musical sound get ever better at tracking aspects of human position and motion in space, a fundamental problem emerges: Unintended or even counter-intentional control may result when humans themselves become a source of positional noise. A clear case of what is meant by this is the “stillness movement” of a body part, occurring despite the simultaneous explicit intention for that body part to remain still. In this paper, we present the results of a randomized, controlled experiment investigating this phenomenon along a vertical axis relative to the human fingertip. The results include characterizations of both the spatial distribution and frequency distribution of the stillness movement observed. Also included are results indicating a possible role for constant forces and viscosities in reducing stillness movement amplitude, thereby potentially enabling the implementation of more positional control of musical sound within the same available spatial range. Importantly, the above is summarized in a form that is directly interpretable for anyone designing technologies, interactions, or performances that involve fingertip control of musical sound. Also, a complete data set of the experimental results is included in the separate Appendices to this paper, again in a format that is directly interpretable.
@inproceedings{NIME21_73, author = {de Jong, Staas}, title = {Human noise at the fingertip: Positional (non)control under varying haptic × musical conditions}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {73}, doi = {10.21428/92fbeb44.9765f11d}, url = {https://nime.pubpub.org/pub/bol2r7nr}, presentation-video = {https://youtu.be/L_WhJ3N-v8c} }
Christian Faubel. 2021. Emergent Polyrhythmic Patterns with a Neuromorph Electronic Network. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e66a8542
Abstract
Download PDF DOI
In this paper I show how it is possible to create polyrhythmic patterns with analogue oscillators by setting up a network of variable resistances that connect these oscillators. The system I present is built with electronic circuits connected to dc-motors and allows for a very tangible and playful exploration of the dynamic properties of artificial neural networks. The theoretical underpinnings of this approach stem from observations and models of synchronization in living organisms, where synchronization and phase-locking is not only an observable phenomenon but can also be seen as a marker of the quality of interaction. Realized as a technical system of analogue oscillators, synchronization also appears between oscillators tuned to different basic rhythms, and stable polyrhythmic patterns emerge as a result of the electrical connections.
@inproceedings{NIME21_74, author = {Faubel, Christian}, title = {Emergent Polyrhythmic Patterns with a Neuromorph Electronic Network}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {74}, doi = {10.21428/92fbeb44.e66a8542}, url = {https://nime.pubpub.org/pub/g04egsqn}, presentation-video = {https://youtu.be/pJlxVJTMRto} }
João Tragtenberg, Gabriel Albuquerque, and Filipe Calegario. 2021. Gambiarra and Techno-Vernacular Creativity in NIME Research. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.98354a15
Abstract
Download PDF DOI
Over past editions of the NIME Conference, there has been a growing concern for diversity and inclusion. It is relevant for an international community, the vast majority of whose members are in Europe, the USA, and Canada, to seek a richer cultural diversity. To contribute to a decolonial perspective in the inclusion of underrepresented countries and ethnic/racial groups, we discuss the concepts of Gambiarra and Techno-Vernacular Creativity. We believe these concepts may help structure and stimulate individuals from these underrepresented contexts to perform research in the NIME field.
@inproceedings{NIME21_75, author = {Tragtenberg, João and Albuquerque, Gabriel and Calegario, Filipe}, title = {Gambiarra and Techno-Vernacular Creativity in NIME Research}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {75}, doi = {10.21428/92fbeb44.98354a15}, url = {https://nime.pubpub.org/pub/aqm27581}, presentation-video = {https://youtu.be/iJ8g7vBPFYw} }
Timothy Roth, Aiyun Huang, and Tyler Cunningham. 2021. On Parallel Performance Practices: Some Observations on Personalizing DMIs as Percussionists. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c61b9546
Abstract
Download PDF DOI
Digital musical instrument (DMI) design and performance are primarily practiced by those with backgrounds in music technology and human-computer interaction. Research on these topics is rarely led by performers, much less by those without backgrounds in technology. In this study, we explore DMI design and performance from the perspective of a singular community of classically-trained percussionists. We use a practice-based methodology informed by our skillset as percussionists to study how instrumental skills and sensibilities can be incorporated into the personalization of, and performance with, DMIs. We introduced a simple and adaptable digital musical instrument, built using the Arduino Uno, that individuals (percussionists) could personalize and extend in order to improvise, compose and create music (études). Our analysis maps parallel percussion practices emerging from the resultant DMI compositions and performances by examining the functionality of each Arduino instrument through the lens of material-oriented and communication-oriented approaches to interactivity.
@inproceedings{NIME21_76, author = {Roth, Timothy and Huang, Aiyun and Cunningham, Tyler}, title = {On Parallel Performance Practices: Some Observations on Personalizing DMIs as Percussionists}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {76}, doi = {10.21428/92fbeb44.c61b9546}, url = {https://nime.pubpub.org/pub/226jlaug}, presentation-video = {https://youtu.be/kjQDN907FXs} }
Sofy Yuditskaya, Sophia Sun, and Margaret Schedel. 2021. Synthetic Erudition Assist Lattice. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.0282a79c
Abstract
Download PDF DOI
The Seals are a political, feminist, noise, and AI-inspired electronic sorta-surf rock band composed of the talents of Margaret Schedel, Susie Green, Sophia Sun, Ria Rajan, and Sofy Yuditskaya, augmented by the S.E.A.L. (Synthetic Erudition Assist Lattice), as we call the collection of AIs that assist us in creating usable content with which to mold and shape our music and visuals. Our concerts begin by invoking one another through internet conferencing software; during the concert, we play skull-augmented theremins while reading dialogue generated by GPT-2 and GPT-3 (machine learning language models) over pre-generated songs. As a distributed band, we designed our performance to take place over video conferencing systems, deliberately incorporating the glitch artifacts that they bring. We use one of the oldest forms of generative operations, throwing dice, as well as the latest in ML technology to create our collaborative music over a distance. In this paper, we illustrate how we leverage the multiple novel interfaces that we use to create our unique sound.
@inproceedings{NIME21_77, author = {Yuditskaya, Sofy and Sun, Sophia and Schedel, Margaret}, title = {Synthetic Erudition Assist Lattice}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {77}, doi = {10.21428/92fbeb44.0282a79c}, url = {https://nime.pubpub.org/pub/5oupvoun}, presentation-video = {https://youtu.be/FmTbEUyePXg} }
Michael Blandino and Edgar Berdahl. 2021. Using a Pursuit Tracking Task to Compare Continuous Control of Various NIME Sensors. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c2b5a672
Abstract
Download PDF DOI
This study investigates how accurately users can continuously control a variety of one degree of freedom sensors commonly used in electronic music interfaces. Analysis within an information-theoretic model yields channel capacities of maximum information throughput in bits/sec that can support a unified comparison. The results may inform the design of digital musical instruments and the design of systems with similarly demanding control tasks.
@inproceedings{NIME21_78, author = {Blandino, Michael and Berdahl, Edgar}, title = {Using a Pursuit Tracking Task to Compare Continuous Control of Various NIME Sensors}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {78}, doi = {10.21428/92fbeb44.c2b5a672}, url = {https://nime.pubpub.org/pub/using-a-pursuit-tracking-task-to-compare-continuous-control-of-various-nime-sensors}, presentation-video = {https://youtu.be/-p7mp3LFsQg} }
Margaret Schedel, Brian Smith, Robert Cosgrove, and Nick Hwang. 2021. RhumbLine: Plectrohyla Exquisita — Spatial Listening of Zoomorphic Musical Robots. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.9e1312b1
Abstract
Download PDF DOI
Contending with ecosystem silencing in the Anthropocene, RhumbLine: Plectrohyla Exquisita is an installation-scale instrument featuring an ensemble of zoomorphic musical robots that generate an acoustic soundscape from behind an acousmatic veil, highlighting the spatial attributes of acoustic sound. Originally conceived as a physical installation, the global COVID-19 pandemic catalyzed a reconceptualization of the work that allowed it to function remotely and collaboratively with users seeding robotic frog callers with improvised rhythmic calls via the internet—transforming a physical installation into a web-based performable installation-scale instrument. The performed calls from online visitors evolve using AI as they pass through the frog collective. After performing a rhythm, audiences listen ambisonically from behind a virtual veil and attempt to map the formation of the frogs, based on the spatial information embedded in their calls. After listening, audience members can reveal the frogs and their formation. By reconceiving rhumb lines—navigational tools that create paths of constant bearing to navigate space—as sonic tools to spatially orient listeners, RhumbLine: Plectrohyla Exquisita functions as a new interface for spatial musical expression (NISME) in both its physical and virtual instantiations.
@inproceedings{NIME21_79, author = {Schedel, Margaret and Smith, Brian and Cosgrove, Robert and Hwang, Nick}, title = {RhumbLine: Plectrohyla Exquisita — Spatial Listening of Zoomorphic Musical Robots}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {79}, doi = {10.21428/92fbeb44.9e1312b1}, url = {https://nime.pubpub.org/pub/f5jtuy87}, presentation-video = {https://youtu.be/twzpxObh9jw} }
S. M. Astrid Bin. 2021. Discourse is critical: Towards a collaborative NIME history. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.ac5d43e1
Abstract
Download PDF DOI
Recent work in NIME has questioned the political and social implications of work in this field, and has called for direct action on problems in the areas of diversity, representation and political engagement. Though there is motivation to address these problems, there is an open question of how to meaningfully do so. This paper proposes that NIME’s historical record is the best tool for understanding our own output but this record is incomplete, and makes the case for collective action to improve how we document our work. I begin by contrasting NIME’s output with its discourse, and explore the nature of this discourse through NIME history and examine our inherited epistemological complexity. I assert that, if left unexamined, this complexity can undermine our community values of diversity and inclusion. I argue that meaningfully addressing current problems demands critical reflection on our work, and explore how NIME’s historical record is currently used as a means of doing so. I then review what NIME’s historical record contains (and what it does not), and evaluate its fitness for use as a tool of inquiry. Finally I make the case for collective action to establish better documentation practices, and suggest features that may be helpful for the process as well as the result.
@inproceedings{NIME21_8, author = {Bin, S. M. Astrid}, title = {Discourse is critical: Towards a collaborative NIME history}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {8}, doi = {10.21428/92fbeb44.ac5d43e1}, url = {https://nime.pubpub.org/pub/nbrrk8ll}, presentation-video = {https://youtu.be/omnMRlj7miA} }
Koray Tahiroğlu, Miranda Kastemaa, and Oskar Koli. 2021. AI-terity 2.0: An Autonomous NIME Featuring GANSpaceSynth Deep Learning Model. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.3d0e9e12
Abstract
Download PDF DOI
In this paper we present the recent developments in the AI-terity instrument. AI-terity is a deformable, non-rigid musical instrument that comprises a particular artificial intelligence (AI) method for generating audio samples for real-time audio synthesis. As an improvement, we developed the control interface structure with additional sensor hardware. In addition, we implemented a new hybrid deep learning architecture, GANSpaceSynth, in which we applied the GANSpace method on the GANSynth model. Following the deep learning model improvement, we developed new autonomous features for the instrument that aim at keeping the musician in an active and uncertain state of exploration. Through these new features, the instrument enables more accurate control over the GAN latent space. Further, we intend to investigate the current developments through a musical composition that idiomatically reflects the new autonomous features of the AI-terity instrument. We argue that the present technology of AI is suitable for enabling alternative autonomous features in the audio domain for the creative practices of musicians.
@inproceedings{NIME21_80, author = {Tahiroğlu, Koray and Kastemaa, Miranda and Koli, Oskar}, title = {AI-terity 2.0: An Autonomous NIME Featuring GANSpaceSynth Deep Learning Model}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {80}, doi = {10.21428/92fbeb44.3d0e9e12}, url = {https://nime.pubpub.org/pub/9zu49nu5}, presentation-video = {https://youtu.be/WVAIPwI-3P8} }
Alex Champagne, Bob Pritchard, Paul Dietz, and Sidney Fels. 2021. Investigation of a Novel Shape Sensor for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.a72b68dd
Abstract
Download PDF DOI
A novel, high-fidelity, shape-sensing technology, BendShape [1], is investigated as an expressive music controller for sound effects, direct sound manipulation, and voice synthesis. Various approaches are considered for developing mapping strategies that create transparent metaphors to facilitate expression for both the performer and the audience. We explore strategies in the input, intermediate, and output mapping layers using a two-step approach guided by Perry’s Principles [2]. First, we use trial-and-error to establish simple mappings between single input parameter control and effects to identify promising directions for further study. Then, we compose a specific piece that supports different uses of the BendShape mappings in a performance context: this allows us to study a performer trying different types of expressive techniques, enabling us to analyse the role each mapping has in facilitating musical expression. We also investigate the effects these mapping strategies have on performer bandwidth. Our main finding is that the high fidelity of the novel BendShape sensor facilitates creating interpretable input representations to control sound representations, and thereby match interpretations that provide better expressive mappings, such as with vocal shape to vocal sound and bumpiness control; however, direct mappings of individual, independent sensor outputs to effects do not provide obvious advantages over simpler controls. Furthermore, while the BendShape sensor enables rich explorations for sound, the ability to find expressive interpretable shape-to-sound representations while respecting the performer’s bandwidth limitations (caused by having many coupled input degrees of freedom) remains a challenge and an opportunity.
@inproceedings{NIME21_81, author = {Champagne, Alex and Pritchard, Bob and Dietz, Paul and Fels, Sidney}, title = {Investigation of a Novel Shape Sensor for Musical Expression}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {81}, doi = {10.21428/92fbeb44.a72b68dd}, url = {https://nime.pubpub.org/pub/bu2jb1d6}, presentation-video = {https://youtu.be/CnJmH6fX6XA} }
Frederic Anthony Robinson. 2021. Debris: A playful interface for direct manipulation of audio waveforms. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.02005035
Abstract
Download PDF DOI
Debris is a playful interface for direct manipulation of audio waveforms. Audio data is represented as a collection of waveform elements, which provide a low-resolution visualisation of the audio sample. Each element, however, can be individually examined, re-positioned, or broken down into smaller fragments, thereby becoming a tangible representation of a moment in the sample. Debris is built around the idea of looking at a sound not as a linear event to be played from beginning to end, but as a non-linear collection of moments, timbres, and sound fragments which can be explored, closely examined and interacted with. This paper positions the work among conceptually related NIME interfaces, details the various user interactions and their mappings and ends with a discussion around the interface’s constraints.
@inproceedings{NIME21_82, author = {Robinson, Frederic Anthony}, title = {Debris: A playful interface for direct manipulation of audio waveforms}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {82}, doi = {10.21428/92fbeb44.02005035}, url = {https://nime.pubpub.org/pub/xn761337}, presentation-video = {https://youtu.be/H04LgbZqc-c} }
Jeff Gregorio and Youngmoo E. Kim. 2021. Evaluation of Timbre-Based Control of a Parametric Synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.31419bf9
Abstract
Download PDF DOI
Musical audio synthesis often requires systems-level knowledge and uniquely analytical approaches to music making, thus a number of machine learning systems have been proposed to replace traditional parameter spaces with more intuitive control spaces based on spatial arrangement of sonic qualities. Some prior evaluations of simplified control spaces have shown increased user efficacy via quantitative metrics in sound design tasks, and some indicate that simplification may lower barriers to entry to synthesis. However, the level and nature of the appeal of simplified interfaces to synthesists merits investigation, particularly in relation to the type of task, prior expertise, and aesthetic values. Toward addressing these unknowns, this work investigates user experience in a sample of 20 musicians with varying degrees of synthesis expertise, and uses a one-week, at-home, multi-task evaluation of a novel instrument presenting a simplified mode of control alongside the full parameter space. We find that our participants generally give primacy to parameter space and seek understanding of parameter-sound relationships, yet most do report finding some creative utility in timbre-space control for discovery of sounds, timbral transposition, and expressive modulations of parameters. Although we find some articulations of particular aesthetic values, relationships to user experience remain difficult to characterize generally.
@inproceedings{NIME21_83, author = {Gregorio, Jeff and Kim, Youngmoo E.}, title = {Evaluation of Timbre-Based Control of a Parametric Synthesizer}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {83}, doi = {10.21428/92fbeb44.31419bf9}, url = {https://nime.pubpub.org/pub/adtb2zl5}, presentation-video = {https://youtu.be/m7IqWceQmuk} }
Milton Riaño. 2021. Hybridization No. 1: Standing at the Boundary between Physical and Virtual Space. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.d3354ff3
Abstract
Download PDF DOI
Hybridization No. 1 is a wireless hand-held rotary instrument that allows the performer to simultaneously interact with physical and virtual spaces. The instrument emits visible laser lights and invisible ultrasonic waves which scan the architecture of a physical space. The instrument is also connected to a virtual 3D model of the same space, which allows the performer to create an immersive audiovisual composition that blurs the limits between physical and virtual space. In this paper I describe the instrument, its operation and its integrated multimedia system.
@inproceedings{NIME21_84, author = {Riaño, Milton}, title = {Hybridization No. 1: Standing at the Boundary between Physical and Virtual Space}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {84}, doi = {10.21428/92fbeb44.d3354ff3}, url = {https://nime.pubpub.org/pub/h1} }
Lloyd May and Peter Larsson. 2021. Nerve Sensors in Inclusive Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.82c5626f
Abstract
Download PDF DOI
We present the methods and findings of a multi-day performance research lab that evaluated the efficacy of a novel nerve sensor in the context of a physically inclusive performance practice. Nerve sensors are a variant of surface electromyography that are optimized to detect signals from nerve firings rather than skeletal muscle movement, allowing performers with altered muscle physiology or control to use the sensors more effectively. Through iterative co-design and musical performance evaluation, we compared the performative affordances and limitations of the nerve sensor to other contemporary sensor-based gestural instruments. The nerve sensor afforded the communication of gestural effort in a manner that other gestural instruments did not, while offering a smaller palette of reliably classifiable gestures.
@inproceedings{NIME21_85, author = {May, Lloyd and Larsson, Peter}, title = {Nerve Sensors in Inclusive Musical Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {85}, doi = {10.21428/92fbeb44.82c5626f}, url = {https://nime.pubpub.org/pub/yxcp36ii}, presentation-video = {https://youtu.be/qsRVcBl2gAo} }
Guadalupe Babio Fernandez and Kent Larson. 2021. Tune Field. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.2305755b
Abstract
Download PDF DOI
This paper introduces Tune Field, a 3-dimensional tangible interface that combines and alters previously existing concepts of topographical, field-sensing and capacitive touch interfaces as a method for musical expression and sound visualization. Users are invited to create experimental sound textures while modifying the topography of antennas. The interface’s touch antennas are randomly located on a box, promoting exploration and discovery of gesture-to-sound relationships. This way, the interface opens a space for playfully producing sound and triggering visuals, turning Tune Field into a sensorial experience.
@inproceedings{NIME21_86, author = {Fernandez, Guadalupe Babio and Larson, Kent}, title = {Tune Field}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {86}, doi = {10.21428/92fbeb44.2305755b}, url = {https://nime.pubpub.org/pub/eqvxspw3}, presentation-video = {https://youtu.be/2lB8idO_yDs} }
Taejun Kim, Yi-Hsuan Yang, and Juhan Nam. 2021. Reverse-Engineering The Transition Regions of Real-World DJ Mixes using Sub-band Analysis with Convex Optimization. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.4b2fc7b9
Abstract
Download PDF DOI
The basic role of DJs is creating a seamless sequence of music tracks. In order to make the DJ mix a single continuous audio stream, DJs control various audio effects on a DJ mixer system, particularly in the transition region between one track and the next, and modify the audio signals in terms of volume, timbre, tempo, and other musical elements. There have been research efforts to imitate DJ mixing techniques, but they are mainly rule-based approaches built on domain knowledge. In this paper, we propose a method to analyze the DJ mixer control from real-world DJ mixes as a step toward a data-driven approach to imitating DJ performance. Specifically, we estimate the mixing gain trajectories between the two tracks using sub-band analysis with constrained convex optimization. We evaluate the method by reconstructing the original mix from the two source tracks and the gain estimate, and show that the proposed method outperforms linear crossfading as a baseline and the single-band analysis. A listening test with 14 participants also confirms that the proposed method is rated highest among the compared approaches.
@inproceedings{NIME21_87, author = {Kim, Taejun and Yang, Yi-Hsuan and Nam, Juhan}, title = {Reverse-Engineering The Transition Regions of Real-World DJ Mixes using Sub-band Analysis with Convex Optimization}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {87}, doi = {10.21428/92fbeb44.4b2fc7b9}, url = {https://nime.pubpub.org/pub/g7avj1a7}, presentation-video = {https://youtu.be/ju0P-Zq8Bwo} }
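As an illustrative aside (not from the paper above), the gain-trajectory idea can be sketched compactly: given frame-wise sub-band magnitude envelopes of the two source tracks and of the recorded mix, a bounded, smoothly varying crossfade gain can be recovered with an off-the-shelf convex solver. Everything below, including the function name, the smoothness weight, and the use of cvxpy, is an assumption for illustration, not the authors' implementation.

```python
# Hypothetical sketch: recover a per-frame crossfade gain g in [0, 1] such that
# g*A + (1-g)*B approximates the observed mix M in one sub-band.
# Assumes numpy and cvxpy are installed; none of this is the authors' code.
import numpy as np
import cvxpy as cp

def estimate_gain(A, B, M, smooth=1.0):
    """A, B, M: 1-D arrays of frame-wise sub-band magnitudes of equal length."""
    n = len(M)
    g = cp.Variable(n)                                   # gain applied to track A
    fit = cp.sum_squares(cp.multiply(g, A) + cp.multiply(1 - g, B) - M)
    reg = smooth * cp.sum_squares(cp.diff(g))            # prefer slowly varying gains
    cp.Problem(cp.Minimize(fit + reg), [g >= 0, g <= 1]).solve()
    return g.value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random(200) + 1.0                            # envelope of outgoing track
    B = rng.random(200) + 1.0                            # envelope of incoming track
    true_g = np.linspace(1.0, 0.0, 200)                  # linear fade-out as ground truth
    M = true_g * A + (1 - true_g) * B                    # synthetic "mix" envelope
    print("mean abs error:", np.abs(estimate_gain(A, B, M) - true_g).mean())
```

A full sub-band version would repeat this estimation per frequency band, and the reconstruction error against the real mix would then serve as the evaluation metric, along the lines the abstract describes.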
Benedict Gaster and Ryan Challinor. 2021. Bespoke Anywhere. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.02c348fb
Abstract
Download PDF DOI
This paper reports on a project that aims to break away from the portability concerns of native DSP code between different platforms, thus freeing the instrument designer from the burden of porting new Digital Musical Instruments (DMIs) to different architectures. Bespoke Anywhere is a live modular-style software DMI with an instance of the Audio Anywhere (AA) framework, which enables working with audio plugins that are compiled once and run anywhere. At the heart of Audio Anywhere is an audio engine whose Digital Signal Processing (DSP) components are written in Faust and deployed with Web Assembly (Wasm). We demonstrate Bespoke Anywhere as a hosting application for live performance and music production. We focus on an instance of AA using Faust for DSP, statically compiled to portable Wasm, and Graphical User Interfaces (GUIs) described in JSON, both of which are loaded dynamically into our modified version of Bespoke.
@inproceedings{NIME21_88, author = {Gaster, Benedict and Challinor, Ryan}, title = {Bespoke Anywhere}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {88}, doi = {10.21428/92fbeb44.02c348fb}, url = {https://nime.pubpub.org/pub/8jaqbl7m}, presentation-video = {https://youtu.be/ayJzFVRXPMs} }
Sang-won Leigh and Jeonghyun (Jonna) Lee. 2021. A Study on Learning Advanced Skills on Co-Playable Robotic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.002be215
Abstract
Download PDF DOI
Learning advanced skills on a musical instrument takes a range of physical and cognitive efforts. For instance, practicing polyrhythm is a complex task that requires the development of both musical and physical skills. This paper explores the use of automation in the context of learning advanced skills on the guitar. Our robotic guitar is capable of physically plucking on the strings along with a musician, providing both haptic and audio guidance to the musician. We hypothesize that a multimodal and first-person experience of “being able to play” could increase learning efficacy. We discuss the novel learning application and a user study, through which we illustrate the implication and potential issues in systems that provide temporary skills and in-situ multimodal guidance for learning.
@inproceedings{NIME21_9, author = {Leigh, Sang-won and Lee, Jeonghyun (Jonna)}, title = {A Study on Learning Advanced Skills on Co-Playable Robotic Instruments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {9}, doi = {10.21428/92fbeb44.002be215}, url = {https://nime.pubpub.org/pub/h5dqsvpm}, presentation-video = {https://youtu.be/MeXrN95jajU} }
2020
Ruolun Weng. 2020. Interactive Mobile Musical Application using faust2smartphone. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 1–4. http://doi.org/10.5281/zenodo.4813164
Abstract
Download PDF DOI
We introduce faust2smartphone, a tool that generates an edit-ready project for musical mobile applications, connecting the Faust programming language with mobile application development. It is an extended implementation of faust2api. Faust DSP objects can be easily embedded as a high-level API so that developers can access various functions and elements across different mobile platforms. This paper presents the system's several modes and technical details of its structure and implementation, as well as some applications and future directions for this tool.
@inproceedings{NIME20_0, author = {Weng, Ruolun}, title = {Interactive Mobile Musical Application using faust2smartphone}, pages = {1--4}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813164}, url = {https://www.nime.org/proceedings/2020/nime2020_paper0.pdf} }
John Sullivan, Julian Vanasse, Catherine Guastavino, and Marcelo Wanderley. 2020. Reinventing the Noisebox: Designing Embedded Instruments for Active Musicians. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 5–10. http://doi.org/10.5281/zenodo.4813166
Abstract
Download PDF DOI
This paper reports on the user-driven redesign of an embedded digital musical instrument that has yielded a trio of new instruments, informed by early user feedback and co-design workshops organized with active musicians. Collectively, they share a stand-alone design, digitally fabricated enclosures, and a common sensor acquisition and sound synthesis architecture, yet each is unique in its playing technique and sonic output. We focus on the technical design of the instruments and provide examples of key design specifications that were derived from user input, while reflecting on the challenges to, and opportunities for, creating instruments that support active practices of performing musicians.
@inproceedings{NIME20_1, author = {Sullivan, John and Vanasse, Julian and Guastavino, Catherine and Wanderley, Marcelo}, title = {Reinventing the Noisebox: Designing Embedded Instruments for Active Musicians}, pages = {5--10}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813166}, url = {https://www.nime.org/proceedings/2020/nime2020_paper1.pdf}, presentation-video = {https://youtu.be/DUMXJw-CTVo} }
Darrell J Gibson and Richard Polfreman. 2020. Star Interpolator – A Novel Visualization Paradigm for Graphical Interpolators. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 49–54. http://doi.org/10.5281/zenodo.4813168
Abstract
Download PDF DOI
This paper presents a new visualization paradigm for graphical interpolation systems, known as Star Interpolation, that has been specifically created for sound design applications. Through the presented investigation of previous visualizations, it becomes apparent that the existing visuals in this class of system generally relate to the interpolation model that determines the weightings of the presets and not to the sonic output. The Star Interpolator looks to resolve this deficiency by providing visual cues that relate to the parameter space. Through comparative exploration it has been found that this visualization provides a number of benefits over the previous systems. It is also shown that hybrid visualizations can be generated that combine the benefits of the new visualization with the existing interpolation models. These can then be accessed by using an Interactive Visualization (IV) approach. The results from our exploration of these visualizations are encouraging, and they appear to be advantageous when using the interpolators for sound design tasks. Therefore, it is proposed that formal usability testing be undertaken to measure the potential value of this form of visualization.
@inproceedings{NIME20_10, author = {Gibson, Darrell J and Polfreman, Richard}, title = {Star Interpolator – A Novel Visualization Paradigm for Graphical Interpolators}, pages = {49--54}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813168}, url = {https://www.nime.org/proceedings/2020/nime2020_paper10.pdf}, presentation-video = {https://youtu.be/3ImRZdSsP-M} }
Laurel S Pardue, Miguel Ortiz, Maarten van Walstijn, Paul Stapleton, and Matthew Rodger. 2020. Vodhrán: collaborative design for evolving a physical model and interface into a proto-instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 523–524. http://doi.org/10.5281/zenodo.4813170
Abstract
Download PDF DOI
This paper reports on the process of developing a virtual-acoustic proto-instrument, Vodhrán, based on a physical model of a plate, within a musical performance-driven ecosystemic environment. Performers explore the plate model via tactile interaction through a Sensel Morph interface, chosen to allow damping and localised striking consistent with playing hand percussion. Through an iteration of prototypes, we have designed an embedded proto-instrument that allows a bodily interaction between the performer and the virtual-acoustic plate in a way that shifts the perception of the Sensel away from a touchpad and reframes it as a percussive surface. Due to the computational effort required to run such a rich physical model and the necessity of providing a natural interaction, the audio processing is implemented on a high-powered single-board computer. We describe the design challenges and report on the technological solutions we have found in the implementation of Vodhrán, which we believe are valuable to the wider NIME community.
@inproceedings{NIME20_100, author = {Pardue, Laurel S and Ortiz, Miguel and van Walstijn, Maarten and Stapleton, Paul and Rodger, Matthew}, title = {Vodhrán: collaborative design for evolving a physical model and interface into a proto-instrument}, pages = {523--524}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813170}, url = {https://www.nime.org/proceedings/2020/nime2020_paper100.pdf} }
Satvik Venkatesh, Edward Braund, and Eduardo Miranda. 2020. Designing Brain-computer Interfaces for Sonic Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 525–530. http://doi.org/10.5281/zenodo.4813172
Abstract
Download PDF DOI
Brain-computer interfaces (BCIs) are beneficial for patients who are suffering from motor disabilities because they offer a way of creative expression, which improves mental well-being. BCIs aim to establish a direct communication medium between the brain and the computer. Therefore, unlike conventional musical interfaces, they do not require muscular power. This paper explores the potential of building sound synthesisers with BCIs that are based on steady-state visually evoked potential (SSVEP). It investigates novel ways to enable patients with motor disabilities to express themselves. It presents a new concept called sonic expression: expressing oneself purely through the synthesis of sound. It introduces new layouts and designs for BCI-based sound synthesisers, and the limitations of these interfaces are discussed. An evaluation of different sound synthesis techniques is conducted to find an appropriate one for such systems. Synthesis techniques are evaluated and compared based on a framework governed by sonic expression.
@inproceedings{NIME20_101, author = {Venkatesh, Satvik and Braund, Edward and Miranda, Eduardo}, title = {Designing Brain-computer Interfaces for Sonic Expression}, pages = {525--530}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813172}, url = {https://www.nime.org/proceedings/2020/nime2020_paper101.pdf} }
Duncan A.H. Williams, Bruno Fazenda, Victoria J. Williamson, and Gyorgy Fazekas. 2020. Biophysiologically synchronous computer generated music improves performance and reduces perceived effort in trail runners. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 531–536. http://doi.org/10.5281/zenodo.4813174
Abstract
Download PDF DOI
Music has previously been shown to be beneficial in improving runners’ performance in treadmill-based experiments. This paper evaluates a generative music system, HEARTBEATS, designed to create biosignal-synchronous music in real-time according to an individual athlete’s heart-rate or cadence (steps per minute). The tempo, melody, and timbral features of the generated music are modulated according to biosensor input from each runner using a wearable Bluetooth sensor. We compare the relative performance of athletes listening to heart-rate and cadence synchronous music, across a randomized trial (N=57) on a trail course with 76 ft of elevation. Participants were instructed to continue until perceived effort exceeded 18 on the Borg rating of perceived exertion scale. We found that cadence-synchronous music improved performance and decreased perceived effort in male runners, and improved performance but not perceived effort in female runners, in comparison to heart-rate synchronous music. This work has implications for the future design and implementation of novel portable music systems and for music-assisted coaching.
@inproceedings{NIME20_102, author = {Williams, Duncan A.H. and Fazenda, Bruno and Williamson, Victoria J. and Fazekas, Gyorgy}, title = {Biophysiologically synchronous computer generated music improves performance and reduces perceived effort in trail runners}, pages = {531--536}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813174}, url = {https://www.nime.org/proceedings/2020/nime2020_paper102.pdf} }
Gilberto Bernardes. 2020. Interfacing Sounds: Hierarchical Audio-Content Morphologies for Creative Re-purposing in earGram 2.0. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 537–542. http://doi.org/10.5281/zenodo.4813176
Abstract
Download PDF DOI
Audio content-based processing has become a pervasive methodology for techno-fluent musicians. System architectures typically create thumbnail audio descriptions, based on signal processing methods, to visualize, retrieve and transform musical audio efficiently. Towards enhanced usability of these descriptor-based frameworks for the music community, the paper advances a minimal content-based audio description scheme, rooted in primary musical notation attributes at the threefold sound-object, meso and macro hierarchies. Multiple perceptually-guided viewpoints from rhythmic, harmonic, timbral and dynamic attributes define a discrete and finite alphabet with minimal formal and subjective assumptions using unsupervised and user-guided methods. The Factor Oracle automaton is then adopted to model and visualize temporal morphology. The generative musical applications enabled by the descriptor-based framework at multiple structural hierarchies are discussed.
@inproceedings{NIME20_103, author = {Bernardes, Gilberto}, title = {Interfacing Sounds: Hierarchical Audio-Content Morphologies for Creative Re-purposing in earGram 2.0}, pages = {537--542}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813176}, url = {https://www.nime.org/proceedings/2020/nime2020_paper103.pdf}, presentation-video = {https://youtu.be/zEg9Cpir8zA} }
Joung Min Han and Yasuaki Kakehi. 2020. ParaSampling: A Musical Instrument with Handheld Tapehead Interfaces for Impromptu Recording and Playing on a Magnetic Tape. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 543–544. http://doi.org/10.5281/zenodo.4813178
Abstract
Download PDF DOI
For a long time, magnetic tape has been commonly utilized as a physical medium for recording and playing music. In this research, we propose a novel interactive musical instrument called ParaSampling that utilizes the technology of magnetic sound recording, and an improvisational sound playing method based on the instrument. While a conventional cassette tape player has a single tapehead, which is rigidly placed, our instrument utilizes multiple handheld tapehead modules as an interface. Players can hold the interfaces and press them against the rotating magnetic tape at any point to record or reproduce sounds. The player can also easily erase and rewrite the sound recorded on the tape. With this instrument, they can achieve improvised and unique musical expressions through tangible and spatial interactions. In this paper, we describe the system design of ParaSampling and the implementation of the prototype system, and discuss the musical expressions enabled by the system.
@inproceedings{NIME20_104, author = {Han, Joung Min and Kakehi, Yasuaki}, title = {ParaSampling: A Musical Instrument with Handheld Tapehead Interfaces for Impromptu Recording and Playing on a Magnetic Tape}, pages = {543--544}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813178}, url = {https://www.nime.org/proceedings/2020/nime2020_paper104.pdf} }
Giorgos Filandrianos, Natalia Kotsani, Edmund G Dervakos, Giorgos Stamou, Vaios Amprazis, and Panagiotis Kiourtzoglou. 2020. Brainwaves-driven Effects Automation in Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 545–546. http://doi.org/10.5281/zenodo.4813180
Abstract
Download PDF DOI
A variety of controllers with multifarious sensors and functions have maximized performers’ real-time control capabilities. The idea behind this project was to create an interface which enables interaction between the performers and the effect processor by measuring the amplitudes of their brain waves, e.g., alpha, beta, theta, delta and gamma, not necessarily with the user’s awareness. We achieved this by using an electroencephalography (EEG) sensor to detect the performer’s different emotional states and, based on these, sending MIDI messages to automate digital processing units. The aim is to create a new generation of digital processor units that could be automatically configured in real-time given the emotions or thoughts of the performer or the audience. By introducing emotional-state information into the real-time control of several aspects of artistic expression, we highlight the impact of surprise and uniqueness in the artistic performance.
@inproceedings{NIME20_105, author = {Filandrianos, Giorgos and Kotsani, Natalia and Dervakos, Edmund G and Stamou, Giorgos and Amprazis, Vaios and Kiourtzoglou, Panagiotis}, title = {Brainwaves-driven Effects Automation in Musical Performance}, pages = {545--546}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813180}, url = {https://www.nime.org/proceedings/2020/nime2020_paper105.pdf} }
Graham Wakefield, Michael Palumbo, and Alexander Zonta. 2020. Affordances and Constraints of Modular Synthesis in Virtual Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 547–550. http://doi.org/10.5281/zenodo.4813182
Abstract
Download PDF DOI
This article focuses on the rich potential of hybrid domain translation of modular synthesis (MS) into virtual reality (VR). It asks: to what extent can what is valued in studio-based MS practice find a natural home or rich new interpretations in the immersive capacities of VR? The article attends particularly to the relative affordances and constraints of each as they inform the design and development of a new system ("Mischmasch") supporting collaborative and performative patching of Max gen patches and operators within a shared room-scale VR space.
@inproceedings{NIME20_106, author = {Wakefield, Graham and Palumbo, Michael and Zonta, Alexander}, title = {Affordances and Constraints of Modular Synthesis in Virtual Reality}, pages = {547--550}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813182}, url = {https://www.nime.org/proceedings/2020/nime2020_paper106.pdf} }
emmanouil moraitis. 2020. Symbiosis: a biological taxonomy for modes of interaction in dance-music collaborations. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 551–556. http://doi.org/10.5281/zenodo.4813184
Abstract
Download PDF DOI
Focusing on interactive performance works borne out of dancer-musician collaborations, this paper investigates the relationship between the mediums of sound and movement through a conceptual interpretation of the biological phenomenon of symbiosis. Describing the close and persistent interactions between organisms of different species, symbioses manifest across a spectrum of relationship types, each identified according to the health effect experienced by the engaged organisms. This biological taxonomy is appropriated within a framework which identifies specific modes of interaction between sound and movement according to the collaborating practitioners’ intended outcome, required provisions, cognition of affect, and system operation. Using the symbiotic framework as an analytical tool, six dancer-musician collaborations from the field of NIME are examined in respect to the employed modes of interaction within each of the four examined areas. The findings reveal the emergence of multiple modes in each work, as well as examples of mutation between different modes over the course of a performance. Furthermore, the symbiotic concept provides a novel understanding of the ways gesture recognition technologies (GRTs) have redefined the relationship dynamics between dancers and musicians, and suggests a more efficient and inclusive approach in communicating the potential and limitations presented by Human-Computer Interaction tools.
@inproceedings{NIME20_107, author = {moraitis, emmanouil}, title = {Symbiosis: a biological taxonomy for modes of interaction in dance-music collaborations}, pages = {551--556}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813184}, url = {https://www.nime.org/proceedings/2020/nime2020_paper107.pdf}, presentation-video = {https://youtu.be/5X6F_nL8SOg} }
Antonella Nonnis and Nick Bryan-Kinns. 2020. Όλοι: music making to scaffold social playful activities and self-regulation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 557–558. http://doi.org/10.5281/zenodo.4813186
Abstract
Download PDF DOI
We present Olly, a musical textile tangible user interface (TUI) designed around the observations of a group of five children with autism who like music. The intention is to support scaffolding social interactions and sensory regulation during a semi-structured and open-ended playful activity. Olly was tested in the dance studio of a special education needs (SEN) school in North-East London, UK, for a period of 5 weeks, every Thursday afternoon for 30 minutes. Olly uses one Bare touch board in MIDI mode and four stretch analog sensors embedded inside four elastic ribbons. These ribbons top the main body of the installation, which is made from an inflatable gym ball wrapped in felt. Each of the ribbons plays a different instrument and triggers different harmonic chords. Olly allows players to produce pleasant melodies when interacting with it in solo mode and more complex harmonies when playing together with others. Results show great potential for carefully designed musical TUIs aimed at scaffolding social play while affording self-regulation in SEN contexts. We present a brief introduction on the background and motivations, design considerations and results.
@inproceedings{NIME20_108, author = {Nonnis, Antonella and Bryan-Kinns, Nick}, title = {Όλοι: music making to scaffold social playful activities and self-regulation}, pages = {557--558}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813186}, url = {https://www.nime.org/proceedings/2020/nime2020_paper108.pdf} }
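To make the ribbon-to-chord mapping described in the Olly abstract above more concrete, here is a minimal, hypothetical Python sketch: four stretch-sensor readings, each ribbon assigned to a different chord, with denser harmony emerging when several ribbons are played together. The thresholds, chord voicings and sensor ranges are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a ribbon-to-chord mapping in the spirit of Olly.
# Sensor names, thresholds and chord choices are illustrative, not from the paper.

RIBBON_CHORDS = {
    0: [60, 64, 67],      # C major triad
    1: [62, 65, 69],      # D minor triad
    2: [64, 67, 71],      # E minor triad
    3: [65, 69, 72],      # F major triad
}
THRESHOLD = 0.3           # normalised stretch value above which a ribbon "plays"

def active_notes(stretch_values):
    """Return the sorted set of MIDI notes sounding for the given sensor readings."""
    notes = set()
    for ribbon, value in enumerate(stretch_values):
        if value > THRESHOLD:
            notes.update(RIBBON_CHORDS[ribbon])
    return sorted(notes)

if __name__ == "__main__":
    print(active_notes([0.8, 0.0, 0.0, 0.0]))   # solo: one triad
    print(active_notes([0.8, 0.5, 0.0, 0.6]))   # together: a denser harmony
```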
Sara Sithi-Amnuai. 2020. Exploring Identity Through Design: A Focus on the Cultural Body Via Nami. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 559–563. http://doi.org/10.5281/zenodo.4813188
Abstract
Download PDF DOI
Identity is inextricably linked to culture and sustained through the creation and performance of music and dance, yet the agency and cultural tools that inform the design and performance application of gestural controllers are not widely discussed. The purpose of this paper is to discuss the cultural body, its consideration in existing gestural controller design, and how cultural design methods have the potential to extend musical/social identities and/or traditions within a technological context. In an effort to connect and reconnect with the author’s personal Nikkei heritage, this paper discusses the design of Nami – a custom-built gestural controller – and its applicability to extending the author’s cultural body through a community-centric case study performance.
@inproceedings{NIME20_109, author = {Sithi-Amnuai, Sara}, title = {Exploring Identity Through Design: A Focus on the Cultural Body Via Nami}, pages = {559--563}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813188}, url = {https://www.nime.org/proceedings/2020/nime2020_paper109.pdf}, presentation-video = {https://youtu.be/QCUGtE_z1LE} }
Anna Xambó and Gerard Roma. 2020. Performing Audiences: Composition Strategies for Network Music using Mobile Phones. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 55–60. http://doi.org/10.5281/zenodo.4813192
Abstract
Download PDF DOI
With the development of web audio standards, it has quickly become technically easy to develop and deploy software for inviting audiences to participate in musical performances using their mobile phones. Thus, a new audience-centric musical genre has emerged, which aligns with artistic manifestations where there is an explicit inclusion of the public (e.g. participatory art, cinema or theatre). Previous research has focused on analysing this new genre from historical, social organisation and technical perspectives. This follow-up paper contributes reflections on technical and aesthetic aspects of composing within this audience-centric approach. We propose a set of 13 composition dimensions that deal with the role of the performer, the role of the audience, the location of sound and the type of feedback, among others. From a reflective approach, four participatory pieces developed by the authors are analysed using the proposed dimensions. Finally, we discuss a set of recommendations and challenges for the composers-developers of this new and promising musical genre. This paper concludes by discussing the implications of this research for the NIME community.
@inproceedings{NIME20_11, author = {Xambó, Anna and Roma, Gerard}, title = {Performing Audiences: Composition Strategies for Network Music using Mobile Phones}, pages = {55--60}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813192}, url = {https://www.nime.org/proceedings/2020/nime2020_paper11.pdf} }
Joe Wright. 2020. The Appropriation and Utility of Constrained ADMIs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 564–569. http://doi.org/10.5281/zenodo.4813194
Abstract
Download PDF DOI
This paper reflects on players’ first responses to a constrained Accessible Digital Musical Instrument (ADMI) in open, child-led sessions with seven children at a special school. Each player’s gestures with the instrument were sketched, categorised and compared with those of others among the group. Additionally, sensor data from the instruments was recorded and analysed to give a secondary indication of playing style, based on note and silence durations. In accord with previous studies, the high degree of constraints led to a diverse range of playing styles, allowing each player to appropriate and explore the instruments within a short inaugural session. The open, undirected sessions also provided insights which could potentially direct future work based on each person’s responses to the instruments. The paper closes with a short discussion of these diverse styles, and the potential role constrained ADMIs could serve as ’ice-breakers’ in musical projects that seek to co-produce or co-design with neurodiverse children and young people.
@inproceedings{NIME20_110, author = {Wright, Joe}, title = {The Appropriation and Utility of Constrained ADMIs}, pages = {564--569}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813194}, url = {https://www.nime.org/proceedings/2020/nime2020_paper110.pdf}, presentation-video = {https://youtu.be/RhaIzCXQ3uo} }
Lia Mice and Andrew McPherson. 2020. From miming to NIMEing: the development of idiomatic gestural language on large scale DMIs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 570–575. http://doi.org/10.5281/zenodo.4813200
Abstract
Download PDF DOI
When performing with new instruments, musicians often develop new performative gestures and playing techniques. Music performance studies on new instruments often consider interfaces that feature a spectrum of gestures similar to already existing sound production techniques. This paper considers the choices performers make when creating an idiomatic gestural language for an entirely unfamiliar instrument. We designed a musical interface with a unique large-scale layout to encourage new performers to create fully original instrument-body interactions. We conducted a study where trained musicians were invited to perform one of two versions of the same instrument, each physically identical but with a different tone mapping. The study results reveal insights into how musicians develop novel performance gestures when encountering a new instrument characterised by an unfamiliar shape and size. Our discussion highlights the impact of an instrument’s scale and layout on the emergence of new gestural vocabularies and on the qualities of the music performed.
@inproceedings{NIME20_111, author = {Mice, Lia and McPherson, Andrew}, title = {From miming to NIMEing: the development of idiomatic gestural language on large scale DMIs}, pages = {570--575}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813200}, url = {https://www.nime.org/proceedings/2020/nime2020_paper111.pdf}, presentation-video = {https://youtu.be/mnJN8ELneUU} }
William C Payne, Ann Paradiso, and Shaun Kane. 2020. Cyclops: Designing an eye-controlled instrument for accessibility and flexible use. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 576–580. http://doi.org/10.5281/zenodo.4813204
Abstract
Download PDF DOI
The Cyclops is an eye-gaze controlled instrument designed for live performance and improvisation. It is primarily motivated by a need for expressive musical instruments that are more easily accessible to people who rely on eye trackers for computer access, such as those with amyotrophic lateral sclerosis (ALS). In its current implementation, the Cyclops contains a synthesizer and sequencer, and provides the ability to easily create and automate musical parameters and effects through recording eye-gaze gestures on a two-dimensional canvas. In this paper, we frame our prototype in the context of previous eye-controlled instruments, and we discuss how we designed the Cyclops to make gaze-controlled music making as fun, accessible, and seamless as possible despite notable interaction challenges like latency, inaccuracy, and “Midas Touch.”
@inproceedings{NIME20_112, author = {Payne, William C and Paradiso, Ann and Kane, Shaun}, title = {Cyclops: Designing an eye-controlled instrument for accessibility and flexible use}, pages = {576--580}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813204}, url = {https://www.nime.org/proceedings/2020/nime2020_paper112.pdf}, presentation-video = {https://youtu.be/G6dxngoCx60} }
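The abstract above describes recording eye-gaze gestures on a two-dimensional canvas and replaying them as parameter automation. The following Python sketch illustrates that general idea under stated assumptions; the gaze samples, the parameter mapping and the timing scheme are invented for illustration and are not the authors' implementation.

```python
# Illustrative gaze-gesture recording and playback, not the Cyclops' actual code.
import time

def record_gesture(samples):
    """Store (t, x, y) gaze samples relative to the first timestamp."""
    t0 = samples[0][0]
    return [(t - t0, x, y) for t, x, y in samples]

def play_gesture(gesture, map_xy):
    """Replay a recorded gesture as a timed stream of parameter values."""
    start = time.time()
    for t, x, y in gesture:
        while time.time() - start < t:
            time.sleep(0.001)
        yield map_xy(x, y)

# Assumed mapping: x controls a filter cutoff, y its resonance.
map_xy = lambda x, y: {"cutoff_hz": 200 + x * 8000, "resonance": y}

fake_gaze = [(0.00, 0.1, 0.2), (0.05, 0.4, 0.3), (0.10, 0.9, 0.8)]
for params in play_gesture(record_gesture(fake_gaze), map_xy):
    print(params)
```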
Adnan Marquez-Borbon. 2020. Collaborative Learning with Interactive Music Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 581–586. http://doi.org/10.5281/zenodo.4813206
Abstract
Download PDF DOI
This paper presents the results of an observational study focusing on the collaborative learning processes of a group of performers with an interactive musical system. The main goal of this study was to implement methods for learning and developing practice with these technological objects in order to generate future pedagogical methods. During the research period of six months, four participants regularly engaged in workshop-type scenarios where learning objectives were proposed and guided by themselves. The principal researcher, working as participant-observer, did not impose or prescribe learning objectives to the other members of the group. Rather, all participants had equal say in what was to be done and how it was to be accomplished. Results show that the group learning environment is rich in opportunities for learning, mutual teaching, and for establishing a communal practice for a given interactive musical system. Key findings suggest that learning by demonstration, observation and modelling are significant for learning in this context. Additionally, it was observed that dialogue and a continuous flow of information between the members of the community are needed in order to motivate and further their learning.
@inproceedings{NIME20_113, author = {Marquez-Borbon, Adnan}, title = {Collaborative Learning with Interactive Music Systems}, pages = {581--586}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813206}, url = {https://www.nime.org/proceedings/2020/nime2020_paper113.pdf}, presentation-video = {https://youtu.be/1G0bOVlWwyI} }
Jens Vetter. 2020. WELLE - a web-based music environment for the blind. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 587–590. http://doi.org/10.5281/zenodo.4813208
Abstract
Download PDF DOI
This paper presents WELLE, a web-based music environment for blind people, and describes its development, design, notation syntax and first experiences. WELLE is intended to serve as a collaborative, performative and educational tool to quickly create and record musical ideas. It is pattern-oriented, based on textual notation and focuses on accessibility, playful interaction and ease of use. WELLE was developed as part of the research project Tangible Signals and will also serve as a platform for the integration of upcoming new interfaces.
@inproceedings{NIME20_114, author = {Vetter, Jens}, title = {WELLE - a web-based music environment for the blind}, pages = {587--590}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813208}, url = {https://www.nime.org/proceedings/2020/nime2020_paper114.pdf} }
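WELLE is described above as pattern-oriented and based on textual notation. As a rough illustration of what a text-based pattern syntax can look like (WELLE's actual syntax is not reproduced here), the following sketch parses a hypothetical step-sequencer notation where 'x' marks a trigger and '.' a rest.

```python
# Hypothetical text-based pattern notation; WELLE's real syntax differs.

def parse_patterns(score: str):
    """Parse lines of the form 'voice : x...x...' into per-voice step lists."""
    patterns = {}
    for line in score.strip().splitlines():
        voice, steps = line.split(":")
        patterns[voice.strip()] = [c == "x" for c in steps.strip()]
    return patterns

score = """
kick  : x...x...x...x...
snare : ....x.......x...
hat   : x.x.x.x.x.x.x.x.
"""
for voice, steps in parse_patterns(score).items():
    print(voice, "".join("X" if s else "-" for s in steps))
```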
Margarida Pessoa, Cláudio Parauta, Pedro Luís, Isabela Corintha, and Gilberto Bernardes. 2020. Examining Temporal Trends and Design Goals of Digital Music Instruments for Education in NIME: A Proposed Taxonomy. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 591–595. http://doi.org/10.5281/zenodo.4813210
Abstract
Download PDF DOI
This paper presents an overview of the design principles behind Digital Music Instruments (DMIs) for education across all editions of the International Conference on New Interfaces for Musical Expression (NIME). We compiled a comprehensive catalogue of over one hundred DMIs with varying degrees of applicability in educational practice. Each catalogue entry is annotated according to a proposed taxonomy for DMIs for education, rooted in the mechanics of control, mapping and feedback of an interactive music system, along with the required expertise of target user groups and the instrument’s learning curve. Global statistics unpack underlying trends and design goals across the chronological period of the NIME conference. In recent years, we note a growing number of DMIs targeting non-experts and with reduced requirements in terms of expertise. Stemming from the identified trends, we discuss future challenges in the design of DMIs for education towards enhanced degrees of variation and unpredictability.
@inproceedings{NIME20_115, author = {Pessoa, Margarida and Parauta, Cláudio and Luís, Pedro and Corintha, Isabela and Bernardes, Gilberto}, title = {Examining Temporal Trends and Design Goals of Digital Music Instruments for Education in NIME: A Proposed Taxonomy}, pages = {591--595}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813210}, url = {https://www.nime.org/proceedings/2020/nime2020_paper115.pdf} }
Laurel S Pardue, Kuljit Bhamra, Graham England, Phil Eddershaw, and Duncan Menzies. 2020. Demystifying tabla through the development of an electronic drum. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 596–599. http://doi.org/10.5281/zenodo.4813212
Abstract
Download PDF DOI
The tabla is a traditional pitched two-piece Indian drum set, popular not only within South East Asian music, but whose sounds also regularly feature in western music. Yet tabla remains an aural tradition, taught largely through a guru system heavy in custom and mystique. Tablas can also pose problems for school and professional performance environments as they are physically bulky, fragile, and reactive to environmental factors such as damp and heat. As part of a broader project to demystify tabla, we present an electronic tabla that plays nearly identically to an acoustic tabla and was created in order to make the tabla accessible and practical for a wider audience of students, professional musicians and composers. Along with development of standardised tabla notation and instructional educational aids, the electronic tabla is designed to be compact, robust, and easily tuned, and its electronic nature allows for scoring tabla through playing. Further, used as an interface, it allows the use of learned tabla technique to control other percussive sounds. We also discuss the technological approaches used to accurately capture the localized multi-touch rapid-fire strikes and damping that combine to make tabla such a captivating and virtuosic instrument.
@inproceedings{NIME20_116, author = {Pardue, Laurel S and Bhamra, Kuljit and England, Graham and Eddershaw, Phil and Menzies, Duncan}, title = {Demystifying tabla through the development of an electronic drum}, pages = {596--599}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813212}, url = {https://www.nime.org/proceedings/2020/nime2020_paper116.pdf}, presentation-video = {https://youtu.be/PPaHq8fQjB0} }
Juan D Sierra. 2020. SpeakerDrum. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 600–604. http://doi.org/10.5281/zenodo.4813216
Abstract
Download PDF DOI
SpeakerDrum is an instrument composed of multiple Dual Voice Coil (DVC) speakers, where two coils drive the same membrane. In this case, however, one of the coils is used as a microphone, which the performer uses as an input interface for percussive gestures. Of course, this leads to potential feedback, but with enough control, a compelling exploration of resonance, haptic feedback and sound embodiment is possible.
@inproceedings{NIME20_117, author = {Sierra, Juan D}, title = {SpeakerDrum}, pages = {600--604}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813216}, url = {https://www.nime.org/proceedings/2020/nime2020_paper117.pdf} }
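The SpeakerDrum entry above uses one coil of a dual-voice-coil speaker as a microphone for percussive gestures. The authors' signal chain is not detailed in the abstract, so the sketch below stands in for it with a simple energy-threshold onset detector over a synthetic signal; the block size and threshold are assumptions.

```python
# Assumed stand-in for detecting percussive hits from the sensing coil's signal.
import numpy as np

def detect_onsets(signal, sr, block=256, threshold=0.1):
    """Return times (s) where short-term energy rises above the threshold."""
    onsets, prev = [], 0.0
    for i in range(0, len(signal) - block, block):
        energy = float(np.mean(signal[i:i + block] ** 2))
        if energy > threshold and prev <= threshold:
            onsets.append(i / sr)
        prev = energy
    return onsets

sr = 44100
sig = np.zeros(sr)
sig[sr // 2:sr // 2 + 200] = 0.9        # a synthetic "hit" half-way through one second
print(detect_onsets(sig, sr))
```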
Matthew Caren, Romain Michon, and Matthew Wright. 2020. The KeyWI: An Expressive and Accessible Electronic Wind Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 605–608. http://doi.org/10.5281/zenodo.4813218
Abstract
Download PDF DOI
This paper presents the KeyWI, an electronic wind instrument design based on the melodica that both improves upon limitations in current systems and is general and powerful enough to support a variety of applications. Four opportunities for growth are identified in current electronic wind instrument systems, which are then used as focuses in the development and evaluation of the instrument. The instrument features a breath pressure sensor with a large dynamic range, a keyboard that allows for polyphonic pitch selection, and a completely integrated construction. Sound synthesis is performed with Faust code compiled to the Bela Mini, which offers low-latency audio and a simple yet powerful development workflow. In order to be as accessible and versatile as possible, the hardware and software are entirely open-source, and fabrication requires only common maker tools.
@inproceedings{NIME20_118, author = {Caren, Matthew and Michon, Romain and Wright, Matthew}, title = {The KeyWI: An Expressive and Accessible Electronic Wind Instrument}, pages = {605--608}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813218}, url = {https://www.nime.org/proceedings/2020/nime2020_paper118.pdf} }
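The KeyWI's synthesis runs as Faust code on a Bela Mini; as a language-agnostic illustration of one element mentioned above, the breath-pressure-to-amplitude mapping, here is a small Python sketch. The calibration range and response curve are assumed values, not taken from the instrument.

```python
# Illustrative breath-to-amplitude mapping (not the KeyWI's published code).

def breath_to_amplitude(raw, raw_min=0.02, raw_max=3.5, curve=0.6):
    """Map a raw pressure reading (arbitrary sensor units) to an amplitude in [0, 1]."""
    span = max(raw_max - raw_min, 1e-9)
    normalised = min(max((raw - raw_min) / span, 0.0), 1.0)
    return normalised ** curve          # gentle power curve keeps quiet playing controllable

for raw in (0.0, 0.1, 1.0, 3.5):
    print(raw, round(breath_to_amplitude(raw), 3))
```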
Pelle Juul Christensen, Dan Overholt, and Stefania Serafin. 2020. The Daïs: A Haptically Enabled New Interface for Musical Expression for Controlling Physical Models for Sound Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 609–612. http://doi.org/10.5281/zenodo.4813220
Abstract
Download PDF DOI
In this paper we provide a detailed description of the development of a new interface for musical expression, the Daïs, with a focus on an iterative development process, control of physical models for sound synthesis, and haptic feedback. The development process, consisting of three iterations, is covered along with a discussion of the tools and methods used. The sound synthesis algorithm for the Daïs, a physical model of a bowed string, is covered, and the mapping from the interface parameters to those of the synthesis algorithm is described in detail. Using a qualitative test, the affordances, advantages, and disadvantages of the chosen design, synthesis algorithm, and parameter mapping are highlighted. Lastly, the possibilities for future work are discussed with special focus on alternate sounds and mappings.
@inproceedings{NIME20_119, author = {Christensen, Pelle Juul and Overholt, Dan and Serafin, Stefania}, title = {The Daïs: A Haptically Enabled New Interface for Musical Expression for Controlling Physical Models for Sound Synthesis}, pages = {609--612}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813220}, url = {https://www.nime.org/proceedings/2020/nime2020_paper119.pdf}, presentation-video = {https://youtu.be/XOvnc_AKKX8} }
Samuel J Hunt, Tom Mitchell, and Chris Nash. 2020. Composing computer generated music, an observational study using IGME: the Interactive Generative Music Environment. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 61–66. http://doi.org/10.5281/zenodo.4813222
Abstract
Download PDF DOI
Computer-composed music remains a novel and challenging problem to solve. Despite an abundance of techniques and systems, little research has explored how these might be useful for end-users looking to compose with generative and algorithmic music techniques. User interfaces for generative music systems are often inaccessible to non-programmers and neglect established composition workflow and design paradigms that are familiar to computer-based music composers. We have developed a system called the Interactive Generative Music Environment (IGME) that attempts to bridge the gap between generative music and music sequencing software, through an easy to use score editing interface. This paper discusses a series of user studies in which users explore generative music composition with IGME. A questionnaire evaluates the users’ perception of interacting with generative music, and from this we provide recommendations for future generative music systems and interfaces.
@inproceedings{NIME20_12, author = {Hunt, Samuel J and Mitchell, Tom and Nash, Chris}, title = {Composing computer generated music, an observational study using IGME: the Interactive Generative Music Environment}, pages = {61--66}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813222}, url = {https://www.nime.org/proceedings/2020/nime2020_paper12.pdf} }
Joao Wilbert, Don D Haddad, Hiroshi Ishii, and Joseph Paradiso. 2020. Patch-corde: an expressive patch-cable for the modular synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 613–616. http://doi.org/10.5281/zenodo.4813224
Abstract
Download PDF DOI
Many opportunities and challenges exist in both the control and performative aspects of today’s modular synthesizers. The user interface prevailing in the world of synthesizers and music controllers has always revolved around knobs, faders, switches, dials, buttons, or capacitive touchpads, to name a few. This paper presents a novel way of interacting with a modular synthesizer by exploring the affordances of cord-based UIs. A special patch cable was developed using commercially available piezo-resistive rubber cords, and was adapted to fit the 3.5 mm mono audio jack, making it compatible with the Eurorack modular-synth standard. Moreover, a module was developed to condition this stretchable sensor/cable, to allow multiple Patch-cordes to be used in a given patch simultaneously. This paper also presents a vocabulary of interactions, labeled through various physical actions, turning the patch cable into an expressive controller that complements traditional patching techniques.
@inproceedings{NIME20_120, author = {Wilbert, Joao and Haddad, Don D and Ishii, Hiroshi and Paradiso, Joseph}, title = {Patch-corde: an expressive patch-cable for the modular synthesizer.}, pages = {613--616}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813224}, url = {https://www.nime.org/proceedings/2020/nime2020_paper120.pdf}, presentation-video = {https://youtu.be/7gklx8ek8U8} }
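As a hypothetical illustration of conditioning a piezo-resistive stretch cable like the one described above: a fixed resistor and the cable form a voltage divider, an ADC reads the midpoint, and the result is scaled to a 0–5 V control signal. All component values and resistance ranges below are assumptions rather than figures from the paper.

```python
# Assumed voltage-divider conditioning for a piezo-resistive stretch cable.
V_SUPPLY = 5.0
R_FIXED = 10_000.0          # ohms, series resistor from supply to the ADC node

def cable_resistance(adc_voltage):
    """Infer the cable's resistance (to ground) from the divider's midpoint voltage."""
    adc_voltage = min(max(adc_voltage, 0.01), V_SUPPLY - 0.01)
    return R_FIXED * adc_voltage / (V_SUPPLY - adc_voltage)

def stretch_to_cv(adc_voltage, r_rest=20_000.0, r_stretched=60_000.0):
    """Map resistance between rest and full stretch onto a 0-5 V control voltage."""
    r = cable_resistance(adc_voltage)
    x = (r - r_rest) / (r_stretched - r_rest)
    return 5.0 * min(max(x, 0.0), 1.0)

for v in (1.0, 2.5, 4.0):
    print(v, round(stretch_to_cv(v), 2))
```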
Jiří Suchánek. 2020. SOIL CHOIR v.1.3 - soil moisture sonification installation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 617–618. http://doi.org/10.5281/zenodo.4813226
Abstract
Download PDF DOI
Artistic sonification offers a creative method for attaching direct semantic layers to abstract sounds. This paper is dedicated to the sound installation “Soil choir v.1.3”, which sonifies soil moisture at different depths and transforms this non-musical phenomenon into organized sound structures. The sonification of natural soil moisture processes tests the limits of our attention, patience and willingness to still perceive ultra-slow reactions, and examines the mechanisms of our sensory adaptation. Although the musical time of the installation is set to an almost non-human, environmental time scale (changes happen within hours, days, weeks or even months…), the system can also be explored and even played as an instrument by placing sensors in different soil areas or pouring liquid into the soil and waiting for changes. The crucial aspect of the work was to design a sonification architecture that deals with extremely slow changes in the input data – the measured values from the moisture sensors. The result is a sound installation consisting of three objects, each with a different type of soil. Every object is a compact, independent unit consisting of three low-cost capacitive soil moisture sensors, a 1 m long perspex tube filled with soil, a full-range loudspeaker and a Bela platform with custom SuperCollider code. I developed this installation during 2019, and this paper gives insight into the aspects and issues connected with creating it.
@inproceedings{NIME20_121, author = {Suchánek, Jiří}, title = {SOIL CHOIR v.1.3 - soil moisture sonification installation}, pages = {617--618}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813226}, url = {https://www.nime.org/proceedings/2020/nime2020_paper121.pdf} }
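The installation above runs custom SuperCollider code on Bela; the Python stand-in below only illustrates one issue the abstract raises, turning extremely slow moisture readings into smoothly evolving control values via exponential smoothing and a simple pitch mapping. The smoothing factor and frequency range are assumptions.

```python
# Illustrative handling of very slow sensor data, not the installation's actual code.

def smooth(readings, alpha=0.05):
    """Exponentially smooth a sequence of sparse, slowly changing sensor values."""
    state = readings[0]
    for r in readings:
        state = alpha * r + (1 - alpha) * state
        yield state

def moisture_to_pitch(m, low_hz=80.0, high_hz=400.0):
    """Map a 0-1 moisture value onto a (hypothetical) fundamental frequency."""
    return low_hz + m * (high_hz - low_hz)

readings = [0.20] * 10 + [0.60] * 10          # a slow step change, e.g. after watering
for m in smooth(readings):
    print(round(moisture_to_pitch(m), 1))
```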
Marinos Koutsomichalis. 2020. Rough-hewn Hertzian Multimedia Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 619–624. http://doi.org/10.5281/zenodo.4813228
Abstract
Download PDF DOI
Three DIY electronic instruments that the author has used in real-life multimedia performance contexts are scrutinised herein. The instruments are made intentionally rough-hewn, non-optimal and user-unfriendly in several respects, and are shown to draw upon experimental traits in electronics design and interfaces for music expression. The various ways in which such design traits affect their performance are outlined, as are their overall consequences for the artistic outcome and for individual experiences of it. It is shown that, to a varying extent, they all embody, mediate, and help actualise the specifics their parent projects revolve around. It is eventually suggested that, in the context of an exploratory and hybrid artistic practice, bespoke instruments of this sort, their improvised performance, the material traits or processes they implement or pivot on, and the ideas/narratives that emerge from them may all intertwine and fuse into one another, so that a clear distinction between them is not always possible, or meaningful. In this vein, this paper aims to be an account of such a practice upon which prospective researchers/artists may further build.
@inproceedings{NIME20_122, author = {Koutsomichalis, Marinos}, title = {Rough-hewn Hertzian Multimedia Instruments}, pages = {619--624}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813228}, url = {https://www.nime.org/proceedings/2020/nime2020_paper122.pdf}, presentation-video = {https://youtu.be/DWecR7exl8k} }
Taylor J Olsen. 2020. Animation, Sonification, and Fluid-Time: A Visual-Audioizer Prototype. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 625–630. http://doi.org/10.5281/zenodo.4813230
Abstract
Download PDF DOI
The visual-audioizer is a patch created in Max in which fluid-time animation techniques, in tandem with basic computer vision tracking methods, can be used as a tool to allow the visual time-based media artist to create music. Visual aspects relating to the animator’s knowledge of motion, animated loops, and auditory synchronization derived from computer vision tracking methods allow an immediate connection between visuals and the audio generated from them—becoming a new way to experience and create audio-visual media. A conceptual overview, comparisons of past and current audio-visual contributors, and a summary of the Max patch will be discussed. The novelty of practice-based animation methods in the field of musical expression, considerations for utilizing the visual-audioizer, and the future of fluid-time animation techniques as a tool of musical creativity will also be addressed.
@inproceedings{NIME20_123, author = {Olsen, Taylor J}, title = {Animation, Sonification, and Fluid-Time: A Visual-Audioizer Prototype}, pages = {625--630}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813230}, url = {https://www.nime.org/proceedings/2020/nime2020_paper123.pdf} }
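The visual-audioizer itself is a Max patch; as an assumed, self-contained illustration of the underlying idea, the sketch below tracks the centroid of bright pixels in a synthetic animation frame and maps its position to pitch and amplitude. The threshold and mapping are invented for illustration.

```python
# Illustrative centroid tracking of an animation frame mapped to audio parameters.
import numpy as np

def centroid(frame, threshold=0.5):
    """Return the normalised (x, y) centroid of bright pixels, or None if empty."""
    ys, xs = np.nonzero(frame > threshold)
    if len(xs) == 0:
        return None
    return xs.mean() / frame.shape[1], ys.mean() / frame.shape[0]

def to_audio_params(c):
    if c is None:
        return {"amp": 0.0, "freq_hz": 0.0}
    x, y = c
    # y = 0 is the top of the frame, so objects higher on screen get higher pitch
    return {"amp": x, "freq_hz": 110.0 * 2 ** (3 * (1 - y))}

frame = np.zeros((120, 160))
frame[20:30, 100:110] = 1.0                 # a bright object near the top right
print(to_audio_params(centroid(frame)))
```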
Virginia de las Pozas. 2020. Semi-Automated Mappings for Object-Manipulating Gestural Control of Electronic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 631–634. http://doi.org/10.5281/zenodo.4813232
Abstract
Download PDF DOI
This paper describes a system for automating the generation of mapping schemes between human interaction with extramusical objects and electronic dance music. These mappings are determined through the comparison of sensor input to a synthesized matrix of sequenced audio. The goal of the system is to facilitate live performances that feature quotidian objects in the place of traditional musical instruments. The practical and artistic applications of musical control with quotidian objects are discussed. The associated object-manipulating gesture vocabularies are mapped to musical output so that the objects themselves may be perceived as DMIs. This strategy is used in a performance to explore the liveness qualities of the system.
@inproceedings{NIME20_124, author = {de las Pozas, Virginia}, title = {Semi-Automated Mappings for Object-Manipulating Gestural Control of Electronic Music}, pages = {631--634}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813232}, url = {https://www.nime.org/proceedings/2020/nime2020_paper124.pdf} }
Christodoulos Benetatos, Joseph VanderStel, and Zhiyao Duan. 2020. BachDuet: A Deep Learning System for Human-Machine Counterpoint Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 635–640. http://doi.org/10.5281/zenodo.4813234
Abstract
Download PDF DOI
During the Baroque period, improvisation was a key element of music performance and education. Great musicians, such as J.S. Bach, were better known as improvisers than composers. Today, however, there is a lack of improvisation culture in classical music performance and education; classical musicians either are not trained to improvise, or cannot find other people to improvise with. Motivated by this observation, we develop BachDuet, a system that enables real-time counterpoint improvisation between a human and a machine. This system uses a recurrent neural network to process the human musician’s monophonic performance on a MIDI keyboard and generates the machine’s monophonic performance in real time. We develop a GUI to visualize the generated music content and to facilitate this interaction. We conduct user studies with 13 musically trained users and show the feasibility of two-party duet counterpoint improvisation and the effectiveness of BachDuet for this purpose. We also conduct listening tests with 48 participants and show that they cannot tell the difference between duets generated by human-machine improvisation using BachDuet and those generated by human-human improvisation. Objective evaluation is also conducted to assess the degree to which these improvisations adhere to common rules of counterpoint, showing promising results.
@inproceedings{NIME20_125, author = {Benetatos, Christodoulos and VanderStel, Joseph and Duan, Zhiyao}, title = {BachDuet: A Deep Learning System for Human-Machine Counterpoint Improvisation}, pages = {635--640}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813234}, url = {https://www.nime.org/proceedings/2020/nime2020_paper125.pdf}, presentation-video = {https://youtu.be/wFGW0QzuPPk} }
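BachDuet generates its counterpoint with a trained recurrent neural network; the sketch below only mirrors the shape of the real-time exchange the abstract describes, with the network replaced by a toy consonance rule. Everything about the rule (C major scale, preference for thirds and sixths) is an illustrative assumption, not the authors' model.

```python
# Toy stand-in for a note-by-note human/machine duet loop; the trained RNN is
# replaced by a simple consonance heuristic purely to show the interaction shape.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]
CONSONANT = {3, 4, 8, 9}                     # thirds and sixths, in semitones mod 12

def machine_reply(human_pitch, prev_machine_pitch):
    """Pick a nearby in-scale pitch that is consonant with the human's note."""
    candidates = [p for p in range(prev_machine_pitch - 7, prev_machine_pitch + 8)
                  if p % 12 in C_MAJOR and abs(human_pitch - p) % 12 in CONSONANT]
    if not candidates:
        return prev_machine_pitch
    return min(candidates, key=lambda p: abs(p - prev_machine_pitch))

human_line = [60, 62, 64, 65, 67]            # C D E F G from the human player
machine_pitch = 55
for h in human_line:
    machine_pitch = machine_reply(h, machine_pitch)
    print(f"human {h} -> machine {machine_pitch}")
```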
Olivier Capra, Florent Berthaut, and Laurent Grisoni. 2020. All You Need Is LOD : Levels of Detail in Visual Augmentations for the Audience. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 67–72. http://doi.org/10.5281/zenodo.4813236
Abstract
Download PDF DOI
Because they break the physical link between gestures and sound, Digital Musical Instruments offer countless opportunities for musical expression. For the same reason, however, they may hinder the audience experience, making the musician’s contribution and expressiveness difficult to perceive. In order to cope with this issue without altering the instruments, researchers and artists alike have designed techniques to augment their performances with additional information, through audio, haptic or visual modalities. These techniques have, however, only been designed to offer a fixed level of information, without taking into account the variety of spectators’ expertise and preferences. In this paper, we investigate the design, implementation and effect on audience experience of visual augmentations with a controllable level of detail (LOD). We conduct a controlled experiment with 18 participants, including novices and experts. Our results show contrasts in the impact of LOD on experience and comprehension for experts and novices, and highlight the diversity of usage of visual augmentations by spectators.
@inproceedings{NIME20_13, author = {Capra, Olivier and Berthaut, Florent and Grisoni, Laurent}, title = {All You Need Is LOD : Levels of Detail in Visual Augmentations for the Audience}, pages = {67--72}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813236}, url = {https://www.nime.org/proceedings/2020/nime2020_paper13.pdf}, presentation-video = {https://youtu.be/3hIGu9QDn4o} }
Johnty Wang, Eduardo Meneses, and Marcelo Wanderley. 2020. The Scalability of WiFi for Mobile Embedded Sensor Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 73–76. http://doi.org/10.5281/zenodo.4813239
Abstract
Download PDF DOI
In this work we test the performance of multiple ESP32 microcontrollers used as WiFi sensor interfaces in the context of real-time interactive systems. Device counts from 1 to 13 and individual sending rates from 50 to 2300 messages per second are tested to provide examples of various network load situations that may resemble a performance configuration. The overall end-to-end latency and bandwidth are measured as the basic performance metrics of interest. The results show that a maximum message rate of 2300 Hz is possible on a 2.4 GHz network for a single embedded device, and that this rate decreases as more devices are added. During testing it was possible to have up to 7 devices transmitting at 100 Hz while attaining less than 10 ms latency, but performance degrades with increasing sending rates and number of devices. Performance can also vary significantly from day to day depending on network usage in a crowded environment.
@inproceedings{NIME20_14, author = {Wang, Johnty and Meneses, Eduardo and Wanderley, Marcelo}, title = {The Scalability of WiFi for Mobile Embedded Sensor Interfaces}, pages = {73--76}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813239}, url = {https://www.nime.org/proceedings/2020/nime2020_paper14.pdf} }
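The measurements above involve ESP32 boards on a real WiFi network. As a self-contained stand-in for the same style of end-to-end latency measurement, the sketch below times UDP round trips against a local echo socket; it is not the authors' test harness, and localhost figures will be far lower than WiFi ones.

```python
# Simple UDP round-trip timing against a local echo server (illustrative only).
import socket, statistics, threading, time

def echo_server(port=9999):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    while True:
        data, addr = s.recvfrom(64)
        s.sendto(data, addr)                 # echo each datagram straight back

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.1)                              # give the server a moment to bind

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
rtts = []
for i in range(100):
    t0 = time.perf_counter()
    client.sendto(str(i).encode(), ("127.0.0.1", 9999))
    client.recvfrom(64)
    rtts.append((time.perf_counter() - t0) * 1000.0)

print(f"median RTT {statistics.median(rtts):.3f} ms over {len(rtts)} messages")
```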
Florent Berthaut and Luke Dahl. 2020. Adapting & Openness: Dynamics of Collaboration Interfaces for Heterogeneous Digital Orchestras. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 77–82. http://doi.org/10.5281/zenodo.4813241
Abstract
Download PDF DOI
Advanced musical cooperation, such as concurrent control of musical parameters or sharing data between instruments, has previously been investigated using multi-user instruments or orchestras of identical instruments. In the case of heterogeneous digital orchestras, where the instruments, interfaces, and control gestures can be very different, a number of issues may impede such collaboration opportunities. These include the lack of a standard method for sharing data or control, the incompatibility of parameter types, and limited awareness of other musicians’ activity and instrument structure. As a result, most collaborations remain limited to synchronising tempo or applying effects to audio outputs. In this paper we present two interfaces for real-time group collaboration amongst musicians with heterogeneous instruments. We conducted a qualitative study to investigate how these interfaces impact musicians’ experience and their musical output, performed a thematic analysis of interviews, and analysed logs of interactions. From these results we derive principles and guidelines for the design of advanced collaboration systems for heterogeneous digital orchestras, namely Adapting (to) the System, Support Development, Default to Openness, and Minimise Friction to Support Expressivity.
@inproceedings{NIME20_15, author = {Berthaut, Florent and Dahl, Luke}, title = {Adapting & Openness: Dynamics of Collaboration Interfaces for Heterogeneous Digital Orchestras}, pages = {77--82}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813241}, url = {https://www.nime.org/proceedings/2020/nime2020_paper15.pdf}, presentation-video = {https://youtu.be/jGpKkbWq_TY} }
Andreas Förster, Christina Komesker, and Norbert Schnell. 2020. SnoeSky and SonicDive - Design and Evaluation of Two Accessible Digital Musical Instruments for a SEN School. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 83–88. http://doi.org/10.5281/zenodo.4813243
Abstract
Download PDF DOI
Music technology can provide persons who experience physical and/or intellectual barriers to using traditional musical instruments with a unique access to active music making. This applies particularly, but not exclusively, to the so-called group of people with physical and/or mental disabilities. This paper presents two Accessible Digital Musical Instruments (ADMIs) that were specifically designed for the students of a Special Educational Needs (SEN) school with a focus on intellectual disabilities. With SnoeSky, we present an ADMI in the form of an interactive starry sky that integrates into the Snoezel-Room. Here, users can ’play’ with ’melodic constellations’ using a flashlight. SonicDive is an interactive installation that enables users to explore a complex water soundscape through their movement inside a ball pool. The underlying goal of both ADMIs was the promotion of self-efficacy experiences while stimulating the users’ relaxation and activation. This paper reports on the design process involving the users and their environment. In addition, it describes some details of the technical implementation of the ADMIs as well as first indications of their effectiveness.
@inproceedings{NIME20_16, author = {Förster, Andreas and Komesker, Christina and Schnell, Norbert}, title = {SnoeSky and SonicDive - Design and Evaluation of Two Accessible Digital Musical Instruments for a SEN School}, pages = {83--88}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813243}, url = {https://www.nime.org/proceedings/2020/nime2020_paper16.pdf} }
Robert Pritchard and Ian Lavery. 2020. Inexpensive Colour Tracking to Overcome Performer ID Loss. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 89–92. http://doi.org/10.5281/zenodo.4813245
Abstract
Download PDF DOI
The NuiTrack IDE supports writing code for an active infrared camera to track up to six bodies, with up to 25 target points on each person. The system automatically assigns IDs to performers/users as they enter the tracking area, but when occlusion of a performer occurs, or when a user exits and then re-enters the tracking area, the system generates a new tracking ID upon rediscovery of the user. Because of this, any assigned and registered target tracking points for specific users are lost, as are the linked abilities of that performer to control media based on their movements. We describe a single-camera system for overcoming this problem by assigning IDs based on the colours worn by the performers, and then using the colour tracking for updating and confirming identification when the performer reappears after occlusion or upon re-entry. A video link is supplied showing the system used for an interactive dance work with four dancers controlling individual audio tracks.
@inproceedings{NIME20_17, author = {Pritchard, Robert and Lavery, Ian}, title = {Inexpensive Colour Tracking to Overcome Performer ID Loss }, pages = {89--92}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813245}, url = {https://www.nime.org/proceedings/2020/nime2020_paper17.pdf} }
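A simplified, assumed illustration of the colour-based re-identification idea described above: each performer registers a reference clothing colour, and an unlabelled body returned by the tracker is assigned the ID whose registered colour is nearest. The real system works on camera images; here colours are plain RGB tuples and the registered values are invented.

```python
# Nearest-colour re-identification sketch; performer colours are hypothetical.

PERFORMERS = {
    "dancer_1": (220, 40, 40),     # red costume
    "dancer_2": (40, 60, 220),     # blue costume
    "dancer_3": (240, 220, 50),    # yellow costume
}

def colour_distance(a, b):
    """Euclidean distance between two RGB colours."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def reidentify(observed_rgb):
    """Return the performer whose registered colour best matches the observation."""
    return min(PERFORMERS, key=lambda pid: colour_distance(PERFORMERS[pid], observed_rgb))

print(reidentify((200, 60, 55)))   # -> dancer_1, despite lighting-induced colour drift
```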
Kiyu Nishida and kazuhiro jo. 2020. Modules for analog synthesizers using Aloe vera biomemristor. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 93–96. http://doi.org/10.5281/zenodo.4813249
Abstract
Download PDF DOI
In this study, an analog synthesizer module using Aloe vera as a biomemristor was proposed. The recent revival of analog modular synthesizers explores novel possibilities of sound based on unconventional technologies, such as integrating biological forms and structures into traditional circuits. Biosignals have been used in experimental music as material for composition; more recently, the development of a biocomputer using a slime mold biomemristor has expanded the use of biomemristors in music. Building on prior research, the characteristics of Aloe vera as a biomemristor were measured electrically, and two types of analog synthesizer modules were developed: a current-to-voltage converter (CVC) and a current-spike-to-voltage converter. A live performance was conducted with the CVC module, and its possibilities as a new interface for musical expression were examined.
@inproceedings{NIME20_18, author = {Nishida, Kiyu and jo, kazuhiro}, title = {Modules for analog synthesizers using Aloe vera biomemristor}, pages = {93--96}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813249}, url = {https://www.nime.org/proceedings/2020/nime2020_paper18.pdf}, presentation-video = {https://youtu.be/bZaCd6igKEA} }
Giulio Moro and Andrew McPherson. 2020. A platform for low-latency continuous keyboard sensing and sound generation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 97–102. http://doi.org/10.5281/zenodo.4813253
Abstract
Download PDF DOI
On several acoustic and electromechanical keyboard instruments, the produced sound is not always strictly dependent on a discrete key velocity parameter, and minute gesture details can affect the final sonic result. By contrast, subtle variations in articulation have a relatively limited effect on the sound generation when the keyboard controller uses the MIDI standard, which is used in the vast majority of digital keyboards. In this paper we present an embedded platform that can generate sound in response to a controller capable of sensing the continuous position of keys on a keyboard. This platform enables the creation of keyboard-based DMIs which allow for a richer set of interaction gestures than would be possible through a MIDI keyboard, which we demonstrate through two example instruments. First, in a Hammond organ emulator, the sensing device allows the nuances of the interaction with the original instrument to be recreated in a way a velocity-based MIDI controller could not. Second, a nonlinear waveguide flute synthesizer is shown as an example of the expressive capabilities that a continuous-keyboard controller opens up in the creation of new keyboard-based DMIs.
@inproceedings{NIME20_19, author = {Moro, Giulio and McPherson, Andrew}, title = {A platform for low-latency continuous keyboard sensing and sound generation}, pages = {97--102}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813253}, url = {https://www.nime.org/proceedings/2020/nime2020_paper19.pdf}, presentation-video = {https://youtu.be/Y137M9UoKKg} }
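To illustrate the contrast the abstract above draws between discrete velocity and continuous key sensing, the following sketch derives both a velocity-style value at note onset and a continuous control signal from a sampled key-position trajectory. The sample data, onset level and sampling interval are assumptions, not taken from the paper's platform.

```python
# Illustrative analysis of a continuously sensed key position (0 = at rest, 1 = pressed).

def analyse_keypress(positions, dt=0.001, onset_level=0.9):
    """Return a velocity-like value at onset plus the full continuous trajectory."""
    velocity, controls = None, []
    for i, p in enumerate(positions):
        if velocity is None and p >= onset_level and i > 0:
            velocity = (positions[i] - positions[i - 1]) / dt     # key speed at onset
        controls.append(p)                                        # continuous, aftertouch-like signal
    return velocity, controls

positions = [0.0, 0.2, 0.5, 0.8, 0.95, 1.0, 0.97, 0.93, 1.0]
v, ctrl = analyse_keypress(positions)
print("onset velocity:", v)
print("continuous control:", ctrl)
```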
Advait Sarkar and Henry Mattinson. 2020. Excello: exploring spreadsheets for music composition. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 11–16. http://doi.org/10.5281/zenodo.4813256
Abstract
Download PDF DOI
Excello is a spreadsheet-based music composition and programming environment. We co-developed Excello with feedback from 21 musicians at varying levels of musical and computing experience. We asked: can the spreadsheet interface be used for programmatic music creation? Our design process encountered questions such as how time should be represented, whether amplitude and octave should be encoded as properties of individual notes or entire phrases, and how best to leverage standard spreadsheet features, such as formulae and copy-paste. We present the user-centric rationale for our current design, and report a user study suggesting that Excello’s notation retains similar cognitive dimensions to conventional music composition tools, while allowing the user to write substantially complex programmatic music.
@inproceedings{NIME20_2, author = {Sarkar, Advait and Mattinson, Henry}, title = {Excello: exploring spreadsheets for music composition}, pages = {11--16}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813256}, url = {https://www.nime.org/proceedings/2020/nime2020_paper2.pdf} }
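Excello's actual notation is richer than this, but as an assumed sketch of the core idea above, a spreadsheet column can be read as a time-ordered voice, one cell per beat, with each cell holding a note name or a rest.

```python
# Hypothetical spreadsheet-column-to-events conversion, not Excello's real notation.

NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def cell_to_midi(cell):
    """Convert a cell like 'C4' to a MIDI note number; '-' or empty means rest."""
    if not cell or cell == "-":
        return None
    return 12 * (int(cell[-1]) + 1) + NOTE_OFFSETS[cell[0]]

def column_to_events(column, beat_seconds=0.5):
    """Turn a list of spreadsheet cells into (onset_time, midi_note) events."""
    return [(i * beat_seconds, cell_to_midi(c)) for i, c in enumerate(column)
            if cell_to_midi(c) is not None]

print(column_to_events(["C4", "E4", "G4", "-", "C5"]))
```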
Andrea Guidi, Fabio Morreale, and Andrew McPherson. 2020. Design for auditory imagery: altering instruments to explore performer fluency. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 103–108. http://doi.org/10.5281/zenodo.4813260
Abstract
Download PDF DOI
In NIME design, thorough attention has been devoted to feedback modalities, including auditory, visual and haptic feedback. How the performer executes the gestures to achieve a sound on an instrument, by contrast, appears to be less examined. Previous research showed that auditory imagery, or the ability to hear or recreate sounds in the mind even when no audible sound is present, is essential to the sensorimotor control involved in playing an instrument. In this paper, we enquire whether auditory imagery can also help to support skill transfer between musical instruments resulting in possible implications for new instrument design. To answer this question, we performed two experimental studies on pitch accuracy and fluency where professional violinists were asked to play a modified violin. Results showed altered or even possibly irrelevant auditory feedback on a modified violin does not appear to be a significant impediment to performance. However, performers need to have coherent imagery of what they want to do, and the sonic outcome needs to be coupled to the motor program to achieve it. This finding shows that the design lens should be shifted from a direct feedback model of instrumental playing toward a model where imagery guides the playing process. This result is in agreement with recent research on skilled sensorimotor control that highlights the value of feedforward anticipation in embodied musical performance. It is also of primary importance for the design of new instruments: new sounds that cannot easily be imagined and that are not coupled to a motor program are not likely to be easily performed on the instrument.
@inproceedings{NIME20_20, author = {Guidi, Andrea and Morreale, Fabio and McPherson, Andrew}, title = {Design for auditory imagery: altering instruments to explore performer fluency}, pages = {103--108}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813260}, url = {https://www.nime.org/proceedings/2020/nime2020_paper20.pdf}, presentation-video = {https://youtu.be/yK7Tg1kW2No} }
Raul Masu, Paulo Bala, Muhammad Ahmad, et al. 2020. VR Open Scores: Scores as Inspiration for VR Scenarios. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 109–114. http://doi.org/10.5281/zenodo.4813262
Abstract
Download PDF DOI
In this paper, we introduce the concept of VR Open Scores: score-based virtual scenarios in which an aleatoric score is embedded in a virtual environment. This idea builds upon the notions of graphic scores and composed instruments, and applies them in a new context. Our proposal also explores possible parallels between open meaning in interaction design and the aleatoric score, conceptualized as an Open Work by the Italian philosopher Umberto Eco. Our approach has two aims. The first aim is to create an environment where users can immerse themselves in the visual elements of a score while listening to the corresponding music. The second aim is to help users develop a personal relationship with both the system and the score. To achieve those aims, as a practical implementation of our proposed concept, we developed two immersive scenarios: a 360º video and an interactive space. We conclude by presenting how our design aims were accomplished in the two scenarios, and by describing positive and negative elements of our implementations.
@inproceedings{NIME20_21, author = {Masu, Raul and Bala, Paulo and Ahmad, Muhammad and Correia, Nuno N. and Nisi, Valentina and Nunes, Nuno and Romão, Teresa}, title = {VR Open Scores: Scores as Inspiration for VR Scenarios}, pages = {109--114}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813262}, url = {https://www.nime.org/proceedings/2020/nime2020_paper21.pdf}, presentation-video = {https://youtu.be/JSM6Rydz7iE} }
Amble H C Skuse and Shelly Knotts. 2020. Creating an Online Ensemble for Home Based Disabled Musicians: Disabled Access and Universal Design - why disabled people must be at the heart of developing technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 115–120. http://doi.org/10.5281/zenodo.4813266
Abstract
Download PDF DOI
The project takes a Universal Design approach to exploring the possibility of creating a software platform to facilitate a Networked Ensemble for Disabled musicians. In accordance with the Nothing About Us Without Us (Charlton, 1998) principle, I worked with a group of 15 professional musicians who are also disabled. The group gave interviews about their perspectives and needs around networked music practices, and this data was then analysed to look at how software design could be developed to make it more accessible. We also identified key messages for digital musical instrument makers, performers and event organisers more widely, to improve practice around working with and for disabled musicians.
@inproceedings{NIME20_22, author = {Skuse, Amble H C and Knotts, Shelly}, title = {Creating an Online Ensemble for Home Based Disabled Musicians: Disabled Access and Universal Design - why disabled people must be at the heart of developing technology.}, pages = {115--120}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813266}, url = {https://www.nime.org/proceedings/2020/nime2020_paper22.pdf}, presentation-video = {https://youtu.be/m4D4FBuHpnE} }
Anıl Çamcı, Matias Vilaplana, and Ruth Wang. 2020. Exploring the Affordances of VR for Musical Interaction Design with VIMEs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 121–126. http://doi.org/10.5281/zenodo.4813268
Abstract
Download PDF DOI
As virtual reality (VR) continues to gain prominence as a medium for artistic expression, a growing number of projects explore the use of VR for musical interaction design. In this paper, we discuss the concept of VIMEs (Virtual Interfaces for Musical Expression) through four case studies that explore different aspects of musical interactions in virtual environments. We then describe a user study designed to evaluate these VIMEs in terms of various usability considerations, such as immersion, perception of control, learnability and physical effort. We offer the results of the study, articulating the relationship between the design of a VIME and the various performance behaviors observed among its users. Finally, we discuss how these results, combined with recent developments in VR technology, can inform the design of new VIMEs.
@inproceedings{NIME20_23, author = {Çamcı, Anıl and Vilaplana, Matias and Wang, Ruth}, title = {Exploring the Affordances of VR for Musical Interaction Design with VIMEs}, pages = {121--126}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813268}, url = {https://www.nime.org/proceedings/2020/nime2020_paper23.pdf} }
Anıl Çamcı, Aaron Willette, Nachiketa Gargi, Eugene Kim, Julia Xu, and Tanya Lai. 2020. Cross-platform and Cross-reality Design of Immersive Sonic Environments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 127–130. http://doi.org/10.5281/zenodo.4813270
Abstract
Download PDF DOI
The continued growth of modern VR (virtual reality) platforms into mass adoption is fundamentally driven by the work of content creators who offer engaging experiences. It is therefore essential to design accessible creativity support tools that can facilitate the work of a broad range of practitioners in this domain. In this paper, we focus on one facet of VR content creation, namely immersive audio design. We discuss a suite of design tools that enable both novice and expert users to rapidly prototype immersive sonic environments across desktop, virtual reality and augmented reality platforms. We discuss the design considerations adopted for each implementation, and how the individual systems informed one another in terms of interaction design. We then offer a preliminary evaluation of these systems with reports from first-time users. Finally, we discuss our road-map for improving individual and collaborative creative experiences across platforms and realities in the context of immersive audio.
@inproceedings{NIME20_24, author = {Çamcı, Anıl and Willette, Aaron and Gargi, Nachiketa and Kim, Eugene and Xu, Julia and Lai, Tanya}, title = {Cross-platform and Cross-reality Design of Immersive Sonic Environments}, pages = {127--130}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813270}, url = {https://www.nime.org/proceedings/2020/nime2020_paper24.pdf} }
Marius Schebella, Gertrud Fischbacher, and Matthew Mosher. 2020. Silver: A Textile Wireframe Interface for the Interactive Sound Installation Idiosynkrasia. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 131–132. http://doi.org/10.5281/zenodo.4813272
Abstract
Download PDF DOI
Silver is an artwork that deals with the emotional feeling of contact by exaggerating it acoustically. It originates from an interactive room installation, where several textile sculptures merge with sounds. Silver is made from a wire mesh and its surface is reactive to closeness and touch. This material property forms a hybrid of artwork and parametric controller for the real-time sound generation. The textile quality of the fine steel wire-mesh evokes a haptic familiarity inherent to textile materials. This makes it easy for the audience to overcome the initial barrier to getting in touch with the artwork in an exhibition situation. Additionally, the interaction is not dependent on visuals. The characteristics of the surface sensor allow a user to play the instrument without actually touching it.
@inproceedings{NIME20_25, author = {Schebella, Marius and Fischbacher, Gertrud and Mosher, Matthew}, title = {Silver: A Textile Wireframe Interface for the Interactive Sound Installation Idiosynkrasia}, pages = {131--132}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813272}, url = {https://www.nime.org/proceedings/2020/nime2020_paper25.pdf} }
Ning Yang, Richard Savery, Raghavasimhan Sankaranarayanan, Lisa Zahray, and Gil Weinberg. 2020. Mechatronics-Driven Musical Expressivity for Robotic Percussionists. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 133–138. http://doi.org/10.5281/zenodo.4813274
Abstract
Download PDF DOI
Musical expressivity is an important aspect of musical performance for humans as well as robotic musicians. We present a novel mechatronics-driven implementation of Brushless Direct Current (BLDC) motors in a robotic marimba player, named ANON, designed to improve speed, dynamic range (loudness), and ultimately perceived musical expressivity in comparison to state-of-the-art robotic percussionist actuators. In an objective test of dynamic range, we find that our implementation provides wider and more consistent dynamic range response in comparison with solenoid-based robotic percussionists. Our implementation also outperforms both solenoid and human marimba players in striking speed. In a subjective listening test measuring musical expressivity, our system performs significantly better than a solenoid-based system and is statistically indistinguishable from human performers.
@inproceedings{NIME20_26, author = {Yang, Ning and Savery, Richard and Sankaranarayanan, Raghavasimhan and Zahray, Lisa and Weinberg, Gil}, title = {Mechatronics-Driven Musical Expressivity for Robotic Percussionists}, pages = {133--138}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813274}, url = {https://www.nime.org/proceedings/2020/nime2020_paper26.pdf}, presentation-video = {https://youtu.be/KsQNlArUv2k} }
Paul Dunham. 2020. Click::RAND. A Minimalist Sound Sculpture. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 139–142. http://doi.org/10.5281/zenodo.4813276
Abstract
Download PDF DOI
Discovering outmoded or obsolete technologies and appropriating them in creative practice can uncover new relationships between those technologies. Using a media archaeological research approach, this paper presents the electromechanical relay and a book of random numbers as related forms of obsolete media. Situated within the context of electromechanical sound art, the work uses a non-deterministic approach to explore the non-linear and unpredictable agency and materiality of the objects in the work. Developed by the first author, Click::RAND is an object-based sound installation. The work has been developed as an audio-visual representation of a genealogy of connections between these two forms of media in the history of computing.
@inproceedings{NIME20_27, author = {Dunham, Paul}, title = {Click::RAND. A Minimalist Sound Sculpture.}, pages = {139--142}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813276}, url = {https://www.nime.org/proceedings/2020/nime2020_paper27.pdf}, presentation-video = {https://youtu.be/vWKw8H0F9cI} }
Enrique Tomás. 2020. A Playful Approach to Teaching NIME: Pedagogical Methods from a Practice-Based Perspective. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 143–148. http://doi.org/10.5281/zenodo.4813280
Abstract
Download PDF DOI
This paper reports on the experience gained after five years of teaching a NIME master course designed specifically for artists. A playful pedagogical approach based on practice-based methods is presented and evaluated. My goal was to introduce the art of NIME design and performance while giving less emphasis to technology. Instead of letting technology determine how we teach and think during the class, I propose first fostering the student's active construction and understanding of the field by experimenting with physical materials, sound production and bodily movements. To this end, I developed a few classroom exercises which my students had to study and practice. Over this five-year period, 95 students attended the course. At the end of the semester, each student designed, built and performed a new interface for musical expression in front of an audience. Thus, in this paper I describe and discuss the benefits of applying playfulness and practice-based methods for teaching NIME in art universities. I introduce the methods and classroom exercises developed and, finally, present some lessons learned from this pedagogical experience.
@inproceedings{NIME20_28, author = {Tomás, Enrique}, title = {A Playful Approach to Teaching NIME: Pedagogical Methods from a Practice-Based Perspective}, pages = {143--148}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813280}, url = {https://www.nime.org/proceedings/2020/nime2020_paper28.pdf}, presentation-video = {https://youtu.be/94o3J3ozhMs} }
Quinn D Jarvis Holland, Crystal Quartez, Francisco Botello, and Nathan Gammill. 2020. EXPANDING ACCESS TO MUSIC TECHNOLOGY- Rapid Prototyping Accessible Instrument Solutions For Musicians With Intellectual Disabilities. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 149–153. http://doi.org/10.5281/zenodo.4813286
Abstract
Download PDF DOI
Using open-source and creative coding frameworks, a team of artist-engineers from Portland Community College working with artists that experience Intellectual/Developmental disabilities prototyped an ensemble of adapted instruments and synthesizers that facilitate real-time in-key collaboration. The instruments employ a variety of sensors, sending the resulting musical controls to software sound generators via MIDI. Careful consideration was given to the balance between freedom of expression, and curating the possible sonic outcomes as adaptation. Evaluation of adapted instrument design may differ greatly from frameworks for evaluating traditional instruments or products intended for mass-market, though the results of such focused and individualised design have a variety of possible applications.
@inproceedings{NIME20_29, author = {Jarvis Holland, Quinn D and Quartez, Crystal and Botello, Francisco and Gammill, Nathan}, title = {EXPANDING ACCESS TO MUSIC TECHNOLOGY- Rapid Prototyping Accessible Instrument Solutions For Musicians With Intellectual Disabilities}, pages = {149--153}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813286}, url = {https://www.nime.org/proceedings/2020/nime2020_paper29.pdf} }
Alberto Boem, Giovanni M Troiano, Giacomo Lepri, and Victor Zappi. 2020. Non-Rigid Musical Interfaces: Exploring Practices, Takes, and Future Perspective. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 17–22. http://doi.org/10.5281/zenodo.4813288
Abstract
Download PDF DOI
Non-rigid interfaces allow for exploring new interactive paradigms that rely on deformable input and shape change, and whose possible applications span several branches of human-computer interaction (HCI). While extensively explored as deformable game controllers, bendable smartphones, and shape-changing displays, non-rigid interfaces are rarely framed in a musical context, and their use for composition and performance is rather sparse and unsystematic. With this work, we start a systematic exploration of this relatively uncharted research area, by means of (1) briefly reviewing existing musical interfaces that capitalize on deformable input, and (2) surveying 11 experts and pioneers in the field about their experience with and vision on non-rigid musical interfaces. Based on the experts' input, we suggest possible next steps of musical appropriation with deformable and shape-changing technologies. We conclude by discussing how cross-overs between NIME and HCI research will benefit non-rigid interfaces.
@inproceedings{NIME20_3, author = {Boem, Alberto and Troiano, Giovanni M and Lepri, Giacomo and Zappi, Victor}, title = {Non-Rigid Musical Interfaces: Exploring Practices, Takes, and Future Perspective}, pages = {17--22}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813288}, url = {https://www.nime.org/proceedings/2020/nime2020_paper3.pdf}, presentation-video = {https://youtu.be/o4CuAglHvf4} }
Jack Atherton and Ge Wang. 2020. Curating Perspectives: Incorporating Virtual Reality into Laptop Orchestra Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 154–159. http://doi.org/10.5281/zenodo.4813290
Abstract
Download PDF DOI
Despite a history spanning nearly 30 years, best practices for the use of virtual reality (VR) in computer music performance remain exploratory. Here, we present a case study of a laptop orchestra performance entitled Resilience, involving one VR performer and an ensemble of instrumental performers, in order to explore values and design principles for incorporating this emerging technology into computer music performance. We present a brief history at the intersection of VR and the laptop orchestra. We then present the design of the piece and distill it into a set of design principles. Broadly, these design principles address the interplay between the different conflicting perspectives at play: those of the VR performer, the ensemble, and the audience. For example, one principle suggests that the perceptual link between the physical and virtual world may be enhanced for the audience by improving the performers' sense of embodiment. We argue that these design principles are a form of generalized knowledge about how we might design laptop orchestra pieces involving virtual reality.
@inproceedings{NIME20_30, author = {Atherton, Jack and Wang, Ge}, title = {Curating Perspectives: Incorporating Virtual Reality into Laptop Orchestra Performance}, pages = {154--159}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813290}, url = {https://www.nime.org/proceedings/2020/nime2020_paper30.pdf}, presentation-video = {https://youtu.be/tmeDO5hg56Y} }
Fabio Morreale, S. M. Astrid Bin, Andrew McPherson, Paul Stapleton, and Marcelo Wanderley. 2020. A NIME Of The Times: Developing an Outward-Looking Political Agenda For This Community. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 160–165. http://doi.org/10.5281/zenodo.4813294
Abstract
Download PDF DOI
So far, NIME research has been mostly inward-looking, dedicated to divulging and studying our own work and having limited engagement with trends outside our community. Though musical instruments as cultural artefacts are inherently political, we have so far not sufficiently engaged with confronting these themes in our own research. In this paper we argue that we should consider how our work is also political, and begin to develop a clear political agenda that includes social, ethical, and cultural considerations through which to consider not only our own musical instruments, but also those not created by us. Failing to do so would result in an unintentional but tacit acceptance and support of such ideologies. We explore one item to be included in this political agenda: the recent trend in music technology of “democratising music”, which carries implicit political ideologies grounded in techno-solutionism. We conclude with a number of recommendations for stimulating community-wide discussion on these themes in the hope that this leads to the development of an outward-facing perspective that fully engages with political topics.
@inproceedings{NIME20_31, author = {Morreale, Fabio and Bin, S. M. Astrid and McPherson, Andrew and Stapleton, Paul and Wanderley, Marcelo}, title = {A NIME Of The Times: Developing an Outward-Looking Political Agenda For This Community}, pages = {160--165}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813294}, url = {https://www.nime.org/proceedings/2020/nime2020_paper31.pdf}, presentation-video = {https://youtu.be/y2iDN24ZLTg} }
Chantelle L Ko and Lora Oehlberg. 2020. Touch Responsive Augmented Violin Interface System II: Integrating Sensors into a 3D Printed Fingerboard. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 166–171. http://doi.org/10.5281/zenodo.4813300
Abstract
Download PDF DOI
We present TRAVIS II, an augmented acoustic violin with touch sensors integrated into its 3D printed fingerboard that track left-hand finger gestures in real time. The fingerboard has four strips of conductive PLA filament which produce an electric signal when fingers press down on each string. While these sensors are physically robust, they are mechanically assembled and thus easy to replace if damaged. The performer can also trigger presets via four FSRs attached to the body of the violin. The instrument is completely wireless, giving the performer the freedom to move throughout the performance space. While the sensing fingerboard is installed in place of the traditional fingerboard, all other electronics can be removed from the augmented instrument, maintaining the aesthetics of a traditional violin. Our design allows violinists to naturally create music for interactive performance and improvisation without requiring new instrumental techniques. In this paper, we describe the design of the instrument, experiments leading to the sensing fingerboard, and performative applications of the instrument.
@inproceedings{NIME20_32, author = {Ko, Chantelle L and Oehlberg, Lora}, title = {Touch Responsive Augmented Violin Interface System II: Integrating Sensors into a 3D Printed Fingerboard}, pages = {166--171}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813300}, url = {https://www.nime.org/proceedings/2020/nime2020_paper32.pdf}, presentation-video = {https://youtu.be/XIAd_dr9PHE} }
Nicolas E Gold, Chongyang Wang, Temitayo Olugbade, Nadia Berthouze, and Amanda Williams. 2020. P(l)aying Attention: Multi-modal, multi-temporal music control. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 172–175. http://doi.org/10.5281/zenodo.4813303
Abstract
Download PDF DOI
The expressive control of sound and music through body movements is well-studied. For some people, body movement is demanding, and although they would prefer to express themselves freely using gestural control, they are unable to use such interfaces without difficulty. In this paper, we present the P(l)aying Attention framework for manipulating recorded music to support these people, and to help the therapists that work with them. The aim is to facilitate body awareness, exploration, and expressivity by allowing the manipulation of a pre-recorded ‘ensemble’ through an interpretation of body movement, provided by a machine-learning system trained on physiotherapist assessments and movement data from people with chronic pain. The system considers the nature of a person’s movement (e.g. protective) and offers an interpretation in terms of the joint-groups that are playing a major role in the determination at that point in the movement, and to which attention should perhaps be given (or the opposite at the user’s discretion). Using music to convey the interpretation offers informational (through movement sonification) and creative (through manipulating the ensemble by movement) possibilities. The approach offers the opportunity to explore movement and music at multiple timescales and under varying musical aesthetics.
@inproceedings{NIME20_33, author = {Gold, Nicolas E and Wang, Chongyang and Olugbade, Temitayo and Berthouze, Nadia and Williams, Amanda}, title = {P(l)aying Attention: Multi-modal, multi-temporal music control}, pages = {172--175}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813303}, url = {https://www.nime.org/proceedings/2020/nime2020_paper33.pdf} }
Doga Cavdir and Ge Wang. 2020. Felt Sound: A Shared Musical Experience for the Deaf and Hard of Hearing. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 176–181. http://doi.org/10.5281/zenodo.4813305
Abstract
Download PDF DOI
We present a musical interface specifically designed for inclusive performance that offers a shared experience for both individuals who are deaf and hard of hearing as well as those who are not. This interface borrows gestures (with or without overt meaning) from American Sign Language (ASL), rendered using low-frequency sounds that can be felt by everyone in the performance. The Deaf and Hard of Hearing cannot experience the sound in the same way. Instead, they are able to physically experience the vibrations, nuances, contours, as well as its correspondence with the hand gestures. Those who are not hard of hearing can experience the sound, but also feel it just the same, with the knowledge that the same physical vibrations are shared by everyone. The employment of sign language adds another aesthetic dimension to the instrument: a nuanced borrowing of a functional communication medium for an artistic end.
@inproceedings{NIME20_34, author = {Cavdir, Doga and Wang, Ge}, title = {Felt Sound: A Shared Musical Experience for the Deaf and Hard of Hearing}, pages = {176--181}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813305}, url = {https://www.nime.org/proceedings/2020/nime2020_paper34.pdf}, presentation-video = {https://youtu.be/JCvlHu4UaZ0} }
Sasha Leitman. 2020. Sound Based Sensors for NIMEs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 182–187. http://doi.org/10.5281/zenodo.4813309
Abstract
Download PDF DOI
This paper examines the use of Sound Sensors and audio as input material for New Interfaces for Musical Expression (NIMEs), exploring the unique affordances and character of the interactions and instruments that leverage it. Examples of previous work in the literature that use audio as sensor input data are examined for insights into how the use of Sound Sensors provides unique opportunities within the NIME context. We present the results of a user study comparing sound-based sensors to other sensing modalities within the context of controlling parameters. The study suggests that the use of Sound Sensors can enhance gestural flexibility and nuance but that they also present challenges in accuracy and repeatability.
@inproceedings{NIME20_35, author = {Leitman, Sasha}, title = {Sound Based Sensors for NIMEs}, pages = {182--187}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813309}, url = {https://www.nime.org/proceedings/2020/nime2020_paper35.pdf} }
Yuma Ikawa and Akihiro Matsuura. 2020. Playful Audio-Visual Interaction with Spheroids. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 188–189. http://doi.org/10.5281/zenodo.4813311
Abstract
Download PDF DOI
This paper presents a novel interactive system for creating audio-visual expressions on a tabletop display by dynamically manipulating solids of revolution called spheroids. The four types of basic spinning and rolling movements of spheroids are recognized from physical conditions such as the contact area, the location of the centroid, the (angular) velocity, and the curvature of the locus, all obtained from sensor data on the display. These are then used for interactively generating audio-visual effects that match each of the movements. We developed digital content that integrates these functionalities and enables composition and live performance through the manipulation of spheroids.
@inproceedings{NIME20_36, author = {Ikawa, Yuma and Matsuura, Akihiro}, title = {Playful Audio-Visual Interaction with Spheroids }, pages = {188--189}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813311}, url = {https://www.nime.org/proceedings/2020/nime2020_paper36.pdf} }
Sihwa Park. 2020. Collaborative Mobile Instruments in a Shared AR Space: a Case of ARLooper. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 190–195. http://doi.org/10.5281/zenodo.4813313
Abstract
Download PDF DOI
This paper presents ARLooper, an augmented reality mobile interface that allows multiple users to record sound and perform together in a shared AR space. ARLooper is an attempt to explore the potential of collaborative mobile AR instruments in supporting non-verbal communication for musical performances. With ARLooper, the user can record, manipulate, and play sounds visualized as 3D waveforms in an AR space. ARLooper provides a shared AR environment wherein multiple users can observe each other's activities in real time, supporting a better understanding of collaborative contexts. This paper provides the background of the research and the design and technical implementation of ARLooper, followed by a user study.
@inproceedings{NIME20_37, author = {Park, Sihwa}, title = {Collaborative Mobile Instruments in a Shared AR Space: a Case of ARLooper}, pages = {190--195}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813313}, url = {https://www.nime.org/proceedings/2020/nime2020_paper37.pdf}, presentation-video = {https://youtu.be/Trw4epKeUbM} }
Diemo Schwarz, Abby Wanyu Liu, and Frederic Bevilacqua. 2020. A Survey on the Use of 2D Touch Interfaces for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 196–201. http://doi.org/10.5281/zenodo.4813318
Abstract
Download PDF DOI
Expressive 2D multi-touch interfaces have in recent years moved from research prototypes to industrial products, from repurposed generic computer input devices to controllers specially designed for musical expression. A host of practitioners use this type of device in many different ways, with different gestures and sound synthesis or transformation methods. In order to get an overview of existing and desired usages, we launched an on-line survey that collected 37 answers from practitioners in and outside of academic and design communities. In the survey we inquired about the participants' devices, their strengths and weaknesses, the layout of control dimensions, the gestures and mappings used, the synthesis software or hardware, and the use of audio descriptors and machine learning. The results can inform the design of future interfaces, gesture analysis and mapping, and give directions for the need and use of machine learning for user adaptation.
@inproceedings{NIME20_38, author = {Schwarz, Diemo and Liu, Abby Wanyu and Bevilacqua, Frederic}, title = {A Survey on the Use of 2D Touch Interfaces for Musical Expression}, pages = {196--201}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813318}, url = {https://www.nime.org/proceedings/2020/nime2020_paper38.pdf}, presentation-video = {https://youtu.be/eE8I3mecaB8} }
Harri L Renney, Tom Mitchell, and Benedict Gaster. 2020. There and Back Again: The Practicality of GPU Accelerated Digital Audio. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 202–207. http://doi.org/10.5281/zenodo.4813320
Abstract
Download PDF DOI
General-Purpose GPU computing is becoming an increasingly viable option for acceleration, including in the audio domain. Although it can improve performance, the intrinsic nature of a device like the GPU involves data transfers and execution commands which require time to complete. Therefore, there is an understandable caution concerning the overhead involved with using the GPU for audio computation. This paper aims to clarify the limitations by presenting a performance benchmarking suite. The benchmarks utilize OpenCL and CUDA across various tests to highlight the considerations and limitations of processing audio in the GPU environment. The benchmarking suite has been used to gather a collection of results across various hardware. Salient results have been reviewed in order to highlight the benefits and limitations of the GPU for digital audio. The results in this work show that the minimal GPU overhead fits into the real-time audio requirements provided the buffer size is selected carefully. The baseline overhead is shown to be roughly 0.1 ms, depending on the GPU. This means buffer sizes of 8 samples and above are completed within the allocated time frame. Results from more demanding tests, involving physical modelling synthesis, demonstrated that a balance was needed between meeting the sample rate and keeping within limits for latency and jitter. Buffer sizes from 1 to 16 failed to sustain the sample rate, whilst buffer sizes from 512 to 32768 exceeded either latency or jitter limits. Buffer sizes between these ranges, such as 256, satisfied the sample rate, latency and jitter requirements chosen for this paper.
@inproceedings{NIME20_39, author = {Renney, Harri L and Mitchell, Tom and Gaster, Benedict}, title = {There and Back Again: The Practicality of GPU Accelerated Digital Audio}, pages = {202--207}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813320}, url = {https://www.nime.org/proceedings/2020/nime2020_paper39.pdf}, presentation-video = {https://youtu.be/xAVEHJZRIx0} }
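Editorial note on the timing argument quoted in the abstract above: the time budget for one audio buffer is the buffer length divided by the sample rate, which can be compared against the roughly 0.1 ms baseline overhead reported by the authors. The short sketch below is an illustration only, not code from the paper, and it assumes a 44.1 kHz sample rate (the exact rate is not stated in the abstract).

# Illustrative only: compares the per-buffer deadline with the ~0.1 ms GPU
# overhead quoted in the abstract. The 44.1 kHz sample rate is an assumption.
SAMPLE_RATE_HZ = 44_100
GPU_OVERHEAD_MS = 0.1

def buffer_deadline_ms(buffer_size_samples):
    # Time available to compute one buffer before the next one is due.
    return 1000.0 * buffer_size_samples / SAMPLE_RATE_HZ

for n in (4, 8, 64, 256):
    deadline = buffer_deadline_ms(n)
    print(f"buffer {n:>3}: {deadline:.3f} ms budget, "
          f"{'fits' if deadline > GPU_OVERHEAD_MS else 'misses'} the 0.1 ms overhead")
# A buffer of 8 samples allows ~0.181 ms, exceeding the ~0.1 ms overhead, which is
# consistent with the claim that buffer sizes of 8 and above meet the deadline.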
Tim Shaw and John Bowers. 2020. Ambulation: Exploring Listening Technologies for an Extended Sound Walking Practice. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 23–28. http://doi.org/10.5281/zenodo.4813322
Abstract
Download PDF DOI
Ambulation is a sound walk that uses field recording techniques and listening technologies to create a walking performance using environmental sound. Ambulation engages with the act of recording as an improvised performance in response to the soundscapes it is presented within. In this paper we describe the work and place it in relationship to other artists engaged with field recording and extended sound walking practices. We give technical details of the Ambulation system we developed as part of the creation of the piece, and conclude with a collection of observations that emerged from the project. The research around the development and presentation of Ambulation contributes to the idea of field recording as a live, procedural practice, moving away from the idea of simply transporting documentary material from one place to another. We show how an open, improvisational approach to technologically supported sound walking enables rich and unexpected results to occur, and how this way of working can contribute to NIME design and thinking.
@inproceedings{NIME20_4, author = {Shaw, Tim and Bowers, John}, title = {Ambulation: Exploring Listening Technologies for an Extended Sound Walking Practice}, pages = {23--28}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813322}, url = {https://www.nime.org/proceedings/2020/nime2020_paper4.pdf}, presentation-video = {https://youtu.be/dDXkNnQXdN4} }
Gus Xia, Daniel Chin, Yian Zhang, Tianyu Zhang, and Junbo Zhao. 2020. Interactive Rainbow Score: A Visual-centered Multimodal Flute Tutoring System. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 208–213. http://doi.org/10.5281/zenodo.4813324
Abstract
Download PDF DOI
Learning to play an instrument is intrinsically multimodal, and we have seen a trend of applying visual and haptic feedback in music games and computer-aided music tutoring systems. However, most current systems are still designed to master individual pieces of music; it is unclear how well the learned skills can be generalized to new pieces. We aim to explore this question. In this study, we contribute Interactive Rainbow Score, an interactive visual system to boost the learning of sight-playing, the general musical skill to read music and map the visual representations to performance motions. The key design of Interactive Rainbow Score is to associate pitches (and the corresponding motions) with colored notation and further strengthen such association via real-time interactions. Quantitative results show that the interactive feature on average increases the learning efficiency by 31.1%. Further analysis indicates that it is critical to apply the interaction in the early period of learning.
@inproceedings{NIME20_40, author = {Xia, Gus and Chin, Daniel and Zhang, Yian and Zhang, Tianyu and Zhao, Junbo}, title = {Interactive Rainbow Score: A Visual-centered Multimodal Flute Tutoring System}, pages = {208--213}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813324}, url = {https://www.nime.org/proceedings/2020/nime2020_paper40.pdf} }
Nicola Davanzo and Federico Avanzini. 2020. A Dimension Space for the Evaluation of Accessible Digital Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 214–220. http://doi.org/10.5281/zenodo.4813326
Abstract
Download PDF DOI
Research on Accessible Digital Musical Instruments (ADMIs) has received growing attention over the past decades, carving out an increasingly large space in the literature. Despite the recent publication of state-of-the-art review works, there are still few systematic studies on ADMI design analysis. In this paper we propose a formal tool to explore the main design aspects of ADMIs based on Dimension Space Analysis, a well-established methodology in the NIME literature which allows an effective visual representation of the design space to be generated. We therefore propose a set of relevant dimensions, based both on categories proposed in recent works in the research context and on original contributions. We then proceed to demonstrate its applicability by selecting a set of relevant case studies and analyzing a sample set of ADMIs found in the literature.
@inproceedings{NIME20_41, author = {Davanzo, Nicola and Avanzini, Federico}, title = {A Dimension Space for the Evaluation of Accessible Digital Musical Instruments}, pages = {214--220}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813326}, url = {https://www.nime.org/proceedings/2020/nime2020_paper41.pdf}, presentation-video = {https://youtu.be/pJlB5k8TV9M} }
Adam Pultz Melbye and Halldor A Ulfarsson. 2020. Sculpting the behaviour of the Feedback-Actuated Augmented Bass: Design strategies for subtle manipulations of string feedback using simple adaptive algorithms. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 221–226. http://doi.org/10.5281/zenodo.4813328
Abstract
Download PDF DOI
This paper describes physical and digital design strategies for the Feedback-Actuated Augmented Bass - a self-contained feedback double bass with embedded DSP capabilities. A primary goal of the research project is to create an instrument that responds well to the use of extended playing techniques and can manifest complex harmonic spectra while retaining the feel and sonic fingerprint of an acoustic double bass. While the physical configuration of the instrument builds on similar feedback string instruments being developed in recent years, this project focuses on modifying the feedback behaviour through low-level audio feature extractions coupled to computationally lightweight filtering and amplitude management algorithms. We discuss these adaptive and time-variant processing strategies and how we apply them in sculpting the system's dynamic and complex behaviour to our liking.
@inproceedings{NIME20_42, author = {Melbye, Adam Pultz and Ulfarsson, Halldor A}, title = {Sculpting the behaviour of the Feedback-Actuated Augmented Bass: Design strategies for subtle manipulations of string feedback using simple adaptive algorithms}, pages = {221--226}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813328}, url = {https://www.nime.org/proceedings/2020/nime2020_paper42.pdf}, presentation-video = {https://youtu.be/jXePge1MS8A} }
Gwendal Le Vaillant, Thierry Dutoit, and Rudi Giot. 2020. Analytic vs. holistic approaches for the live search of sound presets using graphical interpolation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 227–232. http://doi.org/10.5281/zenodo.4813330
Abstract
Download PDF DOI
The comparative study presented in this paper focuses on two approaches for the search of sound presets using a specific geometric touch app. The first approach is based on independent sliders on screen and is called analytic. The second is based on interpolation between presets represented by polygons on screen and is called holistic. Participants had to listen to, memorize, and search for sound presets characterized by four parameters. Ten different configurations of sound synthesis and processing were presented to each participant, once for each approach. The performance scores of 28 participants (not including early testers) were computed using two measured values: the search duration, and the parametric distance between the reference and answered presets. Compared to the analytic sliders-based interface, the holistic interpolation-based interface demonstrated a significant performance improvement for 60% of sound synthesizers. The other 40% led to equivalent results for the analytic and holistic interfaces. Using sliders, expert users performed nearly as well as they did with interpolation. Beginners and intermediate users struggled more with sliders, while the interpolation allowed them to get quite close to experts’ results.
@inproceedings{NIME20_43, author = {Le Vaillant, Gwendal and Dutoit, Thierry and Giot, Rudi}, title = {Analytic vs. holistic approaches for the live search of sound presets using graphical interpolation}, pages = {227--232}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813330}, url = {https://www.nime.org/proceedings/2020/nime2020_paper43.pdf}, presentation-video = {https://youtu.be/Korw3J_QvQE} }
Chase Mitchusson. 2020. Indeterminate Sample Sequencing in Virtual Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 233–236. http://doi.org/10.5281/zenodo.4813332
Abstract
Download PDF DOI
The purpose of this project is to develop an interface for writing and performing music using sequencers in virtual reality (VR). The VR sequencer deals with chance-based operations to select audio clips for playback and spatial orientation-based rhythm and melody generation, while incorporating three-dimensional (3-D) objects as omnidirectional playheads. Spheres which grow from a variable minimum size to a variable maximum size at a variable speed, constantly looping, represent the passage of time in this VR sequencer. The 3-D assets which represent samples are actually sample containers that come in six common dice shapes. As the dice come into contact with a sphere, their samples are triggered to play. This behavior mimics digital audio workstation (DAW) playheads reading MIDI left-to-right in popular professional and consumer software sequencers. To incorporate height into VR music making, the VR sequencer is capable of generating terrain at the press of a button. Each terrain will gradually change, creating the possibility for the dice to roll on their own. Audio effects are built in to each scene and mapped to terrain parameters, creating another opportunity for chance operations in the music making process. The chance-based sample selection, spatial orientation-defined rhythms, and variable terrain mapped to audio effects lead to indeterminacy in performance and replication of a single piece of music. This project aims to give the gaming community access to experimental music making by means of consumer virtual reality hardware.
@inproceedings{NIME20_44, author = {Mitchusson, Chase}, title = {Indeterminate Sample Sequencing in Virtual Reality}, pages = {233--236}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813332}, url = {https://www.nime.org/proceedings/2020/nime2020_paper44.pdf} }
Rebecca Fiebrink and Laetitia Sonami. 2020. Reflections on Eight Years of Instrument Creation with Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 237–242. http://doi.org/10.5281/zenodo.4813334
Abstract
Download PDF DOI
Machine learning (ML) has been used to create mappings for digital musical instruments for over twenty-five years, and numerous ML toolkits have been developed for the NIME community. However, little published work has studied how ML has been used in sustained instrument building and performance practices. This paper examines the experiences of instrument builder and performer Laetitia Sonami, who has been using ML to build and refine her Spring Spyre instrument since 2012. Using Sonami’s current practice as a case study, this paper explores the utility, opportunities, and challenges involved in using ML in practice over many years. This paper also reports the perspective of Rebecca Fiebrink, the creator of the Wekinator ML tool used by Sonami, revealing how her work with Sonami has led to changes to the software and to her teaching. This paper thus contributes a deeper understanding of the value of ML for NIME practitioners, and it can inform design considerations for future ML toolkits as well as NIME pedagogy. Further, it provides new perspectives on familiar NIME conversations about mapping strategies, expressivity, and control, informed by a dedicated practice over many years.
@inproceedings{NIME20_45, author = {Fiebrink, Rebecca and Sonami, Laetitia}, title = {Reflections on Eight Years of Instrument Creation with Machine Learning}, pages = {237--242}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813334}, url = {https://www.nime.org/proceedings/2020/nime2020_paper45.pdf}, presentation-video = {https://youtu.be/EvXZ9NayZhA} }
Alex Lucas, Miguel Ortiz, and Franziska Schroeder. 2020. The Longevity of Bespoke, Accessible Music Technology: A Case for Community. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 243–248. http://doi.org/10.5281/zenodo.4813338
Abstract
Download PDF DOI
Based on the experience garnered through a longitudinal ethnographic study, the authors reflect on the practice of designing and fabricating bespoke, accessible music technologies. Of particular focus are the social, technical and environmental factors at play which make the provision of such technology a reality. The authors suggest ways to achieve long-term, sustained use. Seemingly, those involved in its design, fabrication and use could benefit from a concerted effort to share resources, knowledge and skill as a mobilised community of practitioners.
@inproceedings{NIME20_46, author = {Lucas, Alex and Ortiz, Miguel and Schroeder, Franziska}, title = {The Longevity of Bespoke, Accessible Music Technology: A Case for Community}, pages = {243--248}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813338}, url = {https://www.nime.org/proceedings/2020/nime2020_paper46.pdf}, presentation-video = {https://youtu.be/cLguyuZ9weI} }
Ivica I Bukvic, Disha Sardana, and Woohun Joo. 2020. New Interfaces for Spatial Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 249–254. http://doi.org/10.5281/zenodo.4813342
Abstract
Download PDF DOI
With the proliferation of venues equipped with high-density loudspeaker arrays, there is a growing interest in developing new interfaces for spatial musical expression (NISME). Of particular interest are interfaces that focus on the emancipation of the spatial domain as the primary dimension for musical expression. Here we present the Monet NISME, which leverages a multitouch pressure-sensitive surface and the D4 library's spatial mask, thereby allowing for a unique approach to interactive spatialization. Further, we present a study with 22 participants designed to assess its usefulness and compare it to the Locus, a NISME introduced in 2019 as part of a localization study and built on the same design principles of using natural gestural interaction with the spatial content. Lastly, we briefly discuss the utilization of both NISMEs in two artistic performances and propose a set of guidelines for further exploration in the NISME domain.
@inproceedings{NIME20_47, author = {Bukvic, Ivica I and Sardana, Disha and Joo, Woohun}, title = {New Interfaces for Spatial Musical Expression}, pages = {249--254}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813342}, url = {https://www.nime.org/proceedings/2020/nime2020_paper47.pdf}, presentation-video = {https://youtu.be/GQ0552Lc1rw} }
Mark Durham. 2020. Inhabiting the Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 255–258. http://doi.org/10.5281/zenodo.4813344
Abstract
Download PDF DOI
This study presents an ecosystemic approach to music interaction, through the practice-based development of a mixed reality installation artwork. It fuses a generative, immersive audio composition with augmented reality visualisation, within an architectural space as part of a blended experience. Participants are encouraged to explore and interact with this combination of elements through physical engagement, to then develop an understanding of how the blending of real and virtual space occurs as the installation unfolds. The sonic layer forms a link between the two, as a three-dimensional sound composition. Connections in the system allow for multiple streams of data to run between the layers, which are used for the real-time modulation of parameters. These feedback mechanisms form a complete loop between the participant in real space, soundscape, and mixed reality visualisation, providing a participant mediated experience that exists somewhere between creator and observer.
@inproceedings{NIME20_48, author = {Durham, Mark}, title = {Inhabiting the Instrument}, pages = {255--258}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813344}, url = {https://www.nime.org/proceedings/2020/nime2020_paper48.pdf} }
Chris Nash. 2020. Crowd-driven Music: Interactive and Generative Approaches using Machine Vision and Manhattan. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 259–264. http://doi.org/10.5281/zenodo.4813346
Abstract
Download PDF DOI
This paper details technologies and artistic approaches to crowd-driven music, discussed in the context of a live public installation in which activity in a public space (a busy railway platform) is used to drive the automated composition and performance of music. The approach presented uses realtime machine vision applied to a live video feed of a scene, from which detected objects and people are fed into Manhattan (Nash, 2014), a digital music notation that integrates sequencing and programming to support the live creation of complex musical works that combine static, algorithmic, and interactive elements. The paper discusses the technical details of the system and artistic development of specific musical works, introducing novel techniques for mapping chaotic systems to musical expression and exploring issues of agency, aesthetic, accessibility and adaptability relating to composing interactive music for crowds and public spaces. In particular, performances as part of an installation for BBC Music Day 2018 are described. The paper subsequently details a practical workshop, delivered digitally, exploring the development of interactive performances in which the audience or general public actively or passively control live generation of a musical piece. Exercises support discussions on technical, aesthetic, and ontological issues arising from the identification and mapping of structure, order, and meaning in non-musical domains to analogous concepts in musical expression. Materials for the workshop are available freely with the Manhattan software.
@inproceedings{NIME20_49, author = {Nash, Chris}, title = {Crowd-driven Music: Interactive and Generative Approaches using Machine Vision and Manhattan}, pages = {259--264}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813346}, url = {https://www.nime.org/proceedings/2020/nime2020_paper49.pdf}, presentation-video = {https://youtu.be/DHIowP2lOsA} }
Michael J Krzyzaniak. 2020. Words to Music Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 29–34. http://doi.org/10.5281/zenodo.4813350
Abstract
Download PDF DOI
This paper discusses the design of a musical synthesizer that takes words as input, and attempts to generate music that somehow underscores those words. This is considered as a tool for sound designers who could, for example, enter dialogue from a film script and generate appropriate background music. The synthesizer uses emotional valence and arousal as a common representation between words and music. It draws on previous studies that relate words and musical features to valence and arousal. The synthesizer was evaluated with a user study. Participants listened to music generated by the synthesizer, and described the music with words. The arousal of the words they entered was highly correlated with the intended arousal of the music. The same was, surprisingly, not true for valence. The synthesizer is online, at [redacted URL].
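As a rough illustration of the valence/arousal idea described in this abstract, the sketch below uses a toy word lexicon and an invented mapping from affect to tempo and mode; none of the lexicon values or mapping rules are taken from the paper.

```python
# Toy illustration of valence/arousal as a shared representation between
# words and musical features. Lexicon values and the feature mapping are
# invented for illustration only.
LEXICON = {            # word -> (valence, arousal), both in [-1, 1]
    "storm":  (-0.6, 0.8),
    "gentle": ( 0.5, -0.4),
    "joy":    ( 0.9, 0.6),
    "grief":  (-0.8, -0.3),
}

def text_to_affect(words):
    """Average the valence/arousal of known words; unknown words are ignored."""
    hits = [LEXICON[w] for w in words if w in LEXICON]
    if not hits:
        return 0.0, 0.0
    valence = sum(v for v, _ in hits) / len(hits)
    arousal = sum(a for _, a in hits) / len(hits)
    return valence, arousal

def affect_to_music(valence, arousal):
    """Map affect to coarse musical features (hypothetical mapping)."""
    tempo = 60 + (arousal + 1) * 60          # 60-180 BPM, driven by arousal
    mode = "major" if valence >= 0 else "minor"
    return {"tempo_bpm": round(tempo), "mode": mode}

print(affect_to_music(*text_to_affect(["gentle", "grief"])))
```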
@inproceedings{NIME20_5, author = {Krzyzaniak, Michael J}, title = {Words to Music Synthesis}, pages = {29--34}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813350}, url = {https://www.nime.org/proceedings/2020/nime2020_paper5.pdf} }
Alex Mclean. 2020. Algorithmic Pattern. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 265–270. http://doi.org/10.5281/zenodo.4813352
Abstract
Download PDF DOI
This paper brings together two main perspectives on algorithmic pattern. First, the writing of musical patterns in live coding performance, and second, the weaving of patterns in textiles. In both cases, algorithmic pattern is an interface between the human and the outcome, where small changes have far-reaching impact on the results. By bringing contemporary live coding and ancient textile approaches together, we reach a common view of pattern as algorithmic movement (e.g. looping, shifting, reflecting, interfering) in the making of things. This works beyond the usual definition of pattern used in musical interfaces, of mere repeating sequences. We conclude by considering the place of algorithmic pattern in a wider activity of making.
@inproceedings{NIME20_50, author = {Mclean, Alex}, title = {Algorithmic Pattern}, pages = {265--270}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813352}, url = {https://www.nime.org/proceedings/2020/nime2020_paper50.pdf}, presentation-video = {https://youtu.be/X9AkOAEDV08} }
Louis McCallum and Mick S Grierson. 2020. Supporting Interactive Machine Learning Approaches to Building Musical Instruments in the Browser. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 271–272. http://doi.org/10.5281/zenodo.4813357
Abstract
Download PDF DOI
Interactive machine learning (IML) is an approach to building interactive systems, including DMIs, focusing on iterative end-user data provision and direct evaluation. This paper describes the implementation of a Javascript library, encapsulating many of the boilerplate needs of building IML systems for creative tasks with minimal code inclusion and low barrier to entry. Further, we present a set of complementary Audio Worklet-backed instruments to allow for in-browser creation of new musical systems able to run concurrently with various computationally expensive feature extractors and lightweight machine learning models without the interference often seen in interactive Web Audio applications.
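The IML workflow the abstract describes (provide examples, train, try the mapping, repeat) can be sketched outside the browser as well. The following minimal Python stand-in uses scikit-learn's k-nearest-neighbour regressor; it is not the authors' JavaScript library, and the gesture and synthesis parameter names are invented.

```python
# Minimal interactive-machine-learning loop: the user repeatedly records
# (gesture features -> desired synth parameters) examples, retrains, and
# tries the mapping. kNN regression stands in for the model here.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

examples_x, examples_y = [], []   # gesture features, synth parameters

def add_example(gesture, params):
    examples_x.append(gesture)
    examples_y.append(params)

def train():
    model = KNeighborsRegressor(n_neighbors=min(3, len(examples_x)))
    model.fit(np.array(examples_x), np.array(examples_y))
    return model

# Two demonstration examples: 3-D gesture feature -> (cutoff Hz, resonance)
add_example([0.1, 0.2, 0.0], [200.0, 0.1])
add_example([0.9, 0.8, 1.0], [4000.0, 0.7])
model = train()
print(model.predict([[0.5, 0.5, 0.5]]))   # interpolated parameter guess
```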
@inproceedings{NIME20_51, author = {McCallum, Louis and Grierson, Mick S}, title = {Supporting Interactive Machine Learning Approaches to Building Musical Instruments in the Browser}, pages = {271--272}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813357}, url = {https://www.nime.org/proceedings/2020/nime2020_paper51.pdf} }
Mathias S Kirkegaard, Mathias Bredholt, Christian Frisson, and Marcelo Wanderley. 2020. TorqueTuner: A self contained module for designing rotary haptic force feedback for digital musical instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 273–278. http://doi.org/10.5281/zenodo.4813359
Abstract
Download PDF DOI
TorqueTuner is an embedded module that allows Digital Musical Instrument (DMI) designers to map sensors to parameters of haptic effects and dynamically modify rotary force feedback in real-time. We embedded inside TorqueTuner a collection of haptic effects (Wall, Magnet, Detents, Spring, Friction, Spin, Free) and a bi-directional interface through libmapper, a software library for making connections between data signals on a shared network. To increase affordability and portability of force-feedback implementations in DMI design, we designed our platform to be wireless, self-contained and built from commercially available components. To provide examples of modularity and portability, we integrated TorqueTuner into a standalone haptic knob and into an existing DMI, the T-Stick. We implemented 3 musical applications (Pitch wheel, Turntable and Exciter), by mapping sensors to sound synthesis in audio programming environment SuperCollider. While the original goal was to simulate the haptic feedback associated with turning a knob, we found that the platform allows for further expanding interaction possibilities in application scenarios where rotary control is familiar.
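To make the haptic-effect idea concrete, here is a hedged sketch, not the TorqueTuner firmware, of how two of the named effects (Spring and Detents) might compute a torque value from the current knob angle; the constants and curve shapes are assumptions.

```python
# Illustrative torque curves for two rotary haptic effects; constants and
# function shapes are assumptions, not the TorqueTuner implementation.
import math

def spring_torque(angle, rest=0.0, stiffness=0.8):
    """Pull the knob back toward a rest angle (radians)."""
    return -stiffness * (angle - rest)

def detent_torque(angle, n_detents=12, depth=0.5):
    """Create n_detents equally spaced 'clicks' per revolution."""
    return -depth * math.sin(n_detents * angle)

for a in (0.0, 0.1, 0.3):
    print(f"angle={a:.2f}  spring={spring_torque(a):+.3f}  detent={detent_torque(a):+.3f}")
```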
@inproceedings{NIME20_52, author = {Kirkegaard, Mathias S and Bredholt, Mathias and Frisson, Christian and Wanderley, Marcelo}, title = {TorqueTuner: A self contained module for designing rotary haptic force feedback for digital musical instruments}, pages = {273--278}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813359}, url = {https://www.nime.org/proceedings/2020/nime2020_paper52.pdf}, presentation-video = {https://youtu.be/V8WDMbuX9QA} }
Corey J Ford and Chris Nash. 2020. An Iterative Design ‘by proxy’ Method for Developing Educational Music Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 279–284. http://doi.org/10.5281/zenodo.4813361
Abstract
Download PDF DOI
Iterative design methods involving children and educators are difficult to conduct, given both the ethical implications and time commitments understandably required. The qualitative design process presented here recruits introductory teacher training students, towards discovering useful design insights relevant to music education technologies “by proxy”. Therefore, some of the barriers present in child-computer interaction research are avoided. As an example, the method is applied to the creation of a block-based music notation system, named Codetta. Building upon successful educational technologies that intersect both music and computer programming, Codetta seeks to enable child composition, whilst aiding generalist educators’ confidence in teaching music.
@inproceedings{NIME20_53, author = {Ford, Corey J and Nash, Chris}, title = {An Iterative Design ‘by proxy’ Method for Developing Educational Music Interfaces}, pages = {279--284}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813361}, url = {https://www.nime.org/proceedings/2020/nime2020_paper53.pdf}, presentation-video = {https://youtu.be/fPbZMQ5LEmk} }
Filipe Calegario, Marcelo Wanderley, João Tragtenberg, et al. 2020. Probatio 1.0: collaborative development of a toolkit for functional DMI prototypes. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 285–290. http://doi.org/10.5281/zenodo.4813363
Abstract
Download PDF DOI
Probatio is an open-source toolkit for prototyping new digital musical instruments created in 2016. Based on a morphological chart of postures and controls of musical instruments, it comprises a set of blocks, bases, hubs, and supports that, when combined, allows designers, artists, and musicians to experiment with different input devices for musical interaction in different positions and postures. Several musicians have used the system, and based on these past experiences, we assembled a list of improvements to implement version 1.0 of the toolkit through a unique international partnership between two laboratories in Brazil and Canada. In this paper, we present the original toolkit and its use so far, summarize the main lessons learned from musicians using it, and present the requirements behind, and the final design of, v1.0 of the project. We also detail the work developed in digital fabrication using two different techniques: laser cutting and 3D printing, comparing their pros and cons. We finally discuss the opportunities and challenges of fully sharing the project online and replicating its parts in both countries.
@inproceedings{NIME20_54, author = {Calegario, Filipe and Wanderley, Marcelo and Tragtenberg, João and Meneses, Eduardo and Wang, Johnty and Sullivan, John and Franco, Ivan and Kirkegaard, Mathias S and Bredholt, Mathias and Rohs, Josh}, title = {Probatio 1.0: collaborative development of a toolkit for functional DMI prototypes}, pages = {285--290}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813363}, url = {https://www.nime.org/proceedings/2020/nime2020_paper54.pdf}, presentation-video = {https://youtu.be/jkFnZZUA3xs} }
Travis J West, Marcelo Wanderley, and Baptiste Caramiaux. 2020. Making Mappings: Examining the Design Process. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 291–296. http://doi.org/10.5281/zenodo.4813365
Abstract
Download PDF DOI
We conducted a study which examines mappings from a relatively unexplored perspective: how they are made. Twelve skilled NIME users designed a mapping from a T-Stick to a subtractive synthesizer, and were interviewed about their approach to mapping design. We present a thematic analysis of the interviews, with reference to data recordings captured while the designers worked. Our results suggest that the mapping design process is an iterative process that alternates between two working modes: diffuse exploration and directed experimentation.
@inproceedings{NIME20_55, author = {West, Travis J and Wanderley, Marcelo and Caramiaux, Baptiste}, title = {Making Mappings: Examining the Design Process}, pages = {291--296}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813365}, url = {https://www.nime.org/proceedings/2020/nime2020_paper55.pdf}, presentation-video = {https://youtu.be/aaoResYjqmE} }
Michael Sidler, Matthew C Bisson, Jordan Grotz, and Scott Barton. 2020. Parthenope: A Robotic Musical Siren. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 297–300. http://doi.org/10.5281/zenodo.4813367
Abstract
Download PDF DOI
Parthenope is a robotic musical siren developed to produce unique timbres and sonic gestures. Parthenope uses perforated spinning disks through which air is directed to produce sound. Computer control of disk speed and air flow, in conjunction with a variety of nozzles, allows pitches to be precisely produced at different volumes. The instrument is controlled via Open Sound Control (OSC) messages sent over an Ethernet connection and can interface with common DAWs and physical controllers. Parthenope is capable of microtonal tuning, portamenti, rapid and precise articulation (and thus complex rhythms) and distinct timbres that result from its aerophonic character. It occupies a unique place among robotic musical instruments.
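Because the instrument is driven by OSC over Ethernet, a controlling script only needs to send messages to it. A minimal Python sketch using the python-osc library is shown below; the IP address, port, OSC address patterns, and value ranges are all made up for illustration and are not documented parameters of Parthenope.

```python
# Hypothetical OSC control of a siren-like instrument using python-osc.
# Address patterns and value ranges are invented for illustration.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 8000)         # instrument IP/port (assumed)

client.send_message("/parthenope/disk/speed", 0.75)    # normalised disk speed
client.send_message("/parthenope/air/flow", 0.4)       # normalised air flow (note on)
time.sleep(0.5)
client.send_message("/parthenope/air/flow", 0.0)       # stop the note
```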
@inproceedings{NIME20_56, author = {Sidler, Michael and Bisson, Matthew C and Grotz, Jordan and Barton, Scott}, title = {Parthenope: A Robotic Musical Siren}, pages = {297--300}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813367}, url = {https://www.nime.org/proceedings/2020/nime2020_paper56.pdf}, presentation-video = {https://youtu.be/HQuR0aBJ70Y} }
Steven Kemper. 2020. Tremolo-Harp: A Vibration-Motor Actuated Robotic String Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 301–304. http://doi.org/10.5281/zenodo.4813369
Abstract
Download PDF DOI
The Tremolo-Harp is a twelve-stringed robotic instrument, where each string is actuated with a DC vibration motor to produce a mechatronic “tremolo” effect. It was inspired by instruments and musical styles that employ tremolo as a primary performance technique, including the hammered dulcimer, pipa, banjo, flamenco guitar, and surf rock guitar. Additionally, the Tremolo-Harp is designed to produce long, sustained textures and continuous dynamic variation. These capabilities represent a different approach from the majority of existing robotic string instruments, which tend to focus on actuation speed and rhythmic precision. The composition Tremolo-Harp Study 1 (2019) presents an initial exploration of the Tremolo-Harp’s unique timbre and capability for continuous dynamic variation.
@inproceedings{NIME20_57, author = {Kemper, Steven}, title = {Tremolo-Harp: A Vibration-Motor Actuated Robotic String Instrument}, pages = {301--304}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813369}, url = {https://www.nime.org/proceedings/2020/nime2020_paper57.pdf} }
Atsuya Kobayashi, Reo Anzai, and Nao Tokui. 2020. ExSampling: a system for the real-time ensemble performance of field-recorded environmental sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 305–308. http://doi.org/10.5281/zenodo.4813371
Abstract
Download PDF DOI
We propose ExSampling: an integrated system combining a recording application and a Deep Learning environment for real-time music performance of environmental sounds sampled by field recording. Automated sound mapping to Ableton Live tracks by Deep Learning enables field recordings to be applied to real-time performance, and creates interactions among the sound recordist, composers, and performers.
@inproceedings{NIME20_58, author = {Kobayashi, Atsuya and Anzai, Reo and Tokui, Nao}, title = {ExSampling: a system for the real-time ensemble performance of field-recorded environmental sounds}, pages = {305--308}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813371}, url = {https://www.nime.org/proceedings/2020/nime2020_paper58.pdf} }
Juan Pablo Yepez Placencia, Jim Murphy, and Dale Carnegie. 2020. Designing an Expressive Pitch Shifting Mechanism for Mechatronic Chordophones. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 309–314. http://doi.org/10.5281/zenodo.4813375
Abstract
Download PDF DOI
The exploration of musical robots has been an area of interest due to the timbral and mechanical advantages they offer for music generation and performance. However, one of the greatest challenges in mechatronic music is to enable these robots to deliver a nuanced and expressive performance. This depends on their capability to integrate dynamics, articulation, and a variety of ornamental techniques while playing a given musical passage. In this paper we introduce a robot arm pitch shifter for a mechatronic monochord prototype. This is a fast, precise, and mechanically quiet system that enables sliding techniques during musical performance. We discuss the design and construction process, as well as the system’s advantages and restrictions. We also review the quantitative evaluation process used to assess if the instrument meets the design requirements. This process reveals how the pitch shifter outperforms existing configurations, and potential areas of improvement for future work.
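As background for the sliding mechanism, the target positions along the string follow from the inverse relationship between sounding length and frequency on an idealised string: a shift of n semitones requires scaling the length by 2^(-n/12). The worked example below uses an assumed open-string length, not a measurement from the prototype described in the paper.

```python
# For an ideal string, frequency is inversely proportional to sounding
# length, so an n-semitone shift scales the length by 2 ** (-n / 12).
# The open length below is illustrative only.
OPEN_LENGTH_MM = 600.0   # assumed open string length

def sounding_length(semitones_up):
    return OPEN_LENGTH_MM * 2 ** (-semitones_up / 12)

for n in range(0, 13):
    travel = OPEN_LENGTH_MM - sounding_length(n)   # how far a stopping arm must move
    print(f"{n:2d} semitones: length {sounding_length(n):6.1f} mm, travel {travel:5.1f} mm")
```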
@inproceedings{NIME20_59, author = {Yepez Placencia, Juan Pablo and Murphy, Jim and Carnegie, Dale}, title = {Designing an Expressive Pitch Shifting Mechanism for Mechatronic Chordophones}, pages = {309--314}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813375}, url = {https://www.nime.org/proceedings/2020/nime2020_paper59.pdf}, presentation-video = {https://youtu.be/rpX8LTZd-Zs} }
Marcel Ehrhardt, Max Neupert, and Clemens Wegener. 2020. Piezoelectric strings as a musical interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 35–36. http://doi.org/10.5281/zenodo.4813377
Abstract
Download PDF DOI
Flexible strings with piezoelectric properties have been developed but have to date not been evaluated for use as part of a musical instrument. This paper assesses the properties of these new fibers, looking at their possibilities for NIME applications.
@inproceedings{NIME20_6, author = {Ehrhardt, Marcel and Neupert, Max and Wegener, Clemens}, title = {Piezoelectric strings as a musical interface}, pages = {35--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813377}, url = {https://www.nime.org/proceedings/2020/nime2020_paper6.pdf} }
Alon A Ilsar, Matthew Hughes, and Andrew Johnston. 2020. NIME or Mime: A Sound-First Approach to Developing an Audio-Visual Gestural Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 315–320. http://doi.org/10.5281/zenodo.4813383
Abstract
Download PDF DOI
This paper outlines the development process of an audio-visual gestural instrument—the AirSticks—and elaborates on the role ‘miming’ has played in the formation of new mappings for the instrument. The AirSticks, although fully-functioning, were used as props in live performances in order to evaluate potential mapping strategies that were later implemented for real. This use of mime when designing Digital Musical Instruments (DMIs) can help overcome choice paralysis, break from established habits, and liberate creators to realise more meaningful parameter mappings. Bringing this process into an interactive performance environment acknowledges the audience as stakeholders in the design of these instruments, and also leads us to reflect upon the beliefs and assumptions made by an audience when engaging with the performance of such ‘magical’ devices. This paper establishes two opposing strategies to parameter mapping, ‘movement-first’ mapping, and the less conventional ‘sound-first’ mapping that incorporates mime. We discuss the performance ‘One Five Nine’, its transformation from a partial mime into a fully interactive presentation, and the influence this process has had on the outcome of the performance and the AirSticks as a whole.
@inproceedings{NIME20_60, author = {Ilsar, Alon A and Hughes, Matthew and Johnston, Andrew}, title = {NIME or Mime: A Sound-First Approach to Developing an Audio-Visual Gestural Instrument}, pages = {315--320}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813383}, url = {https://www.nime.org/proceedings/2020/nime2020_paper60.pdf}, presentation-video = {https://youtu.be/ZFQKKI3dFhE} }
Matthew Hughes and Andrew Johnston. 2020. URack: Audio-visual Composition and Performance using Unity and VCV Rack. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 321–322. http://doi.org/10.5281/zenodo.4813389
Abstract
Download PDF DOI
This demonstration presents URack, a custom-built audio-visual composition and performance environment that combines the Unity video-game engine with the VCV Rack software modular synthesiser. In alternative cross-modal solutions, a compromise is likely made in either the sonic or visual output, or the consistency and intuitiveness of the composition environment. By integrating control mechanisms for graphics inside VCV Rack, the music-making metaphors used to build a patch are extended into the visual domain. Users familiar with modular synthesizers are immediately able to start building high-fidelity graphics using the same control voltages regularly used to compose sound. Without needing to interact with two separate development environments, languages or metaphorical domains, users are encouraged to freely, creatively and enjoyably construct their own highly-integrated audio-visual instruments. This demonstration will showcase the construction of an audio-visual patch using URack, focusing on the integration of flexible GPU particle systems present in Unity with the vast library of creative audio composition modules inside VCV.
@inproceedings{NIME20_61, author = {Hughes, Matthew and Johnston, Andrew}, title = {URack: Audio-visual Composition and Performance using Unity and VCV Rack}, pages = {321--322}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813389}, url = {https://www.nime.org/proceedings/2020/nime2020_paper61.pdf} }
Irmandy Wicaksono and Joseph Paradiso. 2020. KnittedKeyboard: Digital Knitting of Electronic Textile Musical Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 323–326. http://doi.org/10.5281/zenodo.4813391
Abstract
Download PDF DOI
In this work, we have developed a textile-based interactive surface fabricated through digital knitting technology. Our prototype explores intarsia, interlock patterning, and a collection of functional and non-functional fibers to create a piano-pattern textile for expressive and virtuosic sonic interaction. We combined conductive, thermochromic, and composite yarns with high-flex polyester yarns to develop KnittedKeyboard with its soft physical properties and responsive sensing and display capabilities. Each key, individually and in combination, can simultaneously sense discrete touch, as well as continuous proximity and pressure. The KnittedKeyboard enables performers to experience fabric-based multimodal interaction as they explore the seamless texture and materiality of the electronic textile.
@inproceedings{NIME20_62, author = {Wicaksono, Irmandy and Paradiso, Joseph}, title = {KnittedKeyboard: Digital Knitting of Electronic Textile Musical Controllers}, pages = {323--326}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813391}, url = {https://www.nime.org/proceedings/2020/nime2020_paper62.pdf} }
Olivier Capra, Florent Berthaut, and Laurent Grisoni. 2020. A Taxonomy of Spectator Experience Augmentation Techniques. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 327–330. http://doi.org/10.5281/zenodo.4813396
Abstract
Download PDF DOI
In the context of artistic performances, the complexity and diversity of digital interfaces may impair the spectator experience, in particular hiding the engagement and virtuosity of the performers. Artists and researchers have made attempts at solving this by augmenting performances with additional information provided through visual, haptic or sonic modalities. However, the proposed techniques have not yet been formalized and we believe a clarification of their many aspects is necessary for future research. In this paper, we propose a taxonomy for what we define as Spectator Experience Augmentation Techniques (SEATs). We use it to analyse existing techniques and we demonstrate how it can serve as a basis for the exploration of novel ones.
@inproceedings{NIME20_63, author = {Capra, Olivier and Berthaut, Florent and Grisoni, Laurent}, title = {A Taxonomy of Spectator Experience Augmentation Techniques}, pages = {327--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813396}, url = {https://www.nime.org/proceedings/2020/nime2020_paper63.pdf} }
Sourya Sen, Koray Tahiroğlu, and Julia Lohmann. 2020. Sounding Brush: A Tablet based Musical Instrument for Drawing and Mark Making. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 331–336. http://doi.org/10.5281/zenodo.4813398
Abstract
Download PDF DOI
Existing applications of mobile music tools are often concerned with the simulation of acoustic or digital musical instruments, extended with graphical representations of keys, pads, etc. Following an intensive review of existing tools and approaches to mobile music making, we implemented a digital drawing tool, employing a time-based graphical/gestural interface for music composition and performance. In this paper, we introduce our Sounding Brush project, through which we explore music making in various forms with the natural gestures of drawing and mark making on a tablet device. Subsequently, we present the design and development of the Sounding Brush application. Utilising this project idea, we discuss the act of drawing as an activity that is not separated from the act of playing a musical instrument. Drawing is essentially the act of playing music by means of a continuous process of observation, individualisation and exploring time and space in a unique way.
@inproceedings{NIME20_64, author = {Sen, Sourya and Tahiroğlu, Koray and Lohmann, Julia}, title = {Sounding Brush: A Tablet based Musical Instrument for Drawing and Mark Making}, pages = {331--336}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813398}, url = {https://www.nime.org/proceedings/2020/nime2020_paper64.pdf}, presentation-video = {https://youtu.be/7RkGbyGM-Ho} }
Koray Tahiroğlu, Miranda Kastemaa, and Oskar Koli. 2020. Al-terity: Non-Rigid Musical Instrument with Artificial Intelligence Applied to Real-Time Audio Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 337–342. http://doi.org/10.5281/zenodo.4813402
Abstract
Download PDF DOI
A deformable musical instrument can take numerous distinct shapes with its non-rigid features. Building an audio synthesis module for such interface behaviour can be challenging. In this paper, we present the Al-terity, a non-rigid musical instrument that comprises a deep learning model with generative adversarial network architecture and use it for generating audio samples for real-time audio synthesis. The particular deep learning model we use for this instrument was trained with an existing data set as input for purposes of further experimentation. The main benefits of the model used are the ability to produce the realistic range of timbre of the trained data set and the ability to generate new audio samples in real-time, in the moment of playing, with the characteristics of sounds that the performer has never heard before. We argue that these advanced intelligence features on the audio synthesis level could allow us to explore performing music with particular response features that define the instrument’s digital idiomaticity and allow us to reinvent the instrument in the act of music performance.
@inproceedings{NIME20_65, author = {Tahiroğlu, Koray and Kastemaa, Miranda and Koli, Oskar}, title = {Al-terity: Non-Rigid Musical Instrument with Artificial Intelligence Applied to Real-Time Audio Synthesis}, pages = {337--342}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813402}, url = {https://www.nime.org/proceedings/2020/nime2020_paper65.pdf}, presentation-video = {https://youtu.be/giYxFovZAvQ} }
Chris Kiefer, Dan Overholt, and Alice Eldridge. 2020. Shaping the behaviour of feedback instruments with complexity-controlled gain dynamics. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 343–348. http://doi.org/10.5281/zenodo.4813406
Abstract
Download PDF DOI
Feedback instruments offer radical new ways of engaging with instrument design and musicianship. They are defined by recurrent circulation of signals through the instrument, which give the instrument ‘a life of its own’ and a ‘stimulating uncontrollability’. Arguably, the most interesting musical behaviour in these instruments happens when their dynamic complexity is maximised, without falling into saturating feedback. It is often challenging to keep the instrument in this zone; this research looks at algorithmic ways to manage the behaviour of feedback loops in order to make feedback instruments more playable and musical; to expand and maintain the ‘sweet spot’. We propose a solution that manages gain dynamics based on measurement of complexity, using a realtime implementation of the Effort to Compress algorithm. The system was evaluated with four musicians, each of whom has a different variation of a string-based feedback instrument, following an autobiographical design approach. Qualitative feedback was gathered, showing that the system was successful in modifying the behaviour of these instruments to allow easier access to edge transition zones, sometimes at the expense of losing some of the more compelling dynamics of the instruments. The basic efficacy of the system is evidenced by descriptive audio analysis. This paper is accompanied by a dataset of sounds collected during the study, and the open source software that was written to support the research.
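The control idea (measure the signal's complexity, then nudge the loop gain to keep it near a target) can be sketched in a few lines. The example below uses zlib's compression ratio as a crude stand-in for the Effort to Compress measure used in the paper; the target value, update rate, and gain limits are arbitrary.

```python
# Sketch of complexity-controlled gain for a feedback loop. zlib's
# compression ratio stands in for the Effort to Compress measure.
import zlib
import numpy as np

def complexity(block):
    """Higher = less compressible = more complex (roughly 0..1)."""
    data = np.clip(block * 127 + 128, 0, 255).astype(np.uint8).tobytes()
    return len(zlib.compress(data)) / len(data)

def update_gain(gain, block, target=0.6, rate=0.05):
    """Raise gain when the signal is too 'simple', lower it when too dense."""
    error = target - complexity(block)
    return float(np.clip(gain + rate * error, 0.0, 2.0))

gain = 1.0
sine_block = np.sin(np.linspace(0, 40 * np.pi, 1024))      # low-complexity block
noise_block = np.random.uniform(-1, 1, 1024)               # high-complexity block
print(update_gain(gain, sine_block), update_gain(gain, noise_block))
```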
@inproceedings{NIME20_66, author = {Kiefer, Chris and Overholt, Dan and Eldridge, Alice}, title = {Shaping the behaviour of feedback instruments with complexity-controlled gain dynamics}, pages = {343--348}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813406}, url = {https://www.nime.org/proceedings/2020/nime2020_paper66.pdf}, presentation-video = {https://youtu.be/sf6FwsUX-84} }
Duncan A.H. Williams. 2020. MINDMIX: Mapping of brain activity to congruent audio mixing features. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 349–352. http://doi.org/10.5281/zenodo.4813408
Abstract
Download PDF DOI
Brain-computer interfacing (BCI) offers novel methods to facilitate participation in audio engineering, providing access for individuals who might otherwise be unable to take part (either due to lack of training, or physical disability). This paper describes the development of a BCI system for conscious, or ‘active’, control of parameters on an audio mixer by generation of synchronous MIDI Machine Control messages. The mapping between neurophysiological cues and audio parameters must be intuitive for a neophyte audience (i.e., one without prior training or the physical skills developed by professional audio engineers when working with tactile interfaces). The prototype is dubbed MINDMIX (a portmanteau of ‘mind’ and ‘mixer’), combining discrete and many-to-many mappings of audio mixer parameters and BCI control signals measured via electroencephalography (EEG). In future, specific evaluation of discrete mappings would be useful for iterative system design.
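One plausible step in such a pipeline is turning an EEG band-power estimate into a 0-127 fader value. The sketch below is only an illustration of that step: the band choice, sample rate, scaling constants, and the use of a simple CC-style value rather than MIDI Machine Control messages are all assumptions, not the MINDMIX mapping.

```python
# Estimate alpha-band (8-12 Hz) power from one EEG channel and scale it to a
# MIDI-style 0-127 fader value. Everything here is illustrative.
import numpy as np

FS = 256  # sample rate in Hz (assumed)

def band_power(signal, lo=8.0, hi=12.0):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].mean()

def power_to_fader(power, floor=0.0, ceiling=4000.0):
    norm = np.clip((power - floor) / (ceiling - floor), 0.0, 1.0)
    return int(round(norm * 127))

eeg = np.sin(2 * np.pi * 10 * np.arange(FS) / FS) + 0.1 * np.random.randn(FS)
print(power_to_fader(band_power(eeg)))
```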
@inproceedings{NIME20_67, author = {Williams, Duncan A.H.}, title = {MINDMIX: Mapping of brain activity to congruent audio mixing features}, pages = {349--352}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813408}, url = {https://www.nime.org/proceedings/2020/nime2020_paper67.pdf} }
Marcel O DeSmith, Andrew Piepenbrink, and Ajay Kapur. 2020. SQUISHBOI: A Multidimensional Controller for Complex Musical Interactions using Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 353–356. http://doi.org/10.5281/zenodo.4813412
Abstract
Download PDF DOI
We present SQUISHBOI, a continuous touch controller for interacting with complex musical systems. An elastic rubber membrane forms the playing surface of the instrument, while machine learning is used for dimensionality reduction and gesture recognition. The membrane is stretched over a hollow shell which permits considerable depth excursion, with an array of distance sensors tracking the surface displacement from underneath. The inherent dynamics of the membrane lead to cross-coupling between nearby sensors; however, we do not see this as a flaw or limitation. Instead, we find this coupling gives structure to the playing techniques and mapping schemes chosen by the user. The instrument is best utilized as a tool for actively designing abstraction and forming a relative control structure within a given system, one which allows for intuitive gestural control beyond what can be accomplished with conventional musical controllers.
@inproceedings{NIME20_68, author = {DeSmith, Marcel O and Piepenbrink, Andrew and Kapur, Ajay}, title = {SQUISHBOI: A Multidimensional Controller for Complex Musical Interactions using Machine Learning}, pages = {353--356}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813412}, url = {https://www.nime.org/proceedings/2020/nime2020_paper68.pdf} }
Nick Bryan-Kinns, LI ZIJIN, and Xiaohua Sun. 2020. On Digital Platforms and AI for Music in the UK and China. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 357–360. http://doi.org/10.5281/zenodo.4813414
Abstract
Download PDF DOI
Digital technologies play a fundamental role in New Interfaces for Musical Expression as well as music making and consumption more widely. This paper reports on two workshops with music professionals and researchers who undertook an initial exploration of the differences between digital platforms (software and online services) for music in the UK and China. Differences were found in primary target user groups of digital platforms in the UK and China as well as the stages of the culture creation cycle they were developed for. Reasons for the divergence of digital platforms include differences in culture, regulation, and infrastructure, as well as the inherent Western bias of software for music making such as Digital Audio Workstations. Using AI to bridge between Western and Chinese music traditions is suggested as an opportunity to address aspects of the divergent landscape of digital platforms for music inside and outside China.
@inproceedings{NIME20_69, author = {Bryan-Kinns, Nick and ZIJIN, LI and Sun, Xiaohua}, title = {On Digital Platforms and AI for Music in the UK and China}, pages = {357--360}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813414}, url = {https://www.nime.org/proceedings/2020/nime2020_paper69.pdf}, presentation-video = {https://youtu.be/c7nkCBBTnDA} }
Jean Chu and Jaewon Choi. 2020. Reinterpretation of Pottery as a Musical Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 37–38. http://doi.org/10.5281/zenodo.4813416
Abstract
Download PDF DOI
Digitally integrating the materiality, form, and tactility in everyday objects (e.g., pottery) provides inspiration for new ways of musical expression and performance. In this project we reinterpret the creative process and aesthetic philosophy of pottery as algorithmic music to help users rediscover the latent story behind pottery through a synesthetic experience. Projects Mobius I and Mobius II illustrate two potential directions toward a musical interface, one focusing on the circular form, and the other, on graphical ornaments of pottery. Six conductive graphics on the pottery function as capacitive sensors while retaining their resemblance to traditional ornamental patterns in pottery. Offering pottery as a musical interface, we invite users to orchestrate algorithmic music by physically touching the different graphics.
@inproceedings{NIME20_7, author = {Chu, Jean and Choi, Jaewon}, title = {Reinterpretation of Pottery as a Musical Interface}, pages = {37--38}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813416}, url = {https://www.nime.org/proceedings/2020/nime2020_paper7.pdf} }
Anders Eskildsen and Mads Walther-Hansen. 2020. Force dynamics as a design framework for mid-air musical interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 361–366. http://doi.org/10.5281/zenodo.4813418
Abstract
Download PDF DOI
In this paper we adopt the theory of force dynamics in human cognition as a fundamental design principle for the development of mid-air musical interfaces. We argue that this principle can provide more intuitive user experiences when the interface does not provide direct haptic feedback – such as interfaces made with various gesture-tracking technologies. Grounded in five concepts from the theoretical literature on force dynamics in musical cognition, the paper presents a set of principles for interaction design focused on five force schemas: Path restraint, Containment restraint, Counter-force, Attraction, and Compulsion. We describe an initial set of examples that implement these principles using a Leap Motion sensor for gesture tracking and SuperCollider for interactive audio design. Finally, the paper presents a pilot experiment that provides initial ratings of intuitiveness in the user experience.
@inproceedings{NIME20_70, author = {Eskildsen, Anders and Walther-Hansen, Mads}, title = {Force dynamics as a design framework for mid-air musical interfaces}, pages = {361--366}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813418}, url = {https://www.nime.org/proceedings/2020/nime2020_paper70.pdf}, presentation-video = {https://youtu.be/REe967aGVN4} }
Erik Nyström. 2020. Intra-Actions: Experiments with Velocity and Position in Continuous Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 367–368. http://doi.org/10.5281/zenodo.4813420
Abstract
Download PDF DOI
Continuous MIDI controllers commonly output their position only, with no influence of the performative energy with which they were set. In this paper, creative uses of time as a parameter in continuous controller mapping are demonstrated: the speed of movement affects the position mapping and control output. A set of SuperCollider classes are presented, developed in the author’s practice in computer music, where they have been used together with commercial MIDI controllers. The creative applications employ various approaches and metaphors for scaling time, but also machine learning for recognising patterns. In the techniques, performer, controller and synthesis ‘intra-act’, to use Karen Barad’s term: because position and velocity are derived from the same data, sound output cannot be predicted without the temporal context of performance.
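The core trick the abstract describes, deriving a velocity from successive controller positions and letting it colour the position mapping, might be sketched as follows. This is a Python stand-in rather than the author's SuperCollider classes, and the scaling constants are arbitrary.

```python
# Derive a 'velocity' from consecutive continuous-controller readings and
# use it to scale the mapped output. Scaling constants are arbitrary.
import time

class VelocityMapper:
    def __init__(self):
        self.last_value = None
        self.last_time = None

    def process(self, cc_value, now=None):
        now = time.monotonic() if now is None else now
        velocity = 0.0
        if self.last_value is not None:
            dt = max(now - self.last_time, 1e-4)
            velocity = abs(cc_value - self.last_value) / dt    # units per second
        self.last_value, self.last_time = cc_value, now
        position = cc_value / 127.0                            # plain position mapping
        return min(position * (1.0 + 0.01 * velocity), 4.0)    # faster moves push harder

m = VelocityMapper()
print(m.process(10, now=0.0), m.process(60, now=0.1))  # same positions, speed matters
```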
@inproceedings{NIME20_71, author = {Nyström, Erik}, title = {Intra-Actions: Experiments with Velocity and Position in Continuous Controllers}, pages = {367--368}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813420}, url = {https://www.nime.org/proceedings/2020/nime2020_paper71.pdf} }
James Leonard and Andrea Giomi. 2020. Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 369–374. http://doi.org/10.5281/zenodo.4813422
Abstract
Download PDF DOI
This paper presents ongoing research on interactive sonification of hand gestures in dance performance. For this purpose, a conceptual framework and a multilayered mapping model derived from an experimental case study are proposed. The goal of this research is twofold. On the one hand, we aim to determine action-based perceptual invariants that allow us to establish pertinent relations between gesture qualities and sound features. On the other hand, we are interested in analysing how an interactive model-based sonification can provide useful and effective feedback for dance practitioners. From this point of view, our research explicitly addresses the convergence between the scientific understandings provided by the field of movement sonification and the traditional know-how developed over the years within the digital instrument and interaction design communities. A key component of our study is the combination of physically-based sound synthesis and motion feature analysis. This approach has proven effective in providing interesting insights for devising novel sonification models for artistic and scientific purposes, and for developing a collaborative platform involving the designer, the musician and the performer.
@inproceedings{NIME20_72, author = {Leonard, James and Giomi, Andrea}, title = {Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance}, pages = {369--374}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813422}, url = {https://www.nime.org/proceedings/2020/nime2020_paper72.pdf}, presentation-video = {https://youtu.be/HQqIjL-Z8dA} }
Romulo A Vieira and Flávio Luiz Schiavoni. 2020. Fliperama: An affordable Arduino based MIDI Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 375–379. http://doi.org/10.5281/zenodo.4813424
Abstract
Download PDF DOI
Lack of access to technological devices is a common marker of a new form of social exclusion. Coupled with this, there is also the risk of increasing inequality between developed and underdeveloped countries where technology access is concerned. Regarding Internet access, the percentage of young Africans who do not have access to this technology is around 60%, while in Europe the figure is 4%. This limitation also extends to musical instruments, whether electronic or not. In light of this worldwide problem, this paper aims to showcase a method for building a MIDI Controller, a prominent instrument for musical production and live performance, in an economically viable form that can be accessible to the poorest populations. It is also desirable that the equipment be suitable for teaching various subjects such as Music, Computer Science and Engineering. The outcome of this research is not an amazing controller or a brand-new cool interface, but the experience of building a controller under all the adverse conditions of doing so.
@inproceedings{NIME20_73, author = {Vieira, Romulo A and Schiavoni, Flávio Luiz}, title = {Fliperama: An affordable Arduino based MIDI Controller}, pages = {375--379}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813424}, url = {https://www.nime.org/proceedings/2020/nime2020_paper73.pdf}, presentation-video = {https://youtu.be/X1GE5jk2cgc} }
Alex MacLean. 2020. Immersive Dreams: A Shared VR Experience. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 380–381. http://doi.org/10.5281/zenodo.4813426
Abstract
Download PDF DOI
This paper reports on a project that aimed to break apart the isolation of VR and share an experience between the wearer of a headset and a room of observers. It presented the user with an acoustically playable virtual environment in which their interactions with objects spawned audio events from the room’s 80 loudspeakers and animations on the room’s 3 display walls. This required the use of several instances of the Unity engine running on separate machines and SuperCollider running as the audio engine. The perspectives into what the wearer of the headset was doing allowed the audience to connect their movements to the sounds and images being experienced, effectively allowing them all to participate in the installation simultaneously.
@inproceedings{NIME20_74, author = {MacLean, Alex}, title = {Immersive Dreams: A Shared VR Experience}, pages = {380--381}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813426}, url = {https://www.nime.org/proceedings/2020/nime2020_paper74.pdf} }
Nick Bryan-Kinns and LI ZIJIN. 2020. ReImagining: Cross-cultural Co-Creation of a Chinese Traditional Musical Instrument with Digital Technologies. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 382–387. http://doi.org/10.5281/zenodo.4813428
Abstract
Download PDF DOI
There are many studies of Digital Musical Instrument (DMI) design, but there is little research on the cross-cultural co-creation of DMIs drawing on traditional musical instruments. We present a study of cross-cultural co-creation inspired by the Duxianqin - a traditional Chinese Jing ethnic minority single stringed musical instrument. We report on how we structured the co-creation with European and Chinese participants ranging from DMI designers to composers and performers. We discuss how we identified the ‘essence’ of the Duxianqin and used this to drive co-creation of three Duxianqin reimagined through digital technologies. Music was specially composed for these reimagined Duxianqin and performed in public as the culmination of the design process. We reflect on our co-creation process and how others could use such an approach to identify the essence of traditional instruments and reimagine them in the digital age.
@inproceedings{NIME20_75, author = {Bryan-Kinns, Nick and ZIJIN, LI}, title = {ReImagining: Cross-cultural Co-Creation of a Chinese Traditional Musical Instrument with Digital Technologies}, pages = {382--387}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813428}, url = {https://www.nime.org/proceedings/2020/nime2020_paper75.pdf}, presentation-video = {https://youtu.be/NvHcUQea82I} }
Konstantinos Vasilakos, Scott Wilson, Thomas McCauley, Tsun Winston Yeung, Emma Margetson, and Milad Khosravi Mardakheh. 2020. Sonification of High Energy Physics Data Using Live Coding and Web Based Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 388–393. http://doi.org/10.5281/zenodo.4813430
Abstract
Download PDF DOI
This paper presents a discussion of Dark Matter, a sonification project using live coding and just-in-time programming techniques. The project uses data from proton-proton collisions produced by the Large Hadron Collider (LHC) at CERN, Switzerland, and then detected and reconstructed by the Compact Muon Solenoid (CMS) experiment, and was developed with the support of the art@CMS project. Work for the Dark Matter project included the development of a custom-made environment in the SuperCollider (SC) programming language that lets the performers of the group engage in collective improvisations using dynamic interventions and networked music systems. This paper will also provide information about a spin-off project entitled the Interactive Physics Sonification System (IPSOS), an interactive and standalone online application developed in the JavaScript programming language. It provides a web-based interface that allows users to map particle data to sound on commonly used web browsers, mobile devices, such as smartphones, tablets etc. The project was developed as an educational outreach tool to engage young students and the general public with data derived from LHC collisions.
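As a hedged illustration of the data-to-sound step (not the group's SuperCollider environment or IPSOS itself), one could map per-event quantities such as transverse momentum and pseudorapidity to pitch and pan. The field names, ranges, and mapping below are invented for illustration.

```python
# Map toy collision-event data to simple sonic parameters. Field names,
# ranges, and the mapping itself are invented for illustration.
events = [
    {"pt": 35.2, "eta": -1.4},   # transverse momentum (GeV), pseudorapidity
    {"pt": 120.7, "eta": 0.3},
    {"pt": 8.9, "eta": 2.1},
]

def map_event(ev, pt_max=200.0):
    pitch_midi = 36 + min(ev["pt"] / pt_max, 1.0) * 60   # 36..96: higher pT -> higher pitch
    pan = max(-1.0, min(1.0, ev["eta"] / 2.5))           # spread events by pseudorapidity
    return {"midi_note": round(pitch_midi), "pan": round(pan, 2)}

for ev in events:
    print(map_event(ev))
```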
@inproceedings{NIME20_76, author = {Vasilakos, Konstantinos n/a and Wilson, Scott and McCauley, Thomas and Yeung, Tsun Winston and Margetson, Emma and Khosravi Mardakheh, Milad}, title = {Sonification of High Energy Physics Data Using Live Coding and Web Based Interfaces.}, pages = {388--393}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813430}, url = {https://www.nime.org/proceedings/2020/nime2020_paper76.pdf}, presentation-video = {https://youtu.be/1vS_tFUyz7g} }
Haruya Takase and Shun Shiramatsu. 2020. Support System for Improvisational Ensemble Based on Long Short-Term Memory Using Smartphone Sensor. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 394–398. http://doi.org/10.5281/zenodo.4813434
Abstract
Download PDF DOI
Our goal is to develop an improvisational ensemble support system for music beginners who do not have knowledge of chord progressions and do not have enough experience of playing an instrument. We hypothesized that a music beginner cannot determine tonal pitches of melody over a particular chord but can use body movements to specify the pitch contour (i.e., melodic outline) and the attack timings (i.e., rhythm). We aim to realize a performance interface that supports expressing an intuitive pitch contour and attack timings using body motion and outputs harmonious pitches over the chord progression of the background music. Since the intended users of this system are not limited to people with music experience, we plan to develop a system that uses Android smartphones, which many people have. Our system consists of three modules: a module for specifying attack timing using smartphone sensors, a module for estimating the vertical movement of the smartphone using smartphone sensors, and a module for estimating pitch using the smartphone's vertical movement and the background chord progression. Each estimation module is developed using long short-term memory (LSTM), which is often used to estimate time series data. We conduct evaluation experiments for each module. As a result, the attack timing estimation had zero misjudgments, and the mean error time of the estimated attack timing was smaller than the sensor-acquisition interval. The accuracy of the vertical motion estimation was 64%, and that of the pitch estimation was 7.6%. The results indicate that the attack timing is accurate enough, but the vertical motion estimation and the pitch estimation need to be improved for actual use.
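For readers unfamiliar with the model class, the sketch below shows a minimal LSTM that maps a short window of smartphone sensor frames to one of twelve pitch classes. The window length, layer sizes, and training data are placeholders, not the authors' architecture or dataset.

```python
# Tiny LSTM classifier: a window of accelerometer frames -> one of 12 pitch
# classes. Shapes and sizes are illustrative; training data here is random.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW, FEATURES, CLASSES = 20, 3, 12     # 20 frames of (x, y, z) acceleration

model = Sequential([
    LSTM(32, input_shape=(WINDOW, FEATURES)),
    Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.randn(64, WINDOW, FEATURES)      # placeholder sensor windows
y = np.random.randint(0, CLASSES, size=64)     # placeholder pitch-class labels
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1], verbose=0).argmax())
```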
@inproceedings{NIME20_77, author = {Takase, Haruya and Shiramatsu, Shun}, title = {Support System for Improvisational Ensemble Based on Long Short-Term Memory Using Smartphone Sensor}, pages = {394--398}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813434}, url = {https://www.nime.org/proceedings/2020/nime2020_paper77.pdf}, presentation-video = {https://youtu.be/WhrGhas9Cvc} }
Augoustinos Tsiros and Alessandro Palladini. 2020. Towards a Human-Centric Design Framework for AI Assisted Music Production. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 399–404. http://doi.org/10.5281/zenodo.4813436
Abstract
Download PDF DOI
In this paper, we contribute to the discussion on how to best design human-centric MIR tools for live audio mixing by bridging the gap between research on complex systems, the psychology of automation and the design of tools that support creativity in music production. We present the design of the Channel-AI, an embedded AI system which performs instrument recognition and generates parameter settings suggestions for gain levels, gating, compression and equalization which are specific to the input signal and the instrument type. We discuss what we believe to be the key design principles and perspectives on the making of intelligent tools for creativity and for experts in the loop. We demonstrate how these principles have been applied to inform the design of the interaction between expert live audio mixing engineers and the Channel-AI (i.e., a corpus of AI features embedded in the Midas HD Console). We report the findings from a preliminary evaluation we conducted with three professional mixing engineers and reflect on mixing engineers’ comments about the Channel-AI on social media.
@inproceedings{NIME20_78, author = {Tsiros, Augoustinos and Palladini, Alessandro}, title = {Towards a Human-Centric Design Framework for AI Assisted Music Production}, pages = {399--404}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813436}, url = {https://www.nime.org/proceedings/2020/nime2020_paper78.pdf} }
Matthew Rodger, Paul Stapleton, Maarten van Walstijn, Miguel Ortiz, and Laurel S Pardue. 2020. What Makes a Good Musical Instrument? A Matter of Processes, Ecologies and Specificities. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 405–410. http://doi.org/10.5281/zenodo.4813438
Abstract
Download PDF DOI
Understanding the question of what makes a good musical instrument raises several conceptual challenges. Researchers have regularly adopted tools from traditional HCI as a framework to address this issue, in which instrumental musical activities are taken to comprise a device and a user, and should be evaluated as such. We argue that this approach is not equipped to fully address the conceptual issues raised by this question. It is worth reflecting on what exactly an instrument is, and how instruments contribute toward meaningful musical experiences. Based on a theoretical framework that incorporates ideas from ecological psychology, enactivism, and phenomenology, we propose an alternative approach to studying musical instruments. According to this approach, instruments are better understood in terms of processes rather than as devices, while musicians are not users, but rather agents in musical ecologies. A consequence of this reframing is that any evaluations of instruments, if warranted, should align with the specificities of the relevant processes and ecologies concerned. We present an outline of this argument and conclude with a description of a current research project to illustrate how our approach can shape the design and performance of a musical instrument in-progress.
@inproceedings{NIME20_79, author = {Rodger, Matthew and Stapleton, Paul and van Walstijn, Maarten and Ortiz, Miguel and Pardue, Laurel S}, title = {What Makes a Good Musical Instrument? A Matter of Processes, Ecologies and Specificities }, pages = {405--410}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813438}, url = {https://www.nime.org/proceedings/2020/nime2020_paper79.pdf}, presentation-video = {https://youtu.be/ADLo-QdSwBc} }
Charles Patrick Martin, Zeruo Liu, Yichen Wang, Wennan He, and Henry Gardner. 2020. Sonic Sculpture: Activating Engagement with Head-Mounted Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 39–42. http://doi.org/10.5281/zenodo.4813445
Abstract
Download PDF DOI
We describe a sonic artwork, "Listening To Listening", that has been designed to accompany a real-world sculpture with two prototype interaction schemes. Our artwork is created for the HoloLens platform so that users can have an individual experience in a mixed reality context. Personal AR systems have recently become available and practical for integration into public art projects, however research into sonic sculpture works has yet to account for the affordances of current portable and mainstream AR systems. In this work, we take advantage of the HoloLens’ spatial awareness to build sonic spaces that have a precise spatial relationship to a given sculpture and where the sculpture itself is modelled in the augmented scene as an "invisible hologram". We describe the artistic rationale for our artwork, the design of the two interaction schemes, and the technical and usability feedback that we have obtained from demonstrations during iterative development. This work appears to be the first time that head-mounted AR has been used to build an interactive sonic landscape to engage with a public sculpture.
@inproceedings{NIME20_8, author = {Martin, Charles Patrick and Liu, Zeruo and Wang, Yichen and He, Wennan and Gardner, Henry}, title = {Sonic Sculpture: Activating Engagement with Head-Mounted Augmented Reality}, pages = {39--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813445}, url = {https://www.nime.org/proceedings/2020/nime2020_paper8.pdf}, presentation-video = {https://youtu.be/RlTWXnFOLN8} }
Giovanni Santini. 2020. Augmented Piano in Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 411–415. http://doi.org/10.5281/zenodo.4813449
Abstract
Download PDF DOI
Augmented instruments have been a widely explored research topic since the late 80s. The possibility of using sensors to provide input for sound processing/synthesis units has let composers and sound artists open up new ways of experimentation. Augmented Reality, by rendering virtual objects in the real world and by making those objects interactive (via some sensor-generated input), provides a new frame for this research field. In fact, the 3D visual feedback, delivering a precise indication of the spatial configuration/function of each virtual interface, can make the instrumental augmentation process more intuitive for the interpreter and more resourceful for a composer/creator: interfaces can change their behavior over time, and can be reshaped, activated or deactivated. Each of these modifications can be made obvious to the performer by using strategies of visual feedback. In addition, it is possible to accurately sample space and to map it with differentiated functions. Augmenting interfaces can also be considered a visual expressive tool for the audience and designed accordingly: the performer’s point of view (or another point of view provided by an external camera) can be mirrored to a projector. This article shows some examples of different designs of AR piano augmentation from the composition Studi sulla realtà nuova.
@inproceedings{NIME20_80, author = {Santini, Giovanni}, title = {Augmented Piano in Augmented Reality}, pages = {411--415}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813449}, url = {https://www.nime.org/proceedings/2020/nime2020_paper80.pdf}, presentation-video = {https://youtu.be/3HBWvKj2cqc} }
Tom Davis and Laura Reid. 2020. Taking Back Control: Taming the Feral Cello. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 416–421. http://doi.org/10.5281/zenodo.4813453
Abstract
Download PDF DOI
Whilst there is a large body of NIME papers that concentrate on the presentation of new technologies, there are fewer papers that focus on a longitudinal understanding of NIMEs in practice. This paper embodies the more recent acknowledgement of the importance of practice-based methods of evaluation [1,2,3,4] concerning the use of NIMEs within performance and the recognition that it is only within the situation of practice that the context is available to actually interpret and evaluate the instrument [2]. Within this context this paper revisits the Feral Cello performance system that was first presented at NIME 2017 [5]. This paper explores what has been learned through the artistic practice of performing and workshopping in this context by drawing heavily on the experiences of the performer/composer who has become an integral part of this project and co-author of this paper. The original philosophical context is also revisited and reflections are made on the tensions between this position and the need to ‘get something to work’. The authors feel the presentation of the semi-structured interview within the paper is the best method of staying truthful to Hayes’s understanding of musical improvisation as an enactive framework ‘in its ability to demonstrate the importance of participatory, relational, emergent, and embodied musical activities and processes’ [4].
@inproceedings{NIME20_81, author = {Davis, Tom and Reid, Laura}, title = {Taking Back Control: Taming the Feral Cello}, pages = {416--421}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813453}, url = {https://www.nime.org/proceedings/2020/nime2020_paper81.pdf}, presentation-video = {https://youtu.be/9npR0T6YGiA} }
Thibault Jaccard, Robert Lieck, and Martin Rohrmeier. 2020. AutoScale: Automatic and Dynamic Scale Selection for Live Jazz Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 422–427. http://doi.org/10.5281/zenodo.4813457
Abstract
Download PDF DOI
Becoming a practical musician traditionally requires an extensive amount of preparatory work to master the technical and theoretical challenges of the particular instrument and musical style before being able to devote oneself to musical expression. In particular, in jazz improvisation, one of the major barriers is the mastery and appropriate selection of scales from a wide range, according to harmonic context and style. In this paper, we present AutoScale, an interactive software for making jazz improvisation more accessible by lifting the burden of scale selection from the musician while still allowing full controllability if desired. This is realized by implementing a MIDI effect that dynamically maps the desired scales onto a standardized layout. Scale selection can be pre-programmed, automated based on algorithmic lead sheet analysis, or interactively adapted. We discuss the music-theoretical foundations underlying our approach, the design choices taken for building an intuitive user interface, and provide implementations as VST plugin and web applications for use with a Launchpad or traditional MIDI keyboard.
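The core mechanism, remapping a fixed key or pad layout onto the currently selected scale, can be sketched as follows; the scale definitions and layout are simplified assumptions, and this is not the authors' VST or web implementation.

# Sketch: remap a fixed row of pads/keys onto the currently active scale.
# Scale content and layout size are illustrative assumptions.
SCALES = {
    "C_dorian":     (0, [0, 2, 3, 5, 7, 9, 10]),
    "F_mixolydian": (5, [0, 2, 4, 5, 7, 9, 10]),
}

def pad_to_midi(pad_index, scale_name, base_octave=4):
    """Map the n-th pad of a linear layout to a MIDI note in the active scale."""
    root, degrees = SCALES[scale_name]
    octave, degree = divmod(pad_index, len(degrees))
    return 12 * (base_octave + octave) + root + degrees[degree]

# The same pad produces different pitches as the harmonic context changes.
print(pad_to_midi(3, "C_dorian"), pad_to_midi(3, "F_mixolydian"))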
@inproceedings{NIME20_82, author = {Jaccard, Thibault and Lieck, Robert and Rohrmeier, Martin}, title = {AutoScale: Automatic and Dynamic Scale Selection for Live Jazz Improvisation}, pages = {422--427}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813457}, url = {https://www.nime.org/proceedings/2020/nime2020_paper82.pdf}, presentation-video = {https://youtu.be/KqGpTTQ9ZrE} }
Lauren Hayes and Adnan Marquez-Borbon. 2020. Nuanced and Interrelated Mediations and Exigencies (NIME): Addressing the Prevailing Political and Epistemological Crises. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 428–433. http://doi.org/10.5281/zenodo.4813459
Abstract
Download PDF DOI
Nearly two decades after its inception as a workshop at the ACM Conference on Human Factors in Computing Systems, NIME exists as an established international conference significantly distinct from its precursor. While this origin story is often noted, the implications of NIME’s history as emerging from a field predominantly dealing with human-computer interaction have rarely been discussed. In this paper we highlight many of the recent—and some not so recent—challenges that have been brought upon the NIME community as it attempts to maintain and expand its identity as a platform for multidisciplinary research into HCI, interface design, and electronic and computer music. We discuss the relationship between the market demands of the neoliberal university—which have underpinned academia’s drive for innovation—and the quantification and economisation of research performance which have facilitated certain disciplinary and social frictions to emerge within NIME-related research and practice. Drawing on work that engages with feminist theory and cultural studies, we suggest that critical reflection and moreover mediation is necessary in order to address burgeoning concerns which have been raised within the NIME discourse in relation to methodological approaches, ‘diversity and inclusion’, ‘accessibility’, and the fostering of rigorous interdisciplinary research.
@inproceedings{NIME20_83, author = {Hayes, Lauren and Marquez-Borbon, Adnan}, title = {Nuanced and Interrelated Mediations and Exigencies (NIME): Addressing the Prevailing Political and Epistemological Crises}, pages = {428--433}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813459}, url = {https://www.nime.org/proceedings/2020/nime2020_paper83.pdf}, presentation-video = {https://youtu.be/4UERHlFUQzo} }
Andrew McPherson and Giacomo Lepri. 2020. Beholden to our tools: negotiating with technology while sketching digital instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 434–439. http://doi.org/10.5281/zenodo.4813461
Abstract
Download PDF DOI
Digital musical instrument design is often presented as an open-ended creative process in which technology is adopted and adapted to serve the musical will of the designer. The real-time music programming languages powering many new instruments often provide access to audio manipulation at a low level, theoretically allowing the creation of any sonic structure from primitive operations. As a result, designers may assume that these seemingly omnipotent tools are pliable vehicles for the expression of musical ideas. We present the outcomes of a compositional game in which sound designers were invited to create simple instruments using common sensors and the Pure Data programming language. We report on the patterns and structures that often emerged during the exercise, arguing that designers respond strongly to suggestions offered by the tools they use. We discuss the idea that current music programming languages may be as culturally loaded as the communities of practice that produce and use them. Instrument making is then best viewed as a protracted negotiation between designer and tools.
@inproceedings{NIME20_84, author = {McPherson, Andrew and Lepri, Giacomo}, title = {Beholden to our tools: negotiating with technology while sketching digital instruments}, pages = {434--439}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813461}, url = {https://www.nime.org/proceedings/2020/nime2020_paper84.pdf}, presentation-video = {https://youtu.be/-nRtaucPKx4} }
Andrea Martelloni, Andrew McPherson, and Mathieu Barthet. 2020. Percussive Fingerstyle Guitar through the Lens of NIME: an Interview Study. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 440–445. http://doi.org/10.5281/zenodo.4813463
Abstract
Download PDF DOI
Percussive fingerstyle is a playing technique adopted by many contemporary acoustic guitarists, and it has grown substantially in popularity over the last decade. Its foundations lie in the use of the guitar’s body for percussive lines, and in the extended range given by the novel use of altered tunings. There are very few formal accounts of percussive fingerstyle, therefore, we devised an interview study to investigate its approach to composition, performance and musical experimentation. Our aim was to gain insight into the technique from a gesture-based point of view, observe whether modern fingerstyle shares similarities to the approaches in NIME practice and investigate possible avenues for guitar augmentations inspired by the percussive technique. We conducted an inductive thematic analysis on the transcribed interviews: our findings highlight the participants’ material-based approach to musical interaction and we present a three-zone model of the most common percussive gestures on the guitar’s body. Furthermore, we examine current trends in Digital Musical Instruments, especially in guitar augmentation, and we discuss possible future directions in augmented guitars in light of the interviewees’ perspectives.
@inproceedings{NIME20_85, author = {Martelloni, Andrea and McPherson, Andrew and Barthet, Mathieu}, title = {Percussive Fingerstyle Guitar through the Lens of NIME: an Interview Study}, pages = {440--445}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813463}, url = {https://www.nime.org/proceedings/2020/nime2020_paper85.pdf}, presentation-video = {https://youtu.be/ON8ckEBcQ98} }
Robert Jack, Jacob Harrison, and Andrew McPherson. 2020. Digital Musical Instruments as Research Products. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 446–451. http://doi.org/10.5281/zenodo.4813465
Abstract
Download PDF DOI
In the field of human computer interaction (HCI) the limitations of prototypes as the primary artefact used in research are being realised. Prototypes often remain open in their design, are partially-finished, and have a focus on a specific aspect of interaction. Previous authors have proposed ‘research products’ as a specific category of artefact distinct from both research prototypes and commercial products. The characteristics of research products are their holistic completeness as a design artefact, their situatedness in a specific cultural context, and the fact that they are evaluated for what they are, not what they will become. This paper discusses the ways in which many instruments created within the context of New Interfaces for Musical Expression (NIME), including those that are used in performances, often fall into the category of prototype. We shall discuss why research products might be a useful framing for NIME research. Research products shall be weighed up against some of the main themes of NIME research: technological innovation; musical expression; instrumentality. We conclude this paper with a case study of Strummi, a digital musical instrument which we frame as research product.
@inproceedings{NIME20_86, author = {Jack, Robert and Harrison, Jacob and McPherson, Andrew}, title = {Digital Musical Instruments as Research Products}, pages = {446--451}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813465}, url = {https://www.nime.org/proceedings/2020/nime2020_paper86.pdf}, presentation-video = {https://youtu.be/luJwlZBeBqY} }
Amit D Patel and John Richards. 2020. Pop-up for Collaborative Music-making. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 452–457. http://doi.org/10.5281/zenodo.4813473
Abstract
Download PDF DOI
This paper presents a micro-residency in a pop-up shop and collaborative making amongst a group of researchers and practitioners. The making extends to sound(-making) objects, instruments, workshop, sound installation, performance and discourse on DIY electronic music. Our research builds on creative workshopping and speculative design and is informed by ideas of collective making. The ad hoc and temporary pop-up space is seen as formative in shaping the outcomes of the work. Through the lens of curated research, working together with a provocative brief, we explored handmade objects, craft, non-craft, human error, and the spirit of DIY, DIYness. We used the Studio Bench - a method that brings making, recording and performance together in one space - and viewed workshopping and performance as a holistic event. A range of methodologies were investigated in relation to NIME. These included the Hardware Mash-up, Speculative Sound Circuits and Reverse Design, from product to prototype, resulting in the instrument the Radical Nails. Finally, our work drew on the notion of design as performance and making in public and further developed our understanding of workshop-installation and performance-installation.
@inproceedings{NIME20_87, author = {Patel, Amit D and Richards, John}, title = {Pop-up for Collaborative Music-making}, pages = {452--457}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813473}, url = {https://www.nime.org/proceedings/2020/nime2020_paper87.pdf} }
Courtney Reed and Andrew McPherson. 2020. Surface Electromyography for Direct Vocal Control. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 458–463. http://doi.org/10.5281/zenodo.4813475
Abstract
Download PDF DOI
This paper introduces a new method for direct control using the voice via measurement of vocal muscular activation with surface electromyography (sEMG). Digital musical interfaces based on the voice have typically used indirect control, in which features extracted from audio signals control the parameters of sound generation, for example in audio to MIDI controllers. By contrast, focusing on the musculature of the singing voice allows direct muscular control, or alternatively, combined direct and indirect control in an augmented vocal instrument. In this way we aim to both preserve the intimate relationship a vocalist has with their instrument and key timbral and stylistic characteristics of the voice while expanding its sonic capabilities. This paper discusses other digital instruments which effectively utilise a combination of indirect and direct control as well as a history of controllers involving the voice. Subsequently, a new method of direct control from physiological aspects of singing through sEMG and its capabilities are discussed. Future developments of the system are further outlined along with usage in performance studies, interactive live vocal performance, and educational and practice tools.
@inproceedings{NIME20_88, author = {Reed, Courtney and McPherson, Andrew}, title = {Surface Electromyography for Direct Vocal Control}, pages = {458--463}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813475}, url = {https://www.nime.org/proceedings/2020/nime2020_paper88.pdf}, presentation-video = {https://youtu.be/1nWLgQGNh0g} }
Henrik von Coler, Steffen Lepa, and Stefan Weinzierl. 2020. User-Defined Mappings for Spatial Sound Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 464–469. http://doi.org/10.5281/zenodo.4813477
Abstract
Download PDF DOI
The presented sound synthesis system allows the individual spatialization of spectral components in real-time, using a sinusoidal modeling approach within 3-dimensional sound reproduction systems. A co-developed, dedicated haptic interface is used to jointly control spectral and spatial attributes of the sound. Within a user study, participants were asked to create an individual mapping between control parameters of the interface and rendering parameters of sound synthesis and spatialization, using a visual programming environment. Resulting mappings of all participants are evaluated, indicating the preference of single control parameters for specific tasks. In comparison with mappings intended by the development team, the results validate certain design decisions and indicate new directions.
@inproceedings{NIME20_89, author = {von Coler, Henrik and Lepa, Steffen and Weinzierl, Stefan}, title = {User-Defined Mappings for Spatial Sound Synthesis}, pages = {464--469}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813477}, url = {https://www.nime.org/proceedings/2020/nime2020_paper89.pdf} }
Rohan Proctor and Charles Patrick Martin. 2020. A Laptop Ensemble Performance System using Recurrent Neural Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 43–48. http://doi.org/10.5281/zenodo.4813481
Abstract
Download PDF DOI
The popularity of applying machine learning techniques in musical domains has created an inherent availability of freely accessible pre-trained neural network (NN) models ready for use in creative applications. This work outlines the implementation of one such application in the form of an assistance tool designed for live improvisational performances by laptop ensembles. The primary intention was to leverage off-the-shelf pre-trained NN models as a basis for assisting individual performers either as musical novices looking to engage with more experienced performers or as a tool to expand musical possibilities through new forms of creative expression. The system expands upon a variety of ideas found in different research areas including new interfaces for musical expression, generative music and group performance to produce a networked performance solution served via a web-browser interface. The final implementation of the system offers performers a mixture of high and low-level controls to influence the shape of sequences of notes output by locally run NN models in real time, also allowing performers to define their level of engagement with the assisting generative models. Two test performances were played, with the system shown to feasibly support four performers over a four minute piece while producing musically cohesive and engaging music. Iterations on the design of the system exposed technical constraints on the use of a JavaScript environment for generative models in a live music context, largely derived from inescapable processing overheads.
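One plausible way to expose the "level of engagement" control mentioned above is a sampling temperature applied to a model's output distribution; the sketch below illustrates that idea and is an assumption for illustration, not necessarily the mechanism used in the authors' system.

# Sketch: temperature-controlled sampling from a model's note distribution,
# one plausible engagement control (an assumption, not the paper's design).
import math, random

def sample_note(logits, temperature=1.0):
    """Sample a note index; low temperature follows the model, high adds surprise."""
    scaled = [l / max(temperature, 1e-6) for l in logits]
    m = max(scaled)
    probs = [math.exp(s - m) for s in scaled]
    r, acc = random.random() * sum(probs), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

model_logits = [2.0, 0.5, 0.1, -1.0]   # hypothetical output over 4 candidate notes
print(sample_note(model_logits, temperature=0.5))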
@inproceedings{NIME20_9, author = {Proctor, Rohan and Martin, Charles Patrick}, title = {A Laptop Ensemble Performance System using Recurrent Neural Networks}, pages = {43--48}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813481}, url = {https://www.nime.org/proceedings/2020/nime2020_paper9.pdf} }
Tiago Brizolara, Sylvie Gibet, and Caroline Larboulette. 2020. Elemental: a Gesturally Controlled System to Perform Meteorological Sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 470–476. http://doi.org/10.5281/zenodo.4813483
Abstract
Download PDF DOI
In this paper, we present and evaluate Elemental, a NIME (New Interface for Musical Expression) based on audio synthesis of sounds of meteorological phenomena, namely rain, wind and thunder, intended for application in contemporary music/sound art, performing arts and entertainment. We first describe the system, controlled by the performer’s arms through Inertial Measurement Units and Electromyography sensors. The produced data is analyzed and, through mapping strategies, used as input to the sound synthesis engine. We conducted user studies to refine the sound synthesis engine, the choice of gestures and the mappings between them, and to finally evaluate this proof of concept. Indeed, the users approached the system with their own awareness, ranging from the manipulation of abstract sound to the direct simulation of atmospheric phenomena - in the latter case, even to revive memories or to create novel situations. This suggests that the instrumentalization of sounds of a known source may be a fruitful strategy for constructing expressive interactive sonic systems.
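A minimal sketch of the kind of gesture-to-synthesis mapping described above might look as follows; the feature names, ranges and wind-synthesis parameters are assumptions for illustration only, not the mappings chosen in the paper.

# Sketch: map arm-gesture features (IMU tilt, EMG envelope) to weather-sound
# synthesis parameters. Feature names and ranges are illustrative assumptions.
def map_gesture_to_wind(imu_tilt_deg, emg_envelope):
    """Return hypothetical wind-synthesis parameters from two gesture features."""
    cutoff_hz = 200.0 + 30.0 * max(0.0, min(90.0, imu_tilt_deg))  # tilt -> brightness
    gain = max(0.0, min(1.0, emg_envelope))                       # effort -> intensity
    return {"noise_filter_cutoff_hz": cutoff_hz, "gain": gain}

print(map_gesture_to_wind(imu_tilt_deg=45.0, emg_envelope=0.6))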
@inproceedings{NIME20_90, author = {Brizolara, Tiago and Gibet, Sylvie and Larboulette, Caroline}, title = {Elemental: a Gesturally Controlled System to Perform Meteorological Sounds}, pages = {470--476}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813483}, url = {https://www.nime.org/proceedings/2020/nime2020_paper90.pdf} }
Çağrı Erdem and Alexander Refsum Jensenius. 2020. RAW: Exploring Control Structures for Muscle-based Interaction in Collective Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 477–482. http://doi.org/10.5281/zenodo.4813485
Abstract
Download PDF DOI
This paper describes the ongoing process of developing RAW, a collaborative body–machine instrument that relies on ’sculpting’ the sonification of raw EMG signals. The instrument is built around two Myo armbands located on the forearms of the performer. These are used to investigate muscle contraction, which is again used as the basis for the sonic interaction design. Using a practice-based approach, the aim is to explore the musical aesthetics of naturally occurring bioelectric signals. We are particularly interested in exploring the differences between processing at audio rate versus control rate, and how the level of detail in the signal–and the complexity of the mappings–influence the experience of control in the instrument. This is exemplified through reflections on four concerts in which RAW has been used in different types of collective improvisation.
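The audio-rate versus control-rate distinction discussed above can be illustrated with a small sketch: the same EMG stream is either heard more or less directly, or reduced to a slow envelope that drives synthesis parameters. The signal, rates and frame size below are assumed values for illustration.

# Sketch of the audio-rate vs. control-rate distinction: a raw EMG burst is
# reduced to one RMS value per frame, yielding a slow control signal.
import numpy as np

emg = np.random.randn(2000) * np.linspace(0, 1, 2000)   # stand-in for a raw EMG burst

def control_rate_envelope(signal, frame=200):
    """RMS envelope: one control value per frame of raw samples."""
    n = len(signal) // frame
    return np.sqrt((signal[: n * frame].reshape(n, frame) ** 2).mean(axis=1))

print(control_rate_envelope(emg))   # 10 slowly varying values instead of 2000 samples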
@inproceedings{NIME20_91, author = {Erdem, Çağrı and Jensenius, Alexander Refsum}, title = {RAW: Exploring Control Structures for Muscle-based Interaction in Collective Improvisation}, pages = {477--482}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813485}, url = {https://www.nime.org/proceedings/2020/nime2020_paper91.pdf}, presentation-video = {https://youtu.be/gX-X1iw7uWE} }
Travis C MacDonald, James Hughes, and Barry MacKenzie. 2020. SmartDrone: An Aurally Interactive Harmonic Drone. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 483–488. http://doi.org/10.5281/zenodo.4813488
Abstract
Download PDF DOI
Mobile devices provide musicians with the convenience of musical accompaniment wherever they are, granting them new methods for developing their craft. We developed the application SmartDrone to give users the freedom to practice in different harmonic settings with the assistance of their smartphone. This application further explores the area of dynamic accompaniment by implementing functionality so that chords are generated based on the key in which the user is playing. Since this app was designed to be a tool for scale practice, drone-like accompaniment was chosen so that musicians could experiment with combinations of melody and harmony. The details of the application development process are discussed in this paper, with the main focus on scale analysis and harmonic transposition. By using these two components, the application is able to dynamically alter key to reflect the user’s playing. As well as the design and implementation details, this paper reports and examines feedback from a small user study of undergraduate music students who used the app.
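As an illustration of the scale-analysis step described above, the sketch below estimates a major key from recently played MIDI notes by correlating their pitch-class histogram with rotated Krumhansl-Kessler profiles; this is a standard technique used here for illustration and not necessarily the app's exact analysis method.

# Sketch: estimate the key of recent notes via pitch-class profile matching.
import numpy as np

MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])  # Krumhansl-Kessler
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_major_key(midi_notes):
    hist = np.zeros(12)
    for n in midi_notes:
        hist[n % 12] += 1
    scores = [np.corrcoef(hist, np.roll(MAJOR_PROFILE, k))[0, 1] for k in range(12)]
    return NOTE_NAMES[int(np.argmax(scores))] + " major"

print(estimate_major_key([62, 64, 66, 67, 69, 71, 73, 74]))  # D major scale tones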
@inproceedings{NIME20_92, author = {MacDonald, Travis C and Hughes, James and MacKenzie, Barry}, title = {SmartDrone: An Aurally Interactive Harmonic Drone}, pages = {483--488}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813488}, url = {https://www.nime.org/proceedings/2020/nime2020_paper92.pdf} }
Juan P Martinez Avila, Vasiliki Tsaknaki, Pavel Karpashevich, et al. 2020. Soma Design for NIME. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 489–494. http://doi.org/10.5281/zenodo.4813491
Abstract
Download PDF DOI
Previous research on musical embodiment has reported that expert performers often regard their instruments as an extension of their body. Not every digital musical instrument seeks to create a close relationship between body and instrument, but even for the many that do, the design process often focuses heavily on technical and sonic factors, with relatively less attention to the bodily experience of the performer. In this paper we propose Somaesthetic design as an alternative to explore this space. The Soma method aims to attune the sensibilities of designers, as well as their experience of their body, and make use of these notions as a resource for creative design. We then report on a series of workshops exploring the relationship between the body and the guitar with a Soma design approach. The workshops resulted in a series of guitar-related artefacts and NIMEs that emerged from the somatic exploration of balance and tension during guitar performance. Lastly we present lessons learned from our research that could inform future Soma-based musical instrument design, and how NIME research may also inform Soma design.
@inproceedings{NIME20_93, author = {Martinez Avila, Juan P and Tsaknaki, Vasiliki and Karpashevich, Pavel and Windlin, Charles and Valenti, Niklas and Höök, Kristina and McPherson, Andrew and Benford, Steve}, title = {Soma Design for NIME}, pages = {489--494}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813491}, url = {https://www.nime.org/proceedings/2020/nime2020_paper93.pdf}, presentation-video = {https://youtu.be/i4UN_23A_SE} }
Laddy P Cadavid. 2020. Knotting the memory//Encoding the Khipu_: Reuse of an ancient Andean device as a NIME. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 495–498. http://doi.org/10.5281/zenodo.4813495
Abstract
Download PDF DOI
The khipu is an information processing and transmission device used mainly by the Inca empire and previous Andean societies. This mnemotechnic interface is one of the first textile computers known, consisting of a central wool or cotton cord to which other strings are attached with knots of different shapes, colors, and sizes encrypting different kinds of values and information. The system was widely used until the Spanish colonization, which banned its use and destroyed a large number of these devices. This paper introduces the creation process of a NIME based on a khipu converted into an electronic instrument for the interaction and generation of live experimental sound by weaving knots with conductive rubber cords, and its implementation in the performance Knotting the memory//Encoding the Khipu_. The performance aims to pay homage to this system from a decolonial perspective, continuing the interrupted legacy of this ancestral practice in a different experience of tangible live coding and computer music, and weaving the past with the present of the indigenous and people's resistance of the Andean territory and their sounds.
@inproceedings{NIME20_94, author = {Cadavid, Laddy P}, title = {Knotting the memory//Encoding the Khipu_: Reuse of an ancient Andean device as a NIME }, pages = {495--498}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813495}, url = {https://www.nime.org/proceedings/2020/nime2020_paper94.pdf}, presentation-video = {https://youtu.be/nw5rbc15pT8} }
Shelly Knotts and Nick Collins. 2020. A survey on the uptake of Music AI Software. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 499–504. http://doi.org/10.5281/zenodo.4813499
Abstract
Download PDF DOI
The recent proliferation of commercial software claiming ground in the field of music AI has provided an opportunity to engage with AI in music making without the need to use libraries aimed at those with programming skills. Pre-packaged music AI software has the potential to broaden access to machine learning tools, but it is unclear how widely such software is used by music technologists or how engagement affects attitudes towards AI in music making. To interrogate these questions we undertook a survey in October 2019, gaining 117 responses. The survey collected statistical information on the use of pre-packaged and self-written music AI software. Respondents reported a range of musical outputs including producing recordings, live performance and generative work across many genres of music making. The survey also gauged general attitudes towards AI in music and provided an open field for general comments. The responses to the survey suggested a forward-looking attitude to music AI, with participants often pointing to the future potential of AI tools rather than their present utility. Optimism was partially related to programming skill, with more experienced programmers showing higher skepticism towards the current state and future potential of AI.
@inproceedings{NIME20_95, author = {Knotts, Shelly and Collins, Nick}, title = {A survey on the uptake of Music AI Software}, pages = {499--504}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813499}, url = {https://www.nime.org/proceedings/2020/nime2020_paper95.pdf}, presentation-video = {https://youtu.be/v6hT3ED3N60} }
Scott Barton. 2020. Circularity in Rhythmic Representation and Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 505–508. http://doi.org/10.5281/zenodo.4813501
Abstract
Download PDF DOI
Cycle is a software tool for musical composition and improvisation that represents events along a circular timeline. In doing so, it breaks from the linear representational conventions of European Art music and modern Digital Audio Workstations. A user specifies time points on different layers, each of which corresponds to a particular sound. The layers are superimposed on a single circle, which allows a unique visual perspective on the relationships between musical voices given their geometric positions. Positions in-between quantizations are possible, which encourages experimentation with expressive timing and machine rhythms. User-selected transformations affect groups of notes, layers, and the pattern as a whole. Past and future states are also represented, synthesizing linear and cyclical notions of time. This paper will contemplate philosophical questions raised by circular rhythmic notation and will reflect on the ways in which the representational novelties and editing functions of Cycle have inspired creativity in musical composition.
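A minimal sketch of the circular representation described above: each event is placed on the cycle by phase angle rather than bar position, so positions between quantization steps are handled naturally. The cycle length and event times below are illustrative values, not taken from the software.

# Sketch: the circular-timeline idea, placing events by phase on a cycle.
import math

CYCLE_SECONDS = 4.0

def event_angle(time_in_cycle):
    """Angle (radians) of an event on the circular timeline; positions need not
    fall on a quantization grid."""
    return 2 * math.pi * (time_in_cycle % CYCLE_SECONDS) / CYCLE_SECONDS

layer_kick = [0.0, 1.0, 2.0, 3.0]
layer_snare = [1.02, 3.05]          # deliberately off-grid, as the tool allows
print([round(event_angle(t), 2) for t in layer_snare])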
@inproceedings{NIME20_96, author = {Barton, Scott}, title = {Circularity in Rhythmic Representation and Composition}, pages = {505--508}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813501}, url = {https://www.nime.org/proceedings/2020/nime2020_paper96.pdf}, presentation-video = {https://youtu.be/0CEKbyJUSw4} }
Thor Magnusson. 2020. Instrumental Investigations at Emute Lab. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 509–513. http://doi.org/10.5281/zenodo.4813503
Abstract
Download PDF DOI
This lab report discusses recent projects and activities of the Experimental Music Technologies Lab at the University of Sussex. The lab was founded in 2014 and has contributed to the development of the field of new musical technologies. The report introduces the lab’s agenda, gives examples of its activities through common themes and gives short descriptions of lab members’ work. The lab environment, funding income and future vision are also presented.
@inproceedings{NIME20_97, author = {Magnusson, Thor}, title = {Instrumental Investigations at Emute Lab}, pages = {509--513}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813503}, url = {https://www.nime.org/proceedings/2020/nime2020_paper97.pdf} }
Satvik Venkatesh, Edward Braund, and Eduardo Miranda. 2020. Composing Popular Music with Physarum polycephalum-based Memristors. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 514–519. http://doi.org/10.5281/zenodo.4813507
Abstract
Download PDF DOI
Creative systems such as algorithmic composers often use Artificial Intelligence models like Markov chains, Artificial Neural Networks, and Genetic Algorithms in order to model stochastic processes. Unconventional Computing (UC) technologies explore non-digital ways of data storage, processing, input, and output. UC paradigms such as Quantum Computing and Biocomputing delve into domains beyond the binary bit to handle complex non-linear functions. In this paper, we harness Physarum polycephalum as memristors to process and generate creative data for popular music. While there has been research conducted in this area, the literature lacks examples of popular music and how the organism’s non-linear behaviour can be controlled while composing music. This is important because non-linear forms of representation are not as obvious as conventional digital means. This study aims at disseminating this technology to non-experts and musicians so that they can incorporate it in their creative processes. Furthermore, it combines resistors and memristors to have more flexibility while generating music and optimises parameters for faster processing and performance.
@inproceedings{NIME20_98, author = {Venkatesh, Satvik and Braund, Edward and Miranda, Eduardo}, title = {Composing Popular Music with Physarum polycephalum-based Memristors}, pages = {514--519}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813507}, url = {https://www.nime.org/proceedings/2020/nime2020_paper98.pdf}, presentation-video = {https://youtu.be/NBLa-KoMUh8} }
Fede Camara Halac and Shadrick Addy. 2020. PathoSonic: Performing Sound In Virtual Reality Feature Space. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 520–522. http://doi.org/10.5281/zenodo.4813510
Abstract
Download PDF DOI
PathoSonic is a VR experience that enables a participant to visualize and perform a sound file based on timbre feature descriptors displayed in space. The name comes from the different paths the participant can create through their sonic explorations. The goal of this research is to leverage affordances of virtual reality technology to visualize sound through different levels of performance-based interactivity that immerses the participant’s body in a spatial virtual environment. Through the implementation of a multi-sensory experience, including visual aesthetics, sound, and haptic feedback, we explore inclusive approaches to sound visualization, making it more accessible to a wider audience, including those with hearing and mobility impairments. The online version of the paper can be accessed here: https://fdch.github.io/pathosonic
@inproceedings{NIME20_99, author = {Camara Halac, Fede and Addy, Shadrick}, title = {PathoSonic: Performing Sound In Virtual Reality Feature Space}, pages = {520--522}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813510}, url = {https://www.nime.org/proceedings/2020/nime2020_paper99.pdf} }
2019
Enrique Tomas, Thomas Gorbach, Hilda Tellioglu, and Martin Kaltenbrunner. 2019. Material embodiments of electroacoustic music: an experimental workshop study. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 1–6. http://doi.org/10.5281/zenodo.3672842
Abstract
Download PDF DOI
This paper reports on a workshop where participants produced physical mock-ups of musical interfaces directly after miming control of short electroacoustic music pieces. Our goal was understanding how people envision and materialize their own sound-producing gestures from spontaneous cognitive mappings. During the workshop, 50 participants from different creative backgrounds modeled more than 180 physical artifacts. Participants were filmed and interviewed for the later analysis of their different multimodal associations about music. Our initial hypothesis was that most of the physical mock-ups would be similar to the sound-producing objects that participants would identify in the musical pieces. Although the majority of artifacts clearly showed correlated design trajectories, our results indicate that a relevant number of participants intuitively decided to engineer alternative solutions emphasizing their personal design preferences. Therefore, in this paper we present and discuss the workshop format, its results and the possible applications for designing new musical interfaces.
@inproceedings{Tomas2019, author = {Tomas, Enrique and Gorbach, Thomas and Tellioglu, Hilda and Kaltenbrunner, Martin}, title = {Material embodiments of electroacoustic music: an experimental workshop study}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672842}, url = {http://www.nime.org/proceedings/2019/nime2019_paper001.pdf} }
Yupu Lu, Yijie Wu, and Shijie Zhu. 2019. Collaborative Musical Performances with Automatic Harp Based on Image Recognition and Force Sensing Resistors. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 7–8. http://doi.org/10.5281/zenodo.3672846
Abstract
Download PDF DOI
In this paper, collaborative performance is defined as a performer playing the piano accompanied by an automatic harp. The automatic harp can play music based on an electronic score and change its speed according to the speed of the performer. We built a 32-channel automatic harp and designed a layered modular framework integrating both hardware and software, for experimental real-time control protocols. Considering that a MIDI keyboard lacks information about force (acceleration) and fingering, both of which are important for expression, we designed a force-sensor glove and implemented basic image recognition. They are used to accurately detect speed, force (corresponding to velocity in MIDI) and pitch when a performer plays the piano.
@inproceedings{Lu2019, author = {Lu, Yupu and Wu, Yijie and Zhu, Shijie}, title = {Collaborative Musical Performances with Automatic Harp Based on Image Recognition and Force Sensing Resistors}, pages = {7--8}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672846}, url = {http://www.nime.org/proceedings/2019/nime2019_paper002.pdf} }
Lior Arbel, Yoav Y. Schechner, and Noam Amir. 2019. The Symbaline — An Active Wine Glass Instrument with a Liquid Sloshing Vibrato Mechanism. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 9–14. http://doi.org/10.5281/zenodo.3672848
Abstract
Download PDF DOI
The Symbaline is an active instrument comprised of several partly-filled wine glasses excited by electromagnetic coils. This work describes an electromechanical system for incorporating frequency and amplitude modulation to the Symbaline’s sound. A pendulum having a magnetic bob is suspended inside the liquid in the wine glass. The pendulum is put into oscillation by driving infra-sound signals through the coil. The pendulum’s movement causes the liquid in the glass to slosh back and forth. Simultaneously, wine glass sounds are produced by driving audio-range signals through the coil, inducing vibrations in a small magnet attached to the glass surface and exciting glass vibrations. As the glass vibrates, the sloshing liquid periodically changes the glass’s resonance frequencies and dampens the glass, thus modulating both wine glass pitch and sound intensity.
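The drive scheme described above, an infra-sound component that sets the pendulum and liquid sloshing and an audio-range component that excites the glass, can be sketched as the sum of two sinusoids sent to the same coil; the frequencies and levels below are illustrative assumptions, not values from the paper.

# Sketch: combine an infra-sound slosh drive with an audio-range excitation
# into one coil signal. Frequencies and amplitudes are illustrative assumptions.
import numpy as np

sr = 48000
t = np.arange(sr) / sr                       # one second of signal
slosh = 0.5 * np.sin(2 * np.pi * 2.0 * t)    # 2 Hz infra-sound pendulum drive
tone = 0.3 * np.sin(2 * np.pi * 440.0 * t)   # audio-range glass excitation
coil_drive = slosh + tone                    # both sent through the one coil
print(coil_drive[:5])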
@inproceedings{Arbel2019, author = {Arbel, Lior and Schechner, Yoav Y. and Amir, Noam}, title = {The Symbaline --- An Active Wine Glass Instrument with a Liquid Sloshing Vibrato Mechanism}, pages = {9--14}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672848}, url = {http://www.nime.org/proceedings/2019/nime2019_paper003.pdf} }
Helena de Souza Nunes, Federico Visi, Lydia Helena Wohl Coelho, and Rodrigo Schramm. 2019. SIBILIM: A low-cost customizable wireless musical interface. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 15–20. http://doi.org/10.5281/zenodo.3672850
Abstract
Download PDF DOI
This paper presents the SIBILIM, a low-cost musical interface composed of a cardboard resonance box containing customised push buttons that interact with a smartphone through its video camera. Each button can be mapped to a set of MIDI notes or control parameters. The sound is generated through synthesis or sample playback and can be amplified with the help of a transducer, which excites the resonance box. An essential contribution of this interface is the possibility of reconfiguring the button layout without rewiring the system, since it uses only the smartphone’s built-in camera. These features allow for quick instrument customisation for different use cases, such as low-cost projects for schools or instrument-building workshops. Our case study used the Sibilim for music education, where it was designed to develop conscious music perception and to stimulate creativity through exercises in short tonal composition. We conducted a study with a group of twelve participants in an experimental workshop to verify its validity.
@inproceedings{deSouzaNunes2019, author = {de Souza Nunes, Helena and Visi, Federico and Coelho, Lydia Helena Wohl and Schramm, Rodrigo}, title = {SIBILIM: A low-cost customizable wireless musical interface}, pages = {15--20}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672850}, url = {http://www.nime.org/proceedings/2019/nime2019_paper004.pdf} }
Jonathan Bell. 2019. The Risset Cycle, Recent Use Cases With SmartVox. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 21–24. http://doi.org/10.5281/zenodo.3672852
Abstract
Download PDF DOI
The combination of graphic/animated scores, acoustic signals (audio-scores) and Head-Mounted Display (HMD) technology offers promising potential in the context of distributed notation, for live performances and concerts involving voices, instruments and electronics. After an explanation of what SmartVox is technically, and how it is used by composers and performers, this paper explains why this form of technology-aided performance might help musicians with synchronization to an electronic tape and with (spectral) tuning. Then, from an exploration of the concepts of distributed notation and networked music performances, it proposes solutions (in conjunction with INScore, BabelScores and the Decibel Score Player) seeking to expand distributed notation practice to a wider community. It finally presents findings relative to the use of SmartVox with HMDs.
@inproceedings{Bell2019, author = {Bell, Jonathan}, title = {The Risset Cycle, Recent Use Cases With SmartVox}, pages = {21--24}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672852}, url = {http://www.nime.org/proceedings/2019/nime2019_paper005.pdf} }
Johnty Wang, Axel Mulder, and Marcelo Wanderley. 2019. Practical Considerations for MIDI over Bluetooth Low Energy as a Wireless Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 25–30. http://doi.org/10.5281/zenodo.3672854
Abstract
Download PDF DOI
This paper documents the key performance and compatibility issues encountered when working with Musical Instrument Digital Interface (MIDI) over Bluetooth Low Energy (BLE) as a wireless interface for sensor or controller data and inter-module communication in the context of building interactive digital systems. An overview of BLE MIDI is presented along with a comparison of the protocol from the perspective of theoretical limits and interoperability, showing its widespread compatibility across platforms compared with other alternatives. We then perform three complementary tests on BLE MIDI and alternative interfaces using prototype and commercial devices, showing that BLE MIDI has performance comparable to the tested WiFi implementations, with end-to-end (sensor input to audio output) latencies of under 10 ms under certain conditions. Overall, BLE MIDI is an ideal choice for controllers and sensor interfaces that are designed to work on a wide variety of platforms.
@inproceedings{Wang2019, author = {Wang, Johnty and Mulder, Axel and Wanderley, Marcelo}, title = {Practical Considerations for {MIDI} over Bluetooth Low Energy as a Wireless Interface}, pages = {25--30}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672854}, url = {http://www.nime.org/proceedings/2019/nime2019_paper006.pdf} }
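The latency figures discussed in the entry above invite a quick empirical check. The sketch below is not the authors' test setup: it only times a MIDI round trip, assumes the BLE MIDI device appears as an ordinary MIDI port (the port name is a placeholder) and echoes incoming notes back, and uses the mido library.

```python
# Rough BLE MIDI round-trip timing sketch (assumes the device echoes notes back).
import time
import mido

PORT_NAME = "BLE MIDI Device"   # placeholder; list real names with mido.get_output_names()

with mido.open_output(PORT_NAME) as out, mido.open_input(PORT_NAME) as inp:
    samples = []
    for _ in range(100):
        t0 = time.perf_counter()
        out.send(mido.Message("note_on", note=60, velocity=100))
        inp.receive()                                  # blocks until the echo arrives
        samples.append((time.perf_counter() - t0) * 1000.0)
        time.sleep(0.05)
    samples.sort()
    print(f"median round trip: {samples[len(samples) // 2]:.2f} ms")
```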
Richard Ramchurn, Juan Pablo Martinez-Avila, Sarah Martindale, Alan Chamberlain, Max L Wilson, and Steve Benford. 2019. Improvising a Live Score to an Interactive Brain-Controlled Film. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 31–36. http://doi.org/10.5281/zenodo.3672856
Abstract
Download PDF DOI
We report on the design and deployment of systems for the performance of live score accompaniment to an interactive movie by a Networked Musical Ensemble. In this case, the audio-visual content of the movie is selected in real time based on user input to a Brain-Computer Interface (BCI). Our system supports musical improvisation between human performers and automated systems responding to the BCI. We explore the performers’ roles during two performances when these socio-technical systems were implemented, in terms of coordination, problem-solving, managing uncertainty and musical responses to system constraints. This allows us to consider how features of these systems and practices might be incorporated into a general tool, aimed at any musician, which could scale for use in different performance settings involving interactive media.
@inproceedings{Ramchurn2019, author = {Ramchurn, Richard and Martinez-Avila, Juan Pablo and Martindale, Sarah and Chamberlain, Alan and Wilson, Max L and Benford, Steve}, title = {Improvising a Live Score to an Interactive Brain-Controlled Film}, pages = {31--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672856}, url = {http://www.nime.org/proceedings/2019/nime2019_paper007.pdf} }
Ajin Jiji Tom, Harish Jayanth Venkatesan, Ivan Franco, and Marcelo Wanderley. 2019. Rebuilding and Reinterpreting a Digital Musical Instrument — The Sponge. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 37–42. http://doi.org/10.5281/zenodo.3672858
Abstract
Download PDF DOI
Although several Digital Musical Instruments (DMIs) have been presented at NIME, very few of them remain accessible to the community. Rebuilding a DMI is often a necessary step to allow for performance with NIMEs. Rebuilding a DMI identical to its original, however, might not be possible due to technology obsolescence, lack of documentation or other reasons. It might then be interesting to reinterpret a DMI and build an instrument inspired by the original one, creating novel performance opportunities. This paper presents the challenges and approaches involved in rebuilding and reinterpreting an existing DMI, The Sponge by Martin Marier. The rebuilt versions make use of newer, improved technology and customized design aspects such as the addition of vibrotactile feedback and the implementation of different mapping strategies. The paper also discusses the implications of embedding sound synthesis within the DMI using the Prynth framework, and presents a comparison between this approach and the more traditional ground-up approach. As a result of the evaluation and comparison of the two rebuilt DMIs, we present a third version that combines the benefits of both, and discuss performance issues with these devices.
@inproceedings{Tom2019, author = {Tom, Ajin Jiji and Venkatesan, Harish Jayanth and Franco, Ivan and Wanderley, Marcelo}, title = {Rebuilding and Reinterpreting a Digital Musical Instrument --- The Sponge}, pages = {37--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672858}, url = {http://www.nime.org/proceedings/2019/nime2019_paper008.pdf} }
Kiyu Nishida, Akishige Yuguchi, kazuhiro jo, Paul Modler, and Markus Noisternig. 2019. Border: A Live Performance Based on Web AR and a Gesture-Controlled Virtual Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 43–46. http://doi.org/10.5281/zenodo.3672860
Abstract
Download PDF DOI
Recent technological advances, such as increased CPU/GPU processing speed, along with the miniaturization of devices and sensors, have created new possibilities for integrating immersive technologies in music and performance art. Virtual and Augmented Reality (VR/AR) have become increasingly interesting as mobile device platforms with the necessary CPU resources, such as up-to-date smartphones, have entered the consumer market. In combination with recent web technologies, any mobile device can simply connect with a browser to a local server to access the latest technology. The web platform also eases the integration of collaborative situated media in participatory artwork. In this paper, we present the interactive music improvisation piece ‘Border,’ premiered in 2018 at the Beyond Festival at the Center for Art and Media Karlsruhe (ZKM). This piece explores the interaction between a performer and the audience using web-based applications – including AR, real-time 3D audio/video streaming, advanced web audio, and gesture-controlled virtual instruments – on smart mobile devices.
@inproceedings{Nishida2019, author = {Nishida, Kiyu and Yuguchi, Akishige and kazuhiro jo and Modler, Paul and Noisternig, Markus}, title = {Border: A Live Performance Based on Web {AR} and a Gesture-Controlled Virtual Instrument}, pages = {43--46}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672860}, url = {http://www.nime.org/proceedings/2019/nime2019_paper009.pdf} }
Palle Dahlstedt. 2019. Taming and Tickling the Beast — Multi-Touch Keyboard as Interface for a Physically Modelled Interconnected Resonating Super-Harp. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 47–52. http://doi.org/10.5281/zenodo.3672862
Abstract
Download PDF DOI
Libration Perturbed is a performance and an improvisation instrument, originally composed and designed for a multi-speaker dome. The performer controls a bank of 64 virtual inter-connected resonating strings, with individual and direct control of tuning and resonance characteristics through a multitouch-enhanced klavier interface (TouchKeys). It is a hybrid acoustic-electronic instrument, as all string vibrations originate from physical vibrations in the klavier and its casing, captured through contact microphones. In addition, there are gestural strings, called ropes, excited by performed musical gestures. All strings and ropes are connected, and inter-resonate together as a ”super-harp”, internally and through the performance space. With strong resonance, strings may go into chaotic motion or emergent quasi-periodic patterns, but custom adaptive leveling mechanisms keep loudness under the musician’s control at all times. The hybrid digital/acoustic approach and the enhanced keyboard provide for an expressive and very physical interaction, and a strong multi-channel immersive experience. The paper describes the aesthetic choices behind the design of the system, as well as the technical implementation, and – primarily – the interaction design, as it emerges from mapping, sound design, physical modeling and integration of the acoustic, the gestural, and the virtual. The work is evaluated based on the experiences from a series of performances.
@inproceedings{Dahlstedt2019, author = {Dahlstedt, Palle}, title = {Taming and Tickling the Beast --- Multi-Touch Keyboard as Interface for a Physically Modelled Interconnected Resonating Super-Harp}, pages = {47--52}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672862}, url = {http://www.nime.org/proceedings/2019/nime2019_paper010.pdf} }
Doga Cavdir, Juan Sierra, and Ge Wang. 2019. Taptop, Armtop, Blowtop: Evolving the Physical Laptop Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 53–58. http://doi.org/10.5281/zenodo.3672864
Abstract
Download PDF DOI
This research presents an evolution and evaluation of embodied physical laptop instruments. Specifically, these are instruments that are physical in that they use bodily interaction and take advantage of the physical affordances of the laptop. They are embodied in the sense that the instruments are played in ways where the sound is embedded close to the instrument. Three distinct laptop instruments, Taptop, Armtop, and Blowtop, are introduced in this paper. We discuss how the design process is integrated with composing for laptop instruments and performing with them. In this process, our aim is to blur the boundaries between the composer and designer/engineer roles. We study how physicality is achieved by leveraging musical gestures gained through traditional instrument practice, as well as gestures inspired by the body. We aim to explore how using such interaction methods affects the communication between the ensemble and the audience. An aesthetics-first qualitative evaluation of these interfaces is discussed, through works and performances crafted specifically for these instruments and presented in the concert setting of the laptop orchestra. In so doing, we reflect on how such physical, embodied instrument design practices can inform a different kind of expressive and performance mindset.
@inproceedings{Cavdir2019, author = {Cavdir, Doga and Sierra, Juan and Wang, Ge}, title = {Taptop, Armtop, Blowtop: Evolving the Physical Laptop Instrument}, pages = {53--58}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672864}, url = {http://www.nime.org/proceedings/2019/nime2019_paper011.pdf} }
David Antonio Gómez Jáuregui, Irvin Dongo, and Nadine Couture. 2019. Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 59–64. http://doi.org/10.5281/zenodo.3672866
Abstract
Download PDF DOI
This work aims to explore the use of a new gesture-based interaction built on automatic recognition of the Soundpainting structured gestural language. In the proposed approach, a composer (called the Soundpainter) performs Soundpainting gestures facing a Kinect sensor. A gesture recognition system then captures the gestures, which are sent to sound generator software. The proposed method was used to stage an artistic show in which a Soundpainter had to improvise with 6 different gestures to generate a musical composition from different sounds in real time. The accuracy of the gesture recognition system was evaluated, as well as the Soundpainter's user experience. In addition, a user evaluation study on using our proposed system in a learning context was also conducted. Current results open up perspectives for the design of new artistic expressions based on the use of automatic gestural recognition supported by the Soundpainting language.
@inproceedings{GomezJauregui2019, author = {Jáuregui, David Antonio Gómez and Dongo, Irvin and Couture, Nadine}, title = {Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds}, pages = {59--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672866}, url = {http://www.nime.org/proceedings/2019/nime2019_paper012.pdf} }
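The link between the gesture recognizer and the sound generator described in the entry above is essentially a message-passing problem. The sketch below is a generic illustration, not the paper's software: it assumes the sound engine listens for OSC messages on localhost port 9000 (both made up), uses the python-osc library, and the gesture labels and confidence gate are likewise illustrative.

```python
# Generic recognizer-to-sound-generator link over OSC (addresses and labels assumed).
from pythonosc.udp_client import SimpleUDPClient

GESTURES = {"whole_group", "play", "volume_up", "volume_down", "hit", "off"}
client = SimpleUDPClient("127.0.0.1", 9000)

def on_gesture_recognized(label: str, confidence: float) -> None:
    """Forward a recognized gesture to the sound generator if it is confident enough."""
    if label in GESTURES and confidence > 0.8:
        client.send_message("/soundpainting/gesture", [label, confidence])

# Example call, e.g. from a Kinect-based classifier callback:
on_gesture_recognized("play", 0.93)
```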
Fabio Morreale, Andrea Guidi, and Andrew P. McPherson. 2019. Magpick: an Augmented Guitar Pick for Nuanced Control. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 65–70. http://doi.org/10.5281/zenodo.3672868
Abstract
Download PDF DOI
This paper introduces the Magpick, an augmented pick for electric guitar that uses electromagnetic induction to sense the motion of the pick with respect to the permanent magnets in the guitar pickup. The Magpick provides the guitarist with nuanced control of the sound which coexists with traditional plucking-hand technique. The paper presents three ways that the signal from the pick can modulate the guitar sound, followed by a case study of its use in which 11 guitarists tested the Magpick for five days and composed a piece with it. Reflecting on their comments and experiences, we outline the innovative features of this technology from the point of view of performance practice. In particular, compared to other augmentations, the high temporal resolution, low latency, and large dynamic range of the Magpick support a highly nuanced control over the sound. Our discussion highlights the utility of having the locus of augmentation coincide with the locus of interaction.
@inproceedings{Morreale2019, author = {Morreale, Fabio and Guidi, Andrea and McPherson, Andrew P.}, title = {Magpick: an Augmented Guitar Pick for Nuanced Control}, pages = {65--70}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672868}, url = {http://www.nime.org/proceedings/2019/nime2019_paper013.pdf} }
Bertrand Petit and manuel serrano. 2019. Composing and executing Interactive music using the HipHop.js language. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 71–76. http://doi.org/10.5281/zenodo.3672870
Abstract
Download PDF DOI
Skini is a platform for composing and producing live performances in which the audience participates using connected devices (smartphones, tablets, PCs, etc.). The composer creates musical elements beforehand, such as melodic patterns, sound patterns, instruments, groups of instruments, and a dynamic score that governs the way these basic elements behave according to events produced by the audience. During the concert or performance, the audience, by interacting with the system, gives rise to an original musical composition. Skini music scores are expressed in terms of constraints that establish relationships between instruments. A constraint may be instantaneous, for instance, one may disable the violins while the trumpets are playing. A constraint may also be temporal, for instance, the piano cannot play for more than 30 consecutive seconds. The Skini platform is implemented in Hop.js and HipHop.js. HipHop.js, a synchronous reactive DSL, is used for implementing the music scores, as its elementary constructs, high-level operators such as parallel execution, sequences, awaits and synchronization points, form an ideal core language for implementing Skini constraints. This paper presents the Skini platform, reports on live performances and an educational project, and briefly overviews the use of HipHop.js for representing scores.
@inproceedings{Petit2019, author = {Petit, Bertrand and manuel serrano}, title = {Composing and executing Interactive music using the HipHop.js language}, pages = {71--76}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672870}, url = {http://www.nime.org/proceedings/2019/nime2019_paper014.pdf} }
Gabriel Lopes Rocha, João Teixera Araújo, and Flávio Luiz Schiavoni. 2019. Ha Dou Ken Music: Different mappings to play music with joysticks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 77–78. http://doi.org/10.5281/zenodo.3672872
Abstract
Download PDF DOI
Due to the strong presence of video game controllers in popular culture and their ease of access, even people who are not in the habit of playing electronic games have probably interacted with this kind of interface at some point. Gestures like pressing a sequence of buttons, pressing several buttons simultaneously or sliding the fingers across the controller can thus be mapped for musical creation. This work aims to elaborate a strategy in which several gestures performed on a joystick can influence one or several sound synthesis parameters, a mapping known as many-to-many. Button combinations used to perform game actions common in fighting games, such as Street Fighter, were mapped to the synthesizer to create music. Experiments show that this mapping is capable of influencing the musical expression of a DMI, bringing it closer to an acoustic instrument.
@inproceedings{Rocha2019, author = {Rocha, Gabriel Lopes and Araújo, João Teixera and Schiavoni, Flávio Luiz}, title = {Ha Dou Ken Music: Different mappings to play music with joysticks}, pages = {77--78}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672872}, url = {http://www.nime.org/proceedings/2019/nime2019_paper015.pdf} }
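A many-to-many mapping of the kind described in the entry above can be sketched as a lookup from recognized button sequences to bundles of synthesis parameters. The combos and parameter names below are invented for illustration and are not the mapping used in the paper.

```python
# Toy many-to-many mapping: one recognized button sequence updates several synth parameters.
COMBO_TO_PARAMS = {
    ("down", "down-forward", "forward", "punch"):   # a "hadouken"-style motion
        {"cutoff": 0.9, "pitch_bend": 2, "reverb": 0.3},
    ("forward", "down", "down-forward", "punch"):   # a "shoryuken"-style motion
        {"cutoff": 0.6, "pitch_bend": 7, "attack": 0.01},
    ("punch", "kick"):                              # two buttons pressed together
        {"cutoff": 0.4, "reverb": 0.8},
}

def map_gesture(history):
    """Return the synth parameter updates triggered by the most recent button sequence."""
    for combo, params in COMBO_TO_PARAMS.items():
        if tuple(history[-len(combo):]) == combo:
            return params
    return {}

print(map_gesture(["down", "down-forward", "forward", "punch"]))
```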
Torgrim Rudland Næss and Charles Patrick Martin. 2019. A Physical Intelligent Instrument using Recurrent Neural Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 79–82. http://doi.org/10.5281/zenodo.3672874
Abstract
Download PDF DOI
This paper describes a new intelligent interactive instrument, based on an embedded computing platform, where deep neural networks are applied to interactive music generation. Even though using neural networks for music composition is not uncommon, many of these models do not support any form of user interaction. We introduce a self-contained intelligent instrument using generative models, with support for real-time interaction where the user can adjust high-level parameters to modify the music generated by the instrument. We describe the technical details of our generative model and discuss the experience of using the system as part of musical performance.
@inproceedings{Næss2019, author = {Næss, Torgrim Rudland and Martin, Charles Patrick}, title = {A Physical Intelligent Instrument using Recurrent Neural Networks}, pages = {79--82}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672874}, url = {http://www.nime.org/proceedings/2019/nime2019_paper016.pdf} }
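One plausible form for the "high-level parameters" mentioned in the entry above is a sampling temperature that the performer turns up for more surprising output. The sketch below is an assumption, not the paper's control scheme, and the logits stand in for a trained RNN's output layer.

```python
# Temperature-controlled sampling from a (stand-in) note distribution.
import numpy as np

rng = np.random.default_rng(0)

def sample_note(logits: np.ndarray, temperature: float) -> int:
    """Low temperature -> predictable choices; high temperature -> exploratory ones."""
    scaled = logits / max(temperature, 1e-3)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = np.array([2.0, 0.5, 0.1, 1.2, 0.0])   # would come from the RNN in a real system
print([sample_note(logits, t) for t in (0.2, 1.0, 2.0)])
```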
Angelo Fraietta. 2019. Creating Order and Progress. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 83–88. http://doi.org/10.5281/zenodo.3672876
Abstract
Download PDF DOI
This paper details the mapping strategy of the work Order and Progress: a sonic segue across A Auriverde, a composition based upon the skyscape represented on the Brazilian flag. This work uses the Stellarium planetarium software as a performance interface, blending the political symbology, scientific data and musical mapping of each star represented on the flag into a multimedia performance. The work is interfaced through the Stellar Command module, a Java-based program that converts the visible field of view from the Stellarium planetarium interface into astronomical data through the VizieR database of astronomical catalogues. This scientific data is then mapped to musical parameters through a Java-based programming environment. I will discuss the strategies employed to create a work that was not only artistically novel, but also visually engaging and scientifically accurate.
@inproceedings{Fraietta2019, author = {Fraietta, Angelo}, title = {Creating Order and Progress}, pages = {83--88}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672876}, url = {http://www.nime.org/proceedings/2019/nime2019_paper017.pdf} }
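The star-to-sound mapping described in the entry above can be illustrated with a toy function that turns catalogue fields into MIDI values; the scaling below is invented for illustration and is not the mapping used in the piece.

```python
# Toy mapping: brighter stars play louder, bluer stars play higher (values clamped to MIDI).
def star_to_note(v_magnitude: float, b_minus_v: float):
    velocity = int(max(1, min(127, 127 - (v_magnitude + 1.5) * 15)))   # mag roughly -1.5..7
    pitch = int(max(0, min(127, 96 - (b_minus_v + 0.4) * 30)))         # B-V roughly -0.4..2.0
    return pitch, velocity

# Roughly Alpha Crucis-like values (a star on the Brazilian flag): bright and blue.
print(star_to_note(0.8, -0.2))
```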
João Nogueira Tragtenberg, Filipe Calegario, Giordano Cabral, and Geber L. Ramalho. 2019. Towards the Concept of Digital Dance and Music Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 89–94. http://doi.org/10.5281/zenodo.3672878
Abstract
Download PDF DOI
This paper discusses the creation of instruments in which music is intentionally generated by dance. We introduce the conceptual framework of Digital Dance and Music Instruments (DDMI). Several DDMI have already been created, but they have been developed in isolation, and there is still no common process of ideation and development. Knowledge about Digital Musical Instruments (DMIs) and Interactive Dance Systems (IDSs) can contribute to the design of DDMI, but the former brings few contributions to the body's expressiveness, and the latter brings few references to an instrumental relationship with music. Because of those different premises, integrating both paradigms can be an arduous task for the designer of DDMI. The conceptual framework of DDMI can also serve as a bridge between DMIs and IDSs, acting as a lingua franca between both communities and facilitating the exchange of knowledge. The conceptual framework has shown itself to be a promising analytical tool for the design, development, and evaluation of new digital dance and music instruments.
@inproceedings{Tragtenberg2019, author = {Tragtenberg, João Nogueira and Calegario, Filipe and Cabral, Giordano and Ramalho, Geber L.}, title = {Towards the Concept of Digital Dance and Music Instruments}, pages = {89--94}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672878}, url = {http://www.nime.org/proceedings/2019/nime2019_paper018.pdf} }
Maros Suran Bomba and Palle Dahlstedt. 2019. Somacoustics: Interactive Body-as-Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 95–100. http://doi.org/10.5281/zenodo.3672880
Abstract
Download PDF DOI
Visitors interact with a blindfolded artist's body, the motions of which are tracked and translated into synthesized four-channel sound surrounding the participants. Through social-physical and aural interactions, they play his instrument-body, in a mutual dance. Crucial for this work has been the motion-to-sound mapping design, and the investigations of bodily interaction with lay people and with professional contact-improvisation dancers. The extra layer of social-physical interaction both constrains and inspires the participant-artist relation and the sonic exploration, and through this, his body is transformed into an instrument, and physical space is transformed into a sound-space. The project aims to explore the experience of interaction between human and technology and its impact on one's bodily perception and embodiment, as well as the relation between body and space, departing from a set of existing theories on embodiment. In the paper, its underlying aesthetics are described and discussed, as well as the sensitive motion research process behind it, and the technical implementation of the work. It is evaluated based on participant behavior and experiences and an analysis of its premiere exhibition in 2018.
@inproceedings{Bomba2019, author = {Bomba, Maros Suran and Dahlstedt, Palle}, title = {Somacoustics: Interactive Body-as-Instrument}, pages = {95--100}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672880}, url = {http://www.nime.org/proceedings/2019/nime2019_paper019.pdf} }
Nathan Turczan and Ajay Kapur. 2019. The Scale Navigator: A System for Networked Algorithmic Harmony. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 101–104. http://doi.org/10.5281/zenodo.3672882
Abstract
Download PDF DOI
The Scale Navigator is a graphical interface implementation of Dmitri Tymoczko’s scale network designed to help generate algorithmic harmony and harmonically synchronize performers in a laptop or electro-acoustic orchestra. The user manipulates the Scale Navigator to direct harmony on a chord-to-chord level and on a scale-to-scale level. In a live performance setting, the interface broadcasts control data, MIDI, and real-time notation to an ensemble of live electronic performers, sight-reading improvisers, and musical generative algorithms.
@inproceedings{Turczan2019, author = {Turczan, Nathan and Kapur, Ajay}, title = {The Scale Navigator: A System for Networked Algorithmic Harmony}, pages = {101--104}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672882}, url = {http://www.nime.org/proceedings/2019/nime2019_paper020.pdf} }
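The broadcast step described in the entry above, pushing the current harmonic state to an ensemble, can be sketched with OSC messages. The addresses, ports and message layout below are assumptions for illustration (using the python-osc library), not the Scale Navigator's actual protocol.

```python
# Sketch of broadcasting the current scale and chord to ensemble machines over OSC.
from pythonosc.udp_client import SimpleUDPClient

ENSEMBLE = [("192.168.1.11", 9000), ("192.168.1.12", 9000)]   # placeholder performer IPs
clients = [SimpleUDPClient(ip, port) for ip, port in ENSEMBLE]

def broadcast_harmony(scale_pitch_classes, chord_pitch_classes):
    """Send the harmonic state to every performer so local generators stay in sync."""
    for client in clients:
        client.send_message("/harmony/scale", scale_pitch_classes)
        client.send_message("/harmony/chord", chord_pitch_classes)

# C acoustic scale with a C7(#11) chord, as one possible harmonic state.
broadcast_harmony([0, 2, 4, 6, 7, 9, 10], [0, 4, 6, 10])
```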
Alex Michael Lucas, Miguel Ortiz, and Dr. Franziska Schroeder. 2019. Bespoke Design for Inclusive Music: The Challenges of Evaluation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 105–109. http://doi.org/10.5281/zenodo.3672884
Abstract
Download PDF DOI
In this paper, the authors describe the evaluation of a collection of bespoke knob cap designs intended to improve the ease with which a specific musician with dyskinetic cerebral palsy can operate rotary controls in a musical context. The authors highlight the importance of the performer's perspective when using design as a means of overcoming access barriers to music. Also, while the authors were not able to find an ideal solution for the musician within the confines of this study, several useful observations on the process of evaluating bespoke assistive music technology are described; observations which may prove useful to digital musical instrument designers working within the field of inclusive music.
@inproceedings{Lucas2019, author = {Lucas, Alex Michael and Ortiz, Miguel and Schroeder, Dr. Franziska}, title = {Bespoke Design for Inclusive Music: The Challenges of Evaluation}, pages = {105--109}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672884}, url = {http://www.nime.org/proceedings/2019/nime2019_paper021.pdf} }
Xiao Xiao, Grégoire Locqueville, Christophe d’Alessandro, and Boris Doval. 2019. T-Voks: the Singing and Speaking Theremin. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 110–115. http://doi.org/10.5281/zenodo.3672886
Abstract
Download PDF DOI
T-Voks is an augmented theremin that controls Voks, a performative singing synthesizer. Originally developed for control with a graphic tablet interface, Voks allows for real-time pitch and time scaling, vocal effort modification and syllable sequencing for pre-recorded voice utterances. For T-Voks the theremin’s frequency antenna modifies the output pitch of the target utterance while the amplitude antenna controls not only volume as usual but also voice quality and vocal effort. Syllabic sequencing is handled by an additional pressure sensor attached to the player’s volume-control hand. This paper presents the system architecture of T-Voks, the preparation procedure for a song, playing gestures, and practice techniques, along with musical and poetic examples across four different languages and styles.
@inproceedings{Xiao2019, author = {Xiao, Xiao and Locqueville, Grégoire and d'Alessandro, Christophe and Doval, Boris}, title = {T-Voks: the Singing and Speaking Theremin}, pages = {110--115}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672886}, url = {http://www.nime.org/proceedings/2019/nime2019_paper022.pdf} }
Hunter Brown and spencer topel. 2019. DRMMR: An Augmented Percussion Implement. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 116–121. http://doi.org/10.5281/zenodo.3672888
Abstract
Download PDF DOI
Recent developments in music technology have enabled novel timbres to be acoustically synthesized using various actuation and excitation methods. Utilizing recent work in nonlinear acoustic synthesis, we propose a transducer based augmented percussion implement entitled DRMMR. This design enables the user to sustain computer sequencer-like drum rolls at faster speeds while also enabling the user to achieve nonlinear acoustic synthesis effects. Our acoustic evaluation shows drum rolls executed by DRMMR easily exhibit greater levels of regularity, speed, and precision than comparable transducer and electromagnetic-based actuation methods. DRMMR’s nonlinear acoustic synthesis functionality also presents possibilities for new kinds of sonic interactions on the surface of drum membranes.
@inproceedings{Brown2019, author = {Brown, Hunter and spencer topel}, title = {{DRMMR}: An Augmented Percussion Implement}, pages = {116--121}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672888}, url = {http://www.nime.org/proceedings/2019/nime2019_paper023.pdf} }
Giacomo Lepri and Andrew P. McPherson. 2019. Fictional instruments, real values: discovering musical backgrounds with non-functional prototypes. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 122–127. http://doi.org/10.5281/zenodo.3672890
Abstract
Download PDF DOI
The emergence of a new technology can be considered the result of social, cultural and technical processes. Instrument designs are particularly influenced by cultural and aesthetic values linked to the specific contexts and communities that produced them. In previous work, we ran a design fiction workshop in which musicians created non-functional instrument mockups. In the current paper, we report on an online survey in which music technologists were asked to speculate on the backgrounds of the musicians who designed particular instruments. Our results showed several cues for the interpretation of the artefacts' origins, including physical features, body-instrument interactions, use of language and references to established music practices and tools. Tacit musical and cultural values were also identified based on intuitive and holistic judgments. Our discussion highlights the importance of cultural awareness and context-dependent values in the design and use of interactive musical systems.
@inproceedings{Lepri2019, author = {Lepri, Giacomo and McPherson, Andrew P.}, title = {Fictional instruments, real values: discovering musical backgrounds with non-functional prototypes}, pages = {122--127}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672890}, url = {http://www.nime.org/proceedings/2019/nime2019_paper024.pdf} }
Christopher Dewey and Jonathan P. Wakefield. 2019. Exploring the Container Metaphor for Equalisation Manipulation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 128–129. http://doi.org/10.5281/zenodo.3672892
Abstract
Download PDF DOI
This paper presents the first stage in the design and evaluation of a novel container metaphor interface for equalisation control. The prototype system harnesses the Pepper's Ghost illusion to project in mid-air a holographic data visualisation of an audio track's long-term average and real-time frequency content as a deformable shape manipulated directly via hand gestures. The system uses HTML 5, JavaScript and the Web Audio API in conjunction with a Leap Motion controller and a bespoke low-budget projection system. During subjective evaluation, users commented that the novel system was simpler and more intuitive to use than commercially established equalisation interface paradigms and most suited to creative, expressive and explorative equalisation tasks.
@inproceedings{Dewey2019, author = {Dewey, Christopher and Wakefield, Jonathan P.}, title = {Exploring the Container Metaphor for Equalisation Manipulation}, pages = {128--129}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672892}, url = {http://www.nime.org/proceedings/2019/nime2019_paper025.pdf} }
Alex Hofmann, Vasileios Chatziioannou, Sebastian Schmutzhard, Gökberk Erdogan, and Alexander Mayer. 2019. The Half-Physler: An oscillating real-time interface to a tube resonator model. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 130–133. http://doi.org/10.5281/zenodo.3672896
Abstract
Download PDF DOI
Physics-based sound synthesis allows the sound to be shaped by modifying parameters that refer to real-world properties of acoustic instruments. This paper presents a hybrid physical modeling single-reed instrument, where a virtual tube is coupled to a real mouthpiece with a sensor-equipped clarinet reed. The tube model is provided as an opcode for Csound, which runs on the low-latency embedded audio platform Bela. An actuator is connected to the audio output and the sensor-reed signal is fed back into the input of Bela. The performer can control the coupling between reed and actuator, and is also provided with a 3D-printed slider/knob interface to change parameters of the tube model in real time.
@inproceedings{Hofmann2019, author = {Hofmann, Alex and Chatziioannou, Vasileios and Schmutzhard, Sebastian and Erdogan, Gökberk and Mayer, Alexander}, title = {The Half-Physler: An oscillating real-time interface to a tube resonator model}, pages = {130--133}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672896}, url = {http://www.nime.org/proceedings/2019/nime2019_paper026.pdf} }
Peter Bussigel, Stephan Moore, and Scott Smallwood. 2019. Reanimating the Readymade. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 134–139. http://doi.org/10.5281/zenodo.3672898
Abstract
Download PDF DOI
There is a rich history of using found or “readymade” objects in music performances and sound installations. John Cage’s Water Walk, Carolee Schneemann’s Noise Bodies, and David Tudor’s Rainforest all lean on both the sonic and cultural affordances of found objects. Today, composers and sound artists continue to look at the everyday, combining readymades with microcontrollers and homemade electronics and repurposing known interfaces for their latent sonic potential. This paper gives a historical overview of work at the intersection of music and the readymade and then describes three recent sound installations/performances by the authors that further explore this space. The emphasis is on the processes involved in working with found objects: the complex, practical, and playful explorations into sound and material culture.
@inproceedings{Bussigel2019, author = {Bussigel, Peter and Moore, Stephan and Smallwood, Scott}, title = {Reanimating the Readymade}, pages = {134--139}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672898}, url = {http://www.nime.org/proceedings/2019/nime2019_paper027.pdf} }
Yian Zhang, Yinmiao Li, Daniel Chin, and Gus Xia. 2019. Adaptive Multimodal Music Learning via Interactive Haptic Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 140–145. http://doi.org/10.5281/zenodo.3672900
Abstract
Download PDF DOI
Haptic interfaces have tapped into the sense of touch to assist multimodal music learning. We have recently seen various improvements in interface design for tactile feedback and force guidance, aiming to make instrument learning more effective. However, most interfaces are still quite static; they cannot yet sense the learning progress and adjust the tutoring strategy accordingly. To solve this problem, we contribute an adaptive haptic interface based on the latest design of a haptic flute. We first adopted a clutch mechanism to enable the interface to turn the haptic control on and off flexibly in real time. The interactive tutor is then able to follow human performances and apply the “teacher force” only when the software instructs it to. Finally, we incorporated the adaptive interface into a step-by-step dynamic learning strategy. Experimental results showed that dynamic learning dramatically outperforms static learning, boosting the learning rate by 45.3% and shrinking the forgetting chance by 86%.
@inproceedings{Zhang2019, author = {Zhang, Yian and Li, Yinmiao and Chin, Daniel and Xia, Gus}, title = {Adaptive Multimodal Music Learning via Interactive Haptic Instrument}, pages = {140--145}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672900}, url = {http://www.nime.org/proceedings/2019/nime2019_paper028.pdf} }
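The clutch idea in the entry above, engaging the "teacher force" only when needed, can be caricatured as a small policy that watches the learner's recent note accuracy. The window size, thresholds and scoring below are assumptions, not the paper's learning strategy.

```python
# Toy adaptive clutch policy: engage haptic guidance only while the learner is struggling.
class AdaptiveClutch:
    def __init__(self, window=8, engage_below=0.6, release_above=0.85):
        self.recent = []                      # 1.0 = note played correctly, 0.0 = error
        self.window = window
        self.engage_below = engage_below
        self.release_above = release_above
        self.engaged = False

    def update(self, note_correct: bool) -> bool:
        """Record one note outcome and return whether 'teacher force' should be on."""
        self.recent = (self.recent + [1.0 if note_correct else 0.0])[-self.window:]
        accuracy = sum(self.recent) / len(self.recent)
        if accuracy < self.engage_below:
            self.engaged = True               # clutch on: actuator guides the fingers
        elif accuracy > self.release_above:
            self.engaged = False              # clutch off: learner plays unassisted
        return self.engaged

clutch = AdaptiveClutch()
for outcome in [True, False, False, True, False, True, True, True, True, True]:
    print(clutch.update(outcome))
```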
Fabián Sguiglia, Pauli Coton, and Fernando Toth. 2019. El mapa no es el territorio: Sensor mapping for audiovisual performances. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 146–149. http://doi.org/10.5281/zenodo.3672902
Abstract
Download PDF DOI
We present El mapa no es el territorio (MNT), a set of open source tools that facilitate the design of visual and musical mappings for interactive installations and performance pieces. MNT is being developed by a multidisciplinary group that explores gestural control of audio-visual environments and virtual instruments. Along with these tools, this paper presents two projects in which they were used, the interactive installation Memorias Migrantes and the stage performance Recorte de Jorge Cárdenas Cayendo, showing how MNT allows us to develop collaborative artworks that articulate body movement and generative audiovisual systems, and how its current version was shaped by these successive implementations.
@inproceedings{Sguiglia2019, author = {Sguiglia, Fabián and Coton, Pauli and Toth, Fernando}, title = {El mapa no es el territorio: Sensor mapping for audiovisual performances}, pages = {146--149}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672902}, url = {http://www.nime.org/proceedings/2019/nime2019_paper029.pdf} }
Vanessa Yaremchuk, Carolina Brum Medeiros, and Marcelo Wanderley. 2019. Small Dynamic Neural Networks for Gesture Classification with The Rulers (a Digital Musical Instrument). Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 150–155. http://doi.org/10.5281/zenodo.3672904
Abstract
Download PDF DOI
The Rulers is a Digital Musical Instrument with 7 metal beams, each of which is fixed at one end. It uses infrared sensors, Hall sensors, and strain gauges to estimate deflection. These sensors each perform better or worse depending on the class of gesture the user is making, motivating sensor fusion practices. Residuals between Kalman filters and sensor output are calculated and used as input to a recurrent neural network, which outputs a classification that determines which processing parameters and sensor measurements are employed. Multiple instances (30) of layer-recurrent neural networks with a single hidden layer varying in size from 1 to 10 processing units were trained and tested on previously unseen data. The best-performing neural network has only 3 hidden units and a sufficiently low error rate to be a good candidate for gesture classification. This paper demonstrates that dynamic networks outperform feedforward networks for this type of gesture classification, that a small network can handle a problem of this level of complexity, that recurrent networks of this size are fast enough for real-time applications of this type, and that it is important to train multiple instances of each network architecture and select the best-performing one from within that set.
@inproceedings{Yaremchuk2019, author = {Yaremchuk, Vanessa and Medeiros, Carolina Brum and Wanderley, Marcelo}, title = {Small Dynamic Neural Networks for Gesture Classification with The Rulers (a Digital Musical Instrument)}, pages = {150--155}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672904}, url = {http://www.nime.org/proceedings/2019/nime2019_paper030.pdf} }
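The pipeline described in the entry above, Kalman-filter residuals feeding a small recurrent classifier, can be sketched in a few lines. The filter below is a one-dimensional constant-position model and the recurrent layer uses random, untrained weights, so this only shows the shape of the computation, not the trained system from the paper.

```python
# Shape-of-the-pipeline sketch: Kalman residuals -> tiny recurrent layer -> class index.
import numpy as np

def kalman_residuals(z, q=1e-3, r=1e-1):
    """Constant-position Kalman filter; returns measurement-minus-prediction residuals."""
    x, p, res = 0.0, 1.0, []
    for zi in z:
        p += q                       # predict
        res.append(zi - x)           # innovation (the residual used as a feature)
        k = p / (p + r)              # update
        x += k * (zi - x)
        p *= (1.0 - k)
    return np.array(res)

rng = np.random.default_rng(1)
Wxh = rng.normal(size=(3, 1))        # 3 hidden units, as in the best network reported
Whh = rng.normal(size=(3, 3))
Why = rng.normal(size=(4, 3))        # 4 illustrative gesture classes (assumed count)

def recurrent_classify(residuals):
    h = np.zeros(3)
    for r_t in residuals:
        h = np.tanh(Wxh[:, 0] * r_t + Whh @ h)
    return int(np.argmax(Why @ h))

signal = np.sin(np.linspace(0, 3, 50)) + rng.normal(scale=0.05, size=50)
print(recurrent_classify(kalman_residuals(signal)))
```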
Palle Dahlstedt and Ami Skånberg Dahlstedt. 2019. OtoKin: Mapping for Sound Space Exploration through Dance Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 156–161. http://doi.org/10.5281/zenodo.3672906
Abstract
Download PDF DOI
We present a work where a space of realtime synthesized sounds is explored through ear (Oto) and movement (Kinesis) by one or two dancers. Movement is tracked and mapped through extensive pre-processing to a high-dimensional acoustic space, using a many-to-many mapping, so that every small body movement matters. Designed for improvised exploration, it works as both performance and installation. Through this re-translation of bodily action, position, and posture into infinite-dimensional sound texture and timbre, the performers are invited to re-think and re-learn position and posture as sound, effort as gesture, and timbre as a bodily construction. The sound space can be shared by two people, with added modes of presence, proximity and interaction. The aesthetic background and technical implementation of the system are described, and the system is evaluated based on a number of performances, workshops and installation exhibits. Finally, the aesthetic and choreographic motivations behind the performance narrative are explained, and discussed in the light of the design of the sonification.
@inproceedings{Dahlstedtb2019, author = {Dahlstedt, Palle and Dahlstedt, Ami Skånberg}, title = {OtoKin: Mapping for Sound Space Exploration through Dance Improvisation}, pages = {156--161}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672906}, url = {http://www.nime.org/proceedings/2019/nime2019_paper031.pdf} }
Joe Wright and James Dooley. 2019. On the Inclusivity of Constraint: Creative Appropriation in Instruments for Neurodiverse Children and Young People. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 162–167. http://doi.org/10.5281/zenodo.3672908
Abstract
Download PDF DOI
Taking inspiration from research into deliberately constrained musical technologies and the emergence of neurodiverse, child-led musical groups such as the Artism Ensemble, the interplay between design-constraints, inclusivity and appropriation is explored. A small scale review covers systems from two prominent UK-based companies, and two iterations of a new prototype system that were developed in collaboration with a small group of young people on the autistic spectrum. Amongst these technologies, the aspects of musical experience that are made accessible differ with respect to the extent and nature of each system’s constraints. It is argued that the design-constraints of the new prototype system facilitated the diverse playing styles and techniques observed during its development. Based on these observations, we propose that deliberately constrained musical instruments may be one way of providing more opportunities for the emergence of personal practices and preferences in neurodiverse groups of children and young people, and that this is a fitting subject for further research.
@inproceedings{Wright2019, author = {Wright, Joe and Dooley, James}, title = {On the Inclusivity of Constraint: Creative Appropriation in Instruments for Neurodiverse Children and Young People}, pages = {162--167}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672908}, url = {http://www.nime.org/proceedings/2019/nime2019_paper032.pdf} }
Isabela Corintha Almeida, Giordano Cabral, and Professor Gilberto Bernardes Almeida. 2019. AMIGO: An Assistive Musical Instrument to Engage, Create and Learn Music. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 168–169. http://doi.org/10.5281/zenodo.3672910
Abstract
Download PDF DOI
We present AMIGO, a real-time computer music system that assists novice users in the composition process through guided musical improvisation. The system consists of 1) a computational analysis-generation algorithm, which not only formalizes musical principles from examples, but also guides the user in selecting note sequences; 2) a MIDI keyboard controller with an integrated LED strip, which provides visual feedback to the user; and 3) a real-time music notation display, which shows the generated output. Ultimately, AMIGO allows the intuitive creation of new musical structures and the acquisition of Western music formalisms, such as musical notation.
@inproceedings{Almeida2019, author = {Almeida, Isabela Corintha and Cabral, Giordano and Almeida, Professor Gilberto Bernardes}, title = {{AMIGO}: An Assistive Musical Instrument to Engage, Create and Learn Music}, pages = {168--169}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672910}, url = {http://www.nime.org/proceedings/2019/nime2019_paper033.pdf} }
Cristiano Figueiró, Guilherme Soares, and Bruno Rohde. 2019. ESMERIL — An interactive audio player and composition system for collaborative experimental music netlabels. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 170–173. http://doi.org/10.5281/zenodo.3672912
Abstract
Download PDF DOI
ESMERIL is an application developed for Android with a toolchain based on Pure Data and openFrameworks (with the Ofelia library). The application enables music creation in a specific expanded format: four separate mono tracks, each one able to manipulate up to eight audio samples per channel. It also works as a performance instrument that stimulates collaborative remixing of compositions made of scored interaction gestures called “scenes”. The interface also aims to be a platform for exchanging these sample packs as artistic releases, a format similar to the popular idea of an “album”, but organized as four-channel packs of samples and interaction scores. It uses an adaptive audio slicing mechanism, and its interaction design is based on multi-touch screen features. A timing sequencer enhances the interaction between pre-set sequences (the “scenes”) and screen manipulation: scratching, expanding and moving graphical sound waves. This paper describes the graphical interface features, some development decisions made so far, and perspectives for its continuation.
@inproceedings{Figueiró2019, author = {Figueiró, Cristiano and Soares, Guilherme and Rohde, Bruno}, title = {{ESMERIL} --- An interactive audio player and composition system for collaborative experimental music netlabels}, pages = {170--173}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672912}, url = {http://www.nime.org/proceedings/2019/nime2019_paper034.pdf} }
Aline Weber, Lucas Nunes Alegre, Jim Torresen, and Bruno C. da Silva. 2019. Parameterized Melody Generation with Autoencoders and Temporally-Consistent Noise. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 174–179. http://doi.org/10.5281/zenodo.3672914
Abstract
Download PDF DOI
We introduce a machine learning technique to autonomously generate novel melodies that are variations of an arbitrary base melody. These are produced by a neural network that ensures that (with high probability) the melodic and rhythmic structure of the new melody is consistent with a given set of sample songs. We train a Variational Autoencoder network to identify a low-dimensional set of variables that allows for the compression and representation of sample songs. By perturbing these variables with Perlin Noise—a temporally-consistent parameterized noise function—it is possible to generate smoothly-changing novel melodies. We show that (1) by regulating the amount of noise, one can specify how much of the base song will be preserved; and (2) there is a direct correlation between the noise signal and the differences between the statistical properties of novel melodies and the original one. Users can interpret the controllable noise as a type of "creativity knob": the higher it is, the more leeway the network has to generate significantly different melodies. We present a physical prototype that allows musicians to use a keyboard to provide base melodies and to adjust the network’s "creativity knobs" to regulate in real-time the process that proposes new melody ideas.
@inproceedings{Weber2019, author = {Weber, Aline and Alegre, Lucas Nunes and Torresen, Jim and da Silva, Bruno C.}, title = {Parameterized Melody Generation with Autoencoders and Temporally-Consistent Noise}, pages = {174--179}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672914}, url = {http://www.nime.org/proceedings/2019/nime2019_paper035.pdf} }
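The latent-space perturbation described in the entry above can be sketched with a stand-in decoder and a simple temporally consistent noise function (value noise with cosine interpolation, standing in for Perlin noise). Nothing below comes from the paper's trained model; the `creativity` value plays the role of the knob the authors describe.

```python
# Sketch: perturb a latent melody code with smooth noise and decode each step to a pitch.
import numpy as np

rng = np.random.default_rng(42)

def smooth_noise(n_steps, amount, n_dims, knots_every=16):
    """Temporally consistent noise: random knots, cosine-interpolated between them."""
    n_knots = n_steps // knots_every + 2
    knots = rng.normal(scale=amount, size=(n_knots, n_dims))
    out = np.empty((n_steps, n_dims))
    for t in range(n_steps):
        i, frac = divmod(t / knots_every, 1.0)
        w = (1 - np.cos(np.pi * frac)) / 2            # cosine ease between knots
        out[t] = (1 - w) * knots[int(i)] + w * knots[int(i) + 1]
    return out

def decode(z):                                        # placeholder for the VAE decoder
    return 60 + int(round(4 * z[:4].sum()))           # maps a latent vector to a MIDI pitch

base_z = rng.normal(size=8)                           # latent code of the base melody
creativity = 0.5                                      # the "creativity knob"
melody = [decode(base_z + dz) for dz in smooth_noise(32, creativity, base_z.size)]
print(melody)
```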
Atau Tanaka, Balandino Di Donato, Michael Zbyszynski, and Geert Roks. 2019. Designing Gestures for Continuous Sonic Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 180–185. http://doi.org/10.5281/zenodo.3672916
Abstract
Download PDF DOI
This paper presents a system that allows users to quickly try different ways to train neural networks and temporal modeling techniques to associate arm gestures with time-varying sound. We created a software framework for this, designed three interactive sounds, and presented them to participants in a workshop-based study. We build upon previous work in sound-tracing and mapping-by-demonstration, asking the participants to design gestures with which to perform the given sounds using a multimodal, inertial measurement (IMU) and muscle sensing (EMG) device. We presented the user with four techniques for associating sensor input with synthesizer parameter output. Two were classical techniques from the literature, and two proposed different ways to capture dynamic gesture in a neural network. These four techniques were: 1) a Static Position regression training procedure, 2) a Hidden Markov based temporal modeler, 3) Whole Gesture capture to a neural network, and 4) a Windowed method using the position-based procedure on the fly during the performance of a dynamic gesture. Our results show trade-offs between accurate, predictable reproduction of the source sounds and exploration of the gesture-sound space. Several of the users were attracted to our new windowed method for capturing gesture anchor points on the fly as training data for neural-network-based regression. This paper will be of interest to musicians interested in going from sound design to gesture design and offers a workflow for quickly trying different mapping-by-demonstration techniques.
@inproceedings{Tanaka2019, author = {Tanaka, Atau and Di Donato, Balandino and Zbyszynski, Michael and Roks, Geert}, title = {Designing Gestures for Continuous Sonic Interaction}, pages = {180--185}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672916}, url = {http://www.nime.org/proceedings/2019/nime2019_paper036.pdf} }
Cagri Erdem, Katja Henriksen Schia, and Alexander Refsum Jensenius. 2019. Vrengt: A Shared Body-Machine Instrument for Music-Dance Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 186–191. http://doi.org/10.5281/zenodo.3672918
Abstract
Download PDF DOI
This paper describes the process of developing a shared instrument for music–dance performance, with a particular focus on exploring the boundaries between standstill vs motion, and silence vs sound. The piece Vrengt grew from the idea of enabling a true partnership between a musician and a dancer, developing an instrument that would allow for active co-performance. Using a participatory design approach, we worked with sonification as a tool for systematically exploring the dancer’s bodily expressions. The exploration used a "spatiotemporal matrix", with a particular focus on sonic microinteraction. In the final performance, two Myo armbands were used for capturing muscle activity of the arm and leg of the dancer, together with a wireless headset microphone capturing the sound of breathing. In the paper we reflect on multi-user instrument paradigms, discuss our approach to creating a shared instrument using sonification as a tool for the sound design, and reflect on the performers’ subjective evaluation of the instrument.
@inproceedings{Erdem2019, author = {Erdem, Cagri and Schia, Katja Henriksen and Jensenius, Alexander Refsum}, title = {Vrengt: A Shared Body-Machine Instrument for Music-Dance Performance}, pages = {186--191}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672918}, url = {http://www.nime.org/proceedings/2019/nime2019_paper037.pdf} }
Samuel Thompson Parke-Wolfe, Hugo Scurto, and Rebecca Fiebrink. 2019. Sound Control: Supporting Custom Musical Interface Design for Children with Disabilities. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 192–197. http://doi.org/10.5281/zenodo.3672920
Abstract
Download PDF DOI
We have built a new software toolkit that enables music therapists and teachers to create custom digital musical interfaces for children with diverse disabilities. It was designed in collaboration with music therapists, teachers, and children. It uses interactive machine learning to create new sensor- and vision-based musical interfaces using demonstrations of actions and sound, making interface building fast and accessible to people without programming or engineering expertise. Interviews with two music therapy and education professionals who have used the software extensively illustrate how richly customised, sensor-based interfaces can be used in music therapy contexts; they also reveal how properties of input devices, music-making approaches, and mapping techniques can support a variety of interaction styles and therapy goals.
@inproceedings{ParkeWolfe2019, author = {Parke-Wolfe, Samuel Thompson and Scurto, Hugo and Fiebrink, Rebecca}, title = {Sound Control: Supporting Custom Musical Interface Design for Children with Disabilities}, pages = {192--197}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672920}, url = {http://www.nime.org/proceedings/2019/nime2019_paper038.pdf} }
Oliver Hödl. 2019. ’Blending Dimensions’ when Composing for DMI and Symphonic Orchestra. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 198–203. http://doi.org/10.5281/zenodo.3672922
Abstract
Download PDF DOI
With a new digital music instrument (DMI), the interface itself, the sound generation, the composition, and the performance are often closely related and even intrinsically linked with each other. Similarly, the instrument designer, composer, and performer are often the same person. The Academic Festival Overture is a new piece of music for the DMI Trombosonic and symphonic orchestra written by a composer who had no prior experience with the instrument. The piece underwent the phases of a composition competition, rehearsals, a music video production, and a public live performance. This whole process was evaluated by reflecting on the experience of three key stakeholders: the composer, the conductor, and the instrument designer as performer. ‘Blending dimensions’ of these stakeholders and decoupling the composition from the instrument designer inspired the newly involved composer to completely rethink the DMI’s interaction and sound concept. Thus, deliberately avoiding an early collaboration between a DMI designer and a composer offers the potential for new inspiration, while at the same time posing the challenge of later seeking such a collaboration to clarify possible misunderstandings and improvements.
@inproceedings{Hödl2019, author = {Hödl, Oliver}, title = {'Blending Dimensions' when Composing for {DMI} and Symphonic Orchestra}, pages = {198--203}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672922}, url = {http://www.nime.org/proceedings/2019/nime2019_paper039.pdf} }
behzad haki and Sergi Jorda. 2019. A Bassline Generation System Based on Sequence-to-Sequence Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 204–209. http://doi.org/10.5281/zenodo.3672928
Abstract
Download PDF DOI
This paper presents a detailed explanation of a system generating basslines that are stylistically and rhythmically interlocked with a provided audio drum loop. The proposed system is based on a natural language processing technique: word-based sequence-to-sequence learning using LSTM units. The novelty of the proposed method lies in the fact that the system is not reliant on a voice-by-voice transcription of drums; instead, in this method, a drum representation is used as an input sequence from which a translated bassline is obtained at the output. The drum representation consists of fixed-size sequences of onsets detected from a 2-bar audio drum loop in eight different frequency bands. The basslines generated by this method consist of pitched notes with different durations. The proposed system was trained on two distinct datasets compiled for this project by the authors. Each dataset contains a variety of 2-bar drum loops with annotated basslines from two different styles of dance music: House and Soca. A listening experiment based on the system revealed that the proposed system is capable of generating basslines that are interesting and rhythmically well interlocked with the drum loops from which they were generated.
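A minimal sketch of such a word-based sequence-to-sequence setup is shown below, assuming PyTorch and made-up vocabulary sizes for drum-onset and bassline tokens; it is not the authors' model, only the architectural pattern the abstract describes.

import torch
import torch.nn as nn

class Drums2Bass(nn.Module):
    """Sketch of a word-based sequence-to-sequence model: an LSTM encoder
    reads per-step drum-onset words (8 frequency bands -> a token vocabulary)
    and an LSTM decoder emits bassline tokens (pitch/duration words)."""
    def __init__(self, drum_vocab=256, bass_vocab=128, emb=64, hidden=256):
        super().__init__()
        self.drum_emb = nn.Embedding(drum_vocab, emb)
        self.bass_emb = nn.Embedding(bass_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, bass_vocab)

    def forward(self, drum_tokens, bass_tokens):
        _, state = self.encoder(self.drum_emb(drum_tokens))
        dec, _ = self.decoder(self.bass_emb(bass_tokens), state)
        return self.out(dec)                      # bassline logits per step

# toy batch: 32 sixteenth-note steps per 2-bar loop, batch of 4 loops
drums = torch.randint(0, 256, (4, 32))
bass_in = torch.randint(0, 128, (4, 32))
logits = Drums2Bass()(drums, bass_in)             # shape (4, 32, 128)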
@inproceedings{haki2019, author = {behzad haki and Jorda, Sergi}, title = {A Bassline Generation System Based on Sequence-to-Sequence Learning}, pages = {204--209}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672928}, url = {http://www.nime.org/proceedings/2019/nime2019_paper040.pdf} }
Lloyd May and spencer topel. 2019. BLIKSEM: An Acoustic Synthesis Fuzz Pedal. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 210–215. http://doi.org/10.5281/zenodo.3672930
Abstract
Download PDF DOI
This paper presents a novel physical fuzz pedal effect system named BLIKSEM. Our approach applies previous work in nonlinear acoustic synthesis via a driven cantilever soundboard configuration for the purpose of generating fuzz pedal-like effects as well as a variety of novel audio effects. Following a presentation of our pedal design, we compare the performance of our system with various classic and contemporary fuzz pedals using an electric guitar. Our results show that BLIKSEM is capable of generating signals that approximate the timbre and dynamic behaviors of conventional fuzz pedals, as well as offering new mechanisms for expressive interaction and a range of new effects in different configurations.
@inproceedings{May2019, author = {May, Lloyd and spencer topel}, title = {{BLIKSEM}: An Acoustic Synthesis Fuzz Pedal}, pages = {210--215}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672930}, url = {http://www.nime.org/proceedings/2019/nime2019_paper041.pdf} }
Anna Xambó, Sigurd Saue, Alexander Refsum Jensenius, Robin Støckert, and Oeyvind Brandtsegg. 2019. NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 216–221. http://doi.org/10.5281/zenodo.3672932
Abstract
Download PDF DOI
In this paper, we present a workshop of physical computing applied to NIME design based on science, technology, engineering, arts, and mathematics (STEAM) education. The workshop is designed for master students with multidisciplinary backgrounds. They are encouraged to work in teams from two university campuses remotely connected through a portal space. The components of the workshop are prototyping, music improvisation and reflective practice. We report the results of this course, which show a positive impact on the students’ confidence in prototyping and intention to continue in STEM fields. We also present the challenges and lessons learned on how to improve the teaching of hybrid technologies and programming skills in an interdisciplinary context across two locations, with the aim of satisfying both beginners and experts. We conclude with a broader discussion on how these new pedagogical perspectives can improve NIME-related courses.
@inproceedings{Xambó2019, author = {Xambó, Anna and Saue, Sigurd and Jensenius, Alexander Refsum and Støckert, Robin and Brandtsegg, Oeyvind}, title = {{NIME} Prototyping in Teams: A Participatory Approach to Teaching Physical Computing}, pages = {216--221}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672932}, url = {http://www.nime.org/proceedings/2019/nime2019_paper042.pdf} }
Eduardo Meneses, Johnty Wang, Sergio Freire, and Marcelo Wanderley. 2019. A Comparison of Open-Source Linux Frameworks for an Augmented Musical Instrument Implementation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 222–227. http://doi.org/10.5281/zenodo.3672934
Abstract
Download PDF DOI
The increasing availability of accessible sensor technologies, single board computers, and prototyping platforms has resulted in a growing number of frameworks explicitly geared towards the design and construction of Digital and Augmented Musical Instruments. Developing such instruments can be facilitated by choosing the most suitable framework for each project. In the process of selecting a framework for implementing an augmented guitar instrument, we have tested three Linux-based open-source platforms that have been designed for real-time sensor interfacing, audio processing, and synthesis. Factors such as acquisition latency, workload measurements, documentation, and software implementation are compared and discussed to determine the suitability of each environment for our particular project.
@inproceedings{Meneses2019, author = {Meneses, Eduardo and Wang, Johnty and Freire, Sergio and Wanderley, Marcelo}, title = {A Comparison of Open-Source Linux Frameworks for an Augmented Musical Instrument Implementation}, pages = {222--227}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672934}, url = {http://www.nime.org/proceedings/2019/nime2019_paper043.pdf} }
Martin Matus Lerner. 2019. Latin American NIMEs: Electronic Musical Instruments and Experimental Sound Devices in the Twentieth Century. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 228–233. http://doi.org/10.5281/zenodo.3672936
Abstract
Download PDF DOI
During the twentieth century several Latin American nations (such as Argentina, Brazil, Chile, Cuba and Mexico) originated relevant antecedents in the NIME field. Their innovative authors interrelated musical composition, lutherie, electronics and computing. This paper provides a panoramic view of their original electronic instruments and experimental sound practices, as well as a perspective on them in relation to other inventions around the world.
@inproceedings{MatusLerner2019, author = {Lerner, Martin Matus}, title = {Latin American {NIME}s: Electronic Musical Instruments and Experimental Sound Devices in the Twentieth Century}, pages = {228--233}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672936}, url = {http://www.nime.org/proceedings/2019/nime2019_paper044.pdf} }
Sarah Reid, Ryan Gaston, and Ajay Kapur. 2019. Perspectives on Time: performance practice, mapping strategies, & composition with MIGSI. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 234–239. http://doi.org/10.5281/zenodo.3672940
Abstract
Download PDF DOI
This paper presents four years of development in performance and compositional practice on an electronically augmented trumpet called MIGSI. Discussion is focused on conceptual and technical approaches to data mapping, sonic interaction, and composition that are inspired by philosophical questions of time: what is now? Is time linear or multi-directional? Can we operate in multiple modes of temporal perception simultaneously? A number of mapping strategies are presented which explore these ideas through the manipulation of temporal separation between user input and sonic output. In addition to presenting technical progress, this paper will introduce a body of original repertoire composed for MIGSI, in order to illustrate how these tools and approaches have been utilized in live performance and how they may find use in other creative applications.
@inproceedings{Reid2019, author = {Reid, Sarah and Gaston, Ryan and Kapur, Ajay}, title = {Perspectives on Time: performance practice, mapping strategies, \& composition with {MIGSI}}, pages = {234--239}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672940}, url = {http://www.nime.org/proceedings/2019/nime2019_paper045.pdf} }
Natacha Lamounier, Luiz Naveda, and Adriana Bicalho. 2019. The design of technological interfaces for interactions between music, dance and garment movements. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 240–245. http://doi.org/10.5281/zenodo.3672942
Abstract
Download PDF DOI
The present work explores the design of multimodal interfaces that capture hand gestures and promote interactions between dance, music and a wearable technological garment. We aim to study the design strategies used to interface music with other domains of performance, in particular the application of wearable technologies in music performance. The project describes the development of the music and wearable interfaces, which comprise a hand interface and a mechanical actuator attached to the dancer’s dress. The performance resulting from the study is inspired by butoh dance and attempts to add a technological poetics of music-dance-wearable interactions to the traditional dialogue between dance and music.
@inproceedings{Lamounier2019, author = {Lamounier, Natacha and Naveda, Luiz and Bicalho, Adriana}, title = {The design of technological interfaces for interactions between music, dance and garment movements}, pages = {240--245}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672942}, url = {http://www.nime.org/proceedings/2019/nime2019_paper046.pdf} }
Ximena Alarcon Diaz, Victor Evaristo Gonzalez Sanchez, and Cagri Erdem. 2019. INTIMAL: Walking to Find Place, Breathing to Feel Presence. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 246–249. http://doi.org/10.5281/zenodo.3672944
Abstract
Download PDF DOI
INTIMAL is a physical virtual embodied system for relational listening that integrates body movement, oral archives, and voice expression through telematic improvisatory performance in migratory contexts. It has been informed by nine Colombian migrant women who express their migratory journeys through free body movement, voice and spoken word improvisation. These improvisations have been recorded using Motion Capture, in order to develop interfaces for co-located and telematic interactions for the sharing of narratives of migration. In this paper, using data from the Motion Capture experiments, we explore two specific kinds of movement data from the improvisers: displacements in space (walking, rotating) and breathing. Here we envision how correlations between walking and breathing might be further studied to implement interfaces that help make connections between place and the feeling of presence for people in-between distant locations.
@inproceedings{AlarconDiaz2019, author = {Diaz, Ximena Alarcon and Sanchez, Victor Evaristo Gonzalez and Erdem, Cagri}, title = {{INTIMAL}: Walking to Find Place, Breathing to Feel Presence}, pages = {246--249}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672944}, url = {http://www.nime.org/proceedings/2019/nime2019_paper047.pdf} }
Disha Sardana, Woohun Joo, Ivica Ico Bukvic, and Greg Earle. 2019. Introducing Locus: a NIME for Immersive Exocentric Aural Environments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 250–255. http://doi.org/10.5281/zenodo.3672946
Abstract
Download PDF DOI
Locus is a NIME designed specifically for an interactive, immersive high density loudspeaker array environment. The system is based on a pointing mechanism to interact with a sound scene comprising 128 speakers. Users can point anywhere to interact with the system, and the spatial interaction utilizes motion capture, so it does not require a screen. Instead, it is completely controlled via hand gestures using a glove that is populated with motion-tracking markers. The main purpose of this system is to offer intuitive physical interaction with the perimeter-based spatial sound sources. Further, its goal is to minimize user-worn technology and thereby enhance freedom of motion by utilizing environmental sensing devices, such as motion capture cameras or infrared sensors. The ensuing creativity-enabling technology is applicable to a broad array of possible scenarios, from researching the limits of human spatial hearing perception to facilitating learning and artistic performances, including dance. In this paper, we describe our NIME design and implementation, its preliminary assessment, and offer a Unity-based toolkit to facilitate its broader deployment and adoption.
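Purely as an illustration of pointing-based spatial interaction (not how Locus is actually implemented), this sketch computes per-speaker gains from a hand position and pointing direction over a hypothetical ring of 128 loudspeakers; the geometry and the sharpness parameter are assumptions.

import numpy as np

def speaker_gains(hand_pos, pointing_dir, speaker_pos, sharpness=8.0):
    """Toy exocentric pointing model: gain for each loudspeaker based on
    how closely the pointing ray from the hand aligns with the direction
    to that speaker (higher 'sharpness' = narrower beam)."""
    d = speaker_pos - hand_pos                    # vectors hand -> speakers
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    ray = pointing_dir / np.linalg.norm(pointing_dir)
    alignment = np.clip(d @ ray, 0.0, 1.0)        # cosine of pointing error
    gains = alignment ** sharpness
    return gains / (gains.sum() + 1e-9)           # normalize to unit sum

# stand-in layout: ring of 128 speakers at 5 m radius, 2 m height
angles = np.linspace(0, 2 * np.pi, 128, endpoint=False)
speakers = np.stack([5 * np.cos(angles), 5 * np.sin(angles),
                     np.full(128, 2.0)], axis=1)
print(speaker_gains(np.zeros(3), np.array([1.0, 0.2, 0.0]), speakers).argmax())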
@inproceedings{Sardana2019, author = {Sardana, Disha and Joo, Woohun and Bukvic, Ivica Ico and Earle, Greg}, title = {Introducing Locus: a {NIME} for Immersive Exocentric Aural Environments}, pages = {250--255}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672946}, url = {http://www.nime.org/proceedings/2019/nime2019_paper048.pdf} }
Echo Ho, Prof. Dr. Phil. Alberto de Campo, and Hannes Hoelzl. 2019. The SlowQin: An Interdisciplinary Approach to reinventing the Guqin. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 256–259. http://doi.org/10.5281/zenodo.3672948
Abstract
Download PDF DOI
This paper presents an ongoing process of examining and reinventing the Guqin, to forge a contemporary engagement with this unique traditional Chinese string instrument. The SlowQin is both a hybrid resemblance of the Guqin and a fully functioning wireless interface for interacting with computer software. It has been developed and performed during the last decade. Instead of aiming for virtuosic perfection of playing the instrument, SlowQin emphasizes the openness for continuously rethinking and reinventing the Guqin’s possibilities. Through a combination of conceptual work and practical production, Echo Ho’s SlowQin project works as an experimental twist on Historically Informed Performance, with the motivation of conveying artistic gestures that tackle philosophical, ideological, and socio-political subjects embedded in our living environment in globalised conditions. In particular, this paper touches on the history of the Guqin, gives an overview of the technical design concepts of the instrument, and discusses the aesthetic approaches of the SlowQin performances that have been realised so far.
@inproceedings{Ho2019, author = {Ho, Echo and de Campo, Prof. Dr. Phil. Alberto and Hoelzl, Hannes}, title = {The SlowQin: An Interdisciplinary Approach to reinventing the Guqin}, pages = {256--259}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672948}, url = {http://www.nime.org/proceedings/2019/nime2019_paper049.pdf} }
Charles Patrick Martin and Jim Torresen. 2019. An Interactive Musical Prediction System with Mixture Density Recurrent Neural Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 260–265. http://doi.org/10.5281/zenodo.3672952
Abstract
Download PDF DOI
This paper is about creating digital musical instruments where a predictive neural network model is integrated into the interactive system. Rather than predicting symbolic music (e.g., MIDI notes), we suggest that predicting future control data from the user and precise temporal information can lead to new and interesting interactive possibilities. We propose that a mixture density recurrent neural network (MDRNN) is an appropriate model for this task. The predictions can be used to fill-in control data when the user stops performing, or as a kind of filter on the user’s own input. We present an interactive MDRNN prediction server that allows rapid prototyping of new NIMEs featuring predictive musical interaction by recording datasets, training MDRNN models, and experimenting with interaction modes. We illustrate our system with several example NIMEs applying this idea. Our evaluation shows that real-time predictive interaction is viable even on single-board computers and that small models are appropriate for small datasets.
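The sketch below shows, under assumed dimensions and in PyTorch rather than the authors' prediction server, what a mixture density recurrent network looks like: an LSTM whose output parameterizes a Gaussian mixture over the next control frame, from which a prediction is sampled.

import torch
import torch.nn as nn

class MDRNN(nn.Module):
    """Minimal mixture density RNN: an LSTM predicts, for the next control
    frame, a mixture of Gaussians over e.g. (dx, dy, dt) control data."""
    def __init__(self, dims=3, hidden=64, mixes=5):
        super().__init__()
        self.dims, self.mixes = dims, mixes
        self.rnn = nn.LSTM(dims, hidden, batch_first=True)
        self.head = nn.Linear(hidden, mixes * (1 + 2 * dims))  # pi, mu, sigma

    def forward(self, x, state=None):
        h, state = self.rnn(x, state)
        p = self.head(h[:, -1])                    # use the last time step
        pi, mu, log_sigma = torch.split(
            p, [self.mixes, self.mixes * self.dims, self.mixes * self.dims], dim=-1)
        return (torch.softmax(pi, -1),
                mu.view(-1, self.mixes, self.dims),
                log_sigma.view(-1, self.mixes, self.dims).exp(), state)

def sample(pi, mu, sigma):
    """Draw the next control frame from the predicted mixture."""
    k = torch.multinomial(pi, 1).squeeze(-1)       # pick a mixture component
    idx = k.view(-1, 1, 1).expand(-1, 1, mu.shape[-1])
    m = mu.gather(1, idx).squeeze(1)
    s = sigma.gather(1, idx).squeeze(1)
    return m + s * torch.randn_like(m)

model = MDRNN()
pi, mu, sigma, _ = model(torch.randn(1, 10, 3))    # 10 past control frames
print(sample(pi, mu, sigma))                        # predicted next frame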
@inproceedings{Martin2019, author = {Martin, Charles Patrick and Torresen, Jim}, title = {An Interactive Musical Prediction System with Mixture Density Recurrent Neural Networks}, pages = {260--265}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672952}, url = {http://www.nime.org/proceedings/2019/nime2019_paper050.pdf} }
Nicolas Bazoge, Ronan Gaugne, Florian Nouviale, Valérie Gouranton, and Bruno Bossis. 2019. Expressive potentials of motion capture in musical performance. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 266–271. http://doi.org/10.5281/zenodo.3672954
Abstract
Download PDF DOI
The paper presents the electronic music performance project Vis Insita, covering the design of experimental instrumental interfaces based on optical motion capture technology with passive infrared markers (MoCap), and the analysis of their use in a real scenic presentation context. Because of MoCap’s predisposition to capture the movements of the body, much research and many musical applications in the performing arts concern dance or the sonification of gesture. For our research, we wanted to move away from capturing the human body in order to analyse the possibilities of a kinetic object handled by a performer, both in terms of musical expression and in the broader context of a multimodal scenic interpretation.
@inproceedings{Bazoge2019, author = {Bazoge, Nicolas and Gaugne, Ronan and Nouviale, Florian and Gouranton, Valérie and Bossis, Bruno}, title = {Expressive potentials of motion capture in musical performance}, pages = {266--271}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672954}, url = {http://www.nime.org/proceedings/2019/nime2019_paper051.pdf} }
Akito Van Troyer and Rebecca Kleinberger. 2019. From Mondrian to Modular Synth: Rendering NIME using Generative Adversarial Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 272–277. http://doi.org/10.5281/zenodo.3672956
Abstract
Download PDF DOI
This paper explores the potential of image-to-image translation techniques in aiding the design of new hardware-based musical interfaces such as MIDI keyboards, grid-based controllers, drum machines, and analog modular synthesizers. We collected an extensive image database of such interfaces and implemented image-to-image translation techniques using variants of Generative Adversarial Networks. The created models learn the mapping between input and output images using a training set of either paired or unpaired images. We qualitatively assess the visual outcomes of three image-to-image translation models: reconstructing interfaces from edge maps, and collection style transfers based on two image sets: visuals of mosaic tile patterns and geometric abstract two-dimensional art. This paper aims to demonstrate that synthesizing interface layouts based on image-to-image translation techniques can yield insights for researchers, musicians, music technology industrial designers, and the broader NIME community.
@inproceedings{VanTroyer2019, author = {Troyer, Akito Van and Kleinberger, Rebecca}, title = {From Mondrian to Modular Synth: Rendering {NIME} using Generative Adversarial Networks}, pages = {272--277}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672956}, url = {http://www.nime.org/proceedings/2019/nime2019_paper052.pdf} }
Laurel Pardue, Kurijn Buys, Dan Overholt, Andrew P. McPherson, and Michael Edinger. 2019. Separating sound from source: sonic transformation of the violin through electrodynamic pickups and acoustic actuation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 278–283. http://doi.org/10.5281/zenodo.3672958
Abstract
Download PDF DOI
When designing an augmented acoustic instrument, it is often of interest to retain an instrument’s sound quality and nuanced response while leveraging the richness of digital synthesis. Digital audio has traditionally been generated through speakers, separating sound generation from the instrument itself, or by adding an actuator within the instrument’s resonating body, imparting new sounds along with the original. We offer a third option, isolating the playing interface from the actuated resonating body, allowing us to rewrite the relationship between performance action and sound result while retaining the general form and feel of the acoustic instrument. We present a hybrid acoustic-electronic violin based on a stick-body electric violin and an electrodynamic polyphonic pick-up capturing individual string displacements. A conventional violin body acts as the resonator, actuated using digitally altered audio of the string inputs. By attaching the electric violin above the body with acoustic isolation, we retain the physical playing experience of a normal violin along with some of the acoustic filtering and radiation of a traditional build. We propose the use of the hybrid instrument with digitally automated pitch and tone correction to make an easy violin for use as a potential motivational tool for beginning violinists.
@inproceedings{Pardue2019, author = {Pardue, Laurel and Buys, Kurijn and Overholt, Dan and McPherson, Andrew P. and Edinger, Michael}, title = {Separating sound from source: sonic transformation of the violin through electrodynamic pickups and acoustic actuation}, pages = {278--283}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672958}, url = {http://www.nime.org/proceedings/2019/nime2019_paper053.pdf} }
Gabriela Bila Advincula, Don Derek Haddad, and Kent Larson. 2019. Grain Prism: Hieroglyphic Interface for Granular Sampling. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 284–285. http://doi.org/10.5281/zenodo.3672960
Abstract
Download PDF DOI
This paper introduces the Grain Prism, a hybrid granular synthesizer and sampler that, through a capacitive sensing interface presented as obscure glyphs, invites users to create experimental sound textures with their own recorded voice. The capacitive sensing system, activated through skin contact with single glyphs or combinations of them, instigates the user to decipher the hidden sonic messages. The mysterious interface opens space for aleatoricism in the act of conjuring sound, and therefore for new discoveries. The users, when forced to abandon preconceived ways of playing a synthesizer, see themselves in a different light, as their voice is the source material.
@inproceedings{Advincula2019, author = {Advincula, Gabriela Bila and Haddad, Don Derek and Larson, Kent}, title = {Grain Prism: Hieroglyphic Interface for Granular Sampling}, pages = {284--285}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672960}, url = {http://www.nime.org/proceedings/2019/nime2019_paper054.pdf} }
Oliver Bown, Angelo Fraietta, Sam Ferguson, Lian Loke, and Liam Bray. 2019. Facilitating Creative Exploratory Search with Multiple Networked Audio Devices Using HappyBrackets. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 286–291. http://doi.org/10.5281/zenodo.3672962
Abstract
Download PDF DOI
We present an audio-focused creative coding toolkit for deploying music programs to remote networked devices. It is designed to support efficient creative exploratory search in the context of the Internet of Things (IoT), where one or more devices must be configured, programmed and interact over a network, with applications in digital musical instruments, networked music performance and other digital experiences. Users can easily monitor and hack what multiple devices are doing on the fly, enhancing their ability to perform “exploratory search” in a creative workflow. We present two creative case studies using the system: the creation of a dance performance and the creation of a distributed musical installation. Analysing different activities within the production process, with a particular focus on the trade-off between more creative exploratory tasks and more standard configuring and problem-solving tasks, we show how the system supports creative exploratory search for multiple networked devices.
@inproceedings{Bown2019, author = {Bown, Oliver and Fraietta, Angelo and Ferguson, Sam and Loke, Lian and Bray, Liam}, title = {Facilitating Creative Exploratory Search with Multiple Networked Audio Devices Using HappyBrackets}, pages = {286--291}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672962}, url = {http://www.nime.org/proceedings/2019/nime2019_paper055.pdf} }
Thais Fernandes Santos. 2019. The reciprocity between ancillary gesture and music structure performed by expert musicians. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 292–297. http://doi.org/10.5281/zenodo.3672966
Abstract
Download PDF DOI
During musical performance, expert musicians consciously manipulate acoustical parameters to express their interpretative choices. Players also make physical motions, and in many cases these gestures are related to the musicians’ artistic intentions. However, it is not clear whether this sound manipulation is reflected in physical motion. Understanding the musical structure of the work being performed, at its many levels, may impact the projection of artistic intentions, and performers manipulate it in micro and macro sections, such as musical motifs, phrases and sections. Therefore, this paper investigates timing manipulation and how such variations may be reflected in physical gestures. The study involved musicians (flute, clarinet, and bassoon players) performing a unison excerpt by G. Rossini. We analyzed the relationship between timing variation (Inter-Onset Interval deviations) and physical motion, based on the traveled distance of the flute, under different conditions. The flutists were asked to play the musical excerpt in three experimental conditions: (1) playing solo, and playing in duets with previous recordings by other instrumentalists, namely (2) a clarinetist and (3) a bassoonist. The findings suggest that: 1) the movements, which seem to be related to the sense of pulse, are recurrent and stable; and 2) timing variability in micro or macro sections is reflected in the amplitude of the gestures performed by the flutists.
@inproceedings{FernandesSantos2019, author = {Santos, Thais Fernandes}, title = {The reciprocity between ancillary gesture and music structure performed by expert musicians}, pages = {292--297}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672966}, url = {http://www.nime.org/proceedings/2019/nime2019_paper056.pdf} }
Razvan Paisa and Dan Overholt. 2019. Enhancing the Expressivity of the Sensel Morph via Audio-rate Sensing. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 298–302. http://doi.org/10.5281/zenodo.3672968
Abstract
Download PDF DOI
This project describes a novel approach to hybrid electro-acoustic instruments, augmenting the Sensel Morph with real-time audio sensing capabilities. The actual action-sounds are captured with a piezoelectric transducer and processed in Max 8 to extend the sonic range beyond what exists in the acoustical domain alone. The control parameters are captured by the Morph and mapped to audio algorithm properties such as filter cutoff frequency, frequency shift or overdrive. The instrument opens up the possibility for a large selection of different interaction techniques that have a direct impact on the output sound. The instrument is evaluated from a sound designer’s perspective, encouraging exploration in the materials used as well as the techniques. The contributions are two-fold. First, the use of a piezo transducer to augment the Sensel Morph affords an extra dimension of control on top of its existing offerings. Second, the use of acoustic sounds from physical interactions as a source for excitation and manipulation of an audio processing system offers a large variety of new sounds to be discovered. The methodology involved an exploratory process of iterative instrument making, interspersed with observations gathered via improvisatory trials, focusing on the new interactions made possible through the fusion of audio-rate inputs with the Morph’s default interaction methods.
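As a hedged example of the kind of control mapping described (not the actual Max 8 patch), this Python sketch follows the envelope of a piezo signal and maps it exponentially onto a filter cutoff frequency; the sample rate, time constants and frequency range are illustrative.

import numpy as np

def envelope_follower(piezo, sr=44100, attack=0.005, release=0.08):
    """One-pole envelope follower over the rectified piezo signal."""
    a = np.exp(-1.0 / (sr * attack))
    r = np.exp(-1.0 / (sr * release))
    env = np.zeros_like(piezo)
    level = 0.0
    for n, x in enumerate(np.abs(piezo)):
        coeff = a if x > level else r              # fast attack, slow release
        level = coeff * level + (1.0 - coeff) * x
        env[n] = level
    return env

def env_to_cutoff(env, lo=200.0, hi=8000.0):
    """Map envelope (0..1) exponentially onto a filter cutoff in Hz."""
    return lo * (hi / lo) ** np.clip(env, 0.0, 1.0)

piezo = np.random.uniform(-1, 1, 44100) * np.linspace(0, 1, 44100)  # fake strike
print(env_to_cutoff(envelope_follower(piezo))[-5:])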
@inproceedings{Paisa2019, author = {Paisa, Razvan and Overholt, Dan}, title = {Enhancing the Expressivity of the Sensel Morph via Audio-rate Sensing}, pages = {298--302}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672968}, url = {http://www.nime.org/proceedings/2019/nime2019_paper057.pdf} }
Juan Mariano Ramos. 2019. Eolos: a wireless MIDI wind controller. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 303–306. http://doi.org/10.5281/zenodo.3672972
Abstract
Download PDF DOI
This paper describes the design and usage of Eolos, a wireless MIDI wind controller. The main goal of Eolos is to provide an interface that facilitates the production of music for any individual, regardless of their playing skills or previous musical knowledge. Its features are: open design, lower cost than commercial alternatives, wireless MIDI operation, rechargeable battery power, a graphical user interface, tactile keys, sensitivity to air pressure, a left-right reversible design and two FSR sensors. We also mention its participation in the 1st Collaborative Concert over the Internet between Argentina and Cuba, "Tradición y Nuevas Sonoridades".
@inproceedings{Ramos2019, author = {Ramos, Juan Mariano}, title = {Eolos: a wireless {MIDI} wind controller}, pages = {303--306}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672972}, url = {http://www.nime.org/proceedings/2019/nime2019_paper058.pdf} }
Ruihan Yang, Tianyao Chen, Yiyi Zhang, and gus xia. 2019. Inspecting and Interacting with Meaningful Music Representations using VAE. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 307–312. http://doi.org/10.5281/zenodo.3672974
Abstract
Download PDF DOI
Variational Autoencoders have already achieved great results on image generation and have recently made promising progress on music sequence generation. However, the model is still quite difficult to control, in the sense that the learned latent representations lack meaningful musical semantics. What users really need is to interact with specific musical features, such as rhythm and pitch contour, during the creation process so that they can easily test different composition ideas. In this paper, we propose a disentanglement-by-augmentation method to inspect the pitch and rhythm interpretations of the latent representations. Based on the interpretable representations, an intuitive graphical user interface demo is designed for users to better direct the music creation process by manipulating pitch contours and rhythmic complexity.
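Assuming a latent code whose first half encodes pitch contour and second half encodes rhythm (an assumption for illustration, not the paper's exact layout), interaction could be as simple as recombining codes before decoding:

import numpy as np

def swap_rhythm(z_a, z_b, pitch_dims):
    """Given two disentangled latent codes, keep melody A's pitch-contour
    dimensions but borrow melody B's rhythm dimensions."""
    z_new = z_a.copy()
    z_new[pitch_dims:] = z_b[pitch_dims:]   # assume [0:pitch_dims] = pitch
    return z_new

z_a, z_b = np.random.randn(16), np.random.randn(16)
z_mix = swap_rhythm(z_a, z_b, pitch_dims=8)   # then feed z_mix to the decoder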
@inproceedings{Yang2019, author = {Yang, Ruihan and Chen, Tianyao and Zhang, Yiyi and gus xia}, title = {Inspecting and Interacting with Meaningful Music Representations using {VAE}}, pages = {307--312}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672974}, url = {http://www.nime.org/proceedings/2019/nime2019_paper059.pdf} }
Gerard Roma, Owen Green, and Pierre Alexandre Tremblay. 2019. Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 313–318. http://doi.org/10.5281/zenodo.3672976
Abstract
Download PDF DOI
Descriptor spaces have become a ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples. In this paper we propose a novel framework for the automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.
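A much-simplified Python analogue of this pipeline (the authors' library is in SuperCollider and uses feature learning rather than the fixed descriptors assumed here) summarizes each sample with mean MFCCs and reduces the descriptor space to a 2-D map:

import glob
import numpy as np
import librosa
from sklearn.decomposition import PCA

def sound_map(folder):
    """Place every sample in a folder on a 2-D plane: summarize each file
    with mean MFCCs, then reduce the descriptor space to 2 dimensions."""
    paths = sorted(glob.glob(folder + "/*.wav"))
    feats = []
    for p in paths:
        y, sr = librosa.load(p, sr=22050, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        feats.append(mfcc.mean(axis=1))           # one descriptor vector per sample
    coords = PCA(n_components=2).fit_transform(np.array(feats))
    return dict(zip(paths, coords))               # path -> (x, y) position

# positions = sound_map("samples")   # browse/trigger samples by 2-D position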
@inproceedings{Roma2019, author = {Roma, Gerard and Green, Owen and Tremblay, Pierre Alexandre}, title = {Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces}, pages = {313--318}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672976}, url = {http://www.nime.org/proceedings/2019/nime2019_paper060.pdf} }
Vesa Petri Norilo. 2019. Veneer: Visual and Touch-based Programming for Audio. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 319–324. http://doi.org/10.5281/zenodo.3672978
Abstract
Download PDF DOI
This paper presents Veneer, a visual, touch-ready programming interface for the Kronos programming language. The challenges of representing high-level data flow abstractions, including higher order functions, are described. The tension between abstraction and spontaneity in programming is addressed, and gradual abstraction in live programming is proposed as a potential solution. Several novel user interactions for patching on a touch device are shown. In addition, the paper describes some of the current issues of web audio music applications and offers strategies for integrating a web-based presentation layer with a low-latency native processing backend.
@inproceedings{Norilo2019, author = {Norilo, Vesa Petri}, title = {Veneer: Visual and Touch-based Programming for Audio}, pages = {319--324}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672978}, url = {http://www.nime.org/proceedings/2019/nime2019_paper061.pdf} }
Andrei Faitas, Synne Engdahl Baumann, Torgrim Rudland Næss, Jim Torresen, and Charles Patrick Martin. 2019. Generating Convincing Harmony Parts with Simple Long Short-Term Memory Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 325–330. http://doi.org/10.5281/zenodo.3672980
Abstract
Download PDF DOI
Generating convincing music via deep neural networks is a challenging problem that shows promise for many applications including interactive musical creation. One part of this challenge is the problem of generating convincing accompaniment parts to a given melody, as could be used in an automatic accompaniment system. Despite much progress in this area, systems that can automatically learn to generate interesting sounding, as well as harmonically plausible, accompanying melodies remain somewhat elusive. In this paper we explore the problem of sequence to sequence music generation where a human user provides a sequence of notes, and a neural network model responds with a harmonically suitable sequence of equal length. We consider two sequence-to-sequence models; one featuring standard unidirectional long short-term memory (LSTM) architecture, and the other featuring bidirectional LSTM, both successfully trained to produce a sequence based on the given input. Both of these are fairly dated models, as part of the investigation is to see what can be achieved with such models. These are evaluated and compared via a qualitative study that features 106 respondents listening to eight random samples from our set of generated music, as well as two human samples. From the results we see a preference for the sequences generated by the bidirectional model as well as an indication that these sequences sound more human.
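For illustration, the toy model below (a simplified per-step formulation, not the paper's exact sequence-to-sequence architecture) emits one harmony token per melody step, with the bidirectional flag as the only difference between the two variants compared in the study; the vocabulary and layer sizes are assumptions.

import torch
import torch.nn as nn

class HarmonyLSTM(nn.Module):
    """Sketch: read a melody token sequence and emit one harmony token per
    step. With bidirectional=True each prediction can 'look ahead' at the
    rest of the melody, which is the variant listeners preferred."""
    def __init__(self, vocab=130, emb=64, hidden=128, bidirectional=True):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True,
                            bidirectional=bidirectional)
        self.out = nn.Linear(hidden * (2 if bidirectional else 1), vocab)

    def forward(self, melody_tokens):
        h, _ = self.lstm(self.emb(melody_tokens))
        return self.out(h)                     # harmony logits per step

melody = torch.randint(0, 130, (2, 64))        # two 64-step melodies
print(HarmonyLSTM()(melody).shape)             # (2, 64, 130)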
@inproceedings{Faitas2019, author = {Faitas, Andrei and Baumann, Synne Engdahl and Næss, Torgrim Rudland and Torresen, Jim and Martin, Charles Patrick}, title = {Generating Convincing Harmony Parts with Simple Long Short-Term Memory Networks}, pages = {325--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672980}, url = {http://www.nime.org/proceedings/2019/nime2019_paper062.pdf} }
Anthony T. Marasco, Edgar Berdahl, and Jesse Allison. 2019. Bendit_I/O: A System for Networked Performance of Circuit-Bent Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 331–334. http://doi.org/10.5281/zenodo.3672982
Abstract
Download PDF DOI
Bendit_I/O is a system that allows for wireless, networked performance of circuit-bent devices, giving artists a new outlet for performing with repurposed technology. In a typical setup, a user pre-bends a device using the Bendit_I/O board as an intermediary, replacing physical switches and potentiometers with the board’s reed relays, motor driver, and digital potentiometer signals. Bendit_I/O brings the networking techniques of distributed music performances to the hardware hacking realm, opening the door for creative implementation of multiple circuit-bent devices in audiovisual experiences. Consisting of a Wi-Fi-enabled I/O board and a Node-based server, the system provides performers with a variety of interaction and control possibilities between connected users and hacked devices. Moreover, it is user-friendly, low-cost, and modular, making it a flexible toolset for artists of diverse experience levels.
@inproceedings{Marasco2019, author = {Marasco, Anthony T. and Berdahl, Edgar and Allison, Jesse}, title = {{Bendit\_I/O}: A System for Networked Performance of Circuit-Bent Devices}, pages = {331--334}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672982}, url = {http://www.nime.org/proceedings/2019/nime2019_paper063.pdf} }
McLean J Macionis and Ajay Kapur. 2019. Where Is The Quiet: Immersive Experience Design Using the Brain, Mechatronics, and Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 335–338. http://doi.org/10.5281/zenodo.3672984
Abstract
Download PDF DOI
’Where Is The Quiet?’ is a mixed-media installation that utilizes immersive experience design, mechatronics, and machine learning in order to enhance wellness and increase connectivity to the natural world. Individuals interact with the installation by wearing a brainwave interface that measures the strength of the alpha wave signal. The interface then transmits the data to a computer that uses it to determine the individual’s overall state of relaxation. As the individual achieves higher states of relaxation, mechatronic instruments respond and provide feedback. This feedback not only encourages self-awareness but also motivates the individual to relax further. Visitors without the headset experience the installation by watching a film and listening to an original musical score. Through the novel arrangement of technologies and features, ’Where Is The Quiet?’ demonstrates that mediated technological experiences are capable of evoking meditative states of consciousness, facilitating individual and group connectivity, and deepening awareness of the natural world. As such, this installation opens the door to future research regarding the possibility of immersive experiences supporting humanitarian needs.
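A rough sketch of the kind of signal processing such an installation might use (hypothetical; the abstract does not give implementation details) is a relative alpha-band power estimate serving as a relaxation index:

import numpy as np

def alpha_relaxation(eeg, sr=256):
    """Crude relaxation index: alpha-band (8-12 Hz) power relative to total
    power in a window of raw EEG samples."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / sr)
    alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
    return alpha / (spectrum[freqs >= 1].sum() + 1e-12)

window = np.random.randn(512)                     # stand-in for headset data
if alpha_relaxation(window) > 0.3:                # hypothetical threshold
    print("trigger mechatronic instrument response")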
@inproceedings{Macionis2019, author = {Macionis, McLean J and Kapur, Ajay}, title = {Where Is The Quiet: Immersive Experience Design Using the Brain, Mechatronics, and Machine Learning}, pages = {335--338}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672984}, url = {http://www.nime.org/proceedings/2019/nime2019_paper064.pdf} }
Tate Carson. 2019. Mesh Garden: A creative-based musical game for participatory musical performance . Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 339–342. http://doi.org/10.5281/zenodo.3672986
Abstract
Download PDF DOI
Mesh Garden explores participatory music-making with smartphones using an audio sequencer game made up of a distributed smartphone speaker system. The piece allows a group of people in a relaxed situation to create a piece of ambient music using their smartphones networked through the internet. The players’ interactions with the music are derived from the orientations of their phones. The work also has a gameplay aspect; if two players’ phones match in orientation, one player has the option to take the other player’s note, building up a bank of notes that will be used to form a melody.
@inproceedings{Carson2019, author = {Carson, Tate}, title = {Mesh Garden: A creative-based musical game for participatory musical performance }, pages = {339--342}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672986}, url = {http://www.nime.org/proceedings/2019/nime2019_paper065.pdf} }
Beat Rossmy and Alexander Wiethoff. 2019. The Modular Backward Evolution — Why to Use Outdated Technologies. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 343–348. http://doi.org/10.5281/zenodo.3672988
Abstract
Download PDF DOI
In this paper we draw a picture that captures the increasing interest in the modular synthesizer format today. We provide a historical summary, which includes the origins, the fall and the rediscovery of that technology. Further, an empirical analysis is performed based on statements given by artists and manufacturers taken from published interviews. These statements were aggregated, objectified and later reviewed by an expert group consisting of modular synthesizer vendors. Their responses provide the basis for the discussion on how emerging trends in synthesizer interface design reveal challenges and opportunities for the NIME community.
@inproceedings{Rossmy2019, author = {Rossmy, Beat and Wiethoff, Alexander}, title = {The Modular Backward Evolution --- Why to Use Outdated Technologies}, pages = {343--348}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672988}, url = {http://www.nime.org/proceedings/2019/nime2019_paper066.pdf} }
Vincent Goudard. 2019. Ephemeral instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 349–354. http://doi.org/10.5281/zenodo.3672990
Abstract
Download PDF DOI
This article questions the notion of ephemerality of digital musical instruments (DMI). Longevity is generally regarded as a valuable quality that good design criteria should help to achieve. However, the nature of the tools, of the performance conditions and of the music itself may lead one to think of ephemerality as an intrinsic modality of the existence of DMIs. In particular, the conditions of contemporary musical production suggest that contextual adaptations of instrumental devices beyond the monolithic unity of classical instruments should be considered. The first two parts of this article analyse various reasons to reassess the issue of longevity and ephemerality. The last two sections attempt to propose an articulation of these two aspects to inform both the design of DMIs and their learning.
@inproceedings{Goudard2019, author = {Goudard, Vincent}, title = {Ephemeral instruments}, pages = {349--354}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672990}, url = {http://www.nime.org/proceedings/2019/nime2019_paper067.pdf} }
Julian Jaramillo and Fernando Iazzetta. 2019. PICO: A portable audio effect box for traditional plucked-string instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 355–360. http://doi.org/10.5281/zenodo.3672992
Abstract
Download PDF DOI
This paper reports the conception, design, implementation and evaluation processes of PICO, a portable audio effect system created with Pure Data and the Raspberry Pi, which augments traditional plucked string instruments such as the Brazilian Cavaquinho, the Venezuelan Cuatro, the Colombian Tiple and the Peruvian/Bolivian Charango. A fabric soft case fixed to the instrument‘s body holds the PICO modules: the touchscreen, the single board computer, the sound card, the speaker system and the DC power bank. The device audio specifications arose from musicological insights about the social role of performers in their musical contexts and the instruments’ playing techniques. They were taken as design challenges in the creation process of PICO‘s first prototype, which was submitted to a short evaluation. Along with the construction of PICO, we reflected over the design of an interactive audio interface as a mode of research. Therefore, the paper will also discuss methodological aspects of audio hardware design.
@inproceedings{Jaramillo2019, author = {Jaramillo, Julian and Iazzetta, Fernando}, title = {{PICO}: A portable audio effect box for traditional plucked-string instruments}, pages = {355--360}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672992}, url = {http://www.nime.org/proceedings/2019/nime2019_paper068.pdf} }
Guilherme Bertissolo. 2019. Composing Understandings: music, motion, gesture and embodied cognition. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 361–364. http://doi.org/10.5281/zenodo.3672994
Abstract
Download PDF DOI
This paper focuses on ongoing research in music composition based on the study of cognitive research in musical meaning. As both method and result, we propose the creation of experiments related to key issues in composition and music cognition, such as music and movement, memory, expectation and metaphor in the creative process. The theoretical framework is linked to embodied cognition, with ramifications in cognitive semantics and the enactivist current of the cognitive sciences, among other domains of the contemporary sciences of mind and neuroscience. The experiments involve the relationship between music and movement, based on prior research that takes as a reference a context in which it is not possible to establish a clear distinction between the two: Capoeira. Finally, we propose a discussion of the application of this theoretical approach in two compositions: Boreal IV, for Steel Drums and real-time electronics, and Converse, a collaborative multimedia piece for piano, real-time audio (Pure Data) and video processing (GEM and live video), and a dancer.
@inproceedings{Bertissolo2019, author = {Bertissolo, Guilherme}, title = {Composing Understandings: music, motion, gesture and embodied cognition}, pages = {361--364}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672994}, url = {http://www.nime.org/proceedings/2019/nime2019_paper069.pdf} }
Cristohper Ramos Flores, Jim Murphy, and Michael Norris. 2019. HypeSax: Saxophone acoustic augmentation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 365–370. http://doi.org/10.5281/zenodo.3672996
Abstract
Download PDF DOI
New interfaces allow performers to access new possibilities of musical expression. Even though interfaces are often designed to be adaptable to different software, most of them rely on external speakers or similar transducers. This often results in disembodiment and acoustic disengagement from the interface and, in the case of augmented instruments, from the instruments themselves. This paper describes a project in which a hybrid system allows an acoustic integration between the sound of the acoustic saxophone and electronics.
@inproceedings{RamosFlores2019, author = {Flores, Cristohper Ramos and Murphy, Jim and Norris, Michael}, title = {HypeSax: Saxophone acoustic augmentation}, pages = {365--370}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672996}, url = {http://www.nime.org/proceedings/2019/nime2019_paper070.pdf} }
Patrick Chwalek and Joe Paradiso. 2019. CD-Synth: a Rotating, Untethered, Digital Synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 371–374. http://doi.org/10.5281/zenodo.3672998
Abstract
Download PDF DOI
We describe the design of an untethered digital synthesizer that can be held and manipulated while broadcasting audio data to a receiving off-the-shelf Bluetooth receiver. The synthesizer allows the user to freely rotate and reorient the instrument while exploiting non-contact light sensing for a truly expressive performance. The system consists of a suite of sensors that convert rotation, orientation, touch, and user proximity into various audio filters and effects operated on preset wave tables, while offering a persistence of vision display for input visualization. This paper discusses the design of the system, including the circuit, mechanics, and software layout, as well as how this device may be incorporated into a performance.
@inproceedings{Chwalek2019, author = {Chwalek, Patrick and Paradiso, Joe}, title = {CD-Synth: a Rotating, Untethered, Digital Synthesizer}, pages = {371--374}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672998}, url = {http://www.nime.org/proceedings/2019/nime2019_paper071.pdf} }
Niccolò Granieri and James Dooley. 2019. Reach: a keyboard-based gesture recognition system for live piano sound modulation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 375–376. http://doi.org/10.5281/zenodo.3673000
Abstract
Download PDF DOI
This paper presents Reach, a keyboard-based gesture recognition system for live piano sound modulation. Reach is a system built using the Leap Motion Orion SDK, Pure Data and a custom C++ OSC mapper. It provides control over the sound modulation of an acoustic piano using the pianist's ancillary gestures. The system was developed using an iterative design process, incorporating research findings from two user studies and several case studies. The results that emerged show the potential of recognising and utilising the pianist's existing technique when designing keyboard-based DMIs, reducing the requirement to learn additional techniques.
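As a rough sketch of the kind of gesture-to-sound mapping described above (not the authors' Leap Motion/C++ implementation; the OSC addresses, port, and scaling below are hypothetical), hand-tracking features can be rescaled and forwarded to a Pure Data patch over OSC:

```python
# Hypothetical sketch: forwarding hand-tracking features to a Pure Data patch via OSC.
# Addresses, port, and scaling are illustrative only, not Reach's actual mapping.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # assumed OSC listening port in the Pd patch

def send_hand_frame(palm_height_mm, palm_roll_rad):
    """Map ancillary-gesture features to 0..1 modulation controls and send them."""
    depth = min(max((palm_height_mm - 100.0) / 300.0, 0.0), 1.0)  # hand height -> effect depth
    tilt = min(max((palm_roll_rad + 1.57) / 3.14, 0.0), 1.0)      # hand roll -> stereo balance
    client.send_message("/reach/depth", depth)
    client.send_message("/reach/tilt", tilt)

send_hand_frame(240.0, 0.3)
```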
@inproceedings{Granieri2019, author = {Granieri, Niccolò and Dooley, James}, title = {Reach: a keyboard-based gesture recognition system for live piano sound modulation}, pages = {375--376}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673000}, url = {http://www.nime.org/proceedings/2019/nime2019_paper072.pdf} }
margaret schedel, Jocelyn Ho, and Matthew Blessing. 2019. Women’s Labor: Creating NIMEs from Domestic Tools . Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 377–380. http://doi.org/10.5281/zenodo.3672729
Abstract
Download PDF DOI
This paper describes the creation of a NIME created from an iron and wooden ironing board. The ironing board acts as a resonator for the system which includes sensors embedded in the iron such as pressure, and piezo microphones. The iron has LEDs wired to the sides and at either end of the board are CCDs; using machine learning we can identify what kind of fabric is being ironed, and the position of the iron along the x and y-axes as well as its rotation and tilt. This instrument is part of a larger project, Women’s Labor, that juxtaposes traditional musical instruments such as spinets and virginals designated for “ladies” with new interfaces for musical expression that repurpose older tools of women’s work. Using embedded technologies, we reimagine domestic tools as musical interfaces, creating expressive instruments from the appliances of women’s chores.
@inproceedings{schedel2019, author = {margaret schedel and Ho, Jocelyn and Blessing, Matthew}, title = {Women's Labor: Creating {NIME}s from Domestic Tools }, pages = {377--380}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672729}, url = {http://www.nime.org/proceedings/2019/nime2019_paper073.pdf} }
Andre Rauber Du Bois and Rodrigo Geraldo Ribeiro. 2019. HMusic: A domain specific language for music programming and live coding. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 381–386. http://doi.org/10.5281/zenodo.3673003
Abstract
Download PDF DOI
This paper presents HMusic, a domain specific language based on music patterns that can be used to write music and for live coding. The main abstractions provided by the language are patterns and tracks. Code written in HMusic looks like the patterns and multi-tracks available in music sequencers and drum machines. HMusic provides primitives to design and compose patterns, generating new patterns. The basic abstractions provided by the language have an inductive definition and, since HMusic is embedded in the Haskell functional programming language, programmers can design functions to manipulate music on the fly.
@inproceedings{RauberDuBois2019, author = {Bois, Andre Rauber Du and Ribeiro, Rodrigo Geraldo}, title = {HMusic: A domain specific language for music programming and live coding}, pages = {381--386}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673003}, url = {http://www.nime.org/proceedings/2019/nime2019_paper074.pdf} }
Angelo Fraietta. 2019. Stellar Command: a planetarium software based cosmic performance interface. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 387–392. http://doi.org/10.5281/zenodo.3673005
Abstract
Download PDF DOI
This paper presents the use of the Stellarium planetarium software coupled with the VizieR database of astronomical catalogues as an interface mechanism for creating astronomy-based multimedia performances, and as a music composition interface. The celestial display from Stellarium is used to query VizieR, which then obtains scientific astronomical data for the stars displayed (including colour, celestial position, magnitude and distance) and sends it as input data for music composition or performance. Stellarium and VizieR are controlled through Stellar Command, a software library that couples the two systems and can be used both as a standalone command line utility via Open Sound Control and as a software library.
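To make the data flow concrete, the following sketch sends per-star values as OSC messages in the spirit described above; the address, port, field order, and sonic mappings are assumptions for illustration, not Stellar Command's actual output format.

```python
# Illustrative only: per-star data (colour index, magnitude, distance) sent as OSC.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)   # e.g. a SuperCollider or Pd listener

stars = [
    # (name, B-V colour index, apparent magnitude, distance in parsecs)
    ("Vega", 0.00, 0.03, 7.7),
    ("Betelgeuse", 1.85, 0.42, 168.0),
]

for name, bv, mag, dist_pc in stars:
    amplitude = max(0.0, 1.0 - mag / 6.0)       # brighter (lower magnitude) -> louder
    pitch_hz = 220.0 * (2.0 ** bv)              # colour index -> pitch region (crude)
    client.send_message("/star", [name, pitch_hz, amplitude, dist_pc])
```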
@inproceedings{Fraiettab2019, author = {Fraietta, Angelo}, title = {Stellar Command: a planetarium software based cosmic performance interface}, pages = {387--392}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673005}, url = {http://www.nime.org/proceedings/2019/nime2019_paper075.pdf} }
Patrick Müller and Johannes Michael Schuett. 2019. Towards a Telematic Dimension Space. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 393–400. http://doi.org/10.5281/zenodo.3673007
Abstract
Download PDF DOI
Telematic performances connect two or more locations so that participants are able to interact in real time. Such practices blend a variety of dimensions, insofar as the representation of remote performers on a local stage intrinsically occurs on auditory, as well as visual and scenic, levels. Due to their multimodal nature, the analysis or creation of such performances can quickly descend into a house of mirrors wherein certain intensely interdependent dimensions come to the fore, while others are multiplied, seem hidden or are made invisible. In order to have a better understanding of such performances, Dimension Space Analysis, with its capacity to review multifaceted components of a system, can be applied to telematic performances, understood here as (a bundle of) NIMEs. In the second part of the paper, some telematic works from the practices of the authors are described with the toolset developed.
@inproceedings{Müller2019, author = {Müller, Patrick and Schuett, Johannes Michael}, title = {Towards a Telematic Dimension Space}, pages = {393--400}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673007}, url = {http://www.nime.org/proceedings/2019/nime2019_paper076.pdf} }
Pedro Pablo Lucas. 2019. A MIDI Controller Mapper for the Built-in Audio Mixer in the Unity Game Engine. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 401–404. http://doi.org/10.5281/zenodo.3673009
Abstract
Download PDF DOI
Unity is one of the most used engines in the game industry, and several extensions have been implemented to increase its features in order to create multimedia products in a more effective and efficient way. From the point of view of audio development, Unity has included an Audio Mixer since version 5, which facilitates the organization of sounds, effects, and the mixing process in general; however, this module can be manipulated only through its graphical interface. This work describes the design and implementation of an extension tool to map parameters from the Audio Mixer to external MIDI devices, such as controllers with sliders and knobs, so that the developer can easily mix a game with the feel of a physical interface.
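The core of such a mapper is rescaling incoming controller values to mixer parameter ranges. A minimal, hypothetical sketch of that idea is shown below in Python with mido rather than inside Unity/C#; the CC numbers, parameter names, and dB range are invented for illustration.

```python
# Hypothetical sketch of MIDI-to-mixer-parameter mapping (outside Unity, using mido).
import mido

CC_TO_PARAM = {1: "Music/Volume", 2: "SFX/Volume"}   # assumed controller layout

def cc_to_db(value, lo_db=-80.0, hi_db=0.0):
    """Scale a 0-127 CC value to a fader range in decibels."""
    return lo_db + (value / 127.0) * (hi_db - lo_db)

with mido.open_input() as port:                      # default MIDI input (needs a MIDI backend)
    for msg in port:
        if msg.type == "control_change" and msg.control in CC_TO_PARAM:
            param = CC_TO_PARAM[msg.control]
            print(f"set {param} to {cc_to_db(msg.value):.1f} dB")
```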
@inproceedings{Lucasb2019, author = {Lucas, Pedro Pablo}, title = {A {MIDI} Controller Mapper for the Built-in Audio Mixer in the Unity Game Engine}, pages = {401--404}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673009}, url = {http://www.nime.org/proceedings/2019/nime2019_paper077.pdf} }
Pedro Pablo Lucas. 2019. AuSynthAR: A simple low-cost modular synthesizer based on Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 405–406. http://doi.org/10.5281/zenodo.3673011
Abstract
Download PDF DOI
AuSynthAR is a digital instrument based on Augmented Reality (AR) that allows simple sound networks to be built from sound synthesis modules. It only requires a mobile device, a set of tokens, a sound output device and, optionally, a MIDI controller, which makes it an affordable instrument. An application running on the device generates the sounds and the graphical augmentations over the tokens.
@inproceedings{Lucasc2019, author = {Lucas, Pedro Pablo}, title = {AuSynthAR: A simple low-cost modular synthesizer based on Augmented Reality}, pages = {405--406}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673011}, url = {http://www.nime.org/proceedings/2019/nime2019_paper078.pdf} }
Don Derek Haddad and Joe Paradiso. 2019. The World Wide Web in an Analog Patchbay. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 407–410. http://doi.org/10.5281/zenodo.3673013
Abstract
Download PDF DOI
This paper introduces a versatile module for Eurorack synthesizers that allows multiple modular synthesizers to be patched together remotely through the world wide web. The module is configured from a read-eval-print-loop environment running in the web browser, which can be used to send signals to the modular synthesizer from a live coding interface or from various data sources on the internet.
@inproceedings{Haddad2019, author = {Haddad, Don Derek and Paradiso, Joe}, title = {The World Wide Web in an Analog Patchbay}, pages = {407--410}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673013}, url = {http://www.nime.org/proceedings/2019/nime2019_paper079.pdf} }
Fou Yoshimura and kazuhiro jo. 2019. A "voice" instrument based on vocal tract models by using soft material for a 3D printer and an electrolarynx. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 411–412. http://doi.org/10.5281/zenodo.3673015
Abstract
Download PDF DOI
In this paper, we propose a “voice” instrument based on vocal tract models with a soft material for a 3D printer and an electrolarynx. In our practice, we explore the incongruity of the voice instrument through the accompanying music production and performance. With the instrument, we aim to return to the fact that the “Machine speaks out.” With the production of a song “Vocalise (Incomplete),” and performances, we reveal how the instrument could work with the audiences and explore the uncultivated field of voices.
@inproceedings{Yoshimura2019, author = {Yoshimura, Fou and kazuhiro jo}, title = {A "voice" instrument based on vocal tract models by using soft material for a 3D printer and an electrolarynx}, pages = {411--412}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673015}, url = {http://www.nime.org/proceedings/2019/nime2019_paper080.pdf} }
Juan Pablo Yepez Placencia, Jim Murphy, and Dale Carnegie. 2019. Exploring Dynamic Variations for Expressive Mechatronic Chordophones. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 413–418. http://doi.org/10.5281/zenodo.3673017
Abstract
Download PDF DOI
Mechatronic chordophones have become increasingly common in mechatronic music. As expressive instruments, they offer multiple techniques to create and manipulate sounds using their actuation mechanisms. Chordophone designs have taken multiple forms, from frames that play a guitar-like instrument, to machines that integrate strings and actuators as part of their frame. However, few of these instruments have taken advantage of dynamics, which have been largely unexplored. This paper details the design and construction of a new picking mechanism prototype which enables expressive techniques through fast and precise movement and actuation. We have adopted iterative design and rapid prototyping strategies to develop and refine a compact picker capable of creating dynamic variations reliably. Finally, a quantitative evaluation process demonstrates that this system offers the speed and consistency of previously existing picking mechanisms, while providing increased control over musical dynamics and articulations.
@inproceedings{YepezPlacencia2019, author = {Placencia, Juan Pablo Yepez and Murphy, Jim and Carnegie, Dale}, title = {Exploring Dynamic Variations for Expressive Mechatronic Chordophones}, pages = {413--418}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673017}, url = {http://www.nime.org/proceedings/2019/nime2019_paper081.pdf} }
Dhruv Chauhan and Peter Bennett. 2019. Searching for the Perfect Instrument: Increased Telepresence through Interactive Evolutionary Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 419–422. http://doi.org/10.5281/zenodo.3673019
Abstract
Download PDF DOI
In this paper, we introduce and explore a novel Virtual Reality musical interaction system (named REVOLVE) that utilises a user-guided evolutionary algorithm to personalise musical instruments to users’ individual preferences. REVOLVE is designed towards being an ‘endlessly entertaining’ experience through the potentially infinite number of sounds that can be produced. Our hypothesis is that using evolutionary algorithms with VR for musical interactions will lead to increased user telepresence. In addition to this, REVOLVE was designed to inform novel research into this unexplored area. Think aloud trials and thematic analysis revealed 5 main themes: control, comparison to the real world, immersion, general usability and limitations, in addition to practical improvements. Overall, it was found that this combination of technologies did improve telepresence levels, proving the original hypothesis correct.
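As a schematic illustration of the user-guided evolutionary loop described above (not REVOLVE's actual code; population size, mutation rate, and the parameter encoding are assumptions), each generation mutates the currently preferred synth parameter vector and lets the user pick a favourite:

```python
# Hypothetical sketch of interactive (user-guided) evolution over synth parameters.
import random

def mutate(params, rate=0.1):
    """Return a slightly perturbed copy of a parameter vector, values kept in 0..1."""
    return [min(1.0, max(0.0, p + random.gauss(0.0, rate))) for p in params]

def evolve(seed, generations=5, pop_size=6, pick=random.choice):
    """Each generation, the user 'picks' a favourite, which seeds the next population."""
    current = seed
    for _ in range(generations):
        population = [mutate(current) for _ in range(pop_size)]
        current = pick(population)    # in REVOLVE this choice would be made by the user in VR
    return current

print(evolve([0.5, 0.5, 0.5, 0.5]))
```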
@inproceedings{Chauhan2019, author = {Chauhan, Dhruv and Bennett, Peter}, title = {Searching for the Perfect Instrument: Increased Telepresence through Interactive Evolutionary Instrument Design}, pages = {419--422}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673019}, url = {http://www.nime.org/proceedings/2019/nime2019_paper082.pdf} }
Richard J Savery, Benjamin Genchel, Jason Brent Smith, Anthony Caulkins, Molly E Jones, and Anna Savery. 2019. Learning from History: Recreating and Repurposing Harriet Padberg’s Computer Composed Canon and Free Fugue. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 423–428. http://doi.org/10.5281/zenodo.3673021
Abstract
Download PDF DOI
Harriet Padberg wrote Computer-Composed Canon and Free Fugue as part of her 1964 dissertation in Mathematics and Music at Saint Louis University. This program is one of the earliest examples of text-to-music software and algorithmic composition, which are areas of great interest in the present-day field of music technology. This paper aims to analyze the technological innovation, aesthetic design process, and impact of Harriet Padberg's original 1964 thesis, as well as the design of a modern recreation and utilization, in order to gain insight into the nature of revisiting older works. Here, we present our open source recreation of Padberg's program with a modern interface and, through its use as an artistic tool by three composers, show how historical works can be effectively used for new creative purposes in contemporary contexts. Not Even One by Molly Jones draws on the historical and social significance of Harriet Padberg by using her program in a piece about the lack of representation of women judges in composition competitions. Brevity by Anna Savery utilizes the original software design as a composition tool, and The Padberg Piano by Anthony Caulkins uses the melodic generation of the original to create a software instrument.
@inproceedings{Savery2019, author = {Savery, Richard J and Genchel, Benjamin and Smith, Jason Brent and Caulkins, Anthony and Jones, Molly E and Savery, Anna}, title = {Learning from History: Recreating and Repurposing Harriet Padberg's Computer Composed Canon and Free Fugue}, pages = {423--428}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673021}, url = {http://www.nime.org/proceedings/2019/nime2019_paper083.pdf} }
Edgar Berdahl, Austin Franklin, and Eric Sheffield. 2019. A Spatially Distributed Vibrotactile Actuator Array for the Fingertips. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 429–430. http://doi.org/10.5281/zenodo.3673023
Abstract
Download PDF DOI
The design of a Spatially Distributed Vibrotactile Actuator Array (SDVAA) for the fingertips is presented. It provides high-fidelity vibrotactile stimulation at the audio sampling rate. Prior works are discussed, and the system is demonstrated using two music compositions by the authors.
@inproceedings{Berdahl2019, author = {Berdahl, Edgar and Franklin, Austin and Sheffield, Eric}, title = {A Spatially Distributed Vibrotactile Actuator Array for the Fingertips}, pages = {429--430}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673023}, url = {http://www.nime.org/proceedings/2019/nime2019_paper084.pdf} }
Jeff Gregorio and Youngmoo Kim. 2019. Augmenting Parametric Synthesis with Learned Timbral Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 431–436. http://doi.org/10.5281/zenodo.3673025
Abstract
Download PDF DOI
Feature-based synthesis applies machine learning and signal processing methods to the development of alternative interfaces for controlling parametric synthesis algorithms. One approach, geared toward real-time control, uses low dimensional gestural controllers and learned mappings from control spaces to parameter spaces, making use of an intermediate latent timbre distribution, such that the control space affords a spatially-intuitive arrangement of sonic possibilities. Whereas many existing systems present alternatives to the traditional parametric interfaces, the proposed system explores ways in which feature-based synthesis can augment one-to-one parameter control, made possible by fully invertible mappings between control and parameter spaces.
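A toy example of the invertibility idea (not the paper's learned model; the matrix and dimensions are arbitrary) is a linear map between a 2-D control space and a 2-D parameter space whose inverse lets edits in either space be reflected in the other:

```python
# Toy illustration of a fully invertible control-to-parameter mapping.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))              # control -> parameter matrix (assumed invertible)
A_inv = np.linalg.inv(A)

def control_to_params(xy):
    return A @ np.asarray(xy)

def params_to_control(params):
    return A_inv @ np.asarray(params)

p = control_to_params([0.3, -0.7])       # moving a point in the 2-D control space
print(np.allclose(params_to_control(p), [0.3, -0.7]))   # True: the mapping round-trips
```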
@inproceedings{Gregorio2019, author = {Gregorio, Jeff and Kim, Youngmoo}, title = {Augmenting Parametric Synthesis with Learned Timbral Controllers}, pages = {431--436}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673025}, url = {http://www.nime.org/proceedings/2019/nime2019_paper085.pdf} }
Sang-won Leigh, Abhinandan Jain, and Pattie Maes. 2019. Exploring Human-Machine Synergy and Interaction on a Robotic Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 437–442. http://doi.org/10.5281/zenodo.3673027
Abstract
Download PDF DOI
This paper introduces studies conducted with musicians that aim to understand modes of human-robot interaction, situated between automation and human augmentation. Our robotic guitar system used for the study consists of various sound generating mechanisms, either driven by software or by a musician directly. The control mechanism allows the musician to have a varying degree of agency over the overall musical direction. We present interviews and discussions on open-ended experiments conducted with music students and musicians. The outcome of this research includes new modes of playing the guitar given the robotic capabilities, and an understanding of how automation can be integrated into instrument-playing processes. The results present insights into how a human-machine hybrid system can increase the efficacy of training or exploration, without compromising human engagement with a task.
@inproceedings{Leigh2019, author = {Leigh, Sang-won and Jain, Abhinandan and Maes, Pattie}, title = {Exploring Human-Machine Synergy and Interaction on a Robotic Instrument}, pages = {437--442}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673027}, url = {http://www.nime.org/proceedings/2019/nime2019_paper086.pdf} }
Sang Won Lee. 2019. Show Them My Screen: Mirroring a Laptop Screen as an Expressive and Communicative Means in Computer Music. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 443–448. http://doi.org/10.5281/zenodo.3673029
Abstract
Download PDF DOI
Modern computer music performances often involve a musical instrument that is primarily digital; software runs on a computer, and the physical form of the instrument is the computer. In such a practice, the performance interface is rendered on a computer screen for the performer. There has been a concern about using a laptop as a musical instrument from the audience’s perspective, in that having “a laptop performer sitting behind the screen” makes it difficult for the audience to understand how the performer is creating music. Mirroring a computer screen on a projection screen has been one way to address the concern and reveal the performer’s instrument. This paper introduces and discusses the author’s computer music practice, in which a performer actively considers screen mirroring as an essential part of the performance, beyond visualization of music. In this case, screen mirroring is not complementary, but inevitable from the inception of the performance. The related works listed within explore various roles of screen mirroring in computer music performance and help us understand empirical and logistical findings in such practices.
@inproceedings{Lee2019, author = {Lee, Sang Won}, title = {Show Them My Screen: Mirroring a Laptop Screen as an Expressive and Communicative Means in Computer Music}, pages = {443--448}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673029}, url = {http://www.nime.org/proceedings/2019/nime2019_paper087.pdf} }
Josh Urban Davis. 2019. IllumiWear: A Fiber-Optic eTextile for MultiMedia Interactions. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 449–454. http://doi.org/10.5281/zenodo.3673033
Abstract
Download PDF DOI
We present IllumiWear, a novel eTextile prototype that uses fiber optics as interactive input and visual output. Fiber optic cables are separated into bundles and then woven like a basket into a bendable glowing fabric. By attaching light-emitting diodes to one side of these bundles and photodiode light intensity sensors to the other, loss of light intensity can be measured when the fabric is bent. The sensing technique of IllumiWear is not only able to discriminate between discrete touch, slight bends, and harsh bends, but also to recover the location of deformation. In this way, our computational fabric prototype uses its intrinsic means of visual output (light) as a tool for interactive input. We provide design and implementation details for our prototype as well as a technical evaluation of its effectiveness and limitations as an interactive computational textile. In addition, we examine the potential of this prototype’s interactive capabilities by extending our eTextile to create a tangible user interface for audio and visual manipulation.
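The sensing principle lends itself to a simple threshold sketch: compare each bundle's photodiode reading against its resting baseline and classify the size of the drop, with the largest drop indicating where the deformation occurred. The baseline values and thresholds below are invented for illustration, not measurements from the prototype.

```python
# Hypothetical sketch of bend classification from per-bundle light-intensity loss.
import numpy as np

BASELINE = np.array([0.92, 0.90, 0.93, 0.91])     # assumed resting readings per fibre bundle

def classify(readings, touch=0.05, slight=0.15, harsh=0.35):
    drop = BASELINE - np.asarray(readings)
    worst = int(np.argmax(drop))                  # bundle with the largest loss ~ location
    d = drop[worst]
    if d < touch:
        label = "idle"
    elif d < slight:
        label = "touch"
    elif d < harsh:
        label = "slight bend"
    else:
        label = "harsh bend"
    return label, worst

print(classify([0.91, 0.62, 0.92, 0.90]))          # ('slight bend', 1): drop of 0.28 on bundle 1
```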
@inproceedings{Davis2019, author = {Davis, Josh Urban}, title = {IllumiWear: A Fiber-Optic eTextile for MultiMedia Interactions}, pages = {449--454}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673033}, url = {http://www.nime.org/proceedings/2019/nime2019_paper088.pdf} }
2018
Oeyvind Brandtsegg, Trond Engum, and Bernt Isak Wærstad. 2018. Working methods and instrument design for cross-adaptive sessions. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 1–6. http://doi.org/10.5281/zenodo.1302649
Abstract
Download PDF DOI
This paper explores working methods and instrument design for musical performance sessions (studio and live) where cross-adaptive techniques for audio processing are utilized. Cross-adaptive processing uses feature extraction methods and digital processing to allow the actions of one acoustic instrument to influence the timbre of another. Even though the physical interface for the musician is the familiar acoustic instrument, the musical dimensions controlled with the actions on the instrument have been expanded radically. For this reason, and when used in live performance, the cross-adaptive methods constitute new interfaces for musical expression. Not only does the musician control his or her own instrumental expression, but the instrumental actions directly influence the timbre of another instrument in the ensemble, while their own instrument’s sound is modified by the actions of other musicians. In the present paper we illustrate and discuss some design issues relating to the configuration and composition of such tools for different musical situations. Such configurations include, among other things, the mapping of modulators, the choice of applied effects and processing methods.
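A minimal numerical sketch of the cross-adaptive idea (a generic illustration, not the authors' toolchain; the envelope follower, filter, and frequency range are assumptions) lets the loudness envelope of one instrument drive the low-pass cutoff applied to another:

```python
# Illustrative cross-adaptive processing: instrument A's envelope modulates a filter on B.
import numpy as np

def envelope(x, coeff=0.999):
    """Simple peak-follower envelope of signal x."""
    env = np.zeros_like(x)
    for n in range(1, len(x)):
        env[n] = max(abs(x[n]), coeff * env[n - 1])
    return env

def cross_adaptive_lowpass(a, b, sr=44100, min_hz=200.0, max_hz=8000.0):
    """Filter b with a one-pole low-pass whose cutoff tracks the envelope of a."""
    cutoff = min_hz + np.clip(envelope(a), 0.0, 1.0) * (max_hz - min_hz)
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)     # per-sample smoothing coefficient
    out = np.zeros_like(b)
    for n in range(1, len(b)):
        out[n] = out[n - 1] + alpha[n] * (b[n] - out[n - 1])
    return out

sr = 44100
t = np.arange(sr) / sr
drums = np.random.randn(sr) * np.exp(-3 * t)              # stand-in for instrument A
guitar = np.sign(np.sin(2 * np.pi * 110 * t))             # stand-in for instrument B
print(cross_adaptive_lowpass(drums, guitar).shape)
```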
@inproceedings{Brandtsegg2018, author = {Brandtsegg, Oeyvind and Engum, Trond and Wærstad, Bernt Isak}, title = {Working methods and instrument design for cross-adaptive sessions}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302649}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0001.pdf} }
Eran Egozy and Eun Young Lee. 2018. *12*: Mobile Phone-Based Audience Participation in a Chamber Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 7–12. http://doi.org/10.5281/zenodo.1302655
Abstract
Download PDF DOI
*12* is a chamber music work composed with the goal of letting audience members have an engaging, individualized, and influential role in live music performance, using their mobile phones as custom-tailored musical instruments. The goals of direct music making, meaningful communication, intuitive interfaces, and technical transparency led to a design that purposefully limits the number of participating audience members, balances the tradeoffs between interface simplicity and control, and prioritizes the role of a graphics and animation display system that is both functional and aesthetically integrated. Survey results from the audience and stage musicians show a successful and engaging experience, and also illuminate the path towards future improvements.
@inproceedings{Egozy2018, author = {Egozy, Eran and Lee, Eun Young}, title = {*12*: Mobile Phone-Based Audience Participation in a Chamber Music Performance}, pages = {7--12}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302655}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0002.pdf} }
Anders Lind. 2018. Animated Notation in Multiple Parts for Crowd of Non-professional Performers. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 13–18. http://doi.org/10.5281/zenodo.1302657
Abstract
Download PDF DOI
The Max Maestro, an animated music notation system, was developed to enable the exploration of artistic possibilities for composition and performance practices within the field of contemporary art music; more specifically, to enable a large crowd of non-professional performers, regardless of their musical background, to perform fixed music compositions written in multiple individual parts. Furthermore, the Max Maestro was developed to facilitate concert hall performances where non-professional performers could be synchronised with an electronic music part. This paper presents the background, the content and the artistic ideas behind the Max Maestro system and gives two examples of live concert hall performances where the Max Maestro was used. An artistic research approach with an autoethnographic method was adopted for the study. This paper contributes new knowledge to the field of animated music notation.
@inproceedings{Lind2018, author = {Lind, Anders}, title = {Animated Notation in Multiple Parts for Crowd of Non-professional Performers}, pages = {13--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302657}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0003.pdf} }
Andrew R. Brown, Matthew Horrigan, Arne Eigenfeldt, Toby Gifford, Daniel Field, and Jon McCormack. 2018. Interacting with Musebots. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 19–24. http://doi.org/10.5281/zenodo.1302659
Abstract
Download PDF DOI
Musebots are autonomous musical agents that interact with other musebots to produce music. Inaugurated in 2015, musebots are now an established practice in the field of musical metacreation, which aims to automate aspects of creative practice. Originally, musebot development focused on software-only ensembles of musical agents, coded by a community of developers. More recent experiments have explored humans interfacing with musebot ensembles in various ways, including through electronic interfaces in which parametric control of high-level musebot parameters is used; message-based interfaces which allow human users to communicate with musebots in their own language; and interfaces through which musebots have jammed with human musicians. Here we report on the recent developments of human interaction with musebot ensembles and reflect on some of the implications of these developments for the design of metacreative music systems.
@inproceedings{Brown2018, author = {Brown, Andrew R. and Horrigan, Matthew and Eigenfeldt, Arne and Gifford, Toby and Field, Daniel and McCormack, Jon}, title = {Interacting with Musebots}, pages = {19--24}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302659}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0004.pdf} }
Chris Kiefer and Cecile Chevalier. 2018. Towards New Modes of Collective Musical Expression through Audio Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 25–28. http://doi.org/10.5281/zenodo.1302661
Abstract
Download PDF DOI
We investigate how audio augmented reality can engender new collective modes of musical expression in the context of a sound art installation, ’Listening Mirrors’, exploring the creation of interactive sound environments for musicians and non-musicians alike. ’Listening Mirrors’ is designed to incorporate physical objects and computational systems for altering the acoustic environment, to enhance collective listening and challenge traditional musician-instrument performance. At a formative stage in exploring audio AR technology, we conducted an audience experience study investigating questions around the potential of audio AR in creating sound installation environments for collective musical expression. We collected interview evidence about the participants’ experience and analysed the data using a grounded theory approach. The results demonstrated that the technology has the potential to create immersive spaces where an audience can feel safe to experiment musically, and showed how AR can intervene in sound perception to instrumentalise an environment. The results also revealed caveats about the use of audio AR, mainly centred on social inhibition and seamlessness of experience, and finding a balance between mediated worlds so that there is space for interplay between the two.
@inproceedings{Kiefer2018, author = {Kiefer, Chris and Chevalier, Cecile}, title = {Towards New Modes of Collective Musical Expression through Audio Augmented Reality}, pages = {25--28}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302661}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0005.pdf} }
Tomoya Matsuura and kazuhiro jo. 2018. Aphysical Unmodeling Instrument: Sound Installation that Re-Physicalizes a Meta-Wind-Instrument Physical Model, Whirlwind. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 29–30. http://doi.org/10.5281/zenodo.1302663
Abstract
Download PDF DOI
Aphysical Unmodeling Instrument is the title of a sound installation that re-physicalizes the Whirlwind meta-wind-instrument physical model. We re-implemented the Whirlwind by using real-world physical objects to comprise a sound installation. The sound propagation between a speaker and microphone was used as the delay, and a paper cylinder was employed as the resonator. This paper explains the concept and implementation of this work at the 2017 HANARART exhibition. We examine the characteristics of the work, address its limitations, and discuss the possibility of its interpretation by means of a “re-physicalization.”
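The digital structure being "re-physicalized" here is, in essence, a delay line closed through a lossy resonator. The sketch below is a generic waveguide-style illustration of that loop (delay length, feedback, and damping are arbitrary), not the actual Whirlwind model:

```python
# Generic delay-plus-resonator feedback loop, the structure the installation builds from air
# (speaker-to-microphone propagation as the delay) and a paper cylinder (as the resonator).
import numpy as np

def delay_feedback_loop(excitation, delay_samples=200, feedback=0.95, damping=0.4):
    out = np.zeros(len(excitation))
    buf = np.zeros(delay_samples)            # stands in for the speaker-to-microphone air path
    lowpassed = 0.0
    for n, x in enumerate(excitation):
        delayed = buf[n % delay_samples]
        lowpassed = (1 - damping) * delayed + damping * lowpassed   # one-pole loss: "resonator"
        y = x + feedback * lowpassed
        buf[n % delay_samples] = y
        out[n] = y
    return out

burst = np.zeros(44100)
burst[:50] = np.random.randn(50) * 0.5       # short noise burst as excitation
print(np.max(np.abs(delay_feedback_loop(burst))))
```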
@inproceedings{Matsuura2018, author = {Matsuura, Tomoya and kazuhiro jo}, title = {Aphysical Unmodeling Instrument: Sound Installation that Re-Physicalizes a Meta-Wind-Instrument Physical Model, Whirlwind}, pages = {29--30}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302663}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0006.pdf} }
Ulf A. S. Holbrook. 2018. An approach to stochastic spatialization — A case of Hot Pocket. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 31–32. http://doi.org/10.5281/zenodo.1302665
Abstract
Download PDF DOI
Many common and popular sound spatialisation techniques and methods rely on listeners being positioned in a "sweet-spot" for an optimal listening position in a circle of speakers. This paper discusses a stochastic spatialisation method and its first iteration as implemented for the exhibition Hot Pocket at The Museum of Contemporary Art in Oslo in 2017. This method is implemented in Max and offers a matrix-based amplitude panning methodology which can provide a flexible means for the spatialisation of sounds.
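As a language-neutral illustration of matrix-based amplitude panning with stochastically chosen gains (the exhibited system is a Max patch; the normalisation and dimensions below are assumptions), each source is routed to the speaker array through one row of a random, power-normalised gain matrix:

```python
# Illustrative stochastic matrix panning: random, power-normalised gains per source.
import numpy as np

rng = np.random.default_rng()

def random_panning_matrix(n_sources, n_speakers):
    """One row per source, with random gains normalised to constant per-source power."""
    m = rng.random((n_sources, n_speakers))
    return m / np.linalg.norm(m, axis=1, keepdims=True)

sources = rng.standard_normal((3, 1024))     # 3 mono sources, 1024 samples each
gains = random_panning_matrix(3, 8)          # route them to 8 speakers
speaker_feeds = gains.T @ sources            # shape (8, 1024): one feed per speaker
print(speaker_feeds.shape)
```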
@inproceedings{Holbrook2018, author = {Holbrook, Ulf A. S.}, title = {An approach to stochastic spatialization --- A case of Hot Pocket}, pages = {31--32}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302665}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0007.pdf} }
Cory Champion and Mo H Zareei. 2018. AM MODE: Using AM and FM Synthesis for Acoustic Drum Set Augmentation. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 33–34. http://doi.org/10.5281/zenodo.1302667
Abstract
Download PDF DOI
AM MODE is a custom-designed software interface for electronic augmentation of the acoustic drum set. The software is used in the development of a series of recordings, similarly titled AM MODE. Programmed in Max/MSP, the software uses live audio input from individual instruments within the drum set as control parameters for modulation synthesis. By using a combination of microphones and MIDI triggers, audio signal features such as the velocity of the strike of the drum, or the frequency at which the drum resonates, are tracked, interpolated, and scaled to user specifications. The resulting series of recordings comprises the digitally generated output of the modulation engine, in addition to both raw and modulated signals from the acoustic drum set. In this way, this project explores drum set augmentation not only at the input and from a performative angle, but also at the output, where the acoustic and the synthesized elements are merged into each other, forming a sonic hybrid.
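A small numerical sketch of the underlying idea (not the AM MODE Max/MSP patch; the envelope follower, modulator frequencies, and scaling are invented) tracks an amplitude envelope from a drum signal and uses it to scale the depth of AM and FM applied to a carrier oscillator:

```python
# Illustrative feature-driven AM/FM: a drum envelope controls modulation depth.
import numpy as np

sr = 44100
t = np.arange(sr) / sr
drum = np.random.randn(sr) * np.exp(-6 * t)           # stand-in for a struck drum

env = np.abs(drum)                                    # crude envelope follower: rectify...
for n in range(1, len(env)):
    env[n] = max(env[n], 0.999 * env[n - 1])          # ...and peak-hold with slow decay

carrier_hz = 220.0
am = 1.0 + env * np.sin(2 * np.pi * 30 * t)           # envelope scales AM depth
fm_phase = 2 * np.pi * carrier_hz * t + env * 8.0 * np.sin(2 * np.pi * 55 * t)
output = am * np.sin(fm_phase)                        # combined AM/FM of the carrier
print(output.shape)
```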
@inproceedings{Champion2018, author = {Champion, Cory and Zareei, Mo H}, title = {AM MODE: Using AM and FM Synthesis for Acoustic Drum Set Augmentation}, pages = {33--34}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302667}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0008.pdf} }
Don Derek Haddad and Joe Paradiso. 2018. Kinesynth: Patching, Modulating, and Mixing a Hybrid Kinesthetic Synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 35–36. http://doi.org/10.5281/zenodo.1302669
Abstract
Download PDF DOI
This paper introduces the Kinesynth, a hybrid kinesthetic synthesizer that uses the human body both as an analog mixer and as a modulator, using a combination of capacitive sensing in "transmit" mode and skin conductance. This is achieved when the body, through the skin, relays signals from control and audio sources to the inputs of the instrument. These signals can be harnessed from the environment, from within the Kinesynth’s internal synthesizer, or from an external instrument, making the Kinesynth a mediator between the body and the environment.
@inproceedings{Haddad2018, author = {Haddad, Don Derek and Paradiso, Joe}, title = {Kinesynth: Patching, Modulating, and Mixing a Hybrid Kinesthetic Synthesizer.}, pages = {35--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302669}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0009.pdf} }
Riccardo Marogna. 2018. CABOTO: A Graphic-Based Interactive System for Composing and Performing Electronic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 37–42. http://doi.org/10.5281/zenodo.1302671
Abstract
Download PDF DOI
CABOTO is an interactive system for live performance and composition. A graphic score sketched on paper is read by a computer vision system. The graphic elements are scanned following a symbolic-raw hybrid approach, that is, they are recognised and classified according to their shapes but also scanned as waveforms and optical signals. All this information is mapped into the synthesis engine, which implements different kinds of synthesis techniques for different shapes. In CABOTO the score is viewed as a cartographic map explored by some navigators. These navigators traverse the score in a semi-autonomous way, scanning the graphic elements found along their paths. The system tries to challenge the boundaries between the concepts of composition, score, performance, and instrument, since the musical result will depend both on the composed score and on the way the navigators traverse it during the live performance.
@inproceedings{Marogna2018, author = {Marogna, Riccardo}, title = {CABOTO: A Graphic-Based Interactive System for Composing and Performing Electronic Music}, pages = {37--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302671}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0010.pdf} }
Gustavo Oliveira da Silveira. 2018. The XT Synth: A New Controller for String Players. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 43–44. http://doi.org/10.5281/zenodo.1302673
Abstract
Download PDF DOI
This paper describes the concept, design, and realization of two iterations of a new controller called the XT Synth. The development of the instrument came from the desire to maintain the expressivity and familiarity of string instruments, while adding the flexibility and power usually found in keyboard controllers. There are different examples of instruments that bring the physicality and expressiveness of acoustic instruments into electronic music, from “Do it yourself” (DIY) products to commercially available ones. This paper discusses the process and the challenges faced when creating a DIY musical instrument and then subsequently transforming the instrument into a product suitable for commercialization.
@inproceedings{Oliveira2018, author = {Oliveira da Silveira, Gustavo}, title = {The XT Synth: A New Controller for String Players}, pages = {43--44}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302673}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0011.pdf} }
S. M. Astrid Bin, Nick Bryan-Kinns, and Andrew P. McPherson. 2018. Risky business: Disfluency as a design strategy. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 45–50. http://doi.org/10.5281/zenodo.1302675
Abstract
Download PDF DOI
This paper presents a study examining the effects of disfluent design on audience perception of digital musical instrument (DMI) performance. Disfluency, defined as a barrier to effortless cognitive processing, has been shown to generate better results in some contexts as it engages higher levels of cognition. We were motivated to determine if disfluent design in a DMI would result in a risk state that audiences would be able to perceive, and if this would have any effect on their evaluation of the performance. A DMI was produced that incorporated a disfluent characteristic: it would turn itself off if not constantly moved. Six physically identical instruments were produced, each in one of three versions: control (no disfluent characteristics), mild disfluency (turned itself off slowly), and heightened disfluency (turned itself off more quickly). Six percussionists each performed on one instrument for a live audience (N=31), and data was collected in the form of real-time feedback (via a mobile phone app) and post-hoc surveys. Though there was little difference in ratings of enjoyment between the versions of the instrument, the real-time and qualitative data suggest that disfluent behaviour in a DMI may be a way for audiences to perceive and appreciate performer skill.
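The disfluent behaviour itself reduces to a small watchdog rule: if the instrument's measured motion stays below a threshold for long enough, its sound is switched off. The thresholds and timings in this sketch are illustrative, not the values used in the study:

```python
# Hypothetical motion watchdog implementing the "turns itself off if not moved" behaviour.
import time

class MotionWatchdog:
    def __init__(self, threshold=0.15, timeout_s=2.0):
        self.threshold = threshold        # minimum motion (e.g. accelerometer magnitude)
        self.timeout_s = timeout_s        # how long the instrument tolerates stillness
        self.last_motion = time.monotonic()

    def update(self, motion_magnitude):
        """Call regularly with the current motion estimate; returns True while sound stays on."""
        now = time.monotonic()
        if motion_magnitude > self.threshold:
            self.last_motion = now
        return (now - self.last_motion) < self.timeout_s

watchdog = MotionWatchdog(timeout_s=1.0)  # a 'heightened disfluency' instrument dies faster
print(watchdog.update(0.3))               # moving: True, sound stays on
```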
@inproceedings{Bin2018, author = {Bin, S. M. Astrid and Bryan-Kinns, Nick and McPherson, Andrew P.}, title = {Risky business: Disfluency as a design strategy}, pages = {45--50}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302675}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0012.pdf} }
Rachel Gibson. 2018. The Theremin Textural Expander. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 51–52. http://doi.org/10.5281/zenodo.1302527
Abstract
Download PDF DOI
The voice of the theremin is more than just a simple sine wave. Its unique sound is made through two radio frequency oscillators that, when operating at almost identical frequencies, gravitate towards each other. Ultimately, this pull alters the sine wave, creating the signature sound of the theremin. The Theremin Textural Expander (TTE) explores other textures the theremin can produce when its sound is processed and manipulated through a Max/MSP patch and controlled via a MIDI pedalboard. The TTE extends the theremin’s ability, enabling it to produce five distinct new textures beyond the original. It also features a looping system that the performer can use to layer textures created with the traditional theremin sound. Ultimately, this interface introduces a new way to play and experience the theremin; it extends its expressivity, affording a greater range of compositional possibilities and greater flexibility in free improvisation contexts.
@inproceedings{Gibson2018, author = {Gibson, Rachel}, title = {The Theremin Textural Expander}, pages = {51--52}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302527}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0013.pdf} }
Mert Toka, Can Ince, and Mehmet Aydin Baytas. 2018. Siren: Interface for Pattern Languages. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 53–58. http://doi.org/10.5281/zenodo.1302677
Abstract
Download PDF DOI
This paper introduces Siren, a hybrid system for algorithmic composition and live-coding performances. Its hierarchical structure allows small modifications to propagate and aggregate on lower levels for dramatic changes in the musical output. It uses functional programming language TidalCycles as the core pattern creation environment due to its inherent ability to create complex pattern relations with minimal syntax. Borrowing the best from TidalCycles, Siren augments the pattern creation process by introducing various interface level features: a multi-channel sequencer, local and global parameters, mathematical expressions, and pattern history. It presents new opportunities for recording, refining, and reusing the playback information with the pattern roll component. Subsequently, the paper concludes with a preliminary evaluation of Siren in the context of user interface design principles, which originates from the cognitive dimensions framework for musical notation design.
@inproceedings{Toka2018, author = {Toka, Mert and Ince, Can and Baytas, Mehmet Aydin}, title = {Siren: Interface for Pattern Languages}, pages = {53--58}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302677}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0014.pdf} }
Spencer Salazar, Andrew Piepenbrink, and Sarah Reid. 2018. Developing a Performance Practice for Mobile Music Technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 59–64. http://doi.org/10.5281/zenodo.1302679
Abstract
Download PDF DOI
This paper documents an extensive and varied series of performances by the authors over the past year using mobile technology, primarily iPad tablets running the Auraglyph musical sketchpad software. These include both solo and group performances, the latter under the auspices of the Mobile Ensemble of CalArts (MECA), a group created to perform music with mobile technology devices. As a whole, this diverse mobile technology-based performance practice leverages Auraglyph’s versatility to explore a number of topical issues in electronic music performance, including the use of physical and acoustical space, audience participation, and interaction design of musical instruments.
@inproceedings{Salazar2018, author = {Salazar, Spencer and Piepenbrink, Andrew and Reid, Sarah}, title = {Developing a Performance Practice for Mobile Music Technology}, pages = {59--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302679}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0015.pdf} }
Ali Momeni, Daniel McNamara, and Jesse Stiles. 2018. MOM: an Extensible Platform for Rapid Prototyping and Design of Electroacoustic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 65–71. http://doi.org/10.5281/zenodo.1302681
Abstract
Download PDF DOI
This paper provides an overview of the design, prototyping, deployment and evaluation of a multi-agent interactive sound instrument named MOM (Mobile Object for Music). MOM combines a real-time signal processing engine implemented with Pure Data on an embedded Linux platform, with gestural interaction implemented via a variety of analog and digital sensors. Power, sound-input and sound-diffusion subsystems make the instrument autonomous and mobile. This instrument was designed in coordination with the development of an evening-length dance/music performance in which the performing musician is engaged in choreographed movements with the mobile instruments. The design methodology relied on a participatory process that engaged an interdisciplinary team made up of technologists, musicians, composers, choreographers, and dancers. The prototyping process relied on a mix of in-house and out-sourced digital fabrication processes intended to make the open source hardware and software design of the system accessible and affordable for other creators.
@inproceedings{Momeni2018, author = {Momeni, Ali and McNamara, Daniel and Stiles, Jesse}, title = {MOM: an Extensible Platform for Rapid Prototyping and Design of Electroacoustic Instruments}, pages = {65--71}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302681}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0016.pdf} }
Ben Luca Robertson and Luke Dahl. 2018. Harmonic Wand: An Instrument for Microtonal Control and Gestural Excitation. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 72–77. http://doi.org/10.5281/zenodo.1302683
Abstract
Download PDF DOI
The Harmonic Wand is a transducer-based instrument that combines physical excitation, synthesis, and gestural control. Our objective was to design a device that affords exploratory modes of interaction with the performer’s surroundings, as well as precise control over microtonal pitch content and other concomitant parameters. The instrument comprises a hand-held wand containing two piezo-electric transducers affixed to a pair of metal probes. The performer uses the wand to physically excite surfaces in the environment and capture resultant signals. Input materials are then processed using a novel application of Karplus-Strong synthesis, in which these impulses are imbued with discrete resonances. We achieved gestural control over synthesis parameters using a secondary tactile interface, consisting of four force-sensitive resistors (FSRs), a fader, and a momentary switch. As a unique feature of our instrument, we modeled pitch organization and associated parametric controls according to theoretical principles outlined in Harry Partch’s “monophonic fabric” of Just Intonation—specifically his conception of odentities, udentities, and a variable numerary nexus. This system classifies pitch content based upon intervallic structures found in both the overtone and undertone series. Our paper details the procedural challenges in designing the Harmonic Wand.
@inproceedings{Robertson2018, author = {Robertson, Ben Luca and Dahl, Luke}, title = {Harmonic Wand: An Instrument for Microtonal Control and Gestural Excitation}, pages = {72--77}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302683}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0017.pdf} }
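As background for the Karplus-Strong technique named in the Harmonic Wand abstract above (and not the authors' own implementation), a minimal Python sketch of the algorithm might look as follows; the sample rate, damping factor, and noise-burst excitation are illustrative assumptions standing in for the wand's captured surface impulses.

import numpy as np

def karplus_strong(impulse, frequency, duration, sr=44100, damping=0.996):
    """Resonate an excitation impulse at a given pitch (basic Karplus-Strong)."""
    period = int(sr / frequency)              # delay-line length sets the pitch
    delay = np.zeros(period)
    n = min(len(impulse), period)
    delay[:n] = impulse[:n]                   # seed the delay line with the captured impulse
    out = np.zeros(int(sr * duration))
    for i in range(len(out)):
        out[i] = delay[i % period]
        # averaging adjacent samples acts as the loop's low-pass filter
        delay[i % period] = damping * 0.5 * (delay[i % period] + delay[(i + 1) % period])
    return out

# Example: resonate a short noise burst (standing in for a captured surface impulse)
signal = karplus_strong(np.random.uniform(-1, 1, 200), frequency=220.0, duration=1.0)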
McLean J Macionis and Ajay Kapur. 2018. Sansa: A Modified Sansula for Extended Compositional Techniques Using Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 78–81. http://doi.org/10.5281/zenodo.1302685
Abstract
Download PDF DOI
Sansa is an extended sansula, a hyper-instrument that is similar in design and functionality to a kalimba or thumb piano. At the heart of this interface is a series of sensors that are used to augment the tone and expand the performance capabilities of the instrument. The sensor data is further exploited using the machine learning program Wekinator, which gives users the ability to interact and perform with the instrument using several different modes of operation. In this way, Sansa is capable of both solo acoustic performances as well as complex productions that require interactions between multiple technological mediums. Sansa expands the current community of hyper-instruments by demonstrating the ways that hardware and software can extend an acoustic instrument’s functionality and playability in a live performance or studio setting.
@inproceedings{Macionis2018, author = {Macionis, McLean J and Kapur, Ajay}, title = {Sansa: A Modified Sansula for Extended Compositional Techniques Using Machine Learning}, pages = {78--81}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302685}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0018.pdf} }
Luca Turchet and Mathieu Barthet. 2018. Demo of interactions between a performer playing a Smart Mandolin and audience members using Musical Haptic Wearables. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 82–83. http://doi.org/10.5281/zenodo.1302687
Abstract
Download PDF DOI
This demo will showcase technologically mediated interactions between a performer playing a smart musical instrument (SMI) and audience members using Musical Haptic Wearables (MHWs). Smart Instruments are a family of musical instruments characterized by embedded computational intelligence, wireless connectivity, an embedded sound delivery system, and an onboard system for feedback to the player. They offer direct point-to-point communication between each other and other portable sensor-enabled devices connected to local networks and to the Internet. MHWs are wearable devices for audience members, which encompass haptic stimulation, gesture tracking, and wireless connectivity features. This demo will present an architecture enabling multidirectional creative communication between a performer playing a Smart Mandolin and audience members using armband-based MHWs.
@inproceedings{Turchet2018, author = {Turchet, Luca and Barthet, Mathieu}, title = {Demo of interactions between a performer playing a Smart Mandolin and audience members using Musical Haptic Wearables}, pages = {82--83}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302687}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0019.pdf} }
Steven Kemper and Scott Barton. 2018. Mechatronic Expression: Reconsidering Expressivity in Music for Robotic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 84–87. http://doi.org/10.5281/zenodo.1302689
Abstract
Download PDF DOI
Robotic instrument designers tend to focus on the number of sound control parameters and their resolution when trying to develop expressivity in their instruments. These parameters afford greater sonic nuance related to elements of music that are traditionally associated with expressive human performances, including articulation, timbre, dynamics, and phrasing. Equating the capacity for sonic nuance and musical expression stems from the “transitive” perspective that musical expression is an act of emotional communication from performer to listener. However, this perspective is problematic in the case of robotic instruments since we do not typically consider machines to be capable of expressing emotion. Contemporary theories of musical expression focus on an “intransitive” perspective, where musical meaning is generated as an embodied experience. Understanding expressivity from this perspective allows listeners to interpret performances by robotic instruments as possessing their own expressive meaning, even though the performer is a machine. It also enables musicians working with robotic instruments to develop their own vocabulary of expressive gestures unique to mechanical instruments. This paper explores these issues of musical expression, introducing the concept of mechatronic expression as a compositional and design strategy that highlights the musical and performative capabilities unique to robotic instruments.
@inproceedings{Kemper2018, author = {Kemper, Steven and Barton, Scott}, title = {Mechatronic Expression: Reconsidering Expressivity in Music for Robotic Instruments }, pages = {84--87}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302689}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0020.pdf} }
Courtney Brown. 2018. Interactive Tango Milonga: Designing DMIs for the Social Dance Context. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 88–91. http://doi.org/10.5281/zenodo.1302693
Abstract
Download PDF DOI
Musical participation has brought individuals together in on-going communities throughout human history, aiding in the kinds of social integration essential for wellbeing. The design of Digital Musical Instruments (DMIs), however, has generally been driven by idiosyncratic artistic concerns, Western art music and dance traditions of expert performance, and short-lived interactive art installations engaging a broader public of musical novices. These DMIs rarely engage with the problems of on-going use in musical communities, such as social dance, that have existing performance idioms, repertoire, and social codes, and whose participants represent the full learning curve of musical skill. Our project, Interactive Tango Milonga, an interactive Argentine tango dance system for social dance, addresses these challenges in order to foster connection, the feeling of intense relation between dance partners, music, and the larger tango community.
@inproceedings{Brownb2018, author = {Brown, Courtney}, title = {Interactive Tango Milonga: Designing {DMI}s for the Social Dance Context }, pages = {88--91}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302693}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0021.pdf} }
Rebecca Kleinberger. 2018. Vocal Musical Expression with a Tactile Resonating Device and its Psychophysiological Effects. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 92–95. http://doi.org/10.5281/zenodo.1302693
Abstract
Download PDF DOI
This paper presents an experiment to investigate how new types of vocal practices can affect psychophysiological activity. We know that health can influence the voice, but can a certain use of the voice influence health through modification of mental and physical state? This study took place in the setting of the Vocal Vibrations installation. For the experiment, participants engage in a multisensory vocal exercise with a limited set of guidance to obtain a wide spectrum of vocal performances across participants. We compare characteristics of those vocal practices to the participants’ heart rate, breathing rate, electrodermal activity and mental states. We obtained significant results suggesting that we can correlate psychophysiological states with characteristics of the vocal practice if we also take into account biographical information, and in particular measurement of how much people “like” their own voice.
@inproceedings{Kleinberger2018, author = {Kleinberger, Rebecca}, title = {Vocal Musical Expression with a Tactile Resonating Device and its Psychophysiological Effects}, pages = {92--95}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302693}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0022.pdf} }
Patrick Palsbröker, Christine Steinmeier, and Dominic Becking. 2018. A Framework for Modular VST-based NIMEs Using EDA and Dependency Injection. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 96–101. http://doi.org/10.5281/zenodo.1302653
Abstract
Download PDF DOI
In order to facilitate access to playing music spontaneously, the prototype of an instrument which allows a more natural learning approach was developed as part of the research project Drum-Dance-Music-Machine. The result was a modular system consisting of several VST plug-ins, which on the one hand provides a drum interface to create sounds and tones and on the other hand generates or manipulates music through dance movement, in order to simplify the understanding of more abstract characteristics of music. This paper describes the development of a new software concept for the prototype, which since then has been further developed and evaluated several times. This will improve the maintainability and extensibility of the system and eliminate design weaknesses. To do so, the existing system will first be analyzed and requirements for a new framework, which is based on the concepts of event-driven architecture and dependency injection, will be defined. The components are then transferred to the new system and their performance is assessed. The approach chosen in this case study and the lessons learned are intended to provide a viable solution for solving similar problems in the development of modular VST-based NIMEs.
@inproceedings{Palsbröker2018, author = {Palsbröker, Patrick and Steinmeier, Christine and Becking, Dominic}, title = {A Framework for Modular VST-based NIMEs Using EDA and Dependency Injection}, pages = {96--101}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302653}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0023.pdf} }
Jack Atherton and Ge Wang. 2018. Chunity: Integrated Audiovisual Programming in Unity. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 102–107. http://doi.org/10.5281/zenodo.1302695
Abstract
Download PDF DOI
Chunity is a programming environment for the design of interactive audiovisual games, instruments, and experiences. It embodies an audio-driven, sound-first approach that integrates audio programming and graphics programming in the same workflow, taking advantage of strongly-timed audio programming features of the ChucK programming language and the state-of-the-art real-time graphics engine found in Unity. We describe both the system and its intended workflow for the creation of expressive audiovisual works. Chunity was evaluated as the primary software platform in a computer music and design course, where students created a diverse assortment of interactive audiovisual software. We present results from the evaluation and discuss Chunity’s usability, utility, and aesthetics as a way of working. Through these, we argue for Chunity as a unique and useful way to program sound, graphics, and interaction in tandem, giving users the flexibility to use a game engine to do much more than "just" make games.
@inproceedings{Atherton2018, author = {Atherton, Jack and Wang, Ge}, title = {Chunity: Integrated Audiovisual Programming in Unity}, pages = {102--107}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302695}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0024.pdf} }
Steffan Carlos Ianigro and Oliver Bown. 2018. Exploring Continuous Time Recurrent Neural Networks through Novelty Search. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 108–113. http://doi.org/10.5281/zenodo.1302697
Abstract
Download PDF DOI
In this paper we expand on prior research into the use of Continuous Time Recurrent Neural Networks (CTRNNs) as evolvable generators of musical structures such as audio waveforms. This type of neural network has a compact structure and is capable of producing a large range of temporal dynamics. Due to these properties, we believe that CTRNNs combined with evolutionary algorithms (EAs) could offer musicians many creative possibilities for the exploration of sound. In prior work, we have explored the use of interactive and target-based EA designs to tap into the creative possibilities of CTRNNs. Our results have shown promise for the use of CTRNNs in the audio domain. However, we feel that neither EA design allows both open-ended discovery and effective navigation of the CTRNN audio search space by musicians. Within this paper, we explore the possibility of using novelty search as an alternative algorithm that facilitates both open-ended and rapid discovery of the CTRNN creative search space.
@inproceedings{Ianigro2018, author = {Ianigro, Steffan Carlos and Bown, Oliver}, title = {Exploring Continuous Time Recurrent Neural Networks through Novelty Search}, pages = {108--113}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302697}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0025.pdf} }
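For readers unfamiliar with CTRNNs, the standard node dynamics can be integrated in a few lines of Python; this is a generic illustration of the network type referenced in the abstract above, not the authors' audio configuration, and the network size, weights, and time constants below are arbitrary placeholder values.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Arbitrary 3-node CTRNN: tau_i * dy_i/dt = -y_i + sum_j w_ji * sigmoid(y_j + b_j)
rng = np.random.default_rng(0)
n, sr = 3, 44100
dt = 1.0 / sr
w = rng.uniform(-16, 16, (n, n))     # connection weights (the evolvable parameters)
b = rng.uniform(-4, 4, n)            # biases
tau = rng.uniform(0.001, 0.05, n)    # time constants shape the temporal dynamics
y = np.zeros(n)

samples = np.empty(sr)               # one second of output taken from node 0
for t in range(sr):
    y += dt / tau * (-y + w.T @ sigmoid(y + b))   # Euler step of the CTRNN equation
    samples[t] = np.tanh(y[0])                    # squash node 0's state into audio range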
John Bowers and Owen Green. 2018. All the Noises: Hijacking Listening Machines for Performative Research. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 114–119. http://doi.org/10.5281/zenodo.1302699
Abstract
Download PDF DOI
Research into machine listening has intensified in recent years creating a variety of techniques for recognising musical features suitable, for example, in musicological analysis or commercial application in song recognition. Within NIME, several projects exist seeking to make these techniques useful in real-time music making. However, we debate whether the functionally-oriented approaches inherited from engineering domains that much machine listening research manifests are fully suited to the exploratory, divergent, boundary-stretching, uncertainty-seeking, playful and irreverent orientations of many artists. To explore this, we engaged in a concerted collaborative design exercise in which many different listening algorithms were implemented and presented with input which challenged their customary range of application and the implicit norms of musicality which research can take for granted. An immersive 3D spatialised multichannel environment was created in which the algorithms could be explored in a hybrid installation/performance/lecture form of research presentation. The paper closes with reflections on the creative value of ‘hijacking’ formal approaches into deviant contexts, the typically undocumented practical know-how required to make algorithms work, the productivity of a playfully irreverent relationship between engineering and artistic approaches to NIME, and a sketch of a sonocybernetic aesthetics for our work.
@inproceedings{Bowers2018, author = {Bowers, John and Green, Owen}, title = {All the Noises: Hijacking Listening Machines for Performative Research}, pages = {114--119}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302699}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0026.pdf} }
Rodrigo Schramm, Federico Visi, André Brasil, and Marcelo O Johann. 2018. A polyphonic pitch tracking embedded system for rapid instrument augmentation. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 120–125. http://doi.org/10.5281/zenodo.1302650
Abstract
Download PDF DOI
This paper presents a system for easily augmenting polyphonic pitched instruments. The entire system is designed to run on a low-cost embedded computer, suitable for live performance and easy to customise for different use cases. The core of the system implements real-time spectrum factorisation, decomposing polyphonic audio input signals into music note activations. New instruments can be easily added to the system with the help of custom spectral template dictionaries. Instrument augmentation is achieved by replacing or mixing the instrument’s original sounds with a large variety of synthetic or sampled sounds, which follow the polyphonic pitch activations.
@inproceedings{Schramm2018, author = {Schramm, Rodrigo and Visi, Federico and Brasil, André and Johann, Marcelo O}, title = {A polyphonic pitch tracking embedded system for rapid instrument augmentation}, pages = {120--125}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302650}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0027.pdf} }
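The spectrum-factorisation step described in the abstract above, decomposing each spectral frame into note activations against a template dictionary, can be illustrated with non-negative least squares; the random templates, frame, and threshold in this sketch are placeholders for the paper's per-instrument dictionaries, not its actual model.

import numpy as np
from scipy.optimize import nnls

# Hypothetical dictionary: one spectral template per note (n_bins x n_notes),
# e.g. measured magnitude spectra of single notes of the target instrument.
n_bins, n_notes = 1025, 48
templates = np.random.rand(n_bins, n_notes)

def note_activations(frame, templates, threshold=0.1):
    """Approximate a magnitude-spectrum frame as a non-negative mix of note templates."""
    activations, _ = nnls(templates, frame)        # frame ~= templates @ activations
    return np.flatnonzero(activations > threshold) # indices of currently active notes

frame = np.random.rand(n_bins)                     # stand-in for one FFT magnitude frame
active_notes = note_activations(frame, templates)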
Koray Tahiroglu, Michael Gurevich, and R. Benjamin Knapp. 2018. Contextualising Idiomatic Gestures in Musical Interactions with NIMEs. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 126–131. http://doi.org/10.5281/zenodo.1302701
Abstract
Download PDF DOI
This paper introduces various ways that idiomatic gestures emerge in performance practice with new musical instruments. It demonstrates that idiomatic gestures can play an important role in the development of personalized performance practices that can be the basis for the development of style and expression. Three detailed examples – biocontrollers, accordion-inspired instruments, and a networked intelligent controller – illustrate how a complex suite of factors throughout the design, composition and performance processes can influence the development of idiomatic gestures. We argue that the explicit consideration of idiomatic gestures throughout the life cycle of new instruments can facilitate the emergence of style and give rise to performances that can develop rich layers of meaning.
@inproceedings{Tahiroglu2018, author = {Tahiroglu, Koray and Gurevich, Michael and Knapp, R. Benjamin}, title = {Contextualising Idiomatic Gestures in Musical Interactions with NIMEs}, pages = {126--131}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302701}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0028.pdf} }
Lamtharn Hantrakul. 2018. GestureRNN: A neural gesture system for the Roli Lightpad Block. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 132–137. http://doi.org/10.5281/zenodo.1302703
Abstract
Download PDF DOI
Machine learning and deep learning have recently made a large impact in the artistic community. In many of these applications, however, the model is often used to render the high-dimensional output directly, e.g. every individual pixel in the final image. Humans arguably operate in much lower-dimensional spaces during the creative process, e.g. the broad movements of a brush. In this paper, we design a neural gesture system for music generation based around this concept. Instead of directly generating audio, we train a Long Short-Term Memory (LSTM) recurrent neural network to generate instantaneous position and pressure on the Roli Lightpad instrument. These generated coordinates, in turn, give rise to the sonic output defined in the synth engine. The system relies on learning these movements from a musician who has already developed a palette of musical gestures idiomatic to the Lightpad. Unlike many deep learning systems that render high-dimensional output, our low-dimensional system can be run in real time, enabling the first real-time gestural duet of its kind between a player and a recurrent neural network on the Lightpad instrument.
@inproceedings{Hantrakul2018, author = {Hantrakul, Lamtharn}, title = {GestureRNN: A neural gesture system for the Roli Lightpad Block}, pages = {132--137}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302703}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0029.pdf} }
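To indicate the scale of model involved (this is not the author's architecture, training data, or feature set), an LSTM that predicts the next (x, y, pressure) triple from a window of previous triples could be sketched in PyTorch as follows.

import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    """Predict the next (x, y, pressure) triple from a window of previous triples."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 3)

    def forward(self, window):                 # window: (batch, time, 3)
        out, _ = self.lstm(window)
        return self.head(out[:, -1, :])        # next touch position and pressure

model = GestureLSTM()
next_point = model(torch.rand(1, 32, 3))       # would be fed to the synth engine and back into the window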
Balandino Di Donato, Jamie Bullock, and Atau Tanaka. 2018. Myo Mapper: a Myo armband to OSC mapper. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 138–143. http://doi.org/10.5281/zenodo.1302705
Abstract
Download PDF DOI
Myo Mapper is a free and open source cross-platform application to map data from the gestural device Myo armband into Open Sound Control (OSC) messages. It represents a ‘quick and easy’ solution for exploring the Myo’s potential for realising new interfaces for musical expression. Together with details of the software, this paper reports some applications in which Myo Mapper has been successfully used and a qualitative evaluation. We then propose guidelines for using Myo data in interactive artworks based on insight gained from the works described and the evaluation. Findings show that Myo Mapper empowers artists and non-expert developers to easily take advantage of high-level features of Myo data for realising interactive artistic works. It also facilitates the recognition of poses and gestures beyond those included with the product by using third-party interactive machine learning software.
@inproceedings{DiDonato2018, author = {Di Donato, Balandino and Bullock, Jamie and Tanaka, Atau}, title = {Myo Mapper: a Myo armband to OSC mapper}, pages = {138--143}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302705}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0030.pdf} }
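The general pattern of forwarding armband data as OSC messages can be shown with the python-osc package; the addresses and values below are invented for illustration and do not reproduce Myo Mapper's actual OSC namespace, which is documented with the software itself.

from pythonosc import udp_client

# Send gestural data to a synthesis environment listening on localhost:9000.
client = udp_client.SimpleUDPClient("127.0.0.1", 9000)

# Hypothetical addresses; Myo Mapper defines its own OSC namespace.
client.send_message("/myo/orientation", [0.12, -0.40, 0.88])                  # e.g. scaled yaw/pitch/roll
client.send_message("/myo/emg", [0.05, 0.31, 0.10, 0.02, 0.44, 0.08, 0.12, 0.27])  # eight EMG channels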
Federico Visi and Luke Dahl. 2018. Real-Time Motion Capture Analysis and Music Interaction with the Modosc Descriptor Library. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 144–147. http://doi.org/10.5281/zenodo.1302707
Abstract
Download PDF DOI
We present modosc, a set of Max abstractions designed for computing motion descriptors from raw motion capture data in real time. The library contains methods for extracting descriptors useful for expressive movement analysis and sonic interaction design. modosc is designed to address the data handling and synchronization issues that often arise when working with complex marker sets. This is achieved by adopting a multiparadigm approach facilitated by odot and Open Sound Control to overcome some of the limitations of conventional Max programming, and structure incoming and outgoing data streams in a meaningful and easily accessible manner. After describing the contents of the library and how data streams are structured and processed, we report on a sonic interaction design use case involving motion feature extraction and machine learning.
@inproceedings{Visi2018, author = {Visi, Federico and Dahl, Luke}, title = {Real-Time Motion Capture Analysis and Music Interaction with the Modosc Descriptor Library}, pages = {144--147}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302707}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0031.pdf} }
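modosc itself is a set of Max abstractions, so purely as an illustration of the kind of descriptor it computes, the sketch below derives speed and acceleration magnitudes from a marker trajectory in Python; the frame rate and synthetic trajectory are assumptions.

import numpy as np

def motion_descriptors(positions, fps=120.0):
    """Velocity and acceleration magnitudes from a (frames x 3) marker trajectory."""
    velocity = np.gradient(positions, 1.0 / fps, axis=0)       # per-axis velocity
    acceleration = np.gradient(velocity, 1.0 / fps, axis=0)    # per-axis acceleration
    speed = np.linalg.norm(velocity, axis=1)
    accel_mag = np.linalg.norm(acceleration, axis=1)
    return speed, accel_mag

# Stand-in for raw motion capture data: two seconds of one marker at 120 fps
trajectory = np.cumsum(np.random.randn(240, 3) * 0.001, axis=0)
speed, accel_mag = motion_descriptors(trajectory)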
Cagan Arslan, Florent Berthaut, Jean Martinet, Ioan Marius Bilasco, and Laurent Grisoni. 2018. The Phone with the Flow: Combining Touch + Optical Flow in Mobile Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 148–151. http://doi.org/10.5281/zenodo.1302709
Abstract
Download PDF DOI
Mobile devices have been a promising platform for musical performance thanks to the various sensors readily available on board. In particular, mobile cameras can provide rich input as they can capture a wide variety of user gestures or environment dynamics. However, this raw camera input only provides continuous parameters and requires expensive computation. In this paper, we propose to combine motion/gesture input with the touch input, in order to filter movement information both temporally and spatially, thus increasing expressiveness while reducing computation time. We present a design space which demonstrates the diversity of interactions that our technique enables. We also report the results of a user study in which we observe how musicians appropriate the interaction space with an example instrument.
@inproceedings{Arslan2018, author = {Arslan, Cagan and Berthaut, Florent and Martinet, Jean and Bilasco, Ioan Marius and Grisoni, Laurent}, title = {The Phone with the Flow: Combining Touch + Optical Flow in Mobile Instruments}, pages = {148--151}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302709}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0032.pdf} }
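The core idea of filtering camera motion by touch, measuring optical flow only around the touched region, can be sketched with OpenCV; the window size, synthetic frames, and the suggestion of mapping the result to a filter cutoff are assumptions, not the authors' design.

import cv2
import numpy as np

def flow_magnitude_at_touch(prev_gray, curr_gray, touch_xy, radius=40):
    """Mean optical-flow magnitude in a small window around a touch point."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y = touch_xy
    window = flow[max(0, y - radius):y + radius, max(0, x - radius):x + radius]
    return float(np.linalg.norm(window, axis=2).mean())  # could drive e.g. a filter cutoff

# Example with two synthetic greyscale frames and one touch point
prev = np.random.randint(0, 255, (480, 640), np.uint8)
curr = np.random.randint(0, 255, (480, 640), np.uint8)
amount = flow_magnitude_at_touch(prev, curr, touch_xy=(320, 240))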
Lars Engeln, Dietrich Kammer, Leon Brandt, and Rainer Groh. 2018. Multi-Touch Enhanced Visual Audio-Morphing. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 152–155. http://doi.org/10.5281/zenodo.1302711
Abstract
Download PDF DOI
Many digital interfaces for audio effects still resemble racks and cases of their hardware counterparts. For instance, DSP algorithms are often adjusted via direct value input, sliders, or knobs. While recent research has started to experiment with the capabilities offered by modern interfaces, there are no examples for productive applications such as audio-morphing. Audio-morphing as a special field of DSP has a high complexity for the morph itself and for the parametrization of the transition between two sources. We propose a multi-touch enhanced interface for visual audio-morphing. This interface visualizes the internal processing and allows direct manipulation of the morphing parameters in the visualization. Using multi-touch gestures to manipulate audio-morphing in a visual way, sound design and music production become more unrestricted and creative.
@inproceedings{Engeln2018, author = {Engeln, Lars and Kammer, Dietrich and Brandt, Leon and Groh, Rainer}, title = {Multi-Touch Enhanced Visual Audio-Morphing}, pages = {152--155}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302711}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0033.pdf} }
Anıl Çamcı. 2018. GrainTrain: A Hand-drawn Multi-touch Interface for Granular Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 156–161. http://doi.org/10.5281/zenodo.1302529
Abstract
Download PDF DOI
We describe an innovative multi-touch performance tool for real-time granular synthesis based on hand-drawn waveform paths. GrainTrain is a cross-platform web application that can run on both desktop and mobile computers, including tablets and phones. In this paper, we first offer an analysis of existing granular synthesis tools from an interaction standpoint, and outline a taxonomy of common interaction paradigms used in their designs. We then delineate the implementation of GrainTrain, and its unique approach to controlling real-time granular synthesis. We describe practical scenarios in which GrainTrain enables new performance possibilities. Finally, we discuss the results of a user study, and provide reports from expert users who evaluated GrainTrain.
@inproceedings{Çamcı2018, author = {Çamcı, Anıl}, title = {GrainTrain: A Hand-drawn Multi-touch Interface for Granular Synthesis}, pages = {156--161}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302529}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0034.pdf} }
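For context on what GrainTrain controls, basic granular synthesis amounts to summing many short enveloped windows of a source sound; the grain count, grain length, and sine-wave source below are placeholder values, and the hand-drawn path control described in the paper is not modelled here.

import numpy as np

def granulate(source, n_grains=200, grain_len=2048, out_len=44100 * 2):
    """Scatter short Hann-windowed grains of `source` across an output buffer."""
    out = np.zeros(out_len)
    window = np.hanning(grain_len)
    rng = np.random.default_rng(1)
    for _ in range(n_grains):
        src_start = rng.integers(0, len(source) - grain_len)   # where the grain is read from
        dst_start = rng.integers(0, out_len - grain_len)        # where it lands in the output
        out[dst_start:dst_start + grain_len] += source[src_start:src_start + grain_len] * window
    return out / np.abs(out).max()

source = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)     # stand-in for a loaded sample
texture = granulate(source)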
Gus Xia and Roger B. Dannenberg. 2018. ShIFT: A Semi-haptic Interface for Flute Tutoring. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 162–167. http://doi.org/10.5281/zenodo.1302531
Abstract
Download PDF DOI
The traditional instrument learning procedure is time-consuming; it begins with learning music notation and necessitates layers of sophistication and abstraction. Haptic interfaces open another door to the music world for the vast majority of talentless beginners when traditional training methods are not effective. However, existing haptic interfaces can only be used to learn specially designed pieces, with great restrictions on duration and pitch range, because it is only feasible to haptically guide part of the performance motion for most instruments. Our study breaks such restrictions using a semi-haptic guidance method. For the first time, the pitch range of the haptically learned pieces goes beyond an octave (with the fingering motion covering most of the possible choices) and the duration of the learned pieces covers a whole phrase. This significant change leads to a more realistic instrument learning process. Experiments show that the semi-haptic interface is effective as long as learners are not “tone deaf”. Using our prototype device, the learning rate is about 30% faster compared with learning from videos.
@inproceedings{xia2018, author = {gus xia and Dannenberg, Roger B.}, title = {ShIFT: A Semi-haptic Interface for Flute Tutoring}, pages = {162--167}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302531}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0035.pdf} }
Fabio Morreale, Andrew P. McPherson, and Marcelo Wanderley. 2018. NIME Identity from the Performer’s Perspective. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 168–173. http://doi.org/10.5281/zenodo.1302533
Abstract
Download PDF DOI
The term ‘NIME’ — New Interfaces for Musical Expression — has come to signify both technical and cultural characteristics. Not all new musical instruments are NIMEs, and not all NIMEs are defined as such for the sole ephemeral condition of being new. So, what are the typical characteristics of NIMEs and what are their roles in performers’ practice? Is there a typical NIME repertoire? This paper aims to address these questions with a bottom-up approach. We reflect on the answers of 78 NIME performers to an online questionnaire discussing their performance experience with NIMEs. The results of our investigation explore the role of NIMEs in the performers’ practice and identify the values that are common among performers. We find that most NIMEs are viewed as exploratory tools created by and for performers, and that they are constantly in development and almost never in a finished state. The findings of our survey also reflect upon virtuosity with NIMEs, whose peculiar performance practice results in learning trajectories that often do not lead to the development of virtuosity as it is commonly understood in traditional performance.
@inproceedings{Morreale2018, author = {Morreale, Fabio and McPherson, Andrew P. and Wanderley, Marcelo}, title = {NIME Identity from the Performer's Perspective}, pages = {168--173}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302533}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0036.pdf} }
Anna Xambó. 2018. Who Are the Women Authors in NIME?–Improving Gender Balance in NIME Research. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 174–177. http://doi.org/10.5281/zenodo.1302535
Abstract
Download PDF DOI
In recent years, there has been an increase in awareness of the underrepresentation of women in the sound and music computing fields. The New Interfaces for Musical Expression (NIME) conference is not an exception, with a number of open questions remaining around the issue. In the present paper, we study the presence and evolution over time of women authors in NIME from the beginning of the conference in 2001 until 2017. We discuss the consequences of this gender imbalance and potential solutions by summarizing the actions taken by a number of worldwide initiatives that have put an effort into making women’s work visible in our field, with a particular emphasis on Women in Music Tech (WiMT), a student-led organization that aims to encourage more women to join music technology, as a case study. We conclude with a hope for an improvement in the representation of women in NIME by presenting WiNIME, a public online database that details who the women authors in NIME are.
@inproceedings{Xambó2018, author = {Xambó, Anna}, title = {Who Are the Women Authors in NIME?–Improving Gender Balance in NIME Research}, pages = {174--177}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302535}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0037.pdf} }
Sarah Reid, Sara Sithi-Amnuai, and Ajay Kapur. 2018. Women Who Build Things: Gestural Controllers, Augmented Instruments, and Musical Mechatronics. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 178–183. http://doi.org/10.5281/zenodo.1302537
Abstract
Download PDF DOI
This paper presents a collection of hardware-based technologies for live performance developed by women over the last few decades. The field of music technology and interface design has a significant gender imbalance, with men greatly outnumbering women. The purpose of this paper is to promote the visibility and representation of women in this field, and to encourage discussion on the importance of mentorship and role models for young women and girls in music technology.
@inproceedings{Reid2018, author = {Reid, Sarah and Sithi-Amnuai, Sara and Kapur, Ajay}, title = {Women Who Build Things: Gestural Controllers, Augmented Instruments, and Musical Mechatronics}, pages = {178--183}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302537}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0038.pdf} }
Robert H Jack, Jacob Harrison, Fabio Morreale, and Andrew P. McPherson. 2018. Democratising DMIs: the relationship of expertise and control intimacy. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 184–189. http://doi.org/10.5281/zenodo.1302539
Abstract
Download PDF DOI
An oft-cited aspiration of digital musical instrument (DMI) design is to create instruments, in the words of Wessel and Wright, with a ‘low entry fee and no ceiling on virtuosity’. This is a difficult task to achieve: many new instruments are aimed at either the expert or the amateur musician, with few instruments catering for both. There is often a balance between learning curve and the nuance of musical control in DMIs. In this paper we present a study conducted with non-musicians and guitarists playing guitar-derivative DMIs with variable levels of control intimacy: how the richness and nuance of a performer’s movement translates into the musical output of an instrument. Findings suggest a significant difference in preference for levels of control intimacy between the guitarists and the non-musicians. In particular, the guitarists unanimously preferred the richer of the two settings, whereas the non-musicians generally preferred the setting with lower richness. This difference is notable because it is often taken as a given that increasing richness is a way to make instruments more enjoyable to play; however, this result only seems to hold for expert players.
@inproceedings{Jack2018, author = {Jack, Robert H and Harrison, Jacob and Morreale, Fabio and McPherson, Andrew P.}, title = {Democratising {DMI}s: the relationship of expertise and control intimacy}, pages = {184--189}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302539}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0039.pdf} }
Adnan Marquez-Borbon and Juan Pablo Martinez-Avila. 2018. The Problem of DMI Adoption and Longevity: Envisioning a NIME Performance Pedagogy. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 190–195. http://doi.org/10.5281/zenodo.1302541
Abstract
Download PDF DOI
This paper addresses the prevailing longevity problem of digital musical instruments (DMIs) in NIME research and design by proposing a holistic system design approach. Despite recent efforts to examine the main contributing factors of DMIs falling into obsolescence, such attempts to remedy this issue largely place focus on the artifacts themselves, their design processes, and technologies. However, few existing studies have attempted to proactively build a community around technological platforms for DMIs, whilst bearing in mind the social dynamics and activities necessary for a budding community. We observe that such attempts, while important in their undertaking, are limited in their scope. In this paper we argue that achieving longevity must be addressed beyond the device itself and must tackle broader ecosystemic factors. We hypothesize that a longevous DMI design must not only take into account a target community but may also require a non-traditional pedagogical system that sustains artistic practice.
@inproceedings{MarquezBorbon2018, author = {Marquez-Borbon, Adnan and Martinez-Avila, Juan Pablo}, title = {The Problem of DMI Adoption and Longevity: Envisioning a NIME Performance Pedagogy}, pages = {190--195}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302541}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0040.pdf} }
Charles Patrick Martin, Alexander Refsum Jensenius, and Jim Torresen. 2018. Composing an Ensemble Standstill Work for Myo and Bela. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 196–197. http://doi.org/10.5281/zenodo.1302543
Abstract
Download PDF DOI
This paper describes the process of developing a standstill performance work using the Myo gesture control armband and the Bela embedded computing platform. The combination of Myo and Bela allows a portable and extensible version of the standstill performance concept while introducing muscle tension as an additional control parameter. We describe the technical details of our setup and introduce Myo-to-Bela and Myo-to-OSC software bridges that assist with prototyping compositions using the Myo controller.
@inproceedings{Martin2018, author = {Martin, Charles Patrick and Jensenius, Alexander Refsum and Torresen, Jim}, title = {Composing an Ensemble Standstill Work for Myo and Bela}, pages = {196--197}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302543}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0041.pdf} }
Alex Nieva, Johnty Wang, Joseph Malloch, and Marcelo Wanderley. 2018. The T-Stick: Maintaining a 12 year-old Digital Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 198–199. http://doi.org/10.5281/zenodo.1302545
Abstract
Download PDF DOI
This paper presents the work to maintain several copies of the digital musical instrument (DMI) called the T-Stick in the hopes of extending their useful lifetime. The T-Sticks were originally conceived in 2006 and 20 copies have been built over the last 12 years. While they all preserve the original design concept, their evolution resulted in variations in the choice of microcontrollers and sensors. We worked with eight copies of the second- and fourth-generation T-Sticks to overcome issues related to the aging of components, changes in external software, lack of documentation, and in general, the problem of technical maintenance.
@inproceedings{Nieva2018, author = {Nieva, Alex and Wang, Johnty and Malloch, Joseph and Wanderley, Marcelo}, title = {The T-Stick: Maintaining a 12 year-old Digital Musical Instrument}, pages = {198--199}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302545}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0042.pdf} }
Christopher Dewey and Jonathan P. Wakefield. 2018. MIDI Keyboard Defined DJ Performance System. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 200–201. http://doi.org/10.5281/zenodo.1302547
Abstract
Download PDF DOI
This paper explores the use of the ubiquitous MIDI keyboard to control a DJ performance system. The prototype system uses a two-octave keyboard with each octave controlling one audio track. Each audio track has four two-bar loops which play in synchronisation and are switchable by the respective octave’s first four black keys. The top key of the keyboard toggles between frequency filter mode and time slicer mode. In frequency filter mode the white keys provide seven bands of latched frequency filtering. In time slicer mode the white keys plus the black B-flat key provide latched on/off control of eight time slices of the loop. The system was informally evaluated by nine subjects. The frequency filter mode combined with loop switching worked well with the MIDI keyboard interface. All subjects agreed that all tools had creative performance potential that could be developed by further practice.
@inproceedings{Dewey2018, author = {Dewey, Christopher and Wakefield, Jonathan P.}, title = {{MIDI} Keyboard Defined DJ Performance System}, pages = {200--201}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302547}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0043.pdf} }
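A rough idea of how such note-to-function mappings can be prototyped is shown below with the mido package; the octave origin, key assignments, and handler are assumptions for illustration and do not reproduce the authors' prototype.

import mido  # requires a MIDI backend such as python-rtmidi

def handle_note(note):
    octave, degree = divmod(note - 36, 12)      # assume the two octaves start at C2 (MIDI note 36)
    first_black = [1, 3, 6, 8]                  # C#, D#, F#, G#: the octave's first four black keys
    if degree in first_black:
        print(f"track {octave}: switch to loop {first_black.index(degree)}")
    else:
        print(f"track {octave}: toggle filter band / time slice for key {degree}")

with mido.open_input() as port:                 # default MIDI input device
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            handle_note(msg.note)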
Trond Engum and Otto Jonassen Wittner. 2018. Democratizing Interactive Music Production over the Internet. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 202–203. http://doi.org/10.5281/zenodo.1302549
Abstract
Download PDF DOI
This paper describes an ongoing research project which addresses challenges and opportunities when collaborating interactively in real time in a "virtual" sound studio with several partners in different locations. "Virtual" in this context refers to an interconnected and inter-domain studio environment consisting of several local production systems connected to public and private networks. This paper reports experiences and challenges related to two different production scenarios conducted in 2017.
@inproceedings{Engum2018, author = {Engum, Trond and Wittner, Otto Jonassen}, title = {Democratizing Interactive Music Production over the Internet}, pages = {202--203}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302549}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0044.pdf} }
Jean-Francois Charles, Carlos Cotallo Solares, Carlos Toro Tobon, and Andrew Willette. 2018. Using the Axoloti Embedded Sound Processing Platform to Foster Experimentation and Creativity. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 204–205. http://doi.org/10.5281/zenodo.1302551
Abstract
Download PDF DOI
This paper describes how the Axoloti platform is well suited to teach a beginners’ course about new electro-acoustic musical instruments and how it fits the needs of artists who want to work with an embedded sound processing platform and get creative at the crossroads of acoustics and electronics. First, we present the criteria used to choose a platform for the course titled "Creating New Musical Instruments" given at the University of Iowa in the Fall of 2017. Then, we explain why we chose the Axoloti board and development environment.
@inproceedings{Charles2018, author = {Charles, Jean-Francois and Cotallo Solares, Carlos and Toro Tobon, Carlos and Willette, Andrew}, title = {Using the Axoloti Embedded Sound Processing Platform to Foster Experimentation and Creativity}, pages = {204--205}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302551}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0045.pdf} }
Kyriakos Tsoukalas and Ivica Ico Bukvic. 2018. Introducing a K-12 Mechatronic NIME Kit. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 206–209. http://doi.org/10.5281/zenodo.1302553
Abstract
Download PDF DOI
The following paper introduces a new mechatronic NIME kit that uses new additions to the Pd-L2Ork visual programming environment and its K-12 learning module. It is designed to facilitate the creation of simple mechatronic systems for physical sound production in K-12 and production scenarios. The new set of objects builds on the existing support for the Raspberry Pi platform to also include the use of electric actuators via the microcomputer’s GPIO system. Moreover, we discuss implications of the newly introduced kit in creative and K-12 education scenarios by sharing observations from a series of pilot workshops, with particular focus on using mechatronic NIMEs as a catalyst for the development of programming skills.
@inproceedings{Tsoukalas2018, author = {Tsoukalas, Kyriakos and Bukvic, Ivica Ico}, title = {Introducing a K-12 Mechatronic NIME Kit}, pages = {206--209}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302553}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0046.pdf} }
Daniel Bennett, Peter Bennett, and Anne Roudaut. 2018. Neurythmic: A Rhythm Creation Tool Based on Central Pattern Generators. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 210–215. http://doi.org/10.5281/zenodo.1302555
Abstract
Download PDF DOI
We describe the development of Neurythmic: an interactive system for the creation and performance of fluid, expressive musical rhythms using Central Pattern Generators (CPGs). CPGs are neural networks which generate adaptive rhythmic signals. They simulate structures in animals which underlie behaviours such as heartbeat, gut peristalsis and complex motor control. Neurythmic is the first such system to use CPGs for interactive rhythm creation. We discuss how Neurythmic uses the entrainment behaviour of these networks to support the creation of rhythms while avoiding the rigidity of grid quantisation approaches. As well as discussing the development, design and evaluation of Neurythmic, we discuss relevant properties of the CPG networks used (Matsuoka’s Neural Oscillator), and describe methods for their control. Evaluation with expert and professional musicians shows that Neurythmic is a versatile tool, adapting well to a range of quite different musical approaches.
@inproceedings{Bennett2018, author = {Bennett, Daniel and Bennett, Peter and Roudaut, Anne}, title = {Neurythmic: A Rhythm Creation Tool Based on Central Pattern Generators}, pages = {210--215}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302555}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0047.pdf} }
James Granger, Mateo Aviles, Joshua Kirby, et al. 2018. Evaluating LED-based interface for Lumanote composition creation tool. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 216–221. http://doi.org/10.5281/zenodo.1302557
Abstract
Download PDF DOI
Composing music typically requires years of music theory experience and knowledge that includes, but is not limited to, chord progression, melody composition theory, and an understanding of whole-step/half-step passing tones. For that reason, certain songwriters, such as singers, may find it necessary to hire experienced pianists to help compose their music. In order to facilitate the process for beginner and aspiring musicians, we have developed Lumanote, a music composition tool that aids songwriters by presenting real-time suggestions on appropriate melody notes and chord progressions. While a preliminary evaluation yielded favorable results for beginners, many commented on the difficulty of having to map the note suggestions displayed on the on-screen interface to the physical keyboard they were playing. This paper presents the resulting solution: an LED-based feedback system that is designed to be directly attached to any standard MIDI keyboard. This peripheral maps note suggestions directly onto the physical keys of a musical keyboard. A study with 22 participants was conducted to compare the effectiveness of the new LED-based system with the existing computer interface, finding that the vast majority of users preferred the LED system. Three experienced musicians also judged and ranked the compositions, noting significant improvement in song quality when using either system, and citing comparable quality between compositions created with either interface.
@inproceedings{Granger2018, author = {Granger, James and Aviles, Mateo and Kirby, Joshua and Griffin, Austin and Yoon, Johnny and Lara-Garduno, Raniero A. and Hammond, Tracy}, title = {Evaluating LED-based interface for Lumanote composition creation tool}, pages = {216--221}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302557}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0048.pdf} }
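As a rough illustration of how such a peripheral can map suggestions to physical keys, the sketch below converts suggested MIDI notes into LED indices on a strip mounted above the keyboard. The keyboard range, colour values, and the toy chord-tone rule are assumptions, not details of the Lumanote system.

# Illustrative sketch: map suggested MIDI notes to LED indices on a strip
# mounted above a 61-key keyboard (lowest key = MIDI note 36). The colour
# scheme and the suggestion rule are hypothetical placeholders.

LOWEST_KEY = 36
NUM_KEYS = 61

def notes_to_leds(suggested_notes, colour=(0, 80, 0)):
    """Return a list of (led_index, rgb) pairs for in-range suggestions."""
    frame = []
    for note in suggested_notes:
        idx = note - LOWEST_KEY
        if 0 <= idx < NUM_KEYS:
            frame.append((idx, colour))
    return frame

def chord_tone_suggestions(root, quality="major"):
    """Toy suggestion rule: highlight chord tones in the octave above the root."""
    intervals = [0, 4, 7] if quality == "major" else [0, 3, 7]
    return [root + i for i in intervals]

# Example: highlight C major chord tones starting at middle C (60).
print(notes_to_leds(chord_tone_suggestions(60)))
# -> [(24, (0, 80, 0)), (28, (0, 80, 0)), (31, (0, 80, 0))]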
Eduardo Meneses, Sergio Freire, and Marcelo Wanderley. 2018. GuitarAMI and GuiaRT: two independent yet complementary projects on augmented nylon guitars. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 222–227. http://doi.org/10.5281/zenodo.1302559
Abstract
Download PDF DOI
This paper describes two augmented nylon-string guitar projects developed in different institutions. GuitarAMI uses sensors to modify the classical guitar’s constraints, while GuiaRT uses digital signal processing to create virtual guitarists that interact with the performer in real time. After a bibliographic review of Augmented Musical Instruments (AMIs) based on guitars, we present the details of the two projects and compare them using an adapted dimensional space representation. Highlighting the complementarity and cross-influences between the projects, we propose avenues for future collaborative work.
@inproceedings{Meneses2018, author = {Meneses, Eduardo and Freire, Sergio and Wanderley, Marcelo}, title = {GuitarAMI and GuiaRT: two independent yet complementary projects on augmented nylon guitars}, pages = {222--227}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302559}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0049.pdf} }
Ariane de Souza Stolfi, Miguel Ceriani, Luca Turchet, and Mathieu Barthet. 2018. Playsound.space: Inclusive Free Music Improvisations Using Audio Commons. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 228–233. http://doi.org/10.5281/zenodo.1302561
Abstract
Download PDF DOI
Playsound.space is a web-based tool to search for and play Creative Commons-licensed sounds, which can be applied to free improvisation, experimental music production and soundscape composition. It provides fast access to about 400k musical and non-musical sounds provided by Freesound, and allows users to play/loop single or multiple sounds retrieved through text-based search. Sound discovery is facilitated by the use of semantic searches and sound visual representations (spectrograms). Guided by the motivation to create an intuitive tool to support music practice that could suit both novice and trained musicians, we developed and improved the system in a continuous process, gathering frequent feedback from a range of users with various skills. We assessed the prototype with 18 musician and non-musician participants during free music improvisation sessions. Results indicate that the system was found easy to use and supports creative collaboration and expressiveness irrespective of musical ability. We identified further design challenges linked to creative identification, control and content quality.
@inproceedings{Stolfi2018, author = {de Souza Stolfi, Ariane and Ceriani, Miguel and Turchet, Luca and Barthet, Mathieu}, title = {Playsound.space: Inclusive Free Music Improvisations Using Audio Commons}, pages = {228--233}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302561}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0050.pdf} }
John Harding, Richard Graham, and Edwin Park. 2018. CTRL: A Flexible, Precision Interface for Analog Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 234–237. http://doi.org/10.5281/zenodo.1302563
Abstract
Download PDF DOI
This paper presents a new interface for the production and distribution of high-resolution analog control signals, particularly aimed toward the control of analog modular synthesisers. Control Voltage/Gate interfaces generate Control Voltage (CV) and Gate Voltage (Gate) as a means of controlling note pitch and length respectively, and have been with us since 1986 [2]. The authors provide a unique custom CV/Gate interface and dedicated communication protocol which leverages standard USB Serial functionality and enables connectivity across a plethora of computing devices, including embedded devices such as the Raspberry Pi and ARM-based devices such as widely available ‘Android TV Boxes’. We provide a general overview of the hardware and communication protocol developments, followed by usage examples covering tuning and embedded platforms, leveraging software such as Pure Data (Pd), Max, and Max for Live (M4L).
@inproceedings{Harding2018, author = {Harding, John and Graham, Richard and Park, Edwin}, title = {CTRL: A Flexible, Precision Interface for Analog Synthesis}, pages = {234--237}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302563}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0051.pdf} }
Peter Beyls. 2018. Motivated Learning in Human-Machine Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 238–243. http://doi.org/10.5281/zenodo.1302565
Abstract
Download PDF DOI
This paper describes a machine learning approach in the context of non-idiomatic human-machine improvisation. In an attempt to avoid explicit mapping of user actions to machine responses, an experimental machine learning strategy is suggested where rewards are derived from the implied motivation of the human interactor – two motivations are at work: integration (aiming to connect with machine-generated material) and expression (independent activity). By tracking consecutive changes in musical distance (i.e. melodic similarity) between human and machine, such motivations can be inferred. A variation of Q-learning is used featuring a self-optimizing variable-length state-action-reward list. The system (called Pock) is tunable into particular behavioral niches by means of a limited number of parameters. Pock is designed as a recursive structure and behaves as a complex dynamical system. When tracking system variables over time, emergent non-trivial patterns provide experimental evidence of attractors, demonstrating successful adaptation.
@inproceedings{Beyls2018, author = {Beyls, Peter}, title = {Motivated Learning in Human-Machine Improvisation}, pages = {238--243}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302565}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0052.pdf} }
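The reward scheme outlined in the abstract, where reward follows consecutive changes in human-machine melodic distance, can be illustrated with a toy Q-learning update. The distance values, state labels, action set, and learning parameters below are assumptions for illustration only, not Pock's implementation.

# Toy illustration (not the Pock system): derive a reward from consecutive
# changes in human-machine melodic distance and apply a standard Q-learning
# update. "Integration" rewards decreasing distance (converging on the
# machine's material); "expression" rewards diverging, independent activity.
from collections import defaultdict
import random

ALPHA, GAMMA = 0.1, 0.9
q_table = defaultdict(float)          # keys: (state, action)

def reward(prev_dist, curr_dist, motivation):
    delta = prev_dist - curr_dist     # positive when distance shrinks
    return delta if motivation == "integration" else -delta

def q_update(state, action, r, next_state, actions):
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += ALPHA * (r + GAMMA * best_next
                                         - q_table[(state, action)])

# One simulated interaction step with made-up numbers.
actions = ["imitate", "contrast", "rest"]
state, next_state = "far", "near"
r = reward(prev_dist=0.8, curr_dist=0.3, motivation="integration")  # 0.5
q_update(state, random.choice(actions), r, next_state, actions)
print(dict(q_table))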
Deepak Chandran and Ge Wang. 2018. InterFACE: new faces for musical expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 244–248. http://doi.org/10.5281/zenodo.1302569
Abstract
Download PDF DOI
InterFACE is an interactive system for musical creation, mediated primarily through the user’s facial expressions and movements. It aims to take advantage of the expressive capabilities of the human face to create music in a way that is both expressive and whimsical. This paper introduces the designs of three virtual instruments in the InterFACE system: namely, FACEdrum (a drum machine), GrannyFACE (a granular synthesis sampler), and FACEorgan (a laptop mouth organ using both face tracking and audio analysis). We present the design behind these instruments and consider what it means to be able to create music with one’s face. Finally, we discuss the usability and aesthetic criteria for evaluating such a system, taking into account our initial design goals as well as the resulting experience for the performer and audience.
@inproceedings{Chandran2018, author = {Chandran, Deepak and Wang, Ge}, title = {InterFACE: new faces for musical expression}, pages = {244--248}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302569}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0053.pdf} }
Richard Polfreman. 2018. Hand Posture Recognition: IR, IMU and sEMG. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 249–254. http://doi.org/10.5281/zenodo.1302571
Abstract
Download PDF DOI
Hands are important anatomical structures for musical performance, and recent developments in input device technology have allowed rather detailed capture of hand gestures using consumer-level products. While some musical contexts require detailed hand and finger movements, in others it is sufficient to communicate discrete hand postures to indicate selection or other state changes. This research compared three approaches to capturing hand gestures where the shape of the hand, i.e. the relative positions and angles of finger joints, is an important part of the gesture. A number of sensor types can be used to capture information about hand posture, each of which has various practical advantages and disadvantages for music applications. The study compared optical, inertial and muscular sensing, with three sets of 5 hand postures (i.e. static gestures) and gesture recognition algorithms applied to the device data, aiming to determine which methods are most effective.
@inproceedings{Polfreman2018, author = {Polfreman, Richard}, title = {Hand Posture Recognition: IR, IMU and sEMG}, pages = {249--254}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302571}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0054.pdf} }
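For readers unfamiliar with static-posture recognition, a minimal nearest-centroid classifier of the kind that could be applied to any of the three sensor streams is sketched below. The feature dimensionality and labels are invented for illustration; the paper's own algorithms and data differ.

# Illustrative sketch (not the study's classifiers): recognise a static hand
# posture from a sensor feature vector by nearest-centroid matching. The
# study compared optical (IR), inertial (IMU) and muscular (sEMG) sources.
import numpy as np

def train_centroids(examples):
    """examples: {posture_label: list of feature vectors} -> centroid per label."""
    return {label: np.mean(np.asarray(vectors), axis=0)
            for label, vectors in examples.items()}

def classify(features, centroids):
    features = np.asarray(features)
    return min(centroids, key=lambda lbl: np.linalg.norm(features - centroids[lbl]))

# Tiny synthetic example with 3-dimensional features and two postures.
examples = {"fist": [[0.9, 0.1, 0.2], [0.85, 0.15, 0.25]],
            "open": [[0.1, 0.9, 0.8], [0.2, 0.95, 0.75]]}
centroids = train_centroids(examples)
print(classify([0.8, 0.2, 0.3], centroids))   # -> "fist"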
Joseph Malloch, Marlon Mario Schumacher, Stephen Sinclair, and Marcelo Wanderley. 2018. The Digital Orchestra Toolbox for Max. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 255–258. http://doi.org/10.5281/zenodo.1302573
Abstract
Download PDF DOI
The Digital Orchestra Toolbox for Max is an open-source collection of small modular software tools for aiding the development of Digital Musical Instruments. Each tool takes the form of an "abstraction" for the visual programming environment Max, meaning it can be opened and understood by users within the Max environment, as well as copied, modified, and appropriated as desired. This paper describes the origins of the Toolbox and our motivations for creating it, broadly outlines the types of tools included, and follows the development of the project over the last twelve years. We also present examples of several digital musical instruments built using the Toolbox.
@inproceedings{Malloch2018, author = {Malloch, Joseph and Schumacher, Marlon Mario and Sinclair, Stephen and Wanderley, Marcelo}, title = {The Digital Orchestra Toolbox for Max}, pages = {255--258}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302573}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0055.pdf} }
Bill Manaris, Pangur Brougham-Cook, Dana Hughes, and Andrew R. Brown. 2018. JythonMusic: An Environment for Developing Interactive Music Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 259–262. http://doi.org/10.5281/zenodo.1302575
Abstract
Download PDF DOI
JythonMusic is a software environment for developing interactive musical experiences and systems. It is based on jMusic, a software environment for computer-assisted composition, which was extended within the last decade into a more comprehensive framework providing composers and software developers with libraries for music making, image manipulation, building graphical user interfaces, and interacting with external devices via MIDI and OSC, among others. This environment is free and open source. It is based on Python, therefore it provides more economical syntax relative to Java- and C/C++-like languages. JythonMusic rests on top of Java, so it provides access to the complete Java API and external Java-based libraries as needed. Also, it works seamlessly with other software, such as PureData, Max/MSP, and Processing. The paper provides an overview of important JythonMusic libraries related to constructing interactive musical experiences. It demonstrates their scope and utility by summarizing several projects developed using JythonMusic, including interactive sound art installations, new interfaces for sound manipulation and spatialization, as well as various explorations on mapping among motion, gesture and music.
@inproceedings{Manaris2018, author = {Manaris, Bill and Brougham-Cook, Pangur and Hughes, Dana and Brown, Andrew R.}, title = {JythonMusic: An Environment for Developing Interactive Music Systems}, pages = {259--262}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302575}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0056.pdf} }
Steven Leib and Anıl Çamcı. 2018. Triplexer: An Expression Pedal with New Degrees of Freedom. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 263–268. http://doi.org/10.5281/zenodo.1302577
Abstract
Download PDF DOI
We introduce the Triplexer, a novel foot controller that gives the performer 3 degrees of freedom over the control of various effects parameters. With the Triplexer, we aim to expand the performer’s control space by augmenting the capabilities of the common expression pedal that is found in most effects rigs. Using industrial-grade weight-detection sensors and widely-adopted communication protocols, the Triplexer offers a flexible platform that can be integrated into various performance setups and situations. In this paper, we detail the design of the Triplexer by describing its hardware, embedded signal processing, and mapping software implementations. We also offer the results of a user study, which we conducted to evaluate the usability of our controller.
@inproceedings{Leib2018, author = {Leib, Steven and Çamcı, Anıl}, title = {Triplexer: An Expression Pedal with New Degrees of Freedom}, pages = {263--268}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302577}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0057.pdf} }
Halldór Úlfarsson. 2018. The halldorophone: The ongoing innovation of a cello-like drone instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 269–274. http://doi.org/10.5281/zenodo.1302579
Abstract
Download PDF DOI
This paper reports upon the process of innovation of a new instrument. The author has developed the halldorophone, a new electroacoustic string instrument which makes use of positive feedback as a key element in generating its sound. An important objective of the project has been to encourage its use by practicing musicians. After ten years of use, the halldorophone has a growing repertoire of works by prominent composers and performers. During the development of the instrument, the question has been asked: “why do musicians want to use this instrument?” and answers have been found through ongoing (informal) user studies and feedback. As the project progresses, a picture emerges of what qualities have led to a culture of acceptance and use around this new instrument. This paper describes the halldorophone, presents the rationale for its major design features and ergonomic choices as they relate to the overarching objective of nurturing a culture of use, and connects it to wider trends.
@inproceedings{Úlfarsson2018, author = {Úlfarsson, Halldór}, title = {The halldorophone: The ongoing innovation of a cello-like drone instrument}, pages = {269--274}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302579}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0058.pdf} }
Kyriakos Tsoukalas, Joseph Kubalak, and Ivica Ico Bukvic. 2018. L2OrkMote: Reimagining a Low-Cost Wearable Controller for a Live Gesture-Centric Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 275–280. http://doi.org/10.5281/zenodo.1302581
Abstract
Download PDF DOI
Laptop orchestras create music, although digitally produced, in a collaborative live performance not unlike a traditional orchestra. The recent increase in interest and investment in this style of music creation has paved the way for novel methods for musicians to create and interact with music. To this end, a number of nontraditional instruments have been constructed that enable musicians to control sound production beyond pitch and volume, integrating filtering, musical effects, etc. Wii Remotes (WiiMotes) have seen heavy use in maker communities, including laptop orchestras, for their robust sensor array and low cost. The placement of sensors and the form factor of the device itself are suited for video games, not necessarily live music creation. In this paper, the authors present a new controller design, based on the WiiMote hardware platform, to address usability in gesture-centric music performance. Based on the pilot-study data, the new controller offers unrestricted two-hand gesture production, smaller footprint, and lower muscle strain.
@inproceedings{Tsoukalasb2018, author = {Tsoukalas, Kyriakos and Kubalak, Joseph and Bukvic, Ivica Ico}, title = {L2OrkMote: Reimagining a Low-Cost Wearable Controller for a Live Gesture-Centric Music Performance}, pages = {275--280}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302581}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0059.pdf} }
Jack Armitage and Andrew P. McPherson. 2018. Crafting Digital Musical Instruments: An Exploratory Workshop Study. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 281–286. http://doi.org/10.5281/zenodo.1302583
Abstract
Download PDF DOI
In digital musical instrument design, different tools and methods offer a variety of approaches for constraining the exploration of musical gestures and sounds. Toolkits made of modular components usefully constrain exploration towards simple, quick and functional combinations, and methods such as sketching and model-making alternatively allow imagination and narrative to guide exploration. In this work we sought to investigate a context where these approaches to exploration were combined. We designed a craft workshop for 20 musical instrument designers, where groups were given the same partly-finished instrument to craft for one hour with raw materials, and though the task was open ended, they were prompted to focus on subtle details that might distinguish their instruments. Despite the prompt the groups diverged dramatically in intent and style, and generated gestural language rapidly and flexibly. By the end, each group had developed a distinctive approach to constraint, exploratory style, collaboration and interpretation of the instrument and workshop materials. We reflect on this outcome to discuss advantages and disadvantages to integrating digital musical instrument design tools and methods, and how to further investigate and extend this approach.
@inproceedings{Armitage2018, author = {Armitage, Jack and McPherson, Andrew P.}, title = {Crafting Digital Musical Instruments: An Exploratory Workshop Study}, pages = {281--286}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302583}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0060.pdf} }
Ammar Kalo and Georg Essl. 2018. Individual Fabrication of Cymbals using Incremental Robotic Sheet Forming. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 287–292. http://doi.org/10.5281/zenodo.1302585
Abstract
Download PDF DOI
Incremental robotic sheet forming is used to fabricate a novel cymbal shape based on models of geometric chaos for stadium-shaped boundaries. This provides a proof of concept that this robotic fabrication technique might be a candidate method for creating novel metallic idiophones based on sheet deformations. Given that the technique does not require molding, it is well suited both for rapid and iterative prototyping and for the fabrication of individual pieces. With advances in miniaturization, this approach may also be suitable for personal fabrication. In this paper we discuss this technique as well as aspects of the geometry of stadium cymbals and their impact on the resulting instrument.
@inproceedings{Kalo2018, author = {Kalo, Ammar and Essl, Georg}, title = {Individual Fabrication of Cymbals using Incremental Robotic Sheet Forming}, pages = {287--292}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302585}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0061.pdf} }
John McDowell. 2018. Haptic-Listening and the Classical Guitar. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 293–298. http://doi.org/10.5281/zenodo.1302587
Abstract
Download PDF DOI
This paper reports the development of a ‘haptic-listening’ system which presents the listener with a representation of the vibrotactile feedback perceived by a classical guitarist during performance, using haptic feedback technology. The paper describes the design of the haptic-listening system, which comprises two prototypes: the “DIY Haptic Guitar” and a more robust trial prototype using a Reckhorn BS-200 shaker. Through two experiments, the perceptual significance and overall musical contribution of added haptic feedback in a listening context were evaluated. Subjects preferred listening to the classical guitar presentation with haptic feedback, and its addition contributed to listeners’ engagement with the performance. The results of the experiments and their implications are discussed in this paper.
@inproceedings{McDowell2018, author = {McDowell, John}, title = {Haptic-Listening and the Classical Guitar}, pages = {293--298}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302587}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0062.pdf} }
Jacob Harrison, Robert H Jack, Fabio Morreale, and Andrew P. McPherson. 2018. When is a Guitar not a Guitar? Cultural Form, Input Modality and Expertise. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 299–304. http://doi.org/10.5281/zenodo.1302589
Abstract
Download PDF DOI
The design of traditional musical instruments is a process of incremental refinement over many centuries of innovation. Conversely, digital musical instruments (DMIs), being unconstrained by requirements of efficient acoustic sound production and ergonomics, can take on forms which are more abstract in their relation to the mechanism of control and sound production. In this paper we consider the case of designing DMIs for use in existing musical cultures, and pose questions around the social and technical acceptability of certain design choices relating to global physical form and input modality (sensing strategy and the input gestures that it affords). We designed four guitar-derivative DMIs designed to be suitable to perform a strummed harmonic accompaniment to a folk tune. Each instrument possessed varying degrees of ‘guitar-likeness’, based either on the form and aesthetics of the guitar or the specific mode of interaction. We conducted a study where both non-musicians and guitarists played two versions of the instruments and completed musical tasks with each instrument. The results of this study highlight the complex interaction between global form and input modality when designing for existing musical cultures.
@inproceedings{Harrison2018, author = {Harrison, Jacob and Jack, Robert H and Morreale, Fabio and McPherson, Andrew P.}, title = {When is a Guitar not a Guitar? Cultural Form, Input Modality and Expertise}, pages = {299--304}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302589}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0063.pdf} }
Jeppe Larsen, Hendrik Knoche, and Dan Overholt. 2018. A Longitudinal Field Trial with a Hemiplegic Guitarist Using The Actuated Guitar. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 305–310. http://doi.org/10.5281/zenodo.1302591
Abstract
Download PDF DOI
Common emotional effects following a stroke include depression, apathy and lack of motivation. We conducted a longitudinal case study to investigate whether enabling a post-stroke former guitarist to re-learn to play the guitar would help increase motivation for self-rehabilitation and quality of life after suffering a stroke. The intervention lasted three weeks, during which the participant had at his free disposal a fully functional electric guitar fitted with a strumming device controlled by a foot pedal. The device replaced right-hand strumming of the strings, and the study showed that the participant, who was highly motivated, played 20 sessions despite system latency and reduced musical expression. He incorporated his own literature and equipment into his playing routine and improved greatly as the study progressed. He was able to play alone and keep a steady rhythm in time with backing tracks as fast as 120 bpm. During the study he lowered his error rate to 33%, while his average flutter also decreased.
@inproceedings{Larsen2018, author = {Larsen, Jeppe and Knoche, Hendrik and Overholt, Dan}, title = {A Longitudinal Field Trial with a Hemiplegic Guitarist Using The Actuated Guitar}, pages = {305--310}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302591}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0064.pdf} }
Paul Stapleton, Maarten van Walstijn, and Sandor Mehes. 2018. Co-Tuning Virtual-Acoustic Performance Ecosystems: observations on the development of skill and style in the study of musician-instrument relationships. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 311–314. http://doi.org/10.5281/zenodo.1302593
Abstract
Download PDF DOI
In this paper we report preliminary observations from an ongoing study into how musicians explore and adapt to the parameter space of a virtual-acoustic string bridge plate instrument. These observations inform (and are informed by) a wider approach to understanding the development of skill and style in interactions between musicians and musical instruments. We discuss a performance-driven ecosystemic approach to studying musical relationships, drawing on arguments from the literature which emphasise the need to go beyond simplistic notions of control and usability when assessing exploratory and performatory musical interactions. Lastly, we focus on processes of perceptual learning and co-tuning between musician and instrument, and how these activities may contribute to the emergence of personal style as a hallmark of skilful music-making.
@inproceedings{Stapleton2018, author = {Stapleton, Paul and van Walstijn, Maarten and Mehes, Sandor}, title = {Co-Tuning Virtual-Acoustic Performance Ecosystems: observations on the development of skill and style in the study of musician-instrument relationships}, pages = {311--314}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302593}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0065.pdf} }
Sands A. Fish II and Nicole L’Huillier. 2018. Telemetron: A Musical Instrument for Performance in Zero Gravity. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 315–317. http://doi.org/10.5281/zenodo.1302595
Abstract
Download PDF DOI
The environment of zero gravity affords a unique medium for new modalities of musical performance, both in the design of instruments and in human interactions with those instruments. To explore this medium, we have created and flown Telemetron, the first musical instrument specifically designed for and tested in the zero-gravity environment. The resultant instrument (leveraging gyroscopes and wireless telemetry transmission) and recorded performance represent an initial exploration of compositions that are unique to the physics and dynamics of outer space. We describe the motivations for this instrument and the unique constraints involved in designing for this environment. This initial design suggests possibilities for further experiments in musical instrument design for outer space.
@inproceedings{Fish2018, author = {Fish II, Sands A. and L'Huillier, Nicole}, title = {Telemetron: A Musical Instrument for Performance in Zero Gravity}, pages = {315--317}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302595}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0066.pdf} }
Dan Wilcox. 2018. robotcowboy: 10 Years of Wearable Computer Rock. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 318–323. http://doi.org/10.5281/zenodo.1302597
Abstract
Download PDF DOI
This paper covers the technical and aesthetic development of robotcowboy, the author’s ongoing human-computer wearable performance project. Conceived as an idiosyncratic manifesto on the embodiment of computational sound, the original robotcowboy system was built in 2006-2007 using a belt-mounted industrial wearable computer running GNU/Linux and Pure Data, external USB audio/MIDI interfaces, HID gamepads, and guitar. Influenced by roadworthy analog gear, chief system requirements were mobility, plug-and-play, reliability, and low cost. From 2007 to 2011, this first iteration "Cabled Madness" melded rock music with realtime algorithmic composition and revolved around cyborg human/system tension, aspects of improvisation, audience feedback, and an inherent capability of failure. The second iteration "Onward to Mars" explored storytelling from 2012-2015 through the one-way journey of the first human on Mars with the computing system adapted into a self-contained spacesuit backpack. Now 10 years on, a new robotcowboy 2.0 system powers a third iteration with only an iPhone and PdParty, the author’s open-source iOS application which runs Pure Data patches and provides full duplex stereo audio, MIDI, HID game controller support, and Open Sound Control communication. The future is bright, do you have room to wiggle?
@inproceedings{Wilcox2018, author = {Wilcox, Dan}, title = {robotcowboy: 10 Years of Wearable Computer Rock}, pages = {318--323}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302597}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0067.pdf} }
Victor Evaristo Gonzalez Sanchez, Charles Patrick Martin, Agata Zelechowska, Kari Anne Vadstensvik Bjerkestrand, Victoria Johnson, and Alexander Refsum Jensenius. 2018. Bela-Based Augmented Acoustic Guitars for Sonic Microinteraction. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 324–327. http://doi.org/10.5281/zenodo.1302599
Abstract
Download PDF DOI
This article describes the design and construction of a collection of digitally-controlled augmented acoustic guitars, and the use of these guitars in the installation Sverm-Resonans. The installation was built around the idea of exploring ‘inverse’ sonic microinteraction, that is, controlling sounds by the micromotion observed when attempting to stand still. It consisted of six acoustic guitars, each equipped with a Bela embedded computer for sound processing (in Pure Data), an infrared distance sensor to detect the presence of users, and an actuator attached to the guitar body to produce sound. With an attached battery pack, the result was a set of completely autonomous instruments that were easy to hang in a gallery space. The installation encouraged explorations on the boundary between the tactile and the kinesthetic, the body and the mind, and between motion and sound. The use of guitars, albeit with an untraditional ‘performance’ technique, made the experience both familiar and unfamiliar at the same time. Many users reported heightened sensations of stillness, sound, and vibration, and that the ‘inverse’ control of the instrument was both challenging and pleasant.
@inproceedings{Gonzalez2018, author = {Gonzalez Sanchez, Victor Evaristo and Martin, Charles Patrick and Zelechowska, Agata and Bjerkestrand, Kari Anne Vadstensvik and Johnson, Victoria and Jensenius, Alexander Refsum}, title = {Bela-Based Augmented Acoustic Guitars for Sonic Microinteraction}, pages = {324--327}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302599}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0068.pdf} }
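The ‘inverse’ microinteraction mapping, in which less motion produces more sound, can be sketched in a few lines. The smoothing, sensitivity constant, and gain curve below are assumptions; the installation itself runs its mapping in Pure Data on the Bela board.

# Illustrative sketch of an 'inverse' micromotion mapping: the quieter the
# visitor stands, the more the guitar is excited. Readings, sensitivity and
# gain curve are assumptions, not the installation's Pure Data patch.

def motion_magnitude(distance_readings):
    """Mean absolute frame-to-frame change in an IR distance signal."""
    diffs = [abs(b - a) for a, b in zip(distance_readings, distance_readings[1:])]
    return sum(diffs) / max(len(diffs), 1)

def inverse_gain(motion, sensitivity=50.0):
    """More stillness -> more sound; clamp to 0..1."""
    return max(0.0, 1.0 - sensitivity * motion)

# A visitor standing almost still produces a high excitation gain.
still = [0.500, 0.501, 0.499, 0.500, 0.500]
moving = [0.50, 0.55, 0.48, 0.60, 0.52]
print(round(inverse_gain(motion_magnitude(still)), 3))   # close to 1.0
print(round(inverse_gain(motion_magnitude(moving)), 3))  # clamped to 0.0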
Giacomo Lepri and Andrew P. McPherson. 2018. Mirroring the past, from typewriting to interactive art: an approach to the re-design of a vintage technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 328–333. http://doi.org/10.5281/zenodo.1302601
Abstract
Download PDF DOI
Obsolete and old technologies are often used in interactive art and music performance. DIY practices such as hardware hacking and circuit bending provide effective methods for integrating old machines into new artistic inventions. This paper presents the Cembalo Scrivano .1, an interactive audio-visual installation based on an augmented typewriter. Borrowing concepts from media archaeology studies, tangible interaction design and digital lutherie, we discuss how investigations into the historical and cultural evolution of a technology can suggest directions for the regeneration of obsolete objects. The design approach outlined focuses on the remediation of an old device and aims to evoke cultural and physical properties associated with the source object.
@inproceedings{Lepri2018, author = {Lepri, Giacomo and McPherson, Andrew P.}, title = {Mirroring the past, from typewriting to interactive art: an approach to the re-design of a vintage technology}, pages = {328--333}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302601}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0069.pdf} }
Seth Dominicus Thorn. 2018. Alto.Glove: New Techniques for Augmented Violin. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 334–339. http://doi.org/10.5281/zenodo.1302603
Abstract
Download PDF DOI
This paper describes a performer-centric approach to the design, sensor selection, data interpretation, and mapping schema of a sensor-embedded glove called the “alto.glove” that the author uses to extend his performance abilities on violin. The alto.glove is a response to the limitations—both creative and technical—perceived in feature extraction processes that rely on classification. The hardware answers one problem: how to extend violin playing in a minimal yet powerful way; the software answers another: how to create a rich, evolving response that enhances expression in improvisation. The author approaches this problem from the various roles of violinist, hardware technician, programmer, sound designer, composer, and improviser. Importantly, the alto.glove is designed to be cost-effective and relatively easy to build.
@inproceedings{Thorn2018, author = {Thorn, Seth Dominicus}, title = {Alto.Glove: New Techniques for Augmented Violin}, pages = {334--339}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302603}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0070.pdf} }
Thanos Polymeneas Liontiris. 2018. Low Frequency Feedback Drones: A non-invasive augmentation of the double bass. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 340–341. http://doi.org/10.5281/zenodo.1302605
Abstract
Download PDF DOI
This paper illustrates the development of a Feedback Resonating Double Bass. The instrument is essentially an augmentation of an acoustic double bass using positive feedback. The research aimed to answer the question of how to augment and convert a double bass into a feedback-resonating one without following an invasive method. The conversion process illustrated here is applicable and adaptable to double basses of any size, without making irreversible alterations to the instrument.
@inproceedings{Liontiris2018, author = {Liontiris, Thanos Polymeneas}, title = {Low Frequency Feedback Drones: A non-invasive augmentation of the double bass}, pages = {340--341}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302605}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0071.pdf} }
Daniel Formo. 2018. The Orchestra of Speech: a speech-based instrument system. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 342–343. http://doi.org/10.5281/zenodo.1302607
Abstract
Download PDF DOI
The Orchestra of Speech is a performance concept resulting from a recent artistic research project exploring the relationship between music and speech, in particular improvised music and everyday conversation. As a tool in this exploration, a digital musical instrument system has been developed for “orchestrating” musical features of speech into music, in real time. Through artistic practice, this system has evolved into a personal electroacoustic performance concept.
@inproceedings{Formo2018, author = {Formo, Daniel}, title = {The Orchestra of Speech: a speech-based instrument system}, pages = {342--343}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302607}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0072.pdf} }
Anna Weisling, Anna Xambó, ireti olowe, and Mathieu Barthet. 2018. Surveying the Compositional and Performance Practices of Audiovisual Practitioners. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 344–345. http://doi.org/10.5281/zenodo.1302609
Abstract
Download PDF DOI
This paper presents a brief overview of an online survey conducted with the objective of gaining insight into compositional and performance practices of contemporary audiovisual practitioners. The survey gathered information regarding how practitioners relate aural and visual media in their work, and how compositional and performance practices involving multiple modalities might differ from other practices. Discussed here are three themes: compositional approaches, transparency and audience knowledge, and error and risk, which emerged from participants’ responses. We believe these themes contribute to a discussion within the NIME community regarding unique challenges and objectives presented when working with multiple media.
@inproceedings{Weisling2018, author = {Weisling, Anna and Xambó, Anna and ireti olowe and Barthet, Mathieu}, title = {Surveying the Compositional and Performance Practices of Audiovisual Practitioners}, pages = {344--345}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302609}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0073.pdf} }
Anthony T. Marasco. 2018. Sound Opinions: Creating a Virtual Tool for Sound Art Installations through Sentiment Analysis of Critical Reviews. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 346–347. http://doi.org/10.5281/zenodo.1302611
Abstract
Download PDF DOI
The author presents Sound Opinions, a custom software tool that uses sentiment analysis to create sound art installations and music compositions. The software runs inside the NodeRed.js programming environment. It scrapes text from web pages, pre-processes it, performs sentiment analysis via a remote API, and parses the resulting data for use in external digital audio programs. The sentiment analysis itself is handled by IBM’s Watson Tone Analyzer. The author has used this tool to create an interactive multimedia installation, titled Critique. Sources of criticism of a chosen musical work are analyzed and the negative or positive statements about that composition work to warp and change it. This allows the audience to only hear the work through the lens of its critics, and not in the original form that its creator intended.
@inproceedings{Marasco2018, author = {Marasco, Anthony T.}, title = {Sound Opinions: Creating a Virtual Tool for Sound Art Installations through Sentiment Analysis of Critical Reviews}, pages = {346--347}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302611}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0074.pdf} }
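A compressed sketch of the pipeline described above (fetch review text, score its sentiment, map the score to audio-warping parameters) is given below. The scorer is a stand-in for the remote analysis service, and the parameter names and ranges are assumptions rather than details of Critique.

# Illustrative sketch of the pipeline: scrape review text, obtain a sentiment
# score, and map it to audio-processing parameters. The scorer is a placeholder
# for the remote analysis service; parameter names and ranges are assumptions.
import re
from urllib.request import urlopen

def fetch_review_text(url):
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    return re.sub(r"<[^>]+>", " ", html)          # crude tag stripping

def sentiment_score(text):
    """Placeholder scorer in [-1, 1]; the installation calls a remote API."""
    negative = sum(text.lower().count(w) for w in ("dull", "weak", "fails"))
    positive = sum(text.lower().count(w) for w in ("brilliant", "vivid"))
    total = max(positive + negative, 1)
    return (positive - negative) / total

def sentiment_to_params(score):
    """Map sentiment to hypothetical warp parameters for the audio engine."""
    return {
        "detune_cents": (1 - score) * 50,        # harsher detune when negative
        "reverb_mix": 0.2 + 0.6 * max(score, 0), # lusher space when positive
    }

print(sentiment_to_params(sentiment_score("a dull but occasionally vivid work")))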
Kosmas Kritsis, Aggelos Gkiokas, Carlos Árpád Acosta, et al. 2018. A web-based 3D environment for gestural interaction with virtual music instruments as a STEAM education tool. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 348–349. http://doi.org/10.5281/zenodo.1302613
Abstract
Download PDF DOI
We present our work in progress on the development of a web-based system for music performance with virtual instruments in a virtual 3D environment, which provides three means of interaction (i.e. physical, gestural and mixed) using tracking data from a Leap Motion sensor. Moreover, our system is integrated as a creative tool within the context of a STEAM education platform that promotes science learning through musical activities. The presented system models string and percussion instruments, with realistic sonic feedback based on Modalys, a physical-model-based sound synthesis engine. Our proposal meets the performance requirements of real-time interactive systems and is implemented strictly with web technologies.
@inproceedings{Kritsis2018, author = {Kritsis, Kosmas and Gkiokas, Aggelos and Acosta, Carlos Árpád and Lamerand, Quentin and Piéchaud, Robert and Kaliakatsos-Papakostas, Maximos and Katsouros, Vassilis}, title = {A web-based 3D environment for gestural interaction with virtual music instruments as a STEAM education tool}, pages = {348--349}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302613}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0075.pdf} }
Maria C. Mannone, Eri Kitamura, Jiawei Huang, Ryo Sugawara, and Yoshifumi Kitamura. 2018. CubeHarmonic: A New Interface from a Magnetic 3D Motion Tracking System to Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 350–351. http://doi.org/10.5281/zenodo.1302615
Abstract
Download PDF DOI
We developed a new musical interface, CubeHarmonic, with the magnetic tracking system IM3D, created at Tohoku University. The IM3D system precisely tracks the positions of tiny, wireless, battery-less, and identifiable LC coils in real time. CubeHarmonic is a musical application of the Rubik’s cube, with notes assigned to each piece. Scrambling the cube, we get different chords and chord sequences. The positions of the pieces which contain LC coils are detected through IM3D and transmitted to the computer, which plays the sounds. The central position of the cube is also computed from the LC coils located in the corners of the Rubik’s cube, and, depending on this computed central position, we can manipulate overall loudness and pitch, as in theremin playing. This new instrument, whose initial idea comes from the mathematical theory of music, can be used as a teaching tool for both math (group theory) and music (music theory, mathematical music theory), as well as a composition device, a new instrument for avant-garde performances, and a recreational tool.
@inproceedings{Mannone2018, author = {Mannone, Maria C. and Kitamura, Eri and Huang, Jiawei and Sugawara, Ryo and Kitamura, Yoshifumi}, title = {CubeHarmonic: A New Interface from a Magnetic 3D Motion Tracking System to Music Performance}, pages = {350--351}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302615}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0076.pdf} }
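As a toy illustration of the mapping idea, the sketch below forms a chord from the notes visible on one cube face and derives a theremin-like gain from the tracked centre position. The note assignments, coordinate ranges, and gain law are assumptions, not the IM3D/CubeHarmonic implementation.

# Toy illustration (not the IM3D/CubeHarmonic implementation): collect the
# notes currently showing on one face of a note-labelled cube into a chord,
# and derive a global gain from the tracked centre position, theremin-style.
import math

front_face = [["C4", "E4", "G4"],
              ["D4", "F4", "A4"],
              ["E4", "G4", "B4"]]

def face_chord(face):
    """Collect the distinct pitches visible on a face into one chord."""
    return sorted({note for row in face for note in row})

def centre_to_gain(x, y, z, x_max=0.5, y_max=0.5, z_max=0.5):
    """Map the cube centre's distance from the sensor origin to a gain in 0..1."""
    dist = math.sqrt(x * x + y * y + z * z)
    max_dist = math.sqrt(x_max**2 + y_max**2 + z_max**2)
    return max(0.0, 1.0 - dist / max_dist)

print(face_chord(front_face))                     # chord from the visible facets
print(round(centre_to_gain(0.1, 0.2, 0.05), 3))   # gain from cube position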
Martin M Kristoffersen and Trond Engum. 2018. The Whammy Bar as a Digital Effect Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 352–355. http://doi.org/10.5281/zenodo.1302617
Abstract
Download PDF DOI
In this paper we present a novel digital effects controller for electric guitar based upon the whammy bar as a user interface. The goal of the project is to give guitarists a way to interact with dynamic effects control that feels familiar to their instrument and playing style. A 3D-printed prototype has been made that replaces the whammy bar of a traditional Fender vibrato system with a sensor-equipped whammy bar. The functionality of the present prototype includes separate readings of force applied towards and away from the guitar body, as well as an end knob for variable control. Further functionality includes a hinged system allowing for digital effect control either with or without the mechanical manipulation of string tension. By incorporating digital sensors into the idiomatic whammy bar interface, we aim to give guitarists a high level of control intimacy with the device and thus a closer interaction with effects.
@inproceedings{Kristoffersen2018, author = {Kristoffersen, Martin M and Engum, Trond}, title = {The Whammy Bar as a Digital Effect Controller}, pages = {352--355}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302617}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0077.pdf} }
Robert Pond, Alexander Klassen, and Kirk McNally. 2018. Timbre Tuning: Variation in Cello Sprectrum Across Pitches and Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 356–359. http://doi.org/10.5281/zenodo.1302619
Abstract
Download PDF DOI
The process of learning to play a string instrument is a notoriously difficult task. A new student of the instrument is faced with mastering multiple, interconnected physical movements in order to become a skillful player. In their development, one measure of a player's quality is their tone, which results from the combination of the physical characteristics of the instrument and the technique used in playing it. This paper describes preliminary research into creating an intuitive, real-time device for evaluating the quality of tone generation on the cello: a “timbre-tuner” to aid cellists in evaluating their tone quality. Data for the study was collected from six post-secondary music students, consisting of recordings of scales covering the entire range of the cello. Comprehensive spectral audio analysis was performed on the data set in order to evaluate features suitable for describing tone quality. An inverse relationship was found between the harmonic centroid and the pitch played, which became more pronounced when restricted to the A string. In addition, a model for predicting the harmonic centroid at different pitches on the A string was created. Results from informal listening tests support the use of the harmonic centroid as an appropriate measure of tone quality.
@inproceedings{Pond2018, author = {Pond, Robert and Klassen, Alexander and McNally, Kirk}, title = {Timbre Tuning: Variation in Cello Sprectrum Across Pitches and Instruments}, pages = {356--359}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302619}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0078.pdf} }
Matthew Mosher, Danielle Wood, and Tony Obr. 2018. Tributaries of Our Lost Palpability. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 360–361. http://doi.org/10.5281/zenodo.1302621
Abstract
Download PDF DOI
This demonstration paper describes the concepts behind Tributaries of Our Distant Palpability, an interactive sonified sculpture. It takes form as a swelling sea anemone, while the sounds it produces recall the quagmire of a digital ocean. The sculpture responds to changing light conditions with a dynamic mix of audio tracks, mapping volume to light level. People passing by the sculpture, or directly engaging it by creating light and shadows with their smart phone flashlights, will trigger the audio. At the same time, it automatically adapts to gradual environmental light changes, such as the rise and fall of the sun. The piece was inspired by the searching gestures people make, and the emotions they feel, while idly browsing content on their smart devices. It was created through an interdisciplinary collaboration between a musician, an interaction designer, and a ceramicist.
@inproceedings{Mosher2018, author = {Mosher, Matthew and Wood, Danielle and Obr, Tony}, title = {Tributaries of Our Lost Palpability}, pages = {360--361}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302621}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0079.pdf} }
Andrew Piepenbrink. 2018. Embedded Digital Shakers: Handheld Physical Modeling Synthesizers. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 362–363. http://doi.org/10.5281/zenodo.1302623
Abstract
Download PDF DOI
We present a flexible, compact, and affordable embedded physical modeling synthesizer which functions as a digital shaker. The instrument is self-contained, battery-powered, wireless, and synthesizes various shakers, rattles, and other handheld shaken percussion. Beyond modeling existing shakers, the instrument affords new sonic interactions including hand mutes on its loudspeakers and self-sustaining feedback. Both low-cost and high-performance versions of the instrument are discussed.
@inproceedings{Piepenbrink2018, author = {Piepenbrink, Andrew}, title = {Embedded Digital Shakers: Handheld Physical Modeling Synthesizers}, pages = {362--363}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302623}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0080.pdf} }
Anna Xambó, Gerard Roma, Alexander Lerch, Mathieu Barthet, and György Fazekas. 2018. Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 364–369. http://doi.org/10.5281/zenodo.1302625
Abstract
Download PDF DOI
The recent increase in the accessibility and size of personal and crowdsourced digital sound collections has brought about a valuable resource for music creation. Finding and retrieving relevant sounds in performance leads to challenges that can be approached using music information retrieval (MIR). In this paper, we explore the use of MIR to retrieve and repurpose sounds in musical live coding. We present a live coding system built on SuperCollider enabling the use of audio content from online Creative Commons (CC) sound databases such as Freesound or personal sound databases. The novelty of our approach lies in exploiting high-level MIR methods (e.g., query by pitch or rhythmic cues) using live coding techniques applied to sounds. We demonstrate its potential through reflection on an illustrative case study and feedback from four expert users. The users tried the system with either a personal database or a crowdsourced database and reported its potential for facilitating tailorability of the tool to their own creative workflows.
@inproceedings{Xambób2018, author = {Xambó, Anna and Roma, Gerard and Lerch, Alexander and Barthet, Mathieu and Fazekas, György}, title = {Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases}, pages = {364--369}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302625}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0081.pdf} }
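The query-by-pitch/rhythm idea from the Xambó et al. entry above can be sketched outside SuperCollider as a simple nearest-match lookup over pre-analysed features. This Python example is illustrative only; the database, feature values, and distance weighting are invented and do not come from the paper or from Freesound's API:

```python
import numpy as np

# Hypothetical pre-analysed database: sound id -> (median pitch in Hz, onsets per second)
db = {
    "rain_on_tin.wav":  (0.0,   9.5),
    "cello_open_c.wav": (65.4,  0.5),
    "kalimba_riff.wav": (392.0, 4.0),
    "glass_tap.wav":    (880.0, 2.0),
}

def query(pitch_hz, onsets_per_s):
    """Return database sounds ranked by distance to the requested pitch and rhythm cues."""
    def dist(feats):
        p, o = feats
        # pitch distance in octaves (unpitched sounds contribute no pitch penalty)
        dp = 0.0 if (p == 0 or pitch_hz == 0) else abs(np.log2(p / pitch_hz))
        return dp + 0.1 * abs(o - onsets_per_s)
    return sorted(db, key=lambda k: dist(db[k]))

print(query(pitch_hz=400.0, onsets_per_s=4.0))   # kalimba_riff.wav ranks first
```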
Avneesh Sarwate, Ryan Taylor Rose, Jason Freeman, and Jack Armitage. 2018. Performance Systems for Live Coders and Non Coders. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 370–373. http://doi.org/10.5281/zenodo.1302627
Abstract
Download PDF DOI
This paper explores the question of how live coding musicians can perform with musicians who are not using code (such as acoustic instrumentalists or those using graphical and tangible electronic interfaces). This paper investigates performance systems that facilitate improvisation where the musicians can interact not just by listening to each other and changing their own output, but also by manipulating the data stream of the other performer(s). In a course of performance-led research, four prototypes were built and analyzed using concepts from NIME and creative collaboration literature. Based on this analysis, it was found that such systems should 1) provide a commonly modifiable visual representation of musical data for both coder and non-coder, and 2) provide some independent means of sound production for each user, giving the non-coder the ability to slow down and make non-realtime decisions for greater performance flexibility.
@inproceedings{Sarwate2018, author = {Sarwate, Avneesh and Rose, Ryan Taylor and Freeman, Jason and Armitage, Jack}, title = {Performance Systems for Live Coders and Non Coders}, pages = {370--373}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302627}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0082.pdf} }
Jeff Snyder, Michael R Mulshine, and Rajeev S Erramilli. 2018. The Feedback Trombone: Controlling Feedback in Brass Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 374–379. http://doi.org/10.5281/zenodo.1302629
Abstract
Download PDF DOI
This paper presents research on control of electronic signal feedback in brass instruments through the development of a new augmented musical instrument, the Feedback Trombone. The Feedback Trombone (FBT) extends the traditional acoustic trombone interface with a speaker, microphone, and custom analog and digital hardware.
@inproceedings{Snyder2018, author = {Snyder, Jeff and Mulshine, Michael R and Erramilli, Rajeev S}, title = {The Feedback Trombone: Controlling Feedback in Brass Instruments}, pages = {374--379}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302629}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0083.pdf} }
Eric Sheffield. 2018. Mechanoise: Mechatronic Sound and Interaction in Embedded Acoustic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 380–381. http://doi.org/10.5281/zenodo.1302631
Abstract
Download PDF DOI
The use of mechatronic components (e.g. DC motors and solenoids) as both electronic sound source and locus of interaction is explored in a form of embedded acoustic instruments called mechanoise instruments. Micro-controllers and embedded computing devices provide a platform for live control of motor speeds and additional sound processing by a human performer. Digital fabrication and use of salvaged and found materials are emphasized.
@inproceedings{Sheffield2018, author = {Sheffield, Eric}, title = {Mechanoise: Mechatronic Sound and Interaction in Embedded Acoustic Instruments}, pages = {380--381}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302631}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0084.pdf} }
Jon Pigrem and Andrew P. McPherson. 2018. Do We Speak Sensor? Cultural Constraints of Embodied Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 382–385. http://doi.org/10.5281/zenodo.1302633
Abstract
Download PDF DOI
This paper explores the role of materiality in Digital Musical Instruments and questions the influence of tacit understandings of sensor technology. Existing research investigates the use of gesture, physical interaction and subsequent parameter mapping. We suggest that a tacit knowledge of the ‘sensor layer’ brings with it definitions, understandings and expectations that forge and guide our approach to interaction. We argue that the influence of technology starts before a sound is made, and comes not only from intuition of material properties, but also from received notions of what technology can and should do. On encountering an instrument with obvious sensors, a potential performer will attempt to predict what the sensors do and what the designer intends for them to do, becoming influenced by a machine-centred understanding of interaction and not a solely material-centred one. The paper presents an observational study of interaction using non-functional prototype instruments designed to explore fundamental ideas and understandings of instrumental interaction in the digital realm. We will show that this understanding influences both gestural language and the ability to characterise an expected sonic/musical response.
@inproceedings{Pigrem2018, author = {Pigrem, Jon and McPherson, Andrew P.}, title = {Do We Speak Sensor? Cultural Constraints of Embodied Interaction }, pages = {382--385}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302633}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0085.pdf} }
Spencer Salazar and Jack Armitage. 2018. Re-engaging the Body and Gesture in Musical Live Coding. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 386–389. http://doi.org/10.5281/zenodo.1302635
Abstract
Download PDF DOI
At first glance, the practice of musical live coding seems distanced from the gestures and sense of embodiment common in musical performance, electronic or otherwise. This workshop seeks to explore the extent to which this assertion is justified, to re-examine notions of gesture and embodiment in the context of musical live coding performance, to consider historical approaches to synthesizing musical programming and gesture, and to look to the future for new ways of doing so. The workshop will consist firstly of a critical discussion of these issues and related literature. This will be followed by applied practical experiments involving ideas generated during these discussions. The workshop will conclude with a recapitulation and examination of these experiments in the context of previous research and proposed future directions.
@inproceedings{Salazarb2018, author = {Salazar, Spencer and Armitage, Jack}, title = {Re-engaging the Body and Gesture in Musical Live Coding}, pages = {386--389}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302635}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0086.pdf} }
Edgar Berdahl, Eric Sheffield, Andrew Pfalz, and Anthony T. Marasco. 2018. Widening the Razor-Thin Edge of Chaos Into a Musical Highway: Connecting Chaotic Maps to Digital Waveguides. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 390–393. http://doi.org/10.5281/zenodo.1302637
Abstract
Download PDF DOI
For the purpose of creating new musical instruments, chaotic dynamical systems can be simulated in real time to synthesize complex sounds. This work investigates a series of discrete-time chaotic maps, which have the potential to generate intriguing sounds when they are adjusted to be on the edge of chaos. With these chaotic maps as studied historically, the edge of chaos tends to be razor-thin, which can make it difficult to employ them for making new musical instruments. The authors therefore suggest connecting chaotic maps with digital waveguides, which (1) make it easier to synthesize harmonic tones and (2) make it harder to fall off of the edge of chaos while playing a musical instrument. The authors argue therefore that this technique widens the razor-thin edge of chaos into a musical highway.
@inproceedings{Berdahl2018, author = {Berdahl, Edgar and Sheffield, Eric and Pfalz, Andrew and Marasco, Anthony T.}, title = {Widening the Razor-Thin Edge of Chaos Into a Musical Highway: Connecting Chaotic Maps to Digital Waveguides}, pages = {390--393}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302637}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0087.pdf} }
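A minimal sketch of the technique named in the Berdahl et al. abstract above, connecting a discrete-time chaotic map to a digital waveguide. This Python code is a toy illustration; the logistic-map parameters, feedback coefficient, and scaling are assumptions, not the authors' implementation:

```python
import numpy as np

def logistic_map(r, x0, n):
    """Iterate the logistic map x[k+1] = r * x[k] * (1 - x[k])."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = r * x[k - 1] * (1.0 - x[k - 1])
    return x

def waveguide(excitation, delay, feedback=0.98):
    """Toy digital waveguide: a circular delay line with damped, averaged feedback."""
    out = np.zeros(len(excitation))
    buf = np.zeros(delay)
    idx = 0
    for n, e in enumerate(excitation):
        y = buf[idx]
        out[n] = y
        # feed the excitation plus a lightly low-passed copy of the output back in
        buf[idx] = e + feedback * 0.5 * (y + buf[idx - 1])
        idx = (idx + 1) % delay
    return out

sr = 44100
chaos = logistic_map(r=3.7, x0=0.4, n=sr) - 0.5   # chaotic excitation, roughly zero-centred
tone = waveguide(0.1 * chaos, delay=sr // 220)    # delay line tuned near 220 Hz
```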
Jeff Snyder, Aatish Bhatia, and Michael R Mulshine. 2018. Neuron-modeled Audio Synthesis: Nonlinear Sound and Control. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 394–397. http://doi.org/10.5281/zenodo.1302639
Abstract
Download PDF DOI
This paper describes a project to create a software instrument using a biological model of neuron behavior for audio synthesis. The translation of the model to a usable audio synthesis process is described, and a piece for laptop orchestra created using the instrument is discussed.
@inproceedings{Snyderb2018, author = {Snyder, Jeff and Bhatia, Aatish and Mulshine, Michael R}, title = {Neuron-modeled Audio Synthesis: Nonlinear Sound and Control}, pages = {394--397}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302639}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0088.pdf} }
Rodrigo F. Cádiz and Marie Gonzalez-Inostroza. 2018. Fuzzy Logic Control Toolkit 2.0: composing and synthesis by fuzzyfication. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 398–402. http://doi.org/10.5281/zenodo.1302641
Abstract
Download PDF DOI
In computer or electroacoustic music, it is often the case that the compositional act and the parametric control of the underlying synthesis algorithms or hardware are not separable from each other. In these situations, composition and control of the synthesis parameters are not easy to distinguish. One possible solution is by means of fuzzy logic. This approach provides simple, intuitive, yet powerful control of the compositional process, usually in interesting non-linear ways. Compositional control in this context is achieved by the fuzzification of the relevant internal synthesis parameters and the parallel computation of common-sense fuzzy inference rules specified by the composer. This approach has been implemented computationally as a software package entitled FLCTK (Fuzzy Logic Control Tool Kit), in the form of external objects for the widely used real-time compositional environments Max/MSP and Pd. In this article, we present an updated version of this tool. As a demonstration of the wide range of situations in which this approach could be used, we provide two examples of parametric fuzzy control: first, the fuzzy control of a water tank simulation, and second, a particle-based sound synthesis technique controlled by a fuzzy approach.
@inproceedings{Cádiz2018, author = {Cádiz, Rodrigo F. and Gonzalez-Inostroza, Marie}, title = {Fuzzy Logic Control Toolkit 2.0: composing and synthesis by fuzzyfication}, pages = {398--402}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302641}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0089.pdf} }
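To make the fuzzification idea in the Cádiz and Gonzalez-Inostroza abstract concrete, here is a small Python sketch of a two-rule fuzzy mapping from a normalised synthesis parameter to a gain. The membership functions and rule outputs are invented for illustration and do not reproduce the FLCTK externals:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzy_gain(level):
    """Two Mamdani-style rules over a normalised parameter (0..1):
    IF level is low THEN gain is soft; IF level is high THEN gain is loud."""
    low = tri(level, -0.5, 0.0, 0.6)
    high = tri(level, 0.4, 1.0, 1.5)
    # weighted-average defuzzification over the two rule outputs (0.2 and 0.9)
    return (low * 0.2 + high * 0.9) / max(low + high, 1e-9)

print(fuzzy_gain(0.3), fuzzy_gain(0.8))
```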
Sang-won Leigh and Pattie Maes. 2018. Guitar Machine: Robotic Fretting Augmentation for Hybrid Human-Machine Guitar Play. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 403–408. http://doi.org/10.5281/zenodo.1302643
Abstract
Download PDF DOI
Playing musical instruments involves producing gradually more challenging body movements and transitions, where the kinematic constraints of the body play a crucial role in structuring the resulting music. We seek to make a bridge between currently accessible motor patterns, and musical possibilities beyond those — afforded through the use of a robotic augmentation. Guitar Machine is a robotic device that presses on guitar strings and assists a musician by fretting alongside her on the same guitar. This paper discusses the design of the system, strategies for using the system to create novel musical patterns, and a user study that looks at the effects of the temporary acquisition of enhanced physical ability. Our results indicate that the proposed human-robot interaction would equip users to explore new musical avenues on the guitar, as well as provide an enhanced understanding of the task at hand on the basis of the robotically acquired ability.
@inproceedings{Leigh2018, author = {Leigh, Sang-won and Maes, Pattie}, title = {Guitar Machine: Robotic Fretting Augmentation for Hybrid Human-Machine Guitar Play}, pages = {403--408}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302643}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0090.pdf} }
Scott Barton, Karl Sundberg, Andrew Walter, Linda Sara Baker, Tanuj Sane, and Alexander O’Brien. 2018. Robotic Percussive Aerophone. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 409–412. http://doi.org/10.5281/zenodo.1302645
Abstract
Download PDF DOI
Percussive aerophones are configurable, modular, scalable, and can be constructed from commonly found materials. They can produce rich timbres, a wide range of pitches and complex polyphony. Their use by humans, perhaps most famously by the Blue Man Group, inspired us to build an electromechanically-actuated version of the instrument in order to explore expressive possibilities enabled by machines. The Music, Perception, and Robotics Lab at WPI has iteratively designed, built and composed for a robotic percussive aerophone since 2015, which has both taught lessons in actuation and revealed promising musical capabilities of the instrument.
@inproceedings{Barton2018, author = {Barton, Scott and Sundberg, Karl and Walter, Andrew and Baker, Linda Sara and Sane, Tanuj and O'Brien, Alexander}, title = {Robotic Percussive Aerophone}, pages = {409--412}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302645}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0091.pdf} }
Nathan Daniel Villicaña-Shaw, Spencer Salazar, and Ajay Kapur. 2018. Mechatronic Performance in Computer Music Compositions. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 413–418. http://doi.org/10.5281/zenodo.1302647
Abstract
Download PDF DOI
This paper introduces seven mechatronic compositions performed over three years at the xxxxx (xxxx). Each composition is discussed in regard to how it addresses the performative elements of mechatronic music concerts. The compositions are grouped into four classifications according to the types of interactions between human and robotic performers they afford: Non-Interactive, Mechatronic Instruments Played by Humans, Mechatronic Instruments Playing with Humans, and Social Interaction as Performance. The orchestration of each composition is described along with an overview of the piece’s compositional philosophy. Observations on how specific extra-musical compositional techniques can be incorporated into future mechatronic performances by human-robot performance ensembles are addressed.
@inproceedings{VillicañaShaw2018, author = {Villicaña-Shaw, Nathan Daniel and Salazar, Spencer and Kapur, Ajay}, title = {Mechatronic Performance in Computer Music Compositions}, pages = {413--418}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302647}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0092.pdf} }
2017
Robert Van Rooyen, Andrew Schloss, and George Tzanetakis. 2017. Voice Coil Actuators for Percussion Robotics. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 1–6. http://doi.org/10.5281/zenodo.1176149
Abstract
Download PDF DOI
Percussion robots have successfully used a variety of actuator technologies to activate a wide array of striking mechanisms. Popular types of actuators include solenoids and DC motors. However, the use of industrial strength voice coil actuators provides a compelling alternative given a desirable set of heterogeneous features and requirements that span traditional devices. Their characteristics such as high acceleration and accurate positioning enable the exploration of rendering highly accurate and expressive percussion performances.
@inproceedings{rrooyen2017, author = {Rooyen, Robert Van and Schloss, Andrew and Tzanetakis, George}, title = {Voice Coil Actuators for Percussion Robotics}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176149}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0001.pdf} }
Maurin Donneaud, Cedric Honnet, and Paul Strohmeier. 2017. Designing a Multi-Touch eTextile for Music Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 7–12. http://doi.org/10.5281/zenodo.1176151
Abstract
Download PDF DOI
We present a textile pressure sensor matrix, designed to be used as a musical multi-touch input device. An evaluation of our design demonstrated that the sensor's pressure response profile fits a logarithmic curve (R = 0.98). The input delay of the sensor is 2.1 ms. The average absolute error in one direction of the sensor was measured to be less than 10% of one of the matrix's strips (M = 1.8 mm, SD = 1.37 mm). We intend this technology to be easy to use and implement by experts and novices alike: We ensure the ease of use by providing a host application that tracks touch points and passes these on as OSC or MIDI messages. We make our design easy to implement by providing open source software and hardware and by choosing evaluation methods that use accessible tools, making quantitative comparisons between different branches of the design easy. We chose to work with textile to take advantage of its tactile properties and its malleability of form and to pay tribute to textile's rich cultural heritage.
@inproceedings{mdonneaud2017, author = {Donneaud, Maurin and Honnet, Cedric and Strohmeier, Paul}, title = {Designing a Multi-Touch eTextile for Music Performances}, pages = {7--12}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176151}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0002.pdf} }
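The logarithmic pressure-response fit reported by Donneaud et al. (R = 0.98) can be reproduced in spirit with a standard curve fit. The calibration data below is hypothetical; only the model form a·ln(p) + b follows the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data: applied pressure (arbitrary units) vs. raw sensor reading
pressure = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)
reading = np.array([110, 180, 255, 320, 400, 465, 540], dtype=float)

def log_response(p, a, b):
    """Logarithmic response model: reading = a * ln(pressure) + b."""
    return a * np.log(p) + b

(a, b), _ = curve_fit(log_response, pressure, reading)
r = np.corrcoef(reading, log_response(pressure, a, b))[0, 1]
print(f"fit: {a:.1f} * ln(p) + {b:.1f},  R = {r:.3f}")
```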
Peter Williams and Daniel Overholt. 2017. bEADS Extended Actuated Digital Shaker. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 13–18. http://doi.org/10.5281/zenodo.1176153
Abstract
Download PDF DOI
While there are a great variety of digital musical interfaces available to the working musician, few offer the level of immediate, nuanced and instinctive control that one finds in an acoustic shaker. bEADS is a prototype of a digital musical instrument that utilises the gestural vocabulary associated with shaken idiophones and expands on the techniques and sonic possibilities associated with them. By using a bespoke physically informed synthesis engine, in conjunction with accelerometer and pressure sensor data, an actuated handheld instrument has been built that allows for quickly switching between widely differing percussive sound textures. The prototype has been evaluated by three experts with different levels of involvement in professional music making.
@inproceedings{pwilliams2017, author = {Williams, Peter and Overholt, Daniel}, title = {bEADS Extended Actuated Digital Shaker}, pages = {13--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176153}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0003.pdf} }
Romain Michon, Julius O. Smith, Matthew Wright, Chris Chafe, John Granzow, and Ge Wang. 2017. Passively Augmenting Mobile Devices Towards Hybrid Musical Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 19–24. http://doi.org/10.5281/zenodo.1176155
Abstract
Download PDF DOI
Mobile devices constitute a generic platform to make standalone musical instruments for live performance. However, they were not designed for such use and have multiple limitations when compared to other types of instruments. We introduce a framework to quickly design and prototype passive mobile device augmentations to leverage existing features of the device for the end goal of mobile musical instruments. An extended list of examples is provided, and the results of a workshop, organized partly to evaluate our framework, are also reported.
@inproceedings{rmichon2017, author = {Michon, Romain and Smith, Julius O. and Wright, Matthew and Chafe, Chris and Granzow, John and Wang, Ge}, title = {Passively Augmenting Mobile Devices Towards Hybrid Musical Instrument Design}, pages = {19--24}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176155}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0004.pdf} }
Alice Eldridge and Chris Kiefer. 2017. Self-resonating Feedback Cello: Interfacing gestural and generative processes in improvised performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 25–29. http://doi.org/10.5281/zenodo.1176157
Abstract
Download PDF DOI
The Feedback Cello is a new electroacoustic actuated instrument in which feedback can be induced independently on each string. Built from retro-fitted acoustic cellos, the signals from electromagnetic pickups sitting under each string are passed to a speaker built into the back of the instrument and to transducers clamped in varying places across the instrument body. Placement of acoustic and mechanical actuators on the resonant body of the cello means that this simple analogue feedback system is capable of a wide range of complex self-resonating behaviours. This paper describes the motivations for building these instruments as both a physical extension to live coding practice and an electroacoustic augmentation of cello. The design and physical construction is outlined, and modes of performance described with reference to the first six months of performances and installations. Future developments and planned investigations are outlined.
@inproceedings{aeldridge2017, author = {Eldridge, Alice and Kiefer, Chris}, title = {Self-resonating Feedback Cello: Interfacing gestural and generative processes in improvised performance}, pages = {25--29}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176157}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0005.pdf} }
Don Derek Haddad, Xiao Xiao, Tod Machover, and Joseph Paradiso. 2017. Fragile Instruments: Constructing Destructible Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 30–33. http://doi.org/10.5281/zenodo.1176159
Abstract
Download PDF DOI
We introduce a family of fragile electronic musical instruments designed to be "played" through the act of destruction. Each Fragile Instrument consists of an analog synthesizing circuit with embedded sensors that detect the destruction of an outer shell, which is destroyed and replaced for each performance. Destruction plays an integral role in both the spectacle and the generated sounds. This paper presents several variations of Fragile Instruments we have created, discussing their circuit design as well as choices of material for the outer shell and tools of destruction. We conclude by considering other approaches to create intentionally destructible electronic musical instruments.
@inproceedings{dhaddad2017, author = {Haddad, Don Derek and Xiao, Xiao and Machover, Tod and Paradiso, Joseph}, title = {Fragile Instruments: Constructing Destructible Musical Interfaces}, pages = {30--33}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176159}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0006.pdf} }
Florian Heller, Irene Meying Cheung Ruiz, and Jan Borchers. 2017. An Augmented Flute for Beginners. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 34–37. http://doi.org/10.5281/zenodo.1176161
Abstract
Download PDF DOI
Learning to play the transverse flute is not an easy task, at least not for everyone. Since the flute does not have a reed to resonate, the player must provide a steady, focused stream of air that will cause the flute to resonate and thereby produce sound. In order to achieve this, the player has to be aware of the embouchure position to generate an adequate air jet. For a beginner, this can be a difficult task due to the lack of visual cues or indicators of the air jet and lip position. This paper attempts to address this problem by presenting an augmented flute that can make the gestures related to the embouchure visible and measurable. The augmented flute shows information about the area covered by the lower lip, estimates the lip hole shape based on noise analysis, and graphically shows the air jet direction. Additionally, the augmented flute provides directional and continuous feedback in real time, based on data acquired from experienced flutists.
@inproceedings{fheller2017, author = {Heller, Florian and Ruiz, Irene Meying Cheung and Borchers, Jan}, title = {An Augmented Flute for Beginners}, pages = {34--37}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176161}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0007.pdf} }
Gabriella Isaac, Lauren Hayes, and Todd Ingalls. 2017. Cross-Modal Terrains: Navigating Sonic Space through Haptic Feedback. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 38–41. http://doi.org/10.5281/zenodo.1176163
Abstract
Download PDF DOI
This paper explores the idea of using virtual textural terrains as a means of generating haptic profiles for force-feedback controllers. This approach breaks from the paradigm established within audio-haptic research over the last few decades where physical models within virtual environments are designed to transduce gesture into sonic output. We outline a method for generating multimodal terrains using basis functions, which are rendered into monochromatic visual representations for inspection. This visual terrain is traversed using a haptic controller, the NovInt Falcon, which in turn receives force information based on the grayscale value of its location in this virtual space. As the image is traversed by a performer the levels of resistance vary, and the image is realized as a physical terrain. We discuss the potential of this approach to afford engaging musical experiences for both the performer and the audience as iterated through numerous performances.
@inproceedings{gisaac2017, author = {Isaac, Gabriella and Hayes, Lauren and Ingalls, Todd}, title = {Cross-Modal Terrains: Navigating Sonic Space through Haptic Feedback}, pages = {38--41}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176163}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0008.pdf} }
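A simplified Python sketch of the terrain-to-force idea in the Isaac et al. entry above: a grayscale terrain built from basis functions is sampled at the controller's position and the value is mapped to a resistive force. The terrain shape, force range, and the NovInt Falcon interfacing are all omitted or assumed:

```python
import numpy as np

def make_terrain(width=512, height=512):
    """Basis-function terrain: a sum of a few 2D Gaussians, normalised to 0..1."""
    xs, ys = np.meshgrid(np.linspace(0, 1, width), np.linspace(0, 1, height))
    t = np.zeros_like(xs)
    for cx, cy, s in [(0.3, 0.4, 0.10), (0.7, 0.6, 0.15), (0.5, 0.2, 0.08)]:
        t += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * s ** 2))
    return t / t.max()

def force_at(terrain, x, y, max_force=5.0):
    """Map the grayscale value under the cursor (x, y in 0..1) to a resistive force."""
    h, w = terrain.shape
    i = int(np.clip(y * (h - 1), 0, h - 1))
    j = int(np.clip(x * (w - 1), 0, w - 1))
    return max_force * terrain[i, j]

terrain = make_terrain()
print(force_at(terrain, 0.31, 0.41))   # near a Gaussian peak, so strong resistance
```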
Jiayue Wu, Mark Rau, Yun Zhang, Yijun Zhou, and Matt Wright. 2017. Towards Robust Tracking with an Unreliable Motion Sensor Using Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 42–47. http://doi.org/10.5281/zenodo.1176165
Abstract
Download PDF DOI
This paper presents solutions to improve reliability and to work around challenges of using a Leap Motion sensor as a gestural control and input device in digital music instrument (DMI) design. We implement supervised learning algorithms (k-nearest neighbors, support vector machine, binary decision tree, and artificial neural network) to estimate hand motion data that is not typically captured by the sensor. Two problems are addressed: 1) the sensor cannot detect overlapping hands, and 2) the sensor's limited detection range. Training examples included 7 kinds of overlapping hand gestures as well as hand trajectories where a hand goes out of the sensor's range. The overlapping gestures were treated as a classification problem and the best performing model was k-nearest neighbors with 62% accuracy. The out-of-range problem was treated first as a clustering problem to group the training examples into a small number of trajectory types, then as a classification problem to predict trajectory type based on the hand's motion before going out of range. The best performing model was k-nearest neighbors with an accuracy of 30%. The prediction models were implemented in an ongoing multimedia electroacoustic vocal performance and an educational project named Embodied Sonic Meditation (ESM).
@inproceedings{jwu2017, author = {Wu, Jiayue and Rau, Mark and Zhang, Yun and Zhou, Yijun and Wright, Matt}, title = {Towards Robust Tracking with an Unreliable Motion Sensor Using Machine Learning}, pages = {42--47}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176165}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0009.pdf} }
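The overlapping-hands classification in the Wu et al. entry above is a standard supervised-learning setup; the sketch below shows a k-nearest-neighbours pipeline with scikit-learn on placeholder features. The real work lies in the hand-motion features and labelled gesture data, which are not reproduced here:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per frame of hand data just before the hands
# overlap (e.g. palm positions and velocities), with one label per gesture class.
rng = np.random.default_rng(0)
X = rng.normal(size=(700, 12))        # placeholder features
y = rng.integers(0, 7, size=700)      # 7 overlapping-gesture classes, as in the paper

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```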
Álvaro Barbosa and Thomas Tsang. 2017. Sounding Architecture: Inter-Disciplinary Studio at HKU. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 48–51. http://doi.org/10.5281/zenodo.1176167
Abstract
Download PDF DOI
Sounding Architecture is the first collaborative teaching development between the Department of Architecture and the Department of Music at the University of Hong Kong (HKU), introduced in Fall 2016. In this paper we present critical observations about the studio after a final public presentation of all projects. The review was conducted through demonstrations by groups of students, each supervised by a different lecturer and in each case focusing on a different strategy to create a connection between sound, music, acoustics, space and architectural design. There was an assumption that the core working process would have to include the design of a new musical instrument, which in some cases became the final deliverable of the studio and in other cases a step in a process leading to a different outcome (such as an architectural design, a performance or a social experiment). One other relevant aspect was that digital technology was used in the design and fabrication of the physical instruments' prototypes but, in very few cases, in the actual generation or enhancement of sound, with the instruments relying almost exclusively on acoustic and mechanical sound.
@inproceedings{abarbosa2017, author = {Barbosa, Álvaro and Tsang, Thomas}, title = {Sounding Architecture: Inter-Disciplinary Studio at HKU}, pages = {48--51}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176167}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0010.pdf} }
Martín Matus Lerner. 2017. Osiris: a liquid based digital musical instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 52–55. http://doi.org/10.5281/zenodo.1176169
Abstract
Download PDF DOI
This paper describes the process of creation of a new digital musical instrument: Osiris. This device is based on the circulation of liquids for the generation of musical notes. Besides the system of liquid distribution, a module that generates MIDI events was designed and built based on the Arduino platform; such module is employed together with a Proteus 2000 sound generator. The programming of the control module as well as the choice of sound-generating module had as their main objective that the instrument should provide an ample variety of sound and musical possibilities, controllable in real time.
@inproceedings{mlerner2017, author = {Matus Lerner, Martín}, title = {Osiris: a liquid based digital musical instrument}, pages = {52--55}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176169}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0011.pdf} }
Spyridon Stasis, Jason Hockman, and Ryan Stables. 2017. Navigating Descriptive Sub-Representations of Musical Timbre. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 56–61. http://doi.org/10.5281/zenodo.1176171
Abstract
Download PDF DOI
Musicians, audio engineers and producers often make use of common timbral adjectives to describe musical signals and transformations. However, the subjective nature of these terms, and the variability with respect to musical context often leads to inconsistencies in their definition. In this study, a model is proposed for controlling an equaliser by navigating clusters of datapoints, which represent grouped parameter settings with the same timbral description. The interface allows users to identify the nearest cluster to their current parameter setting and recommends changes based on its relationship to a cluster centroid. To do this, we apply dimensionality reduction to a dataset of equaliser curves described as warm and bright using a stacked autoencoder, then group the entries using an agglomerative clustering algorithm with a coherence based distance criterion. To test the efficacy of the system, we implement listening tests and show that subjects are able to match datapoints to their respective sub-representations with 93.75% mean accuracy.
@inproceedings{sstasis2017, author = {Stasis, Spyridon and Hockman, Jason and Stables, Ryan}, title = {Navigating Descriptive Sub-Representations of Musical Timbre}, pages = {56--61}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176171}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0012.pdf} }
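A compressed sketch of the pipeline described in the Stasis et al. entry above: reduce equaliser-curve settings to a low-dimensional embedding, then group them with agglomerative clustering. PCA stands in for the paper's stacked autoencoder, and the synthetic "warm"/"bright" curves are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

# Hypothetical dataset: each row is an EQ curve (gain in dB across 40 frequency bands)
rng = np.random.default_rng(1)
warm = rng.normal(0, 1, (50, 40)) + np.linspace(6, -6, 40)     # boosted lows
bright = rng.normal(0, 1, (50, 40)) + np.linspace(-6, 6, 40)   # boosted highs
curves = np.vstack([warm, bright])

# Reduce to 2D (the paper uses a stacked autoencoder; PCA is a stand-in here),
# then group parameter settings that share a timbral description.
embedded = PCA(n_components=2).fit_transform(curves)
labels = AgglomerativeClustering(n_clusters=2).fit_predict(embedded)
centroids = [embedded[labels == k].mean(axis=0) for k in range(2)]
print(centroids)   # cluster centroids toward which a user's setting could be nudged
```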
Peter Williams and Daniel Overholt. 2017. Pitch Fork: A Novel tactile Digital Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 62–64. http://doi.org/10.5281/zenodo.1176173
Abstract
Download PDF DOI
Pitch Fork is a prototype of an alternate, actuated digital musical instrument (DMI). It uses 5 infra-red and 4 piezoelectric sensors to control an additive synthesis engine. Iron bars are used as the physical point of contact in interaction, with the aim of using material computation to control aspects of the digitally produced sound. This material was also chosen to affect player experience. Sensor readings are relayed to a MacBook via an Arduino Mega. Mapping and audio output are handled with Pure Data Extended.
@inproceedings{pwilliams2017a, author = {Williams, Peter and Overholt, Daniel}, title = {Pitch Fork: A Novel tactile Digital Musical Instrument}, pages = {62--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176173}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0013.pdf} }
Cagri Erdem, Anil Camci, and Angus Forbes. 2017. Biostomp: A Biocontrol System for Embodied Performance Using Mechanomyography. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 65–70. http://doi.org/10.5281/zenodo.1176175
Abstract
Download PDF DOI
Biostomp is a new musical interface that relies on the use of mechanomyography (MMG) as a biocontrol mechanism in live performance situations. Designed in the form of a stomp box, Biostomp translates a performer's muscle movements into control signals. A custom MMG sensor captures the acoustic output of muscle tissue oscillations resulting from contractions. An analog circuit amplifies and filters these signals, and a micro-controller translates the processed signals into pulses. These pulses are used to activate a stepper motor mechanism, which is designed to be mounted on parameter knobs on effects pedals. The primary goal in designing Biostomp is to offer a robust, inexpensive, and easy-to-operate platform for integrating biological signals into both traditional and contemporary music performance practices without requiring intermediary computer software. In this paper, we discuss the design, implementation and evaluation of Biostomp. Following an overview of related work on the use of biological signals in artistic projects, we offer a discussion of our approach to conceptualizing and fabricating a biocontrol mechanism as a new musical interface. We then discuss the results of an evaluation study conducted with 21 professional musicians. A video abstract for Biostomp can be viewed at vimeo.com/biostomp/video.
@inproceedings{cerdem2017, author = {Erdem, Cagri and Camci, Anil and Forbes, Angus}, title = {Biostomp: A Biocontrol System for Embodied Performance Using Mechanomyography}, pages = {65--70}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176175}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0014.pdf} }
Esben W. Knudsen, Malte L. Hølledig, Mads Juel Nielsen, et al. 2017. Audio-Visual Feedback for Self-monitoring Posture in Ballet Training. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 71–76. http://doi.org/10.5281/zenodo.1181422
Abstract
Download PDF DOI
An application for ballet training is presented that monitors, in real time, the deviation of the posture (straightness of the spine and rotation of the pelvis) from an ideal position. The human skeletal data is acquired through a Microsoft Kinect v2. The movement of the student is mirrored through an abstract skeletal figure and instructions are provided through a virtual teacher. Posture deviation is measured in the following way: Torso misalignment is calculated by comparing hip center joint, shoulder center joint and neck joint position with an ideal posture position retrieved in an initial calibration procedure. Pelvis deviation is expressed as the xz-rotation of the hip-center joint. The posture deviation is sonified via a varying cut-off frequency of a high-pass filter applied to a flowing water sound. The posture deviation is visualized via a curve and a rigged skeleton in which the misaligned torso parts are color-coded. In an experiment with 9-12 year-old dance students from a ballet school, comparing the audio-visual feedback modality with no feedback led to an increase in posture accuracy (p < 0.001, Cohen’s d = 1.047). Reaction card feedback and expert interviews indicate that the feedback is considered fun and useful for training independently from the teacher.
@inproceedings{eknudsen2017, author = {Knudsen, Esben W. and Hølledig, Malte L. and Nielsen, Mads Juel and Petersen, Rikke K. and Bach-Nielsen, Sebastian and Zanescu, Bogdan-Constantin and Overholt, Daniel and Purwins, Hendrik and Helweg, Kim}, title = {Audio-Visual Feedback for Self-monitoring Posture in Ballet Training}, pages = {71--76}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1181422}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0015.pdf} }
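The sonification mapping in the Knudsen et al. entry above (posture deviation to the high-pass cut-off on a water sound) could look something like the following Python sketch; the deviation units, frequency range, and log-scale mapping are assumptions made for illustration:

```python
import numpy as np

def deviation_to_cutoff(deviation, dev_max=30.0, f_lo=100.0, f_hi=4000.0):
    """Map a posture deviation (e.g. degrees from the calibrated ideal) to a
    high-pass cutoff on a log scale: small deviation keeps the water sound full,
    large deviation thins it out."""
    d = np.clip(deviation / dev_max, 0.0, 1.0)
    return f_lo * (f_hi / f_lo) ** d

for dev in (0.0, 5.0, 15.0, 30.0):
    print(dev, round(deviation_to_cutoff(dev), 1), "Hz")
```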
Rikard Lindell and Tomas Kumlin. 2017. Augmented Embodied Performance – Extended Artistic Room, Enacted Teacher, and Humanisation of Technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 77–82. http://doi.org/10.5281/zenodo.1176177
Abstract
Download PDF DOI
We explore the phenomenology of embodiment based on research through design and reflection on the design of artefacts for augmenting embodied performance. We present three designs for highly trained musicians; the designs rely on the musicians’ mastery acquired from years of practice. Through the knowledge of the living body their instruments – saxophone, cello, and flute – are extensions of themselves; thus, we can explore technology with rich nuances and precision in corporeal schemas. With the help of Merleau-Ponty’s phenomenology of embodiment we present three hypotheses for augmented embodied performance: the extended artistic room, the interactively enacted teacher, and the humanisation of technology.
@inproceedings{rlindell2017, author = {Lindell, Rikard and Kumlin, Tomas}, title = {Augmented Embodied Performance – Extended Artistic Room, Enacted Teacher, and Humanisation of Technology}, pages = {77--82}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176177}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0016.pdf} }
Jens Vetter and Sarah Leimcke. 2017. Homo Restis — Constructive Control Through Modular String Topologies. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 83–86. http://doi.org/10.5281/zenodo.1176179
Abstract
Download PDF DOI
In this paper we discuss a modular instrument system for musical expression consisting of multiple devices using string detection, sound synthesis and wireless communication. The design of the system allows for different physical arrangements, which we define as topologies. In particular we will explain our concept and requirements, the system architecture including custom magnetic string sensors and our network communication and discuss its use in the performance HOMO RESTIS.
@inproceedings{jvetter2017, author = {Vetter, Jens and Leimcke, Sarah}, title = {Homo Restis --- Constructive Control Through Modular String Topologies}, pages = {83--86}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176179}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0017.pdf} }
Jeronimo Barbosa, Marcelo M. Wanderley, and Stéphane Huot. 2017. Exploring Playfulness in NIME Design: The Case of Live Looping Tools. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 87–92. http://doi.org/10.5281/zenodo.1176181
Abstract
Download PDF DOI
Play and playfulness compose an essential part of our lives as human beings. From childhood to adultness, playfulness is often associated with remarkable positive experiences related to fun, pleasure, intimate social activities, imagination, and creativity. Perhaps not surprisingly, playfulness has been recurrently used in NIME designs as a strategy to engage people, often non-expert, in short term musical activities. Yet, designing for playfulness remains a challenging task, as little knowledge is available for designers to support their decisions. To address this issue, we follow a design rationale approach using the context of Live Looping (LL) as a case study. We start by surveying 101 LL tools, summarizing our analysis into a new design space. We then use this design space to discuss potential guidelines to address playfulness in a design process. These guidelines are implemented and discussed in a new LL tool—called the "Voice Reaping Machine". Finally, we contrast our guidelines with previous works in the literature.
@inproceedings{jbarbosa2017, author = {Barbosa, Jeronimo and Wanderley, Marcelo M. and Huot, Stéphane}, title = {Exploring Playfulness in NIME Design: The Case of Live Looping Tools}, pages = {87--92}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176181}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0018.pdf} }
Daniel Manesh and Eran Egozy. 2017. Exquisite Score: A System for Collaborative Musical Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 93–98. http://doi.org/10.5281/zenodo.1176183
Abstract
Download PDF DOI
Exquisite Score is a web application which allows users to collaborate on short musical compositions using the paradigm of the parlor game exquisite corpse. Through a MIDI-sequencer interface, composers each contribute a section to a piece of music, only seeing the very end of the preceding section. Exquisite Score is both a fun and accessible compositional game and a system for encouraging highly novel musical compositions. Exquisite Score was tested by several students and musicians. Several short pieces were created, and a brief discussion and analysis of these pieces is included.
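The exquisite-corpse constraint described above (each contributor only sees the very end of the preceding section) can be sketched in a few lines. The following Python fragment is an editorial illustration only; the section structure, tail length, and beat-based note format are assumptions and are not taken from Exquisite Score itself.

```python
# Illustrative sketch of the exquisite-corpse rule: each composer receives
# only the final beats of the preceding section. All names and the beat-based
# note format are invented for this example.
from dataclasses import dataclass
from typing import List, Tuple

Note = Tuple[float, int, float]  # (onset_beat, midi_pitch, duration_beats)

@dataclass
class Section:
    author: str
    length_beats: float
    notes: List[Note]

def visible_tail(section: Section, tail_beats: float = 4.0) -> List[Note]:
    """Return only the notes that start in the last `tail_beats` of a section."""
    cutoff = section.length_beats - tail_beats
    return [n for n in section.notes if n[0] >= cutoff]

piece: List[Section] = []

def contribute(author: str, length_beats: float, notes: List[Note]) -> List[Note]:
    """Append a new section; return the tail this composer was allowed to see."""
    context = visible_tail(piece[-1]) if piece else []
    piece.append(Section(author, length_beats, notes))
    return context

# Composer A writes freely; composer B only ever sees A's final four beats.
contribute("A", 16.0, [(0.0, 60, 1.0), (14.0, 67, 2.0)])
print(contribute("B", 16.0, [(0.0, 65, 1.0)]))  # -> [(14.0, 67, 2.0)]
```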
@inproceedings{dmanesh2017, author = {Manesh, Daniel and Egozy, Eran}, title = {Exquisite Score: A System for Collaborative Musical Composition}, pages = {93--98}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176183}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0019.pdf} }
Stahl Stenslie, Kjell Tore Innervik, Ivar Frounberg, and Thom Johansen. 2017. Somatic Sound in Performative Contexts. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 99–103. http://doi.org/10.5281/zenodo.1176185
Abstract
Download PDF DOI
This paper presents a new spherically shaped capacitive sensor device for creating interactive compositions and embodied user experiences inside a periphonic, 3D sound space. The Somatic Sound project is presented here as a) a technologically innovative musical instrument, and b) an experiential art installation. One of the main research foci is to explore embodied experiences through moving, interactive and somatic sound. The term somatic is here understood and used as relating to the body in a physical, holistic and immersive manner.
@inproceedings{sstenslie2017, author = {Stenslie, Stahl and Innervik, Kjell Tore and Frounberg, Ivar and Johansen, Thom}, title = {Somatic Sound in Performative Contexts}, pages = {99--103}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176185}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0020.pdf} }
Jeppe Veirum Larsen and Hendrik Knoche. 2017. States and Sound: Modelling Interactions with Musical User Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 104–109. http://doi.org/10.5281/zenodo.1176187
Abstract
Download PDF DOI
Musical instruments and musical user interfaces provide rich input and feedback through mostly tangible interactions, resulting in complex behavior. However, publications on novel interfaces often lack the required detail, due to their complexity, a focus on a specific part of the interface, and the absence of a template or structure for describing these interactions. Drawing on and synthesizing models from interaction design and music making, we propose a way of modeling musical interfaces by providing a scheme and visual language to describe, design, analyze, and compare interfaces for music making. To illustrate its capabilities we apply the proposed model to a range of assistive musical instruments, which often draw on multi-modal in- and output, resulting in complex designs and descriptions thereof.
@inproceedings{jlarsen2017, author = {Larsen, Jeppe Veirum and Knoche, Hendrik}, title = {States and Sound: Modelling Interactions with Musical User Interfaces}, pages = {104--109}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176187}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0021.pdf} }
Guangyu Xia and Roger Dannenberg. 2017. Improvised Duet Interaction: Learning Improvisation Techniques for Automatic Accompaniment. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 110–114. http://doi.org/10.5281/zenodo.1176189
Abstract
Download PDF DOI
The interaction between music improvisers is studied in the context of piano duets, where one improviser embellishes a melody, and the other plays a chordal accompaniment with great freedom. We created an automated accompaniment player that learns to play from example performances. Accompaniments are constructed by selecting and concatenating one-measure score units from actual performances. An important innovation is the ability to learn how the improvised accompaniment should respond to variations in the melody performance, using tempo and embellishment complexity as features, resulting in a truly interactive performance within a conventional musical framework. We conducted both objective and subjective evaluations, showing that the learned improviser performs more interactive, musical, and human-like accompaniment compared with the less responsive, rule-based baseline algorithm.
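As an editorial illustration of the measure-level selection idea described above, the sketch below picks a stored accompaniment measure whose recorded context (tempo and embellishment complexity) is nearest to the live melody's features. It is a generic nearest-neighbour stand-in over invented data, not the authors' actual learning method.

```python
# Minimal sketch: choose the stored accompaniment measure whose performance
# context (tempo, embellishment complexity) best matches the live melody.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical corpus: each row is one performed measure with
# [tempo_bpm, embellishment_complexity], indexed into stored MIDI units.
corpus_features = np.column_stack([
    rng.uniform(60, 140, size=200),   # tempo
    rng.uniform(0.0, 1.0, size=200),  # complexity (e.g. notes per beat, scaled)
])
corpus_unit_ids = np.arange(200)

def select_unit(live_tempo: float, live_complexity: float) -> int:
    """Return the id of the accompaniment measure closest in feature space."""
    query = np.array([live_tempo, live_complexity])
    # normalise features so tempo does not dominate the distance
    scale = corpus_features.std(axis=0)
    d = np.linalg.norm((corpus_features - query) / scale, axis=1)
    return int(corpus_unit_ids[np.argmin(d)])

print(select_unit(live_tempo=96.0, live_complexity=0.7))
```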
@inproceedings{gxia2017, author = {Xia, Guangyu and Dannenberg, Roger}, title = {Improvised Duet Interaction: Learning Improvisation Techniques for Automatic Accompaniment}, pages = {110--114}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176189}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0022.pdf} }
Palle Dahlstedt. 2017. Physical Interactions with Digital Strings — A hybrid approach to a digital keyboard instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 115–120. http://doi.org/10.5281/zenodo.1176191
Abstract
Download PDF DOI
A new hybrid approach to digital keyboard playing is presented, where the actual acoustic sounds from a digital keyboard are captured with contact microphones and applied as excitation signals to a digital model of a prepared piano, i.e., an extended wave-guide model of strings with the possibility of stopping and muting the strings at arbitrary positions. The parameters of the string model are controlled through TouchKeys multitouch sensors on each key, combined with MIDI data and acoustic signals from the digital keyboard frame, using a novel mapping. The instrument is evaluated from a performing musician’s perspective, and emerging playing techniques are discussed. Since the instrument is a hybrid acoustic-digital system with several feedback paths between the domains, it provides for expressive and dynamic playing, with qualities approaching that of an acoustic instrument, yet with new kinds of control. The contributions are two-fold. First, the use of acoustic sounds from a physical keyboard for excitations and resonances results in a novel hybrid keyboard instrument in itself. Second, the digital model of "inside piano" playing, using multitouch keyboard data, allows for performance techniques going far beyond conventional keyboard playing.
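To give a flavour of driving a string model with captured keyboard sounds, here is a minimal sketch of a Karplus-Strong style string excited by an external signal buffer. It stands in for the paper's extended waveguide model with stopping and muting; all parameters and the excitation burst are illustrative.

```python
# Toy digital string driven by an external excitation signal, standing in for
# the idea of exciting a string model with sounds captured from the keyboard.
import numpy as np

def excited_string(excitation: np.ndarray, freq: float, sr: int = 44100,
                   damping: float = 0.996, dur: float = 1.0) -> np.ndarray:
    delay = int(sr / freq)                 # string length in samples
    line = np.zeros(delay)                 # delay line (the "string")
    n = int(sr * dur)
    out = np.zeros(n)
    prev = 0.0
    for i in range(n):
        inject = excitation[i] if i < len(excitation) else 0.0
        cur = line[i % delay] + inject     # feed excitation into the string
        out[i] = cur
        # averaging filter in the feedback loop models loss / damping
        line[i % delay] = damping * 0.5 * (cur + prev)
        prev = cur
    return out

if __name__ == "__main__":
    sr = 44100
    burst = np.random.uniform(-1, 1, sr // 100)  # stand-in for a contact-mic thump
    y = excited_string(burst, freq=220.0, sr=sr)
    print(y.shape, float(abs(y).max()))
```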
@inproceedings{pdahlstedt2017, author = {Dahlstedt, Palle}, title = {Physical Interactions with Digital Strings --- A hybrid approach to a digital keyboard instrument}, pages = {115--120}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176191}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0023.pdf} }
Charles Roberts and Graham Wakefield. 2017. gibberwocky: New Live-Coding Instruments for Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 121–126. http://doi.org/10.5281/zenodo.1176193
Abstract
Download PDF DOI
We describe two new versions of the gibberwocky live-coding system. One integrates with Max/MSP while the second targets MIDI output and runs entirely in the browser. We discuss commonalities and differences between the three environments, and how they fit into the live-coding landscape. We also describe lessons learned while performing with the original version of gibberwocky, both from our perspective and the perspective of others. These lessons informed the addition of animated sparkline visualizations depicting modulations to performers and audiences in all three versions.
@inproceedings{croberts2017, author = {Roberts, Charles and Wakefield, Graham}, title = {gibberwocky: New Live-Coding Instruments for Musical Performance}, pages = {121--126}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176193}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0024.pdf} }
Sasha Leitman. 2017. Current Iteration of a Course on Physical Interaction Design for Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 127–132. http://doi.org/10.5281/zenodo.1176197
Abstract
Download PDF DOI
This paper is an overview of the current state of a course on New Interfaces for Musical Expression taught at Stanford University. It gives an overview of the various technologies and methodologies used to teach the interdisciplinary work of new musical interfaces.
@inproceedings{sleitman2017, author = {Leitman, Sasha}, title = {Current Iteration of a Course on Physical Interaction Design for Music}, pages = {127--132}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176197}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0025.pdf} }
Alex Hofmann, Bernt Isak Waerstad, Saranya Balasubramanian, and Kristoffer E. Koch. 2017. From interface design to the software instrument — Mapping as an approach to FX-instrument building. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 133–138. http://doi.org/10.5281/zenodo.1176199
Abstract
Download PDF DOI
To build electronic musical instruments, a mapping between the real-time audio processing software and the physical controllers is required. Different strategies of mapping were developed and discussed within the NIME community to improve musical expression in live performances. This paper discusses an interface focussed instrument design approach, which starts from the physical controller and its functionality. From this definition, the required, underlying software instrument is derived. A proof of concept is implemented as a framework for effect instruments. This framework comprises a library of real-time effects for Csound, a proposition for a JSON-based mapping format, and a mapping-to-instrument converter that outputs Csound instrument files. Advantages, limitations and possible future extensions are discussed.
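As a hedged illustration of the mapping-to-instrument idea, the sketch below converts a small JSON mapping description into a Csound instrument skeleton. The JSON schema, controller numbers, and the generated orchestra code are invented for this example; the paper defines its own mapping format and effect library.

```python
# Illustrative converter: a small JSON mapping description is turned into a
# Csound instrument skeleton. The schema and generated code are invented here.
import json

mapping_json = """
{
  "instrument": 1,
  "controls": [
    {"cc": 21, "param": "kCutoff", "min": 200.0, "max": 8000.0},
    {"cc": 22, "param": "kFeedback", "min": 0.0, "max": 0.95}
  ]
}
"""

def mapping_to_csound(mapping: dict) -> str:
    lines = [f"instr {mapping['instrument']}"]
    for c in mapping["controls"]:
        # scale incoming MIDI CC (0-127) into the requested parameter range
        lines.append(f"  {c['param']} ctrl7 1, {c['cc']}, {c['min']}, {c['max']}")
    lines.append("  ; ... effect processing using the parameters above ...")
    lines.append("endin")
    return "\n".join(lines)

print(mapping_to_csound(json.loads(mapping_json)))
```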
@inproceedings{ahofmann2017, author = {Hofmann, Alex and Waerstad, Bernt Isak and Balasubramanian, Saranya and Koch, Kristoffer E.}, title = {From interface design to the software instrument --- Mapping as an approach to FX-instrument building}, pages = {133--138}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176199}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0026.pdf} }
Marco Marchini, François Pachet, and Benoît Carré. 2017. Rethinking Reflexive Looper for structured pop music. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 139–144. http://doi.org/10.5281/zenodo.1176201
Abstract
Download PDF DOI
Reflexive Looper (RL) is a live-looping system which allows a solo musician to incarnate the different roles of a whole rhythm section by looping rhythms, chord progressions, basslines and more. The loop pedal is still the most widely used device for this type of performance, accounting for many of the cover-song performances on YouTube, but it does not suit all kinds of songs. Unlike a common loop pedal, each layer of sound in RL is produced by an intelligent looping agent which adapts to the musician and respects given constraints, using constrained optimization. In its original form, RL worked well for jazz guitar improvisation but was unsuited to structured music such as pop songs. In order to bring the system onto the pop stage, we revisited the system interaction, following the guidelines of professional users who tested it extensively. We describe the revisited system, which can accommodate both pop and jazz. Thanks to intuitive pedal interaction and structure constraints, the new RL handles pop music and has already been used in several live concert situations.
@inproceedings{mmarchini2017, author = {Marchini, Marco and Pachet, François and Carré, Benoît}, title = {Rethinking Reflexive Looper for structured pop music}, pages = {139--144}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176201}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0027.pdf} }
Victor Zappi, Andrew Allen, and Sidney Fels. 2017. Shader-based Physical Modelling for the Design of Massive Digital Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 145–150. http://doi.org/10.5281/zenodo.1176203
Abstract
Download PDF DOI
Physical modelling is a sophisticated synthesis technique, often used in the design of Digital Musical Instruments (DMIs). Some of the most precise physical simulations of sound propagation are based on Finite-Difference Time-Domain (FDTD) methods, which are stable, highly parameterizable but characterized by an extremely heavy computational load. This drawback hinders the spread of FDTD from the domain of off-line simulations to the one of DMIs. With this paper, we present a novel approach to real-time physical modelling synthesis, which implements a 2D FDTD solver as a shader program running on the GPU directly within the graphics pipeline. The result is a system capable of running fully interactive, massively sized simulation domains, suitable for novel DMI design. With the help of diagrams and code snippets, we provide the implementation details of a first interactive application, a drum head simulator whose source code is available online. Finally, we evaluate the proposed system, showing how this new approach can work as a valuable alternative to classic GPGPU modelling.
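For readers unfamiliar with FDTD, the following NumPy sketch shows the per-cell stencil that such a simulation evaluates at every time step; the shader approach described above runs the same kind of update for every cell in parallel on the GPU. Grid size and constants here are illustrative, not the paper's.

```python
# Textbook 2D finite-difference time-domain update for the wave equation.
import numpy as np

N = 128                      # grid is N x N cells
c = 0.4                      # Courant number (must stay below 1/sqrt(2) in 2D)
u_prev = np.zeros((N, N))    # pressure field at t-1
u_curr = np.zeros((N, N))    # pressure field at t
u_curr[N // 2, N // 2] = 1.0 # impulse excitation in the middle (a "strike")

def step(u_prev: np.ndarray, u_curr: np.ndarray) -> np.ndarray:
    """One FDTD step: u_next = 2*u - u_prev + c^2 * laplacian(u)."""
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0)
           + np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1)
           - 4.0 * u_curr)
    u_next = 2.0 * u_curr - u_prev + (c ** 2) * lap
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0  # rigid walls
    return u_next

listener = []
for _ in range(256):
    u_prev, u_curr = u_curr, step(u_prev, u_curr)
    listener.append(u_curr[N // 4, N // 4])   # sample one cell as "audio" output

print(len(listener), max(abs(v) for v in listener))
```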
@inproceedings{vzappi2017, author = {Zappi, Victor and Allen, Andrew and Fels, Sidney}, title = {Shader-based Physical Modelling for the Design of Massive Digital Musical Instruments}, pages = {145--150}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176203}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0028.pdf} }
David Johnson and George Tzanetakis. 2017. VRMin: Using Mixed Reality to Augment the Theremin for Musical Tutoring. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 151–156. http://doi.org/10.5281/zenodo.1176205
Abstract
Download PDF DOI
The recent resurgence of Virtual Reality (VR) technologies provides new platforms for augmenting traditional music instruments. Instrument augmentation is a common approach for designing new interfaces for musical expression, as shown through hyperinstrument research. New visual affordances present in VR give designers new methods for augmenting instruments to extend not only their expressivity, but also their capabilities for computer-assisted tutoring. In this work, we present VRMin, a mobile Mixed Reality (MR) application that augments a physical theremin with an immersive virtual environment (VE) for real-time computer-assisted tutoring. We augment the physical theremin with 3D visual cues to indicate correct hand positioning for performing given notes and volumes. The physical theremin acts as a domain-specific controller for the resulting MR environment. The initial effectiveness of this approach is measured by analyzing a performer’s hand position while training with and without the VRMin. We also evaluate the usability of the interface using heuristic evaluation, based on a newly proposed set of guidelines designed for VR musical environments.
@inproceedings{djohnson2017, author = {Johnson, David and Tzanetakis, George}, title = {VRMin: Using Mixed Reality to Augment the Theremin for Musical Tutoring}, pages = {151--156}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176205}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0029.pdf} }
Richard Graham, Brian Bridges, Christopher Manzione, and William Brent. 2017. Exploring Pitch and Timbre through 3D Spaces: Embodied Models in Virtual Reality as a Basis for Performance Systems Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 157–162. http://doi.org/10.5281/zenodo.1176207
Abstract
Download PDF DOI
Our paper builds on an ongoing collaboration between theorists and practitioners within the computer music community, with a specific focus on three-dimensional environments as an incubator for performance systems design. In particular, we are concerned with how to provide accessible means of controlling spatialization and timbral shaping in an integrated manner through the collection of performance data from various modalities from an electric guitar with a multichannel audio output. This paper will focus specifically on the combination of pitch data treated within tonal models and the detection of physical performance gestures using timbral feature extraction algorithms. We discuss how these tracked gestures may be connected to concepts and dynamic relationships from embodied cognition, expanding on performative models for pitch and timbre spaces. Finally, we explore how these ideas support connections between sonic, formal and performative dimensions. This includes instrumental technique detection scenes and mapping strategies aimed at bridging music performance gestures across physical and conceptual planes.
@inproceedings{rgraham2017, author = {Graham, Richard and Bridges, Brian and Manzione, Christopher and Brent, William}, title = {Exploring Pitch and Timbre through 3D Spaces: Embodied Models in Virtual Reality as a Basis for Performance Systems Design}, pages = {157--162}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176207}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0030.pdf} }
Michael Gurevich. 2017. Discovering Instruments in Scores: A Repertoire-Driven Approach to Designing New Interfaces for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 163–168. http://doi.org/10.5281/zenodo.1176209
Abstract
Download PDF DOI
This paper situates NIME practice with respect to models of social interaction among human agents. It argues that the conventional model of composer-performer-listener, and the underlying mid-20th century metaphor of music as communication upon which it relies, cannot reflect the richness of interaction and possibility afforded by interactive digital technologies. Building on Paul Lansky’s vision of an expanded and dynamic social network, an alternative, ecological view of music-making is presented, in which meaning emerges not from "messages" communicated between individuals, but instead from the "noise" that arises through the uncertainty in their interactions. However, in our tendency in NIME to collapse the various roles in this network into a single individual, we place the increased potential afforded by digital systems at risk. Using examples from the author’s NIME practices, the paper uses a practice-based methodology to describe approaches to designing instruments that respond to the technologies that form the interfaces of the network, which can include scores and stylistic conventions. In doing so, the paper demonstrates that a repertoire—a seemingly anachronistic concept—and a corresponding repertoire-driven approach to creating NIMEs can in fact be a catalyst for invention and creativity.
@inproceedings{mgurevich2017, author = {Gurevich, Michael}, title = {Discovering Instruments in Scores: A Repertoire-Driven Approach to Designing New Interfaces for Musical Expression}, pages = {163--168}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176209}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0031.pdf} }
Joe Cantrell. 2017. Designing Intent: Defining Critical Meaning for NIME Practitioners. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 169–173. http://doi.org/10.5281/zenodo.1176211
Abstract
Download PDF DOI
The ideation, conception and implementation of new musical interfaces and instruments provide more than the mere construction of digital objects. As physical and digital assemblages, interfaces also act as traces of the authoring entities that created them. Their intentions, likes, dislikes, and ultimate determinations of what is creatively useful all get embedded into the available choices of the interface. In this light, the self-perception of the musical HCI and instrument designer can be seen as occupying a primary importance in the instruments and interfaces that eventually come to be created. The work of a designer who self-identifies as an artist may result in a vastly different outcome than one who considers him or herself to be an entrepreneur, or a scientist, for example. These differing definitions of self as well as their HCI outcomes require their own means of critique, understanding and expectations. All too often, these definitions are unclear, or the considerations of overlapping means of critique remain unexamined.
@inproceedings{jcantrell2017, author = {Cantrell, Joe}, title = {Designing Intent: Defining Critical Meaning for NIME Practitioners}, pages = {169--173}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176211}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0032.pdf} }
Juan Vasquez, Koray Tahiroğlu, and Johan Kildal. 2017. Idiomatic Composition Practices for New Musical Instruments: Context, Background and Current Applications. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 174–179. http://doi.org/10.5281/zenodo.1181424
Abstract
Download PDF DOI
One of the reasons why some musical instruments more successfully continue their evolution and actively take part in the history of music is the existence of compositions made specifically for them, pieces that remain and are still played over a long period of time. Performing these compositions keeps the characteristics of the instruments alive and helps them survive. This paper presents our contribution to this discussion with a context and historical background for idiomatic compositions. Looking beyond the classical era, we discuss how the concept of idiomatic music has influenced research and composition practices in the NIME community, drawing attention to the way current idiomatic composition practices consider specific NIME affordances for sonic, social and spatial interaction. We present particular projects that establish idiomatic writing as part of a new repertoire for new musical instruments. The idiomatic writing approach to composing music for NIME can shift the unique characteristics of new instruments towards a more established musical identity, providing a shared understanding and a common literature to the community.
@inproceedings{jvasquez2017, author = {Vasquez, Juan and Tahiroğlu, Koray and Kildal, Johan}, title = {Idiomatic Composition Practices for New Musical Instruments: Context, Background and Current Applications}, pages = {174--179}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1181424}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0033.pdf} }
Florent Berthaut, Cagan Arslan, and Laurent Grisoni. 2017. Revgest: Augmenting Gestural Musical Instruments with Revealed Virtual Objects. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 180–185. http://doi.org/10.5281/zenodo.1176213
Abstract
Download PDF DOI
Gestural interfaces, which make use of physiological signals, hand/body postures or movements, have become widespread for musical expression. While they may increase the transparency and expressiveness of instruments, they may also result in limited agency, for musicians as well as for spectators. This problem becomes especially acute when the implemented mappings between gesture and music are subtle or complex. These instruments may also restrict the appropriation possibilities of controls, by comparison with physical interfaces. Most existing solutions to these issues are based on distant and/or limited visual feedback (LEDs, small screens). Our approach is to augment the gestures themselves with revealed virtual objects. Our contributions are, first, a novel approach to visual feedback that allows for additional expressiveness; second, a software pipeline for pixel-level feedback and control that ensures tight coupling between sound and visuals; and third, a design space for extending gestural control using revealed interfaces. We also demonstrate and evaluate our approach with the augmentation of three existing gestural musical instruments.
@inproceedings{fberthaut2017, author = {Berthaut, Florent and Arslan, Cagan and Grisoni, Laurent}, title = {Revgest: Augmenting Gestural Musical Instruments with Revealed Virtual Objects}, pages = {180--185}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176213}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0034.pdf} }
Akito van Troyer. 2017. MM-RT: A Tabletop Musical Instrument for Musical Wonderers. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 186–191. http://doi.org/10.5281/zenodo.1176215
Abstract
Download PDF DOI
MM-RT (material and magnet — rhythm and timbre) is a tabletop musical instrument equipped with electromagnetic actuators to offer a new paradigm of musical expression and exploration. After expanding on prior work with electromagnetic instrument actuation and tabletop musical interfaces, the paper explains why and how MM-RT, through its physicality and ergonomics, has been designed specifically for musical wonderers: people who want to know more about music in installation, concert, and everyday contexts. Those wonderers aspire to interpret and explore music rather than focussing on a technically correct realization of music. Informed by this vision, we then describe the design and technical implementation of this tabletop musical instrument. The paper concludes with discussions about future works and how to trigger musical wonderers’ sonic curiosity to encounter, explore, invent, and organize sounds for music creation using a musical instrument like MM-RT.
@inproceedings{atroyer2017, author = {van Troyer, Akito}, title = {MM-RT: A Tabletop Musical Instrument for Musical Wonderers}, pages = {186--191}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176215}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0035.pdf} }
Fabio Morreale and Andrew McPherson. 2017. Design for Longevity: Ongoing Use of Instruments from NIME 2010-14. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 192–197. http://doi.org/10.5281/zenodo.1176218
Abstract
Download PDF DOI
Every new edition of NIME brings dozens of new DMIs and the feeling that only a few of them will eventually break through. Previous work tried to address this issue with a deductive approach by formulating design frameworks; we addressed it with an inductive approach by elaborating on the successes and failures of previous DMIs. We contacted 97 DMI makers who presented a new instrument at five successive editions of NIME (2010-2014); 70 answered. They were asked to indicate the original motivation for designing the DMI and to present information about its uptake. Results confirmed that most of the instruments have difficulties establishing themselves. They were also asked to reflect on the specific factors that facilitated, and those that hindered, instrument longevity. By grounding these reflections in existing research on NIME and HCI, we propose a series of design considerations for future DMIs.
@inproceedings{fmorreale2017, author = {Morreale, Fabio and McPherson, Andrew}, title = {Design for Longevity: Ongoing Use of Instruments from NIME 2010-14}, pages = {192--197}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176218}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0036.pdf} }
Samuel Delalez and Christophe d’Alessandro. 2017. Vokinesis: Syllabic Control Points for Performative Singing Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 198–203. http://doi.org/10.5281/zenodo.1176220
Abstract
Download PDF DOI
Performative control of voice is the process of real-time speech synthesis or modification by means of hand or foot gestures. Vokinesis, a system for real-time rhythm and pitch modification and control of singing, is presented. Pitch and vocal effort are controlled by a stylus on a graphic tablet. The concept of Syllabic Control Points (SCP) is introduced for timing and rhythm control. A chain of phonetic syllables has two types of temporal phases: the steady phases, which correspond to the vocalic nuclei, and the transient phases, which correspond to the attacks and/or codas. Thus, syllabic rhythm control methods need transient- and steady-phase control points, corresponding to the ancient concepts of arsis and thesis in prosodic theory. SCP allow for accurate control of articulation, using hands or feet. In the Tap mode, SCP are triggered by pressing and releasing a control button. In the Fader mode, continuous variation of the SCP sequencing rate is controlled with expression pedals. Vokinesis has been tested successfully in musical performances, using both syllabic rhythm control modes. This system opens new musical possibilities, and can be extended to other types of sounds beyond voice.
@inproceedings{sdelalez2017, author = {Delalez, Samuel and d'Alessandro, Christophe}, title = {Vokinesis: Syllabic Control Points for Performative Singing Synthesis}, pages = {198--203}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176220}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0037.pdf} }
Gareth Young, Dave Murphy, and Jeffrey Weeter. 2017. A Qualitative Analysis of Haptic Feedback in Music Focused Exercises. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 204–209. http://doi.org/10.5281/zenodo.1176222
Abstract
Download PDF DOI
We present the findings of a pilot-study that analysed the role of haptic feedback in a musical context. To examine the role of haptics in Digital Musical Instrument (DMI) design an experiment was formulated to measure the users’ perception of device usability across four separate feedback stages: fully haptic (force and tactile combined), constant force only, vibrotactile only, and no feedback. The study was piloted over extended periods with the intention of exploring the application and integration of DMIs in real-world musical contexts. Applying a music orientated analysis of this type enabled the investigative process to not only take place over a comprehensive period, but allowed for the exploration of DMI integration in everyday compositional practices. As with any investigation that involves creativity, it was important that the participants did not feel rushed or restricted. That is, they were given sufficient time to explore and assess the different feedback types without constraint. This provided an accurate and representational set of qualitative data for validating the participants’ experience with the different feedback types they were presented with.
@inproceedings{gyoung2017, author = {Young, Gareth and Murphy, Dave and Weeter, Jeffrey}, title = {A Qualitative Analysis of Haptic Feedback in Music Focused Exercises}, pages = {204--209}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176222}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0038.pdf} }
Jingyin He, Jim Murphy, Dale A. Carnegie, and Ajay Kapur. 2017. Towards Related-Dedicated Input Devices for Parametrically Rich Mechatronic Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 210–215. http://doi.org/10.5281/zenodo.1176224
Abstract
Download PDF DOI
In recent years, mechatronic musical instruments (MMI) have become increasingly parametrically rich. Researchers have developed different interaction strategies to negotiate the challenge of interfacing with each of the MMI’s high-resolution parameters in real time. While mapping strategies are an important aspect of the musical interaction paradigm for MMI, attention to dedicated input devices for performing these instruments live should not be neglected. This paper presents the findings of a user study conducted with participants possessing specialized musicianship skills for MMI music performance and composition. Study participants are given three musical tasks to complete using a mechatronic chordophone with high dimensionality of control via different musical input interfaces (one input device at a time). This representative user study reveals the features of related-dedicated input controllers, how they compare against the typical MIDI keyboard/sequencer paradigm in human-MMI interaction, and provides an indication of the musical function that expert users prefer for each input interface.
@inproceedings{jhe2017, author = {He, Jingyin and Murphy, Jim and Carnegie, Dale A. and Kapur, Ajay}, title = {Towards Related-Dedicated Input Devices for Parametrically Rich Mechatronic Musical Instruments}, pages = {210--215}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176224}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0039.pdf} }
Asha Blatherwick, Luke Woodbury, and Tom Davis. 2017. Design Considerations for Instruments for Users with Complex Needs in SEN Settings. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 216–221. http://doi.org/10.5281/zenodo.1176226
Abstract
Download PDF DOI
Music technology can provide unique opportunities to allow access to music making for those with complex needs in special educational needs (SEN) settings. Whilst there is a growing trend of research in this area, technology has been shown to face a variety of issues leading to underuse in this context. This paper reviews issues raised in the literature and in practice for the use of music technology in SEN settings. The paper then reviews existing principles and frameworks for designing digital musical instruments (DMIs). The reviews of literature and current frameworks are then used to inform a set of design considerations for instruments for users with complex needs and in SEN settings. 18 design considerations are presented with connections to literature and practice. An implementation example including future work is presented, and finally a conclusion is offered.
@inproceedings{ablatherwick2017, author = {Blatherwick, Asha and Woodbury, Luke and Davis, Tom}, title = {Design Considerations for Instruments for Users with Complex Needs in SEN Settings}, pages = {216--221}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176226}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0040.pdf} }
Abram Hindle and Daryl Posnett. 2017. Performance with an Electronically Excited Didgeridoo. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 222–226. http://doi.org/10.5281/zenodo.1176228
Abstract
Download PDF DOI
The didgeridoo is a wind instrument composed of a single large tube, often used as a drone instrument for backing up the mids and lows of an ensemble. A didgeridoo is played by buzzing the lips and blowing air into the didgeridoo. To play a didgeridoo continuously one can employ circular breathing, but the volume of air required poses a real challenge to novice players. In this paper we replace the expense of circular breathing and lip buzzing with electronic excitation, thus creating an electro-acoustic or electronic didgeridoo. We describe the didgeridoo excitation signal, how to replicate it, and the hardware necessary to make an electro-acoustic didgeridoo driven by speakers and controllable from a computer. To properly drive the didgeridoo we rely upon 4th-order ported bandpass speaker boxes to help guide our excitation signals into an attached acoustic didgeridoo. The results somewhat replicate human didgeridoo playing, enabling a new kind of mid-to-low electro-acoustic accompaniment without the need for circular breathing.
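A rough, editorial stand-in for a lip-buzz style excitation signal is sketched below: a low-frequency pulse train with a little noise, of the kind one might feed to a speaker driving the instrument. The paper characterises the actual excitation signal and its speaker-box chain; the frequency, pulse shape and noise level here are guesses.

```python
# Crude stand-in for a lip-buzz excitation: narrow periodic pulses plus noise.
import numpy as np

def lip_buzz(freq: float = 70.0, dur: float = 2.0, sr: int = 44100,
             noise: float = 0.05) -> np.ndarray:
    t = np.arange(int(sr * dur)) / sr
    phase = (t * freq) % 1.0
    # narrow raised-cosine pulses approximate the periodic opening of the lips
    pulse = np.where(phase < 0.2, 0.5 * (1 - np.cos(2 * np.pi * phase / 0.2)), 0.0)
    buzz = pulse + noise * np.random.default_rng(0).standard_normal(len(t))
    return buzz / np.max(np.abs(buzz))

signal = lip_buzz()
print(signal.shape, float(signal.max()))
```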
@inproceedings{ahindle2017, author = {Hindle, Abram and Posnett, Daryl}, title = {Performance with an Electronically Excited Didgeridoo}, pages = {222--226}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176228}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0041.pdf} }
Michael Zbyszyński, Mick Grierson, and Matthew Yee-King. 2017. Rapid Prototyping of New Instruments with CodeCircle. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 227–230. http://doi.org/10.5281/zenodo.1181420
Abstract
Download PDF DOI
Our research examines the use of CodeCircle, an online, collaborative HTML, CSS, and JavaScript editor, as a rapid prototyping environment for musically expressive instruments. In CodeCircle, we use two primary libraries: MaxiLib and RapidLib. MaxiLib is a synthesis and sample processing library, ported from the C++ library Maximilian, which interfaces with the Web Audio API for sound generation in the browser. RapidLib is a product of the Rapid-Mix project, and allows users to implement interactive machine learning, using "programming by demonstration" to design new expressive interactions.
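The instruments here use RapidLib from JavaScript; as a language-agnostic illustration of the "programming by demonstration" workflow, the sketch below records (input, output) example pairs and maps new inputs to synthesis parameters with a simple k-nearest-neighbour interpolation. It mirrors the idea only and does not reproduce RapidLib's API.

```python
# Programming-by-demonstration sketch: record gesture/sound example pairs,
# then interpolate synthesis parameters for new inputs (k-nearest neighbours).
import numpy as np

class DemonstrationMapper:
    def __init__(self, k: int = 3):
        self.k = k
        self.inputs, self.outputs = [], []

    def record(self, control_input, synth_params):
        """Store one demonstrated pairing of gesture input and sound output."""
        self.inputs.append(np.asarray(control_input, dtype=float))
        self.outputs.append(np.asarray(synth_params, dtype=float))

    def run(self, control_input):
        """Interpolate synthesis parameters from the k nearest demonstrations."""
        X = np.stack(self.inputs)
        Y = np.stack(self.outputs)
        d = np.linalg.norm(X - np.asarray(control_input, dtype=float), axis=1)
        nearest = np.argsort(d)[: self.k]
        w = 1.0 / (d[nearest] + 1e-9)       # inverse-distance weighting
        return (Y[nearest] * w[:, None]).sum(axis=0) / w.sum()

m = DemonstrationMapper()
m.record([0.1, 0.2], [440.0, 0.1])   # e.g. (x, y) position -> (freq, gain)
m.record([0.8, 0.9], [880.0, 0.8])
m.record([0.5, 0.5], [660.0, 0.4])
print(m.run([0.6, 0.6]))
```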
@inproceedings{mzbyszynski2017, author = {Zbyszyński, Michael and Grierson, Mick and Yee-King, Matthew}, title = {Rapid Prototyping of New Instruments with CodeCircle}, pages = {227--230}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1181420}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0042.pdf} }
Federico Visi, Baptiste Caramiaux, Michael Mcloughlin, and Eduardo Miranda. 2017. A Knowledge-based, Data-driven Method for Action-sound Mapping. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 231–236. http://doi.org/10.5281/zenodo.1176230
Abstract
Download PDF DOI
This paper presents a knowledge-based, data-driven method for using data describing action-sound couplings collected from a group of people to generate multiple complex mappings between the performance movements of a musician and sound synthesis. This is done by using a database of multimodal motion data collected from multiple subjects coupled with sound synthesis parameters. A series of sound stimuli is synthesised using the sound engine that will be used in performance. Multimodal motion data is collected by asking each participant to listen to each sound stimulus and move as if they were producing the sound using a musical instrument they are given. Multimodal data is recorded during each performance, and paired with the synthesis parameters used for generating the sound stimulus. The dataset created using this method is then used to build a topological representation of the performance movements of the subjects. This representation is then used to interactively generate training data for machine learning algorithms, and define mappings for real-time performance. To better illustrate each step of the procedure, we describe an implementation involving clarinet, motion capture, wearable sensor armbands, and waveguide synthesis.
@inproceedings{fvisi2017, author = {Visi, Federico and Caramiaux, Baptiste and Mcloughlin, Michael and Miranda, Eduardo}, title = {A Knowledge-based, Data-driven Method for Action-sound Mapping}, pages = {231--236}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176230}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0043.pdf} }
Spencer Salazar and Mark Cerqueira. 2017. ChuckPad: Social Coding for Computer Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 237–240. http://doi.org/10.5281/zenodo.1176232
Abstract
Download PDF DOI
ChuckPad is a network-based platform for sharing code, modules, patches, and even entire musical works written in the ChucK programming language and other music programming platforms. ChuckPad provides a single repository and record of musical code from supported musical programming systems, an interface for organizing, browsing, and searching this body of code, and a readily accessible means of evaluating the musical output of code in the repository. ChuckPad consists of an open-source modular backend service to be run on a network server or cloud infrastructure and a client library to facilitate integrating end-user applications with the platform. While ChuckPad has been initially developed for sharing ChucK source code, its design can accommodate any type of music programming system oriented around small text- or binary-format documents. To this end, ChuckPad has also been extended to the Auraglyph handwriting-based graphical music programming system.
@inproceedings{ssalazar2017, author = {Salazar, Spencer and Cerqueira, Mark}, title = {ChuckPad: Social Coding for Computer Music}, pages = {237--240}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176232}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0044.pdf} }
Axel Berndt, Simon Waloschek, Aristotelis Hadjakos, and Alexander Leemhuis. 2017. AmbiDice: An Ambient Music Interface for Tabletop Role-Playing Games. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 241–244. http://doi.org/10.5281/zenodo.1176234
Abstract
Download PDF DOI
Tabletop role-playing games are a collaborative narrative experience. Throughout gaming sessions, Ambient music and noises are frequently used to enrich and facilitate the narration. With AmbiDice we introduce a tangible interface and music generator specially devised for this application scenario. We detail the technical implementation of the device, the software architecture of the music system (AmbientMusicBox) and the scripting language to compose Ambient music and soundscapes. AmbiDice was presented to experienced players and gained positive feedback and constructive suggestions for further development.
@inproceedings{aberndt2017, author = {Berndt, Axel and Waloschek, Simon and Hadjakos, Aristotelis and Leemhuis, Alexander}, title = {AmbiDice: An Ambient Music Interface for Tabletop Role-Playing Games}, pages = {241--244}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176234}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0045.pdf} }
Sam Ferguson, Anthony Rowe, Oliver Bown, Liam Birtles, and Chris Bennewith. 2017. Sound Design for a System of 1000 Distributed Independent Audio-Visual Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 245–250. http://doi.org/10.5281/zenodo.1176236
Abstract
Download PDF DOI
This paper describes the sound design for Bloom, a light and sound installation made up of 1000 distributed independent audio-visual pixel devices, each with RGB LEDs, Wifi, Accelerometer, GPS sensor, and sound hardware. These types of systems have been explored previously, but only a few systems have exceeded 30-50 devices and very few have included sound capability, and therefore the sound design possibilities for large systems of distributed audio devices are not yet well understood. In this article we describe the hardware and software implementation of sound synthesis for this system, and the implications for design of media for this context.
@inproceedings{sferguson2017, author = {Ferguson, Sam and Rowe, Anthony and Bown, Oliver and Birtles, Liam and Bennewith, Chris}, title = {Sound Design for a System of 1000 Distributed Independent Audio-Visual Devices}, pages = {245--250}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176236}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0046.pdf} }
Richard Vogl and Peter Knees. 2017. An Intelligent Drum Machine for Electronic Dance Music Production and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 251–256. http://doi.org/10.5281/zenodo.1176238
Abstract
Download PDF DOI
An important part of electronic dance music (EDM) is the so-called beat. It is defined by the drum track of the piece and is a style defining element. While producing EDM, creating the drum track tends to be delicate, yet labor intensive work. In this work we present a touch-interface-based prototype with the goal to simplify this task. The prototype aims at supporting musicians to create rhythmic patterns in the context of EDM production and live performances. Starting with a seed pattern which is provided by the user, a list of variations with varying degree of deviation from the seed pattern is generated. The interface provides simple ways to enter, edit, visualize and browse through the patterns. Variations are generated by means of an artificial neural network which is trained on a database of drum rhythm patterns extracted from a commercial drum loop library. To evaluate the user interface and pattern generation quality a user study with experts in EDM production was conducted. It was found that participants responded positively to the user interface and the quality of the generated patterns. Furthermore, the experts consider the prototype helpful for both studio production situations and live performances.
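The prototype generates variations with a trained neural network; as a simple editorial stand-in, the sketch below produces variations of a 16-step seed pattern with a controllable degree of deviation by flipping steps at random. It illustrates the seed-plus-variation-list interaction, not the actual model.

```python
# Generate drum-pattern variations with an adjustable degree of deviation.
import numpy as np

rng = np.random.default_rng(42)

# rows: kick, snare, hi-hat; columns: 16 sixteenth-note steps
seed = np.array([
    [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],   # kick
    [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],   # snare
    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],   # hi-hat
])

def variations(seed: np.ndarray, deviations=(0.05, 0.15, 0.3), per_level: int = 2):
    """Return a list of (deviation, pattern) pairs, least to most different."""
    out = []
    for p in deviations:
        for _ in range(per_level):
            flips = rng.random(seed.shape) < p   # flip each step with probability p
            out.append((p, np.where(flips, 1 - seed, seed)))
    return out

for dev, pattern in variations(seed):
    print(dev, pattern.sum())
```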
@inproceedings{rvogl2017, author = {Vogl, Richard and Knees, Peter}, title = {An Intelligent Drum Machine for Electronic Dance Music Production and Performance}, pages = {251--256}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176238}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0047.pdf} }
Martin Snejbjerg Jensen, Ole Adrian Heggli, Patricia Alves Da Mota, and Peter Vuust. 2017. A low-cost MRI compatible keyboard. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 257–260. http://doi.org/10.5281/zenodo.1176240
Abstract
Download PDF DOI
Neuroimaging is a powerful tool to explore how and why humans engage in music. Magnetic resonance imaging (MRI) has allowed us to identify brain networks and regions implicated in a range of cognitive tasks including music perception and performance. However, MRI-scanners are noisy and cramped, presenting a challenging environment for playing an instrument. Here, we present an MRI-compatible polyphonic keyboard with a materials cost of 850 USD, designed and tested for safe use in 3T (three Tesla) MRI-scanners. We describe design considerations, and prior work in the field. In addition, we provide recommendations for future designs and comment on the possibility of using the keyboard in magnetoencephalography (MEG) systems. Preliminary results indicate a comfortable playing experience with no disturbance of the imaging process.
@inproceedings{mjensen2017, author = {Jensen, Martin Snejbjerg and Heggli, Ole Adrian and Mota, Patricia Alves Da and Vuust, Peter}, title = {A low-cost MRI compatible keyboard}, pages = {257--260}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176240}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0048.pdf} }
Sang Won Lee, Jungho Bang, and Georg Essl. 2017. Live Coding YouTube: Organizing Streaming Media for an Audiovisual Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 261–266. http://doi.org/10.5281/zenodo.1176242
Abstract
Download PDF DOI
Music listening has changed greatly with the emergence of music streaming services, such as Spotify or YouTube. In this paper, we discuss an artistic practice that organizes streaming videos to perform a real-time improvisation via live coding. A live coder uses any available video from YouTube, a video streaming service, as source material to perform an improvised audiovisual piece. The challenge is to manipulate the emerging media that are streamed from a networked service. The musical gesture can be limited due to the provided functionalities of the YouTube API. However, the potential sonic and visual space that a musician can explore is practically infinite. The practice embraces the juxtaposition of manipulating emerging media in old-fashioned ways similar to experimental musicians in the 60’s physically manipulating tape loops or scratching vinyl records on a phonograph while exploring the possibility of doing so by drawing on the gigantic repository of all kinds of videos. In this paper, we discuss the challenges of using streaming videos from the platform as musical materials in computer music and introduce a live coding environment that we developed for real-time improvisation.
@inproceedings{slee2017, author = {Lee, Sang Won and Bang, Jungho and Essl, Georg}, title = {Live Coding YouTube: Organizing Streaming Media for an Audiovisual Performance}, pages = {261--266}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176242}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0049.pdf} }
Solen Kiratli, Akshay Cadambi, and Yon Visell. 2017. HIVE: An Interactive Sculpture for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 267–270. http://doi.org/10.5281/zenodo.1176244
Abstract
Download PDF DOI
In this paper we present HIVE, a parametrically designed interactive sound sculpture with embedded multi-channel digital audio which explores the intersection of sculptural form and musical instrument design. We examine sculpture as an integral part of music composition and performance, expanding the definition of musical instrument to include the gestalt of loudspeakers, architectural spaces, and material form. After examining some related works, we frame HIVE as an interactive sculpture for musical expression. We then describe our design and production process, which hinges on the relationship between sound, space, and sculptural form. Finally, we discuss the installation and its implications.
@inproceedings{skiratli2017, author = {Kiratli, Solen and Cadambi, Akshay and Visell, Yon}, title = {HIVE: An Interactive Sculpture for Musical Expression}, pages = {267--270}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176244}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0050.pdf} }
Matthew Blessing and Edgar Berdahl. 2017. The JoyStyx: A Quartet of Embedded Acoustic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 271–274. http://doi.org/10.5281/zenodo.1176246
Abstract
Download PDF DOI
The JoyStyx Quartet is a series of four embedded acoustic instruments. Each of these instruments is a five-voice granular synthesizer which processes a different sound source to give each a unique timbre and range. The performer interacts with these voices individually with five joysticks positioned to lie under the performer’s fingertips. The JoyStyx uses a custom-designed printed circuit board. This board provides the joystick layout and connects them to an Arduino Micro, which serializes the ten analog X/Y position values and the five digital button presses. This data controls the granular and spatial parameters of a Pure Data patch running on a Raspberry Pi 2. The nature of the JoyStyx construction causes the frequency response to be coloured by the materials and their geometry, leading to a unique timbre. This endows the instrument with a more “analog” or “natural” sound, despite relying on computer-based algorithms. In concert, the quartet performance with the JoyStyx may be the first performance ever with a quartet of Embedded Acoustic Instruments.
@inproceedings{mblessing2017, author = {Blessing, Matthew and Berdahl, Edgar}, title = {The JoyStyx: A Quartet of Embedded Acoustic Instruments}, pages = {271--274}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176246}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0051.pdf} }
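As a minimal illustration of the kind of mapping described in the JoyStyx abstract above (joystick X/Y positions driving per-voice granular parameters), the following Python sketch maps one normalized joystick reading to hypothetical grain parameters. The parameter names and ranges are illustrative assumptions, not the authors' Pure Data patch.

# Illustrative sketch (not the authors' implementation): map one joystick's
# normalized X/Y position and button state to granular-synthesis parameters.

def map_joystick_to_grain(x, y, pressed):
    """x and y are normalized to [0.0, 1.0]; pressed is the joystick button state."""
    grain_size_ms = 20.0 + x * 480.0   # hypothetical range: 20-500 ms
    grain_position = y                 # normalized read position within the sampled source
    gate = 1.0 if pressed else 0.0     # button gates the voice on/off
    return {"grain_size_ms": grain_size_ms, "grain_position": grain_position, "gate": gate}

print(map_joystick_to_grain(0.25, 0.8, True))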
Graham Wakefield and Charles Roberts. 2017. A Virtual Machine for Live Coding Language Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 275–278. http://doi.org/10.5281/zenodo.1176248
Abstract
Download PDF DOI
The growth of the live coding community has been coupled with a rich development of experimentation in new domain-specific languages, sometimes idiosyncratic to the interests of their performers. Nevertheless, programming language design may seem foreboding to many, steeped in computer science that is distant from the expertise of music performance. To broaden access to designing unique languages-as-instruments we developed an online programming environment that offers liveness in the process of language design as well as performance. The editor utilizes the Parsing Expression Grammar formalism for language design, and a virtual machine featuring collaborative multitasking for execution, in order to support a diversity of language concepts and affordances. The editor is coupled with online tutorial documentation aimed at the computer music community, with live examples embedded. This paper documents the design and use of the editor and its underlying virtual machine.
@inproceedings{gwakefield2017, author = {Wakefield, Graham and Roberts, Charles}, title = {A Virtual Machine for Live Coding Language Design}, pages = {275--278}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176248}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0052.pdf} }
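Since the abstract above mentions the Parsing Expression Grammar formalism, a tiny Python sketch of PEG-style combinators (literal, sequence, and ordered choice) may help readers unfamiliar with the idea; it is a generic illustration under assumed names and has no connection to the authors' editor or virtual machine.

# Minimal PEG-style combinators: a parser returns the new position on success, or None.

def lit(s):
    def parse(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return parse

def seq(*parsers):
    def parse(text, pos):
        for p in parsers:
            pos = p(text, pos)
            if pos is None:
                return None
        return pos
    return parse

def choice(*parsers):
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:   # ordered choice: the first alternative that matches wins
                return result
        return None
    return parse

# Example grammar fragment: note <- ("c" / "d" / "e") "4"
note = seq(choice(lit("c"), lit("d"), lit("e")), lit("4"))
print(note("d4", 0))   # 2 (both characters consumed)
print(note("f4", 0))   # None (no match)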
Tom Davis. 2017. The Feral Cello: A Philosophically Informed Approach to an Actuated Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 279–282. http://doi.org/10.5281/zenodo.1176250
Abstract
Download PDF DOI
There have been many NIME papers over the years on augmented or actuated instruments [2][10][19][22]. Many of these papers have focused on the technical description of how these instruments have been produced, or, as in the case of Machover’s ‘Hyperinstruments’ [19], on producing instruments over which performers have ‘absolute control’ and which emphasise ‘learnability, perfectibility and repeatability’ [19]. In contrast to this approach, this paper outlines a philosophical position concerning the relationship between instruments and performers in improvisational contexts that recognises the agency of the instrument within the performance process. It builds on a post-phenomenological understanding of the human/instrument relationship in which the human and the instrument are understood as co-defining entities without fixed boundaries; an approach that actively challenges notions of instrumental mastery and ‘absolute control’. This paper then takes a practice-based approach to outline how such philosophical concerns have fed into the design of an augmented, actuated cello system, The Feral Cello, that has been designed to explicitly explore these concerns through practice.
@inproceedings{tdavis2017, author = {Davis, Tom}, title = {The Feral Cello: A Philosophically Informed Approach to an Actuated Instrument}, pages = {279--282}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176250}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0053.pdf} }
Francisco Bernardo, Nicholas Arner, and Paul Batchelor. 2017. O Soli Mio: Exploring Millimeter Wave Radar for Musical Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 283–286. http://doi.org/10.5281/zenodo.1176252
Abstract
Download PDF DOI
This paper describes an exploratory study of the potential for musical interaction of Soli, a new radar-based sensing technology developed by Google’s Advanced Technology and Projects Group (ATAP). We report on our hands-on experience and outcomes within the Soli Alpha Developers program. We present early experiments demonstrating the use of Soli for creativity in musical contexts. We discuss the tools, workflow, the affordances of the prototypes for music making, and the potential for design of future NIME projects that may integrate Soli.
@inproceedings{fbernardo2017, author = {Bernardo, Francisco and Arner, Nicholas and Batchelor, Paul}, title = {O Soli Mio: Exploring Millimeter Wave Radar for Musical Interaction}, pages = {283--286}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176252}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0054.pdf} }
Constanza Levican, Andres Aparicio, Vernon Belaunde, and Rodrigo Cadiz. 2017. Insight2OSC: using the brain and the body as a musical instrument with the Emotiv Insight. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 287–290. http://doi.org/10.5281/zenodo.1176254
Abstract
Download PDF DOI
Brain computer interfaces are being widely adopted for music creation and interpretation, and they are becoming a truly new category of musical instruments. Indeed, Miranda has coined the term Brain-computer Musical Interface (BCMI) to refer to this category. There are no "plug-n-play" solutions for a BCMI; these kinds of tools usually require the setup and implementation of particular software configurations, customized for each EEG device. The Emotiv Insight is a low-cost EEG apparatus that outputs several kinds of data, such as EEG rhythms or facial expressions, derived from the user’s brain activity. We have developed a BCMI, in the form of freely available middleware, using the Emotiv Insight for EEG input and signal processing. The obtained data, received via Bluetooth, is broadcast over the network formatted for the OSC protocol. Using this software, we tested the device’s adequacy as a BCMI by using the provided data to control different sound synthesis algorithms in MaxMSP. We conclude that the Emotiv Insight is an interesting choice for a BCMI due to its low cost and ease of use, but we also question its reliability and robustness.
@inproceedings{clevican2017, author = {Levican, Constanza and Aparicio, Andres and Belaunde, Vernon and Cadiz, Rodrigo}, title = {Insight2OSC: using the brain and the body as a musical instrument with the Emotiv Insight}, pages = {287--290}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176254}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0055.pdf} }
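The general pattern described in the abstract above, reading device data and broadcasting it as OSC messages for use in MaxMSP, can be sketched in a few lines of Python. This assumes the python-osc package and a hypothetical read_insight_band_power() data source; it is not the authors' middleware.

import time
from pythonosc.udp_client import SimpleUDPClient   # assumes the python-osc package

def read_insight_band_power():
    # Placeholder for the device API; returns per-band power estimates.
    return {"alpha": 0.42, "beta": 0.31, "theta": 0.18}

client = SimpleUDPClient("127.0.0.1", 9000)   # e.g. a MaxMSP patch listening on port 9000

while True:
    for band, value in read_insight_band_power().items():
        client.send_message("/insight/" + band, value)   # one OSC message per band
    time.sleep(0.1)   # roughly 10 Hz update rate (assumption)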
Benjamin Smith and Neal Anderson. 2017. ArraYnger: New Interface for Interactive 360° Spatialization. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 291–295. http://doi.org/10.5281/zenodo.1176256
Abstract
Download PDF DOI
Interactive real-time spatialization of audio over large immersive speaker arrays poses significant interface and control challenges for live performers. Fluidly moving and mixing numerous sound objects over unique speaker configurations requires specifically designed software interfaces and systems. Currently available software solutions either impose configuration limitations, require extreme degrees of expertise, or extensive configuration time to use. A new system design, focusing on simplicity, ease of use, and live interactive spatialization is described. Automation of array calibration and tuning is included to facilitate rapid deployment and configuration. Comparisons with other solutions show favorability in terms of complexity, depth of control, and required features.
@inproceedings{bsmith2017, author = {Smith, Benjamin and Anderson, Neal}, title = {ArraYnger: New Interface for Interactive 360° Spatialization}, pages = {291--295}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176256}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0056.pdf} }
Alexandra Murray-Leslie and Andrew Johnston. 2017. The Liberation of the Feet: Demaking the High Heeled Shoe For Theatrical Audio-Visual Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 296–301. http://doi.org/10.5281/zenodo.1176258
Abstract
Download PDF DOI
This paper describes a series of fashionable sounding shoe and foot based appendages made between 2007-2017. The research attempts to demake the physical high-heeled shoe through the iterative design and fabrication of new foot based musical instruments. This process of demaking also changes the usual purpose of shoes and associated stereotypes of high heeled shoe wear. Through turning high heeled shoes into wearable musical instruments for theatrical audio visual expressivity we question why so many musical instruments are made for the hands and not the feet? With this creative work we explore ways to redress the imbalance and consider what a genuinely “foot based” expressivity could be.
@inproceedings{aleslie2017, author = {Murray-Leslie, Alexandra and Johnston, Andrew}, title = {The Liberation of the Feet: Demaking the High Heeled Shoe For Theatrical Audio-Visual Expression}, pages = {296--301}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176258}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0057.pdf} }
Christiana Rose. 2017. SALTO: A System for Musical Expression in the Aerial Arts. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 302–306. http://doi.org/10.5281/zenodo.1176260
Abstract
Download PDF DOI
Wearable sensor technology and aerial dance movement can be integrated to provide a new performance practice and perspective on interactive kinesonic composition. SALTO (Sonic Aerialist eLecTrOacoustic system) is a system which allows for the creation of collaborative works between electroacoustic composer and aerial choreographer. The system incorporates aerial dance trapeze movement, sensors, digital synthesis, and electroacoustic composition. In SALTO, the Max software programming environment employs parameters and mapping techniques for translating the performer’s movement and internal experience into sound. Splinter (2016), a work for aerial choreographer/performer and the SALTO system, highlights the expressive qualities of the system in a performance setting.
@inproceedings{crose2017, author = {Rose, Christiana}, title = {SALTO: A System for Musical Expression in the Aerial Arts}, pages = {302--306}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176260}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0058.pdf} }
Marije Baalman. 2017. Wireless Sensing for Artistic Applications, a Reflection on Sense/Stage to Motivate the Design of the Next Stage. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 307–312. http://doi.org/10.5281/zenodo.1176262
Abstract
Download PDF DOI
Academic research projects focusing on wireless sensor networks rarely live on after the funded research project has ended. In contrast, the Sense/Stage project has evolved over the past 6 years outside of an academic context and has been used in a multitude of artistic projects. This paper presents how the project has developed, the diversity of the projects that have been made with the technology, feedback from users on the system and an outline for the design of a successor to the current system.
@inproceedings{mbaalman2017, author = {Baalman, Marije}, title = {Wireless Sensing for Artistic Applications, a Reflection on Sense/Stage to Motivate the Design of the Next Stage}, pages = {307--312}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176262}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0059.pdf} }
Ivica Bukvic and Spencer Lee. 2017. Glasstra: Exploring the Use of an Inconspicuous Head Mounted Display in a Live Technology-Mediated Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 313–318. http://doi.org/10.5281/zenodo.1176264
Abstract
Download PDF DOI
The following paper explores the Inconspicuous Head-Mounted Display within the context of a live technology-mediated music performance. For this purpose, in 2014 the authors developed Glasstra, an Android/Google Glass networked display designed to project real-time orchestra status to the conductor, with the primary goal of minimizing the on-stage technology footprint and, with it, the audience’s potential distraction by technology. In preparation for its deployment in a real-world performance setting, the team conducted a user study aimed at defining relevant constraints of the Google Glass display. Based on the observed data, a conductor part from an existing laptop orchestra piece was retrofitted, thereby replacing the laptop with a Google Glass running Glasstra and a similarly inconspicuous forearm-mounted Wiimote controller. Below we present findings from the user study that have informed the design of the visual display, as well as observations from a series of real-world performances from the perspectives of the designer, the user, and the audience. We use these findings to offer a new hypothesis, an inverse uncanny valley, or what we refer to as the uncanny mountain, describing the audience’s potential distraction by technology within the context of a live technology-mediated music performance as a function of minimizing the on-stage technological footprint.
@inproceedings{ibukvic2017, author = {Bukvic, Ivica and Lee, Spencer}, title = {Glasstra: Exploring the Use of an Inconspicuous Head Mounted Display in a Live Technology-Mediated Music Performance}, pages = {313--318}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176264}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0060.pdf} }
Scott Barton, Ethan Prihar, and Paulo Carvalho. 2017. Cyther: a Human-playable, Self-tuning Robotic Zither. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 319–324. http://doi.org/10.5281/zenodo.1176266
Abstract
Download PDF DOI
Human-robot musical interaction typically consists of independent, physically-separated agents. We developed Cyther, a human-playable, self-tuning robotic zither, to allow a human and a robot to interact cooperatively through the same physical medium to generate music. The resultant co-dependence creates new responsibilities, roles, and expressive possibilities for human musicians. We describe some of these possibilities in the context of both technical features and artistic implementations of the system.
@inproceedings{sbarton2017, author = {Barton, Scott and Prihar, Ethan and Carvalho, Paulo}, title = {Cyther: a Human-playable, Self-tuning Robotic Zither}, pages = {319--324}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176266}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0061.pdf} }
Beici Liang, György Fazekas, Andrew McPherson, and Mark Sandler. 2017. Piano Pedaller: A Measurement System for Classification and Visualisation of Piano Pedalling Techniques. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 325–329. http://doi.org/10.5281/zenodo.1176268
Abstract
Download PDF DOI
This paper presents the results of a study of piano pedalling techniques on the sustain pedal using a newly designed measurement system named Piano Pedaller. The system is comprised of an optical sensor mounted in the piano pedal bearing block and an embedded platform for recording audio and sensor data. This enables recording the pedalling gesture of real players and the piano sound under normal playing conditions. Using the gesture data collected from the system, the task of classifying these data by pedalling technique was undertaken using a Support Vector Machine (SVM). Results can be visualised in an audio based score following application to show pedalling together with the player’s position in the score.
@inproceedings{bliang2017, author = {Liang, Beici and Fazekas, György and McPherson, Andrew and Sandler, Mark}, title = {Piano Pedaller: A Measurement System for Classification and Visualisation of Piano Pedalling Techniques}, pages = {325--329}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176268}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0062.pdf} }
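The classification step described in the abstract above follows a standard supervised-learning pattern. A minimal scikit-learn sketch is shown below using synthetic feature vectors in place of the authors' optical-sensor gesture data, so the reported accuracy is near chance and only the workflow is meaningful.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 8))              # 200 gestures x 8 hypothetical features
y = rng.integers(0, 3, size=200)      # e.g. 0 = quarter, 1 = half, 2 = full pedal

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))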
Jason Long, Jim Murphy, Dale A. Carnegie, and Ajay Kapur. 2017. A Closed-Loop Control System for Robotic Hi-hats. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 330–335. http://doi.org/10.5281/zenodo.1176272
Abstract
Download PDF DOI
While most musical robots that are capable of playing the drum kit utilise a relatively simple striking motion, the hi-hat, with the additional degree of motion provided by its pedal, requires more involved control strategies in order to produce expressive performances on the instrument. A robotic hi-hat should be able to control not only the striking timing and velocity to a high degree of precision, but also dynamically control the position of the two cymbals in a way that is consistent, reproducible and intuitive for composers and other musicians to use. This paper describes the creation of a new, multifaceted hi-hat control system that utilises a closed-loop distance sensing and calibration mechanism in conjunction with an embedded musical information retrieval system to continuously calibrate the hi-hat’s action both before and during a musical performance. This is achieved by combining existing musical robotic devices with a newly created linear actuation mechanism, custom amplification, acquisition and DSP hardware, and embedded software algorithms. This new approach allows musicians to create expressive and reproducible musical performances with the instrument using consistent musical parameters, and the self-calibrating nature of the instrument lets users focus on creating music instead of maintaining equipment.
@inproceedings{jlong2017, author = {Long, Jason and Murphy, Jim and Carnegie, Dale A. and Kapur, Ajay}, title = {A Closed-Loop Control System for Robotic Hi-hats}, pages = {330--335}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176272}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0063.pdf} }
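The closed-loop positioning idea in the abstract above can be illustrated with a simple proportional controller that drives an actuator toward a target cymbal opening measured by a distance sensor. The sketch below simulates the sensor and actuator with plain variables; the gains, units and function names are assumptions, not the authors' firmware.

KP = 5.0     # proportional gain (illustrative)
DT = 0.01    # control-loop period in seconds (illustrative)

gap_mm = 12.0   # simulated cymbal opening standing in for the real hardware

def read_gap_mm():
    return gap_mm   # placeholder for the optical distance sensor

def set_actuator_velocity(v_mm_per_s):
    global gap_mm
    gap_mm += v_mm_per_s * DT   # simulated actuator response

def move_to_opening(target_mm, tolerance_mm=0.1):
    while abs(target_mm - read_gap_mm()) > tolerance_mm:
        error = target_mm - read_gap_mm()
        set_actuator_velocity(KP * error)   # velocity proportional to the remaining error
    set_actuator_velocity(0.0)

move_to_opening(4.0)
print("final opening (mm):", round(read_gap_mm(), 2))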
Stratos Kountouras and Ioannis Zannos. 2017. Gestus: Teaching Soundscape Composition and Performance with a Tangible Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 336–341. http://doi.org/10.5281/zenodo.1176274
Abstract
Download PDF DOI
Tangible user interfaces empower artists, boost their creative expression and enhance performing art. However, most of them are designed to work with a set of rules, many of which require advanced skills and target users above a certain age. Here we present a comparative and quantitative study of using TUIs as an alternative teaching tool in experimenting with and creating soundscapes with children. We describe an informal interactive workshop involving schoolchildren. We focus on the development of playful uses of technology to help children empirically understand basic audio feature extraction techniques. We promote tangible interaction as an alternative learning method in the creation of synthetic soundscapes based on sounds recorded in a natural outdoor environment as the main sound sources. We investigate how schoolchildren perceive natural sources of sound and explore practices that reuse prerecorded material through a tangible interactive controller. We discuss the potential benefits of using TUIs as an alternative empirical method for tangible learning and interaction design, and its impact on encouraging and motivating creativity in children. We summarize our findings and review children’s behavioural indicators of engagement and enjoyment in order to provide insight into the design of TUIs based on user experience.
@inproceedings{skountouras2017, author = {Kountouras, Stratos and Zannos, Ioannis}, title = {Gestus: Teaching Soundscape Composition and Performance with a Tangible Interface}, pages = {336--341}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176274}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0064.pdf} }
Hazar Emre Tez and Nick Bryan-Kinns. 2017. Exploring the Effect of Interface Constraints on Live Collaborative Music Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 342–347. http://doi.org/10.5281/zenodo.1176276
Abstract
Download PDF DOI
This research investigates how applying interaction constraints to digital music instruments (DMIs) affects the way that experienced music performers collaborate and find creative ways to make live improvised music on stage. The constraints are applied in two forms: i) Physically implemented on the instruments themselves, and ii) hidden rules that are defined on a network between the instruments and triggered depending on the musical actions of the performers. Six experienced musicians were recruited for a user study which involved rehearsal and performance. Performers were given deliberately constrained instruments containing a touch sensor, speaker, battery and an embedded computer. Results of the study show that whilst constraints can lead to more structured improvisation, the resultant music may not fit with performers’ true intentions. It was also found that when external musical material is introduced to guide the performers into a collective convergence, it is likely to be ignored because it was perceived by performers as being out of context.
@inproceedings{htez2017, author = {Tez, Hazar Emre and Bryan-Kinns, Nick}, title = {Exploring the Effect of Interface Constraints on Live Collaborative Music Improvisation}, pages = {342--347}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176276}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0065.pdf} }
Irmandy Wicaksono and Joseph Paradiso. 2017. FabricKeyboard: Multimodal Textile Sensate Media as an Expressive and Deformable Musical Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 348–353. http://doi.org/10.5281/zenodo.1176278
Abstract
Download PDF DOI
This paper presents FabricKeyboard: a novel deformable keyboard interface based on a multi-modal fabric sensate surface. Multi-layer fabric sensors that detect touch, proximity, electric field, pressure, and stretch are machine-sewn in a keyboard pattern on a stretchable substrate. The result is a fabric-based musical controller that combines both the discrete controls of a keyboard and various continuous controls from the embedded fabric sensors. This enables unique tactile experiences and new interactions both with physical and non-contact gestures: physical by pressing, pulling, stretching, and twisting the keys or the fabric and non-contact by hovering and waving towards/against the keyboard and an electromagnetic source. We have also developed additional fabric-based modular interfaces such as a ribbon-controller and trackpad, allowing performers to add more expressive and continuous controls. This paper will discuss implementation strategies for our system-on-textile, fabric-based sensor developments, as well as sensor-computer interfacing and musical mapping examples of this multi-modal and expressive fabric keyboard.
@inproceedings{iwicaksono2017, author = {Wicaksono, Irmandy and Paradiso, Joseph}, title = {FabricKeyboard: Multimodal Textile Sensate Media as an Expressive and Deformable Musical Interface}, pages = {348--353}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176278}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0066.pdf} }
Kristians Konovalovs, Jelizaveta Zovnercuka, Ali Adjorlu, and Daniel Overholt. 2017. A Wearable Foot-mounted / Instrument-mounted Effect Controller: Design and Evaluation. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 354–357. http://doi.org/10.5281/zenodo.1176280
Abstract
Download PDF DOI
This paper explores a new interaction possibility for increasing performer freedom via a foot-mounted wearable and an instrument-mounted device that maintain stomp-box styles of interactivity, but without the restrictions normally associated with the original design of guitar effect pedals. Classic foot-activated effect pedals used to alter the sound of the instrument are stationary, forcing the performer to return to the same location in order to interact with them. This paper presents a new design that enables the performer to interact with the effect pedals anywhere on the stage. By designing a foot- and instrument-mounted effect controller, we kept the strongest part of the classical pedal design while allowing activation of the effect at any location on the stage. The usability of the device was tested with thirty experienced guitar players. Their performances were recorded and compared, and their opinions were investigated through a questionnaire and interviews. The results of the experiment showed that, in principle, a foot- and instrument-mounted effect controller can replace standard effect pedals while providing more mobility on stage.
@inproceedings{kkonovalovs2017, author = {Konovalovs, Kristians and Zovnercuka, Jelizaveta and Adjorlu, Ali and Overholt, Daniel}, title = {A Wearable Foot-mounted / Instrument-mounted Effect Controller: Design and Evaluation}, pages = {354--357}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176280}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0067.pdf} }
Herbert Ho-Chun Chang, Lloyd May, and Spencer Topel. 2017. Nonlinear Acoustic Synthesis in Augmented Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 358–363. http://doi.org/10.5281/zenodo.1176282
Abstract
Download PDF DOI
This paper discusses nonlinear acoustic synthesis in augmented musical instruments via acoustic transduction. Our work expands previous investigations into acoustic amplitude modulation, offering new prototypes that produce intermodulation in several instrumental contexts. Our results show nonlinear intermodulation distortion can be generated and controlled in electromagnetically driven acoustic interfaces that can be deployed in acoustic instruments through augmentation, thus extending the nonlinear acoustic synthesis to a broader range of sonic applications.
@inproceedings{hchang2017, author = {Chang, Herbert Ho-Chun and May, Lloyd and Topel, Spencer}, title = {Nonlinear Acoustic Synthesis in Augmented Musical Instruments}, pages = {358--363}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176282}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0068.pdf} }
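The intermodulation effect mentioned in the abstract above can be reproduced numerically: passing the sum of two sinusoids through a memoryless nonlinearity creates components at sums and differences of the input frequencies. The NumPy sketch below (a generic illustration, not the authors' electromagnetically driven transducer setup) checks for energy at f1 + f2 and |f1 - f2|.

import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # one second of samples
f1, f2 = 440.0, 620.0
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.5 * x ** 2                        # memoryless quadratic nonlinearity (illustrative)

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
for f in (abs(f1 - f2), f1 + f2):           # intermodulation products at 180 Hz and 1060 Hz
    bin_index = int(np.argmin(np.abs(freqs - f)))
    print("level near", int(f), "Hz:", round(float(spectrum[bin_index]), 1))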
Georg Hajdu, Benedict Carey, Goran Lazarevic, and Eckhard Weymann. 2017. From Atmosphere to Intervention: The circular dynamic of installations in hospital waiting areas. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 364–369. http://doi.org/10.5281/zenodo.1176284
Abstract
Download PDF DOI
This paper is a description of a pilot project conducted at the Hamburg University of Music and Drama (HfMT) during the academic year 2015-16. In this project we have addressed how interventions via interactive, generative music systems may contribute to the improvement of the atmosphere and thus to the well-being of patients in hospital waiting areas. The project was conducted by students of both the music therapy and multimedia composition programs and has thus offered rare insights into the dynamic of such undertakings, covering both the therapeutic underpinnings and the technical means required to achieve a particular result. DJster, the engine we used for the generative processes, is based on Clarence Barlow’s probabilistic algorithms. Equipped with the proper periphery (sensors, sound modules and spatializers), we looked at three different scenarios, each requiring specific musical and technological solutions. The pilot was concluded by a symposium in 2017 and the development of a prototype system. The symposium yielded a diagram detailing the circular dynamic of the factors involved in this particular project, while the prototype was demoed in 2016 at the HfMT facilities. The system will be installed permanently at the University Medical Center Hamburg-Eppendorf (UKE) in June 2017.
@inproceedings{ghajdu2017, author = {Hajdu, Georg and Carey, Benedict and Lazarevic, Goran and Weymann, Eckhard}, title = {From Atmosphere to Intervention: The circular dynamic of installations in hospital waiting areas}, pages = {364--369}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176284}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0069.pdf} }
Dom Brown, Chris Nash, and Tom Mitchell. 2017. A User Experience Review of Music Interaction Evaluations. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 370–375. http://doi.org/10.5281/zenodo.1176286
Abstract
Download PDF DOI
The need for thorough evaluations is an emerging area of interest and importance in music interaction research. As a large degree of DMI evaluation is concerned with exploring the subjective experience (ergonomics, action-sound mappings and control intimacy), User Experience (UX) methods are increasingly being utilised to analyse an individual’s experience of new musical instruments, from which we can extract meaningful, robust findings and subsequently generalisable and useful recommendations. However, many music interaction evaluations remain informal. In this paper, we provide a meta-review of 132 papers from the 2014–2016 proceedings of the NIME, SMC and ICMC conferences to collate the aspects of UX research that are already present in music interaction literature, and to highlight methods from UX’s widening field of research that have not yet been explored. Our findings show that usability and aesthetics are the primary focus of evaluations in music interaction research, while other important components of the user experience, such as enchantment, motivation and frustration, are frequently, if not always, overlooked. We argue that these factors are prime areas for future research in the field and that their consideration in design and evaluation could lead to a better understanding of NIMEs and other computer music technology.
@inproceedings{dbrown2017, author = {Brown, Dom and Nash, Chris and Mitchell, Tom}, title = {A User Experience Review of Music Interaction Evaluations}, pages = {370--375}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176286}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0070.pdf} }
Wayne Siegel. 2017. Conducting Sound in Space. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 376–380. http://doi.org/10.5281/zenodo.1176288
Abstract
Download PDF DOI
This paper discusses control of multichannel sound diffusion by means of motion-tracking hardware and software within the context of a live performance. The idea developed from the author’s previous use of motion-tracking technology in his own artistic practice as a composer and performer. Various motion tracking systems were considered, experiments were conducted with three sound diffusion setups at three venues and a new composition for solo performer and motion-tracking system took form.
@inproceedings{wsiegel2017, author = {Siegel, Wayne}, title = {Conducting Sound in Space}, pages = {376--380}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176288}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0071.pdf} }
Spencer Salazar, Sarah Reid, and Daniel McNamara. 2017. The Fragment String. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 381–386. http://doi.org/10.5281/zenodo.1176290
Abstract
Download PDF DOI
The Fragment String is a new digital musical instrument designed to reinterpret and reflect upon the sounds of the instruments it is performed in collaboration with. At its core, it samples an input audio signal and allows the performer to replay these samples through a granular resynthesizer. Normally the Fragment String samples an acoustic instrument that accompanies it, but in the absence of this input it will amplify the ambient environment and electronic noise of the input audio path to audible levels and sample these. This ability to leverage both structured, tonal sound and unstructured noise provides the instrument with multiple dimensions of musical expressivity. The relative magnitude of the physical gestures required to manipulate the instrument and control the sound also engages an audience in its performance. This straightforward yet expressive design has lent the Fragment String to a variety of performance techniques and settings. These are explored through case studies in a five-year history of Fragment String-based compositions and performances, illustrating the strengths and limitations of these interactions and their sonic output.
@inproceedings{ssalazar2017a, author = {Salazar, Spencer and Reid, Sarah and McNamara, Daniel}, title = {The Fragment String}, pages = {381--386}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176290}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0072.pdf} }
Staas de Jong. 2017. Ghostfinger: a novel platform for fully computational fingertip controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 387–392. http://doi.org/10.5281/zenodo.1176292
Abstract
Download PDF DOI
We present Ghostfinger, a technology for highly dynamic up/down fingertip haptics and control. The overall user experience offered by the technology can be described as that of tangibly and audibly interacting with a small hologram. More specifically, Ghostfinger implements automatic visualization of the dynamic instantiation/parametrization of algorithmic primitives that together determine the current haptic conditions for fingertip action. Some aspects of this visualization are visuospatial: A floating see-through cursor provides real-time, to-scale display of the fingerpad transducer, as it is being moved by the user. Simultaneously, each haptic primitive instance is represented by a floating block shape, type-colored, variably transparent, and possibly overlapping with other such block shapes. Further aspects of visualization are symbolic: Each instance is also represented by a type symbol, lighting up within a grid if the instance is providing output to the user. We discuss the system’s user interface, programming interface, and potential applications from a general perspective that articulates and emphasizes the uniquely enabling role of the principle of computation in the implementation of new forms of instrumental control of musical sound. Beyond the currently presented technology, this also reflects more broadly on the role of Digital Musical Instruments (DMIs) in NIME.
@inproceedings{sjong2017, author = {de Jong, Staas}, title = {Ghostfinger: a novel platform for fully computational fingertip controllers}, pages = {387--392}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176292}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0073.pdf} }
Jack Armitage, Fabio Morreale, and Andrew McPherson. 2017. The finer the musician, the smaller the details: NIMEcraft under the microscope. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 393–398. http://doi.org/10.5281/zenodo.1176294
Abstract
Download PDF DOI
Many digital musical instrument design frameworks have been proposed that are well suited for analysis and comparison. However, not all provide applicable design suggestions, especially where subtle but important details are concerned. Using traditional lutherie as a model, we conducted a series of interviews to explore how violin makers “go beyond the obvious”, and how players perceive and describe subtle details of instrumental quality. We find that lutherie frameworks provide clear design methods and have substantial empirical backing, but are not enough to make a fine violin. Success comes after acquiring sufficient tacit knowledge, which enables detailed craft through subjective, empirical methods. Testing instruments for subtle qualities was suggested to be a different skill to playing. Whilst players are able to identify some specific details about instrumental quality by comparison, these are often not actionable, and important aspects of “sound and feeling” are much more difficult to describe. In the DMI domain, we introduce NIMEcraft to describe subtle differences between otherwise identical instruments and their underlying design processes, and consider how to improve the dissemination of NIMEcraft.
@inproceedings{jarmitage2017, author = {Armitage, Jack and Morreale, Fabio and McPherson, Andrew}, title = {The finer the musician, the smaller the details: NIMEcraft under the microscope}, pages = {393--398}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176294}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0074.pdf} }
Sandor Mehes, Maarten van Walstijn, and Paul Stapleton. 2017. Virtual-Acoustic Instrument Design: Exploring the Parameter Space of a String-Plate Model. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 399–403. http://doi.org/10.5281/zenodo.1176296
Abstract
Download PDF DOI
Exploration is an intrinsic element of designing and engaging with acoustic as well as digital musical instruments. This paper reports on the ongoing development of a virtual-acoustic instrument based on a physical model of a string coupled nonlinearly to a plate. The performer drives the model by tactile interaction with a string-board controller fitted with piezo-electric sensors. The string-plate model is formulated in a way that prioritises its parametric explorability. Where the roles of creating performance gestures and designing instruments are traditionally separated, such a design provides a continuum across these domains. The string-plate model, its real-time implementation, and the control interface are described, and the system is preliminarily evaluated through informal observations of how musicians engage with the system.
@inproceedings{smehes2017, author = {Mehes, Sandor and van Walstijn, Maarten and Stapleton, Paul}, title = {Virtual-Acoustic Instrument Design: Exploring the Parameter Space of a String-Plate Model}, pages = {399--403}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176296}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0075.pdf} }
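For readers unfamiliar with the modelling approach named in the abstract above, the sketch below shows a generic explicit finite-difference string with fixed ends; it is only a starting point and does not reproduce the authors' nonlinearly coupled string-plate model or its parametrization.

import numpy as np

N = 100                          # interior grid points
c, L, fs, dur = 200.0, 1.0, 48000, 0.01
dx = L / (N + 1)
dt = 1.0 / fs
lam2 = (c * dt / dx) ** 2        # squared Courant number; must stay <= 1 for stability

u_prev = np.zeros(N + 2)         # displacement at the previous time step
u = np.zeros(N + 2)
u[N // 2] = 0.001                # small initial displacement ("pluck") at the midpoint

pickup = []
for _ in range(int(dur * fs)):
    u_next = np.zeros_like(u)    # endpoints stay zero: fixed boundary conditions
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
    pickup.append(u[N // 4])     # read the "output" at a pickup position

print("peak output amplitude:", max(abs(v) for v in pickup))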
Nicolas Bouillot, Zack Settel, and Michal Seta. 2017. SATIE: a live and scalable 3D audio scene rendering environment for large multi-channel loudspeaker configurations. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 404–409. http://doi.org/10.5281/zenodo.1176298
Abstract
Download PDF DOI
Recent advances in computing offer the possibility to scale real-time 3D virtual audio scenes to include hundreds of simultaneous sound sources, rendered in real time for large numbers of audio outputs. Our Spatial Audio Toolkit for Immersive Environments (SATIE) allows us to render these dense audio scenes to large multi-channel (e.g. 32 or more) loudspeaker systems in real time, controlled from external software such as 3D scenegraph software. As we describe here, SATIE is designed for improved scalability: minimal dependency between nodes in the audio DSP graph for parallel audio computation, control of sound objects in groups, and load-balanced geometry computation, which together reduce the number of messages needed to control a large number of sound sources simultaneously. The paper presents SATIE along with example use-case scenarios. Our initial work demonstrates SATIE’s flexibility, and has provided us with novel sonic sensations such as “audio depth of field” and real-time sound swarming.
@inproceedings{nbouillot2017, author = {Bouillot, Nicolas and Settel, Zack and Seta, Michal}, title = {SATIE: a live and scalable 3D audio scene rendering environment for large multi-channel loudspeaker configurations}, pages = {404--409}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176298}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0076.pdf} }
Hugo Scurto, Frédéric Bevilacqua, and Jules Françoise. 2017. Shaping and Exploring Interactive Motion-Sound Mappings Using Online Clustering Techniques. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 410–415. http://doi.org/10.5281/zenodo.1176270
Abstract
Download PDF DOI
Machine learning tools for designing motion-sound relationships often rely on a two-phase iterative process, where users must alternate between designing gestures and performing mappings. We present a first prototype of a user adaptable tool that aims at merging these design and performance steps into one fully interactive experience. It is based on an online learning implementation of a Gaussian Mixture Model supporting real-time adaptation to user movement and generation of sound parameters. To allow both fine-tune modification tasks and open-ended improvisational practices, we designed two interaction modes that either let users shape, or guide interactive motion-sound mappings. Considering an improvisational use case, we propose two example musical applications to illustrate how our tool might support various forms of corporeal engagement with sound, and inspire further perspectives for machine learning-mediated embodied musical expression.
@inproceedings{hscurto2017, author = {Scurto, Hugo and Bevilacqua, Frédéric and Françoise, Jules}, title = {Shaping and Exploring Interactive Motion-Sound Mappings Using Online Clustering Techniques}, pages = {410--415}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176270}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0077.pdf} }
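As a simplified stand-in for the online learning idea in the abstract above, the sketch below uses plain online k-means rather than the paper's online Gaussian Mixture Model: cluster centres are updated sample-by-sample as motion-feature frames arrive, and the returned cluster index could then select or interpolate sound parameters.

import numpy as np

class OnlineKMeans:
    """Incremental k-means; NOT the paper's GMM, just an illustration of online updates."""

    def __init__(self, k, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.centres = rng.random((k, dim))
        self.counts = np.zeros(k)

    def update(self, x):
        nearest = int(np.argmin(np.linalg.norm(self.centres - x, axis=1)))
        self.counts[nearest] += 1
        lr = 1.0 / self.counts[nearest]                  # decreasing learning rate
        self.centres[nearest] += lr * (x - self.centres[nearest])
        return nearest                                   # index used to pick sound parameters

model = OnlineKMeans(k=4, dim=3)
for frame in np.random.default_rng(1).random((100, 3)):  # hypothetical 3-D motion features
    cluster = model.update(frame)
print("final cluster centres:", model.centres.round(2))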
Kiran Bhumber, Bob Pritchard, and Kitty Rodé. 2017. A Responsive User Body Suit (RUBS). Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 416–419. http://doi.org/10.5281/zenodo.1176300
Abstract
Download PDF DOI
We describe the Responsive User Body Suit (RUBS), a tactile instrument worn by performers that allows the generation and manipulation of audio output using touch triggers. The RUBS system is a responsive interface between organic touch and electronic audio, intimately located on the performer’s body. This system offers an entry point into a more intuitive method of music performance. A short overview of body instrument philosophy and related work is followed by the development and implementation process of the RUBS as both an interface and performance instrument. Lastly, observations, design challenges and future goals are discussed.
@inproceedings{kbhumber2017, author = {Bhumber, Kiran and Pritchard, Bob and Rodé, Kitty}, title = {A Responsive User Body Suit (RUBS)}, pages = {416--419}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176300}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0078.pdf} }
Marie Højlund, Morten Riis, Daniel Rothmann, and Jonas Kirkegaard. 2017. Applying the EBU R128 Loudness Standard in live-streaming sound sculptures. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 420–425. http://doi.org/10.5281/zenodo.1176354
Abstract
Download PDF DOI
This paper describes the development of a loudness-based compressor for live audio streams. The need for this device arose while developing the public sound art project The Overheard, which involves mixing together several live audio streams through a web-based mixing interface. In order to preserve a natural-sounding dynamic image from the varying sound sources that can be played back under varying conditions, an adaptation of the EBU R128 loudness measurement recommendation, originally developed for levelling non-real-time broadcast material, has been applied. The paper describes the Pure Data implementation and the necessary compromises enforced by the live streaming condition. Lastly, observations regarding design challenges, related application areas and future goals are presented.
@inproceedings{mhojlund2017, author = {Højlund, Marie and Riis, Morten and Rothmann, Daniel and Kirkegaard, Jonas}, title = {Applying the EBU R128 Loudness Standard in live-streaming sound sculptures}, pages = {420--425}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176354}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0079.pdf} }
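A heavily simplified version of the levelling idea described above is sketched below: estimate the short-term level of each block and apply a smoothed gain toward a target. The real EBU R128 measurement uses K-weighting filters and gating, which are omitted here; this is only an illustration of the control structure, not the authors' Pure Data implementation.

import numpy as np

def level_db(block):
    rms = np.sqrt(np.mean(block ** 2)) + 1e-12
    return 20.0 * np.log10(rms)

def level_track(signal, fs, target_db=-23.0, block_s=0.4, smooth=0.2):
    """Blockwise gain tracking toward target_db (a crude stand-in for R128 levelling)."""
    block = int(fs * block_s)
    gain_db = 0.0
    out = np.copy(signal)
    for start in range(0, len(signal) - block, block):
        seg = signal[start:start + block]
        gain_db += smooth * ((target_db - level_db(seg)) - gain_db)   # one-pole smoothing
        out[start:start + block] = seg * 10.0 ** (gain_db / 20.0)
    return out

fs = 48000
tone = 0.01 * np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)   # quiet test tone
levelled = level_track(tone, fs)
print("peak before/after:", float(np.max(np.abs(tone))), float(np.max(np.abs(levelled))))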
Edgar Berdahl, Matthew Blessing, Matthew Williams, Pacco Tan, Brygg Ullmer, and Jesse Allison. 2017. Spatial Audio Approaches for Embedded Sound Art Installations with Loudspeaker Line Arrays. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 426–430. http://doi.org/10.5281/zenodo.1176302
Abstract
Download PDF DOI
The concept of embedded acoustic systems for diffusing spatial audio is considered. This paradigm is enabled by advancements in floating-point hardware on inexpensive embedded Linux systems. Examples are presented using line array configurations for electroacoustic music and for making interactive kiosk and poster systems.
@inproceedings{eberdahl2017, author = {Berdahl, Edgar and Blessing, Matthew and Williams, Matthew and Tan, Pacco and Ullmer, Brygg and Allison, Jesse}, title = {Spatial Audio Approaches for Embedded Sound Art Installations with Loudspeaker Line Arrays}, pages = {426--430}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176302}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0080.pdf} }
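A very small illustration of diffusing a mono source across a loudspeaker line array, as in the installations described above, is constant-power panning between the two nearest speakers. The sketch below computes per-speaker gains for a normalized position along the array; it is a generic technique, not the authors' embedded system.

import numpy as np

def line_array_gains(position, num_speakers):
    """position in [0, 1] along the array; returns one gain per loudspeaker."""
    gains = np.zeros(num_speakers)
    x = position * (num_speakers - 1)            # position expressed in speaker-index units
    left = int(np.floor(x))
    right = min(left + 1, num_speakers - 1)
    frac = x - left
    if right == left:
        gains[left] = 1.0
    else:
        gains[left] = np.cos(frac * np.pi / 2)   # constant-power crossfade between the pair
        gains[right] = np.sin(frac * np.pi / 2)
    return gains

print(line_array_gains(0.35, 8).round(3))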
Fiona Keenan and Sandra Pauletto. 2017. Design and Evaluation of a Digital Theatre Wind Machine. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 431–435. http://doi.org/10.5281/zenodo.1176304
Abstract
Download PDF DOI
This paper presents the next stage of an investigation into the potential of historical theatre sound effects as a resource for Sonic Interaction Design (SID). An acoustic theatre wind machine was constructed, and a digital physical modelling-based version of this specific machine was programmed using the Sound Designer’s Toolkit (SDT) in Max/MSP. The acoustic wind machine was fitted with 3D printed gearing to mechanically drive an optical encoder and control the digital synthesis engine in real time. The design of this system was informed by an initial comparison between the acoustic wind machine and the first iteration of its digital counterpart. To explore the main acoustic parameters and the sonic range of the acoustic and digital wind machines in operation, three simple and distinct rotational gestures were performed, with the resulting sounds recorded simultaneously, facilitating an analysis of the real-time performance of both sources. The results are reported, with an outline of future work.
@inproceedings{fkeenan2017, author = {Keenan, Fiona and Pauletto, Sandra}, title = {Design and Evaluation of a Digital Theatre Wind Machine}, pages = {431--435}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176304}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0081.pdf} }
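As a rough illustration of the synthesis principle behind a digital wind machine (noise whose level and brightness follow a rotation-speed control), the sketch below filters noise with a one-pole lowpass whose cutoff tracks a crank-speed envelope. It is a generic construction and does not reproduce the Sound Designer's Toolkit model used in the paper.

import numpy as np

fs = 48000
n = int(fs * 2.0)                                         # two seconds of output
rng = np.random.default_rng(0)

speed = 0.5 * (1 - np.cos(2 * np.pi * np.arange(n) / n))  # crank speed rising then falling, 0..1
noise = rng.uniform(-1.0, 1.0, n)

out = np.zeros(n)
state = 0.0
for i in range(n):
    cutoff = 100.0 + 1900.0 * speed[i]                    # brighter at higher crank speed
    a = np.exp(-2 * np.pi * cutoff / fs)                  # one-pole lowpass coefficient
    state = a * state + (1 - a) * noise[i]
    out[i] = speed[i] * state                             # louder at higher crank speed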
Ian Hattwick and Marcelo M. Wanderley. 2017. Design of Hardware Systems for Professional Artistic Applications. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 436–441. http://doi.org/10.5281/zenodo.1176306
Abstract
Download PDF DOI
In this paper we present a discussion of the development of hardware systems in collaboration with professional artists, a context which presents both challenges and opportunities for researchers interested in the uses of technology in artistic practice. The establishment of design specifications within these contexts can be challenging, especially as they are likely to change during the development process. In order to assist in the consideration of the complete set of design specifications, we identify seven aspects of hardware design relevant to our applications: function, aesthetics, support for artistic creation, system architecture, manufacturing, robustness, and reusability. Examples drawn from our previous work are used to illustrate the characteristics of interdependency and temporality, and form the basis of case studies investigating support for artistic creation and reusability. We argue that the consideration of these design aspects at appropriate times within the development process may facilitate the ability of hardware systems to support continued use in professional applications.
@inproceedings{ihattwick2017, author = {Hattwick, Ian and Wanderley, Marcelo M.}, title = {Design of Hardware Systems for Professional Artistic Applications}, pages = {436--441}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176306}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0082.pdf} }
Alexander Refsum Jensenius, Victor Gonzalez Sanchez, Agata Zelechowska, and Kari Anne Vadstensvik Bjerkestrand. 2017. Exploring the Myo controller for sonic microinteraction. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 442–445. http://doi.org/10.5281/zenodo.1176308
Abstract
Download PDF DOI
This paper explores sonic microinteraction using muscle sensing through the Myo armband. The first part presents results from a small series of experiments aimed at finding the baseline micromotion and muscle activation data of people at rest or performing short, small actions. The second part presents the prototype instrument MicroMyo, built around the concept of making sound with little motion. The instrument plays with the convention that inputting more energy into an instrument results in more sound. MicroMyo, on the other hand, is built so that the less you move, the more it sounds. Our user study shows that while such an "inverse instrument" may seem puzzling at first, it also opens a space for interesting musical interactions.
@inproceedings{ajensenius2017, author = {Jensenius, Alexander Refsum and Sanchez, Victor Gonzalez and Zelechowska, Agata and Bjerkestrand, Kari Anne Vadstensvik}, title = {Exploring the Myo controller for sonic microinteraction}, pages = {442--445}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176308}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0083.pdf} }
Joseph Tilbian and Andres Cabrera. 2017. Stride for Interactive Musical Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 446–449. http://doi.org/10.5281/zenodo.1176310
Abstract
Download PDF DOI
Stride is a language tailored for designing new digital musical instruments and interfaces. Stride enables designers to fine tune the sound and the interactivity of the instruments they wish to create. Stride code provides a high-level description of processes in a platform agnostic manner. The syntax used to define these processes can also be used to define low-level signal processing algorithms. Unlike other domain-specific languages for sound synthesis and audio processing, Stride can generate optimized code that can run on any supported hardware platform. The generated code can be compiled to run on a full featured operating system or bare metal on embedded devices. Stride goes further and enables a designer to consolidate various supported hardware and software platforms, define the communication between them, and target them as a single heterogeneous system.
@inproceedings{jtilbian2017, author = {Tilbian, Joseph and Cabrera, Andres}, title = {Stride for Interactive Musical Instrument Design}, pages = {446--449}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176310}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0084.pdf} }
José Miguel Fernandez, Thomas Köppel, Nina Verstraete, Grégoire Lorieux, Alexander Vert, and Philippe Spiesser. 2017. GeKiPe, a gesture-based interface for audiovisual performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 450–455. http://doi.org/10.5281/zenodo.1176312
Abstract
Download PDF DOI
We present here GeKiPe, a gestural interface for musical expression, combining images and sounds, generated and controlled in real time by a performer. GeKiPe is developed as part of a creation project, exploring the control of virtual instruments through the analysis of gestures specific to instrumentalists, and to percussionists in particular. GeKiPe was used for the creation of a collaborative stage performance (Sculpt), in which the musician and their movements are captured by different methods (infrared Kinect cameras and gesture-sensors on controller gloves). The use of GeKiPe as an alternate sound and image controller allowed us to combine body movement, musical gestures and audiovisual expressions to create challenging collaborative performances.
@inproceedings{jfernandez2017, author = {Fernandez, José Miguel and Köppel, Thomas and Verstraete, Nina and Lorieux, Grégoire and Vert, Alexander and Spiesser, Philippe}, title = {GeKiPe, a gesture-based interface for audiovisual performance}, pages = {450--455}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176312}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0085.pdf} }
Jeppe Larsen and Hendrik Knoche. 2017. Hear you later alligator: How delayed auditory feedback affects non-musically trained people’s strumming. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 456–459. http://doi.org/10.5281/zenodo.1176314
Abstract
Download PDF DOI
Many musical instruments exhibit an inherent latency or delayed auditory feedback (DAF) between actuator activation and the occurrence of sound. We investigated how DAF (73 ms and 250 ms) affects musically trained (MT) and non-musically trained (NMT) people’s ability to synchronize the audible strum of an actuated guitar to a metronome at 60 bpm and 120 bpm. The long DAF matched a subdivision of the overall tempo. We compared their performance using two different input devices with feedback before or on activation. While 250 ms DAF hardly affected musically trained participants, non-musically trained participants’ performance declined substantially both in mean synchronization error and its spread. Neither tempo nor input devices affected performance.
@inproceedings{jlarsen2017a, author = {Larsen, Jeppe and Knoche, Hendrik}, title = {Hear you later alligator: How delayed auditory feedback affects non-musically trained people's strumming}, pages = {456--459}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176314}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0086.pdf} }
Michael Mulshine and Jeff Snyder. 2017. OOPS: An Audio Synthesis Library in C for Embedded (and Other) Applications. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 460–463. http://doi.org/10.5281/zenodo.1176316
Abstract
Download PDF DOI
This paper introduces an audio synthesis library written in C with "object oriented" programming principles in mind. We call it OOPS: Object-Oriented Programming Sound, or, "Oops, it’s not quite Object-Oriented Programming in C". The library consists of several UGens (audio components) and a framework to manage these components. The design emphases of the library are efficiency and organizational simplicity, with particular attention to the needs of embedded systems audio development.
@inproceedings{mmulshine2017, author = {Mulshine, Michael and Snyder, Jeff}, title = {OOPS: An Audio Synthesis Library in C for Embedded (and Other) Applications}, pages = {460--463}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176316}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0087.pdf} }
Maria Kallionpää, Chris Greenhalgh, Adrian Hazzard, David M. Weigl, Kevin R. Page, and Steve Benford. 2017. Composing and Realising a Game-like Performance for Disklavier and Electronics. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 464–469. http://doi.org/10.5281/zenodo.1176318
Abstract
Download PDF DOI
“Climb!” is a musical composition that combines the ideas of a classical virtuoso piece and a computer game. We present a case study of the composition process and realization of “Climb!”, written for Disklavier and a digital interactive engine, which was co-developed together with the musical score. Specifically, the engine combines a system for recognising and responding to musical trigger phrases along with a dynamic digital score renderer. This tool chain allows for the composer’s original scoring to include notational elements such as trigger phrases to be automatically extracted to auto-configure the engine for live performance. We reflect holistically on the development process to date and highlight the emerging challenges and opportunities. For example, this includes the potential for further developing the workflow around the scoring process and the ways in which support for musical triggers has shaped the compositional approach.
@inproceedings{mkallionpaa2017, author = {Kallionpää, Maria and Greenhalgh, Chris and Hazzard, Adrian and Weigl, David M. and Page, Kevin R. and Benford, Steve}, title = {Composing and Realising a Game-like Performance for Disklavier and Electronics}, pages = {464--469}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176318}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0088.pdf} }
Thor Magnusson. 2017. Contextualizing Musical Organics: An Ad-hoc Organological Classification Approach. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 470–475. http://doi.org/10.5281/zenodo.1176320
Abstract
Download PDF DOI
New digital musical instruments are difficult for organologists to deal with, due to their heterogeneous origins, interdisciplinary science, and fluid, open-ended nature. NIMEs are studied from a range of disciplines, such as musicology, engineering, human-computer interaction, psychology, design, and performance studies. Attempts to continue traditional organology classifications for electronic and digital instruments have been made, but with unsatisfactory results. This paper raises the problem of tree-like classifications of digital instruments, proposing an alternative approach: musical organics. Musical organics is a philosophical attempt to tackle the problems inherent in the organological classification of digital instruments. Shifting the emphasis from hand-coded classification to information-retrieval-supported search and clustering, an open and distributed system that anyone can contribute to is proposed. In order to show how such a system could incorporate third-party additions, the paper also presents an organological ontogenesis of three innovative musical instruments: the saxophone, the Minimoog, and the Reactable. This micro-analysis of innovation in the field of musical instruments can help form a framework for the study of how instruments are adopted in musical culture.
@inproceedings{tmagnusson2017, author = {Magnusson, Thor}, title = {Contextualizing Musical Organics: An Ad-hoc Organological Classification Approach}, pages = {470--475}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176320}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0089.pdf} }
Stefano Fasciani. 2017. Physical Audio Digital Filters. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 476–480. http://doi.org/10.5281/zenodo.1176322
Abstract
Download PDF DOI
We propose an approach to insert physical objects in audio digital signal processing chains, filtering the sound with the acoustic impulse response of any solid measured in real-time. We model physical objects as a linear time-invariant system, which is used as an audio filter. By interacting with the object or with the measuring hardware we can dynamically modify the characteristics of the filter. The impulse response is obtained correlating a noise signal injected in the object through an acoustic actuator with the signal received from an acoustic sensor placed on the object. We also present an efficient multichannel implementation of the system, which enables further creative applications beyond audio filtering, including tangible signal patching and sound spatialization.
@inproceedings{sfasciani2017, author = {Fasciani, Stefano}, title = {Physical Audio Digital Filters}, pages = {476--480}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176322}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0090.pdf} }
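As a rough illustration of the approach described in the abstract above (estimating a physical object's impulse response by correlating an injected noise signal with the sensed response, then using that response as an audio filter), the following Python sketch uses NumPy and SciPy. It is not the authors' implementation: the sample rate, signal lengths, and the simulated object response are assumptions made for the example.

import numpy as np
from scipy.signal import fftconvolve

fs = 48000                              # sample rate (assumed)

# Excitation: white noise injected into the object through an actuator.
noise = np.random.randn(fs)             # 1 s of noise (hypothetical)

# Signal captured by a sensor on the object; simulated here with an
# arbitrary decaying resonance standing in for a real measurement.
t = np.arange(fs) / fs
fake_ir = np.exp(-40 * t) * np.sin(2 * np.pi * 440 * t)
response = fftconvolve(noise, fake_ir)[:fs]

# Estimate the impulse response by cross-correlating the sensed signal
# with the injected noise (the core idea described in the abstract).
ir_len = 2048
corr = fftconvolve(response, noise[::-1])          # full cross-correlation
ir_est = corr[len(noise) - 1 : len(noise) - 1 + ir_len]
ir_est /= np.sum(noise ** 2)                       # scale for the noise power

# Treat the measured object as a linear time-invariant filter: convolve
# input audio with the estimated impulse response.
dry = np.random.randn(fs)                          # stand-in for input audio
wet = fftconvolve(dry, ir_est)[:fs]

Re-estimating ir_est continuously while the performer handles the object or the measuring hardware would, in the spirit of the paper, turn that physical interaction into a time-varying filter.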
Benjamin Taylor. 2017. A History of the Audience as a Speaker Array. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 481–486. http://doi.org/10.5281/zenodo.1176324
Abstract
Download PDF DOI
Distributed music as a performance practice has seen significant growth over the past decade. This paper surveys the development of the genre, documenting important precedents, peripheral influences, and core works. We additionally discuss common modes of implementation in the genre and contrast these approaches and their motivations.
@inproceedings{btaylor2017, author = {Taylor, Benjamin}, title = {A History of the Audience as a Speaker Array}, pages = {481--486}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176324}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0091.pdf} }
Takumi Ogata and Gil Weinberg. 2017. Robotically Augmented Electric Guitar for Shared Control. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 487–488. http://doi.org/10.5281/zenodo.1176326
Abstract
Download PDF DOI
This paper is about a novel robotic guitar that establishes shared control between human performers and mechanical actuators. Unlike other mechatronic guitar instruments that perform pre-programmed music automatically, this guitar allows the human and actuators to produce sounds jointly; there exists a distributed control between the human and robotic components. The interaction allows human performers to have full control over the melodic, harmonic, and expressive elements of the instrument while mechanical actuators excite and dampen the string with a rhythmic pattern. Guitarists can still access the fretboard without the physical interference of a mechatronic system, so they can play melodies and chords as well as perform bends, slides, vibrato, and other expressive techniques. Leveraging the capabilities of mechanical actuators, the mechanized hammers can output complex rhythms and speeds not attainable by humans. Furthermore, the rhythmic patterns can be algorithmically or stochastically generated by the hammer, which supports real-time interactive improvising.
@inproceedings{togata2017, author = {Ogata, Takumi and Weinberg, Gil}, title = {Robotically Augmented Electric Guitar for Shared Control}, pages = {487--488}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176326}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0092.pdf} }
Ben Neill. 2017. The Mutantrumpet. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 489–490. http://doi.org/10.5281/zenodo.1176328
Abstract
Download PDF DOI
Ben Neill will demonstrate the mutantrumpet, a hybrid electro-acoustic instrument. The capabilities of the mutantrumpet are designed to erase the boundaries between acoustic and electronic musical creation and performance. It is both an expanded acoustic instrument and an electronic controller capable of interacting with audio and video simultaneously. The demonstration will explore the multi-faceted possibilities that are offered by the mutantrumpet in several brief, wide ranging musical examples composed and improvised by Neill. Interactive video performance techniques and collaborations will be integrated into the excerpts. The aesthetics of live intermedia performance will be discussed along with a technical overview of the interface and associated software applications Junxion and RoSa from STEIM, Amsterdam. Reflections on the development of a virtuosic performance technique with a hybrid instrument and influences from collaborators Robert Moog, David Behrman, Ralph Abraham, DJ Spooky and others will be included in the presentation.
@inproceedings{bneill2017, author = {Neill, Ben}, title = {The Mutantrumpet}, pages = {489--490}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176328}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0093.pdf} }
Scott Smallwood. 2017. Locus Sono: A Listening Game for NIME. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 491–492. http://doi.org/10.5281/zenodo.1176330
Abstract
Download PDF DOI
This paper/poster describes the development of an experimental listening game called Locus Sono: a 3D audio puzzle game in which listening and exploration are the key forms of interaction. The game grew out of a motivation to create an interactive audio environment in which sound is the key to solving in-game puzzles. This work is a prototype for a larger planned work and illustrates a first step towards a more complex audio gaming scenario, which is also partially described in this short paper.
@inproceedings{ssmallwood2017, author = {Smallwood, Scott}, title = {Locus Sono: A Listening Game for NIME}, pages = {491--492}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176330}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0094.pdf} }
Richard Polfreman and Benjamin Oliver. 2017. Rubik’s Cube, Music’s Cube. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 493–494. http://doi.org/10.5281/zenodo.1176332
Abstract
Download PDF DOI
2017 marks the 40th anniversary of the Rubik’s Cube (under its original name the Magic Cube). This paper-demonstration describes explorations of the cube as a performance controller for music. The pattern of colors on a face of the cube is detected via USB video camera and supplemented by EMG data from the performer to model the performer’s interaction with the cube. This system was trialed in a variety of audio scenarios and deployed in the composition “Rubik’s Study No. 1”, a work based on solving the cube with audible connections to 1980s pop culture. The cube was found to be an engaging musical controller, with further potential to be explored.
@inproceedings{rpolfreman2017, author = {Polfreman, Richard and Oliver, Benjamin}, title = {Rubik's Cube, Music's Cube}, pages = {493--494}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176332}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0095.pdf} }
Charles Martin and Jim Torresen. 2017. MicroJam: An App for Sharing Tiny Touch-Screen Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 495–496. http://doi.org/10.5281/zenodo.1176334
Abstract
Download PDF DOI
MicroJam is a mobile app for sharing tiny touch-screen performances. Mobile applications that streamline creativity and social interaction have enabled a very broad audience to develop their own creative practices. While these apps have been very successful in visual arts (particularly photography), the idea of social music-making has not had such a broad impact. MicroJam includes several novel performance concepts intended to engage the casual music maker and inspired by current trends in social creativity support tools. Touch-screen performances are limited to five seconds, instrument settings are posed as sonic “filters”, and past performances are arranged as a timeline with replies and layers. These features of MicroJam encourage users not only to perform music more frequently, but to engage with others in impromptu ensemble music making.
@inproceedings{cmartin2017, author = {Martin, Charles and Torresen, Jim}, title = {MicroJam: An App for Sharing Tiny Touch-Screen Performances}, pages = {495--496}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176334}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0096.pdf} }
Ryu Nakagawa and Shotaro Hirata. 2017. AEVE: An Audiovisual Experience Using VRHMD and EEG. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 497–498. http://doi.org/10.5281/zenodo.1176336
Abstract
Download PDF DOI
The AEVE provides a brain-computer interface (BCI) controlled audiovisual experience, presented through a virtual reality head-mounted display (VRHMD). We have developed an audiovisual art piece in which progression through three sections and one extra section is driven by an “Attention” value derived from electroencephalography (EEG) data. The only interactions in this work are the participant’s perspective, i.e. their view, and the EEG data. However, we believe this simple interaction amplifies the participant’s feeling of immersion. Through the narrative of the work and the simple interaction, we attempt to connect concepts such as audiovisual experience, virtual reality (VR), BCI, grid, consciousness, memory, and universe in a minimal way.
@inproceedings{rnakagawa2017, author = {Nakagawa, Ryu and Hirata, Shotaro}, title = {AEVE: An Audiovisual Experience Using VRHMD and EEG}, pages = {497--498}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176336}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0097.pdf} }
Rodrigo Cadiz and Alvaro Sylleros. 2017. Arcontinuo: the Instrument of Change. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 499–500. http://doi.org/10.5281/zenodo.1176338
Abstract
Download PDF DOI
The Arcontinuo is an electronic musical instrument designed from a perspective based on the study of its potential users and their interaction with existing musical interfaces. Arcontinuo aims to change the way electronic music is performed, as it is capable of incorporating natural and ergonomic human gestures, allowing the musician to engage with the instrument and, as a result, enhance the connection with the audience. Arcontinuo challenges the notion of what a musical gesture is and goes against traditional ways of performing music by proposing a concept that we call smart playing mapping, as a way of achieving a better and more meaningful performance.
@inproceedings{rcadiz2017, author = {Cadiz, Rodrigo and Sylleros, Alvaro}, title = {Arcontinuo: the Instrument of Change}, pages = {499--500}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176338}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0098.pdf} }
Rob Blazey. 2017. Kalimbo: an Extended Thumb Piano and Minimal Control Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 501–502. http://doi.org/10.5281/zenodo.1176340
Abstract
Download PDF DOI
Kalimbo is an extended kalimba, built from repurposed materials and fitted with sensors that enable it to function as a reductionist control interface through physical gestures and capacitive sensing. The work demonstrates an attempt to apply theories and techniques from visual collage art to the concept of musical performance ecologies. The body of the instrument emerged from material-led making, and the disparate elements of a particular musical performance ecology (acoustic instrument, audio effects, samples, synthesis and controls) are juxtaposed and unified into one coherent whole. As such, Kalimbo demonstrates how visual arts, in particular collage, can inform the design and creation of new musical instruments, interfaces and streamlined performance ecologies.
@inproceedings{rblazey2017, author = {Blazey, Rob}, title = {Kalimbo: an Extended Thumb Piano and Minimal Control Interface}, pages = {501--502}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176340}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0099.pdf} }
Joseph Tilbian, Andres Cabrera, Steffen Martin, and Lukasz Olczyk. 2017. Stride on Saturn M7 for Interactive Musical Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 503–504. http://doi.org/10.5281/zenodo.1176342
Abstract
Download PDF DOI
This demonstration introduces the Stride programming language, the Stride IDE, and the Saturn M7 embedded audio development board. Stride is a declarative and reactive domain specific programming language for real-time sound synthesis, processing, and interaction design. The Stride IDE is a cross-platform integrated development environment for Stride. Saturn M7 is an embedded audio development board by Okra Engineering, designed around an ARM Cortex-M7 processor based microcontroller. It targets high-end multi-channel audio processing and synthesis with very low latency and power consumption. The microcontroller has a rich set of audio and communication peripherals, capable of performing complex real-time DSP tasks with double precision floating point accuracy. This demonstration will showcase specific features of the Stride language, which facilitates the design of new interactive musical instruments. The Stride IDE will be used to compose Stride code and generate code for the Saturn M7 board. The various hardware capabilities of the Saturn M7 board will also be presented.
@inproceedings{jtilbian2017a, author = {Tilbian, Joseph and Cabrera, Andres and Martin, Steffen and Olczyk, Lukasz}, title = {Stride on Saturn M7 for Interactive Musical Instrument Design}, pages = {503--504}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176342}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0100.pdf} }
Tetsuro Kitahara, Sergio Giraldo, and Rafael Ramírez. 2017. JamSketch: A Drawing-based Real-time Evolutionary Improvisation Support System. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 505–506. http://doi.org/10.5281/zenodo.1176344
Abstract
Download PDF DOI
In this paper, we present JamSketch, a real-time improvisation support system which automatically generates melodies according to melodic outlines drawn by the users. The system generates the improvised melodies based on (1) an outline sketched by the user using a mouse or a touch screen, (2) a genetic algorithm based on a dataset of existing music pieces as well as musical knowledge, and (3) an expressive performance model for timing and dynamic transformations. The aim of the system is to allow people with no prior musical knowledge to be able to enjoy playing music by improvising melodies in real time.
@inproceedings{tkitahara2017, author = {Kitahara, Tetsuro and Giraldo, Sergio and Ramírez, Rafael}, title = {JamSketch: A Drawing-based Real-time Evolutionary Improvisation Support System}, pages = {505--506}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176344}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0101.pdf} }
Jacob Harrison and Andrew McPherson. 2017. An Adapted Bass Guitar for One-Handed Playing. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 507–508. http://doi.org/10.5281/zenodo.1176346
Abstract
Download PDF DOI
We present an attachment for the bass guitar which allows MIDI-controlled actuated fretting. This adapted instrument is presented as a potential method of augmenting the bass guitar for those with upper-limb disabilities. We conducted an online survey of 48 bassists in order to highlight the most important aspects of bass playing. We found that timbral and dynamic features related to the plucking hand were most important to the survey respondents. We designed an actuated fretting mechanism to replace the role of the fretting hand in order to preserve plucking hand techniques. We then conducted a performance study in which experienced bassists prepared and performed an accompaniment to a backing track with the adapted bass. The performances highlighted ways in which adapting a fretted string instrument in this way impacts plucking hand technique.
@inproceedings{jharrison2017, author = {Harrison, Jacob and McPherson, Andrew}, title = {An Adapted Bass Guitar for One-Handed Playing}, pages = {507--508}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176346}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0102.pdf} }
Krzysztof Cybulski. 2017. Feedboxes. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 509–510. http://doi.org/10.5281/zenodo.1176348
Abstract
Download PDF DOI
Feedboxes are interactive sound objects that generate rhythmic and harmonic patterns. Their purpose is to provide intuitive tools for live improvisation, without the need for a computer with a MIDI controller or fixed playback. Their only means of communication is sound — they "listen" with a microphone and "speak" with a speaker, so interaction with Feedboxes is very similar to playing with real musicians. The boxes can be used together with any instrument, or on their own – in this case they create a feedback loop by listening and responding to each other, creating ever-changing rhythmic structures. Feedboxes react to incoming sounds in a simple, predefined manner. Yet, when used together, their behaviour may become quite complex. Each of the two boxes has its own sound and set of simple rules.
@inproceedings{kcybulski2017, author = {Cybulski, Krzysztof}, title = {Feedboxes}, pages = {509--510}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176348}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0103.pdf} }
Seth Glickman, Byunghwan Lee, Fu Yen Hsiao, and Shantanu Das. 2017. Music Everywhere — Augmented Reality Piano Improvisation Learning System. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 511–512. http://doi.org/10.5281/zenodo.1176350
Abstract
Download PDF DOI
This paper describes the design and implementation of an augmented reality (AR) piano learning tool that uses a Microsoft HoloLens and a MIDI-over-Bluetooth-enabled electric piano. The tool presents a unique visual interface—a “mirror key overlay” approach—fitted for the AR environment, and opens up the possibility of on-instrument learning experiences. The curriculum focuses on teaching improvisation in blues, rock, jazz and classical genres. Users at the piano engage with interactive lessons, watch virtual hand demonstrations, see and hear example improvisations, and play their own solos and accompaniment along with AR-projected virtual musicians. The tool aims to be entertaining yet also effective in teaching core musical concepts.
@inproceedings{sglickman2017, author = {Glickman, Seth and Lee, Byunghwan and Hsiao, Fu Yen and Das, Shantanu}, title = {Music Everywhere --- Augmented Reality Piano Improvisation Learning System}, pages = {511--512}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176350}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0104.pdf} }
Juan Bender, Gabriel Lecup, and Sergio Fernandez. 2017. Song Kernel — Explorations in Intuitive Use of Harmony. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 513–514. http://doi.org/10.5281/zenodo.1176352
Abstract
Download PDF DOI
Song Kernel is a chord-and-note harmonizing musical input interface applicable to electronic instruments in both hardware and software form. It enables users to play chords and melodies while visualizing the harmonic functions of chords within a scale of Western music in a single static pattern. It provides amateur musicians, as well as people with no experience in playing music, with a graphic and intuitive way to play songs, manage harmonic structures, and identify composition patterns.
@inproceedings{jbender2017, author = {Bender, Juan and Lecup, Gabriel and Fernandez, Sergio}, title = {Song Kernel --- Explorations in Intuitive Use of Harmony}, pages = {513--514}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, issn = {2220-4806}, doi = {10.5281/zenodo.1176352}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0105.pdf} }
2016
Eric Sheffield, Edgar Berdahl, and Andrew Pfalz. 2016. The Haptic Capstans: Rotational Force Feedback for Music using a FireFader Derivative Device. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 1–2. http://doi.org/10.5281/zenodo.1176002
Abstract
Download PDF DOI
The Haptic Capstans are two rotational force-feedback knobs circumscribed by eye-catching LED rings. In this work, the Haptic Capstans are programmed using physical models in order to experiment with audio-visual-haptic interactions for music applications.
@inproceedings{Sheffield2016, author = {Sheffield, Eric and Berdahl, Edgar and Pfalz, Andrew}, title = {The Haptic Capstans: Rotational Force Feedback for Music using a FireFader Derivative Device}, pages = {1--2}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Demonstrations}, doi = {10.5281/zenodo.1176002}, url = {http://www.nime.org/proceedings/2016/nime2016_paper00012.pdf} }
Jason Long, Dale Carnegie, and Ajay Kapur. 2016. The Closed-Loop Robotic Glockenspiel: Improving Musical Robots with Embedded Musical Information Retrieval. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 2–7. http://doi.org/10.5281/zenodo.3964607
Abstract
Download PDF DOI
Musical robots provide artists and musicians with the ability to realise complex new musical ideas in real acoustic space. However, most musical robots are created with open-loop control systems, many of which require time consuming calibration and do not reach the level of reliability of other electronic musical instruments such as synthesizers. This paper outlines the construction of a new robotic musical instrument, the Closed-Loop Robotic Glockenspiel, and discusses the improved robustness, usability and expressive capabilities that closed-loop control systems and embedded musical information retrieval processes can afford robotic musical instruments. The hardware design of the instrument is described along with the firmware of the embedded MIR system. The result is a new desktop robotic musical instrument that is capable of continuous unaided re-calibration, is as simple to use as more traditional hardware electronic sound-sources and provides musicians with new expressive capabilities.
@inproceedings{Long2016, author = {Long, Jason and Carnegie, Dale and Kapur, Ajay}, title = {The Closed-Loop Robotic Glockenspiel: Improving Musical Robots with Embedded Musical Information Retrieval}, pages = {2--7}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.3964607}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0002.pdf} }
Benedict Carey. 2016. SpectraScore VR: Networkable virtual reality software tools for real-time composition and performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 3–4. http://doi.org/10.5281/zenodo.1176004
Abstract
Download PDF DOI
This paper describes a package of modular tools developed for use with virtual reality peripherals to allow for music composition, performance and viewing in ‘real-time’ across networks within a spectralist paradigm. The central tool is SpectraScore, a Max/MSP abstraction for analysing audio signals and ranking the resultant partials according to their harmonic pitch class profiles. This data triggers the generation of objects in a virtual world based on the ‘topography’ of the source sound, which is experienced by network clients via Google Cardboard headsets. They use their movements to trigger audio in various microtonal tunings and incidentally generate scores. These scores are transmitted to performers who improvise music from this notation using Leap Motion Theremins, also in VR space. Finally, the performance is broadcast via a web audio stream, which can be heard by the composer-audience in the initial virtual world. The ‘real-time composers’ and performers are not required to have any prior knowledge of complex computer systems and interact either using head position tracking, or with an Oculus Rift DK2 and a Leap Motion Camera.
@inproceedings{Carey2016a, author = {Carey, Benedict}, title = {SpectraScore VR: Networkable virtual reality software tools for real-time composition and performance}, pages = {3--4}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Demonstrations}, doi = {10.5281/zenodo.1176004}, url = {http://www.nime.org/proceedings/2016/nime2016_paper00022.pdf} }
Herbert H.C. Chang and Spencer Topel. 2016. Electromagnetically Actuated Acoustic Amplitude Modulation Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 8–13. http://doi.org/10.5281/zenodo.3964599
Abstract
Download PDF DOI
This paper discusses a new approach to acoustic amplitude modulation. Building on prior work with electromagnetic augmentation of acoustic instruments, we begin with a theory of operation model to describe the mechanical forces necessary to produce acoustic amplitude modulation synthesis. We then propose an implementation of our model as an instrumental prototype. The results illustrate that our acoustic amplitude modulation system produces controllable sideband components, and that synthesis generated from our corresponding numerical dynamic system model closely approximates the acoustic result of the physical system.
@inproceedings{Chang2016, author = {Chang, Herbert H.C. and Topel, Spencer}, title = {Electromagnetically Actuated Acoustic Amplitude Modulation Synthesis}, pages = {8--13}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.3964599}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0003.pdf} }
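The sideband behaviour mentioned in the abstract above follows from the standard amplitude-modulation identity. The display below states only that textbook relation, not the paper's mechanical model; the symbols (carrier frequency f_c, modulation frequency f_m, modulation depth m) are generic.

\[
  \bigl[1 + m\cos(2\pi f_m t)\bigr]\cos(2\pi f_c t)
  = \cos(2\pi f_c t)
  + \tfrac{m}{2}\cos\bigl(2\pi (f_c + f_m)\, t\bigr)
  + \tfrac{m}{2}\cos\bigl(2\pi (f_c - f_m)\, t\bigr)
\]

In an acoustic realization such as the one described, m and f_m correspond roughly to the strength and rate of the electromagnetic actuation that varies the vibrating element's amplitude.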
Edgar Berdahl, Danny Holmes, and Eric Sheffield. 2016. Wireless Vibrotactile Tokens for Audio-Haptic Interaction with Touchscreen Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 5–6. http://doi.org/10.5281/zenodo.1175984
Abstract
Download PDF DOI
New interfaces for vibrotactile interaction with touchscreens are realized. An electromagnetic design for wireless actuation of 3D-printed conductive tokens is analyzed. Example music interactions are implemented using physical modeling paradigms, each investigated within the context of a particular token that suggests a different interaction metaphor.
@inproceedings{Berdahl2016, author = {Berdahl, Edgar and Holmes, Danny and Sheffield, Eric}, title = {Wireless Vibrotactile Tokens for Audio-Haptic Interaction with Touchscreen Interfaces}, pages = {5--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Demonstrations}, doi = {10.5281/zenodo.1175984}, url = {http://www.nime.org/proceedings/2016/nime2016_paper00032.pdf} }
Alex Baldwin, Troels Hammer, Edvinas Pechiulis, Peter Williams, Dan Overholt, and Stefania Serafin. 2016. Tromba Moderna: A Digitally Augmented Medieval Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 14–19. http://doi.org/10.5281/zenodo.3964592
Abstract
Download PDF DOI
An interactive museum exhibit of a digitally augmented medieval musical instrument, the tromba marina, is presented. The tromba marina is a curious single stringed instrument with a rattling bridge, from which a trumpet-like timbre is produced. The physical instrument was constructed as a replica of one found in Musikmuseet in Frederiksberg. The replicated instrument was augmented with a pickup, speakers and digital signal processing to create a more reliable, approachable and appropriate instrument for interactive display in the museum. We report on the evaluation of the instrument performed at the Danish museum of musical instruments.
@inproceedings{Baldwin2016, author = {Baldwin, Alex and Hammer, Troels and Pechiulis, Edvinas and Williams, Peter and Overholt, Dan and Serafin, Stefania}, title = {Tromba Moderna: A Digitally Augmented Medieval Instrument}, pages = {14--19}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.3964592}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0004.pdf} }
Henning Berg. 2016. Tango: Software for Computer-Human Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 7–8. http://doi.org/10.5281/zenodo.1175990
Abstract
Download PDF DOI
This demonstration describes Tango, software for Computer-Human Improvisation developed for more than 25 years by Henning Berg. Tango listens to an improvising musician, analyses what it hears and plays musical responses which relate directly to the musical input. If the improviser in turn reacts to these answers, a musical loop between the human and the machine can emerge. The way input and reaction correlate and the predictability of Tango’s responses can be defined by the user via a setup of improvising environments, called Rooms. Real-time sampling with knowledge of the musical content behind the samples and Midi-handling are unified via Tango’s own monophonic audio-to-Midi, time stretching and pitch shifting algorithms. Both audio and Midi can be used by Tango’s modules (e.g. Listeners, Players, Modifiers, Metronomes or Harmony) for input and output. A flexible real-time control system allows for internal and external remote control and scaling of most parameters. The free software for Windows 7 with all necessary folders, English and German manuals, many example-Rooms and a few videos can be downloaded at www.henning-berg.de.
@inproceedings{Berg2016, author = {Berg, Henning}, title = {Tango: Software for Computer-Human Improvisation}, pages = {7--8}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Demonstrations}, doi = {10.5281/zenodo.1175990}, url = {http://www.nime.org/proceedings/2016/nime2016_paper00042.pdf} }
Andrew McPherson, Robert Jack, and Giulio Moro. 2016. Action-Sound Latency: Are Our Tools Fast Enough? Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 20–25. http://doi.org/10.5281/zenodo.3964611
Abstract
Download PDF DOI
The importance of low and consistent latency in interactive music systems is well-established. So how do commonly-used tools for creating digital musical instruments and other tangible interfaces perform in terms of latency from user action to sound output? This paper examines several common configurations where a microcontroller (e.g. Arduino) or wireless device communicates with a computer-based sound generator (e.g. Max/MSP, Pd). We find that, perhaps surprisingly, almost none of the tested configurations meet generally-accepted guidelines for latency and jitter. To address this limitation, the paper presents a new embedded platform, Bela, which is capable of complex audio and sensor processing at submillisecond latency.
@inproceedings{McPherson2016, author = {McPherson, Andrew and Jack, Robert and Moro, Giulio}, title = {Action-Sound Latency: Are Our Tools Fast Enough?}, pages = {20--25}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.3964611}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0005.pdf} }
Edgar Berdahl, Andrew Pfalz, and Stephen David Beck. 2016. Very Slack Strings: A Physical Model and Its Use in the Composition Quartet for Strings. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 9–10. http://doi.org/10.5281/zenodo.1175988
Abstract
Download PDF DOI
Virtual “slack” strings are designed for and employed by the Laptop Orchestra of Louisiana. These virtual strings are “slack” in the sense that they can be very easily displaced, bent, tugged upon, etc. This enables force-feedback control of widely ranging pitch glides, by as much as an octave or more, simply by bending the virtual string. To realize a slack string design, a virtual spring with a specific nonlinear characteristic curve is designed. Violin, viola, and cello-scale models are tuned and employed by the Laptop Orchestra of Louisiana in Quartet for Strings.
@inproceedings{Berdahl2016a, author = {Berdahl, Edgar and Pfalz, Andrew and Beck, Stephen David}, title = {Very Slack Strings: A Physical Model and Its Use in the Composition Quartet for Strings}, pages = {9--10}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Demonstrations}, doi = {10.5281/zenodo.1175988}, url = {http://www.nime.org/proceedings/2016/nime2016_paper00052.pdf} }
Reid Oda and Rebecca Fiebrink. 2016. The Global Metronome: Absolute Tempo Sync For Networked Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 26–31. http://doi.org/10.5281/zenodo.1176096
Abstract
Download PDF DOI
At a time in the near future, many computers (including devices such as smartphones) will have system clocks that are synchronized to a high degree (less than 1 ms of error). This will enable us to coordinate events across unconnected devices with a degree of accuracy that was previously impossible. In particular, such tight clock synchronization means that we can use these clocks to synchronize tempo between humans or sequencers with little-to-no communication between the devices. To facilitate this low-overhead tempo synchronization, we propose the Global Metronome, which is a simple, computationally cheap method to obtain absolute tempo synchronization. We present experimental results demonstrating the effectiveness of using the Global Metronome and compare the performance to MIDI clock sync, a common synchronization method. Finally, we present an open source implementation of a Global Metronome server using a GPS-connected Raspberry Pi that can be built for under $100.
@inproceedings{Oda2016, author = {Oda, Reid and Fiebrink, Rebecca}, title = {The Global Metronome: Absolute Tempo Sync For Networked Musical Performance}, pages = {26--31}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176096}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0006.pdf} }
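The core idea of the Global Metronome, deriving a shared tempo grid purely from synchronized system clocks, can be sketched in a few lines of Python. This is a minimal illustration and not the authors' implementation: the epoch, tempo, and function names are assumptions, and it presumes the devices' clocks have already been synchronized (e.g. via GPS or NTP).

import math
import time

EPOCH = 1500000000.0    # agreed-upon reference instant in Unix seconds (hypothetical)
BPM = 120.0             # agreed-upon tempo
BEAT = 60.0 / BPM       # seconds per beat

def next_beat_time(now: float) -> float:
    # Absolute time of the next beat after `now`, derived from the clock alone.
    beats_elapsed = math.floor((now - EPOCH) / BEAT)
    return EPOCH + (beats_elapsed + 1) * BEAT

def click_forever() -> None:
    while True:
        target = next_beat_time(time.time())
        time.sleep(max(0.0, target - time.time()))
        print("tick")   # trigger a sound or advance a sequencer here

Two devices running this loop never exchange a message, yet they stay in phase to the extent that their system clocks agree, which is the premise the paper builds on.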
Scott Smallwood. 2016. Coronium 3500: A Solarsonic Installation for Caramoor. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 32–35. http://doi.org/10.5281/zenodo.1176127
Abstract
Download PDF DOI
This paper describes the development, creation, and deployment of a sound installation entitled Coronium 3500 (Lucie’s Halo), commissioned by the Caramoor Center for Music and the Arts. The piece, a 12-channel immersive sound installation driven by solar power, was exhibited as part of the exhibition In the Garden of Sonic Delights from June 7 to Nov. 4, 2014, and again for similar duration in 2015. Herein I describe the aesthetic and technical details of the piece and its ultimate deployment, as well as reflecting on the results and the implications for future work.
@inproceedings{Smallwood2016, author = {Smallwood, Scott}, title = {Coronium 3500: A Solarsonic Installation for Caramoor}, pages = {32--35}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176127}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0007.pdf} }
Tomas Laurenzo. 2016. 5500: performance, control, and politics. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 36–40. http://doi.org/10.5281/zenodo.1176058
Abstract
Download PDF DOI
In the period between June 2014 and June 2015, at least 5,500 immigrants died trying to reach Europe from Africa while crossing the Mediterranean Sea. In this paper we present 5500, a piano performance that is a part of an on-going project that investigates the incorporation of electrical muscle stimulation (EMS) into musical performances, with a particular interest in the political significance of the negotiation of control that arises. 5500 consists of a performance of Beethoven’s Sonata Pathétique, where the pianist’s execution is disrupted using computer-controlled electrodes which stimulate the muscles in his or her arms causing their involuntary contractions and affecting the final musical result.
@inproceedings{Laurenzo2016, author = {Laurenzo, Tomas}, title = {5500: performance, control, and politics}, pages = {36--40}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176058}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0008.pdf} }
Bridget Johnson, Michael Norris, and Ajay Kapur. 2016. speaker.motion: A Mechatronic Loudspeaker System for Live Spatialisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 41–45. http://doi.org/10.5281/zenodo.1176046
Abstract
Download PDF DOI
This paper provides an overview of a new mechatronic loudspeaker system: speaker.motion. The system affords automated positioning of a loudspeaker in real time in order to manipulate the spatial qualities of electronic music. The paper gives a technical overview of how the system’s hardware and software were developed, along with the design criteria and methodology. There is discussion of the unique features of the speaker.motion spatialisation system and the methods of user interaction, as well as a look at the creative possibilities that the loudspeakers afford. The creative affordances are explored through a case study of two new pieces written for the speaker.motion system. It is hoped that the speaker.motion system will provide composers and performers with a new range of spatial aesthetics to use in spatial performances, and encourage exploration of the acoustic properties of physical performance and installation spaces in electronic music.
@inproceedings{Johnson2016, author = {Johnson, Bridget and Norris, Michael and Kapur, Ajay}, title = {speaker.motion: A Mechatronic Loudspeaker System for Live Spatialisation}, pages = {41--45}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176046}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0009.pdf} }
Doug Van Nort, Ian Jarvis, and Michael Palumbo. 2016. Towards a Mappable Database of Emergent Gestural Meaning. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 46–50. http://doi.org/10.5281/zenodo.1176092
Abstract
Download PDF DOI
This paper presents our work towards a database of performance activity that is grounded in an embodied view on meaning creation that crosses sense modalities. Our system design is informed by the philosophical and aesthetic intentions of the laboratory context within which it is designed, focused on distribution of performance activity across temporal and spatial dimensions, and expanded notions of the instrumental system as environmental performative agent. We focus here on design decisions that result from this overarching worldview on digitally-mediated performance.
@inproceedings{Nort2016, author = {Nort, Doug Van and Jarvis, Ian and Palumbo, Michael}, title = {Towards a Mappable Database of Emergent Gestural Meaning}, pages = {46--50}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176092}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0010.pdf} }
Jason Long, Ajay Kapur, and Dale Carnegie. 2016. An Analogue Interface for Musical Robots. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 51–54. http://doi.org/10.5281/zenodo.1176072
Abstract
Download PDF DOI
The majority of musical robotics performances, projects and installations utilise microcontroller hardware to digitally interface the robotic instruments with sequencer software and other musical controllers, often via a personal computer. While in many ways digital interfacing offers considerable power and flexibility, digital protocols, equipment and audio workstations often tend to suggest particular music-making work-flows and have resolution and timing limitations. This paper describes the creation of a hardware interface that allows direct communication between analogue synthesizer equipment and simple robotic musical instruments entirely in the analogue domain without the use of computers, microcontrollers or software of any kind. Several newly created musical robots of various designs are presented, together with a custom built hardware interface with circuitry that enables analogue synthesizers to interface with the robots without any digital intermediary. This enables novel methods of musical expression, creates new music-making work-flows for composing and improvising with musical robots and takes advantage of the low latency and infinite resolution of analogue circuits.
@inproceedings{Long2016a, author = {Long, Jason and Kapur, Ajay and Carnegie, Dale}, title = {An Analogue Interface for Musical Robots}, pages = {51--54}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176072}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0011.pdf} }
Natasha Barrett and Alexander Refsum Jensenius. 2016. The ‘Virtualmonium’: an instrument for classical sound diffusion over a virtual loudspeaker orchestra. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 55–60. http://doi.org/10.5281/zenodo.1175974
Abstract
Download PDF DOI
Despite increasingly accessible and user-friendly multi-channel compositional tools, many composers still choose stereo formats for their work, where the compositional process is allied to diffusion performance over a ‘classical’ loudspeaker orchestra. Although such orchestras remain common within UK institutions as well as in France, they are in decline in the rest of the world. In contrast, permanent, high-density loudspeaker arrays are on the rise, as is the practical application of 3-D audio technologies. Looking to the future, we need to reconcile the performance of historical and new stereo works side by side with native 3-D compositions. In anticipation of this growing need, we have designed and tested a prototype ‘Virtualmonium’. The Virtualmonium is an instrument for classical diffusion performance over an acousmonium emulated in higher-order Ambisonics. It allows composers to custom-design loudspeaker orchestra emulations for the performance of their works, rehearse and refine performances off-site, and perform classical repertoire alongside native 3-D formats in the same concert. This paper describes the technical design of the Virtualmonium, assesses the success of the prototype in some preliminary listening tests and concerts, and speculates how the instrument can further composition and performance practice.
@inproceedings{Barrett2016, author = {Barrett, Natasha and Jensenius, Alexander Refsum}, title = {The `Virtualmonium': an instrument for classical sound diffusion over a virtual loudspeaker orchestra}, pages = {55--60}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1175974}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0012.pdf} }
Julian Jaramillo Arango and Daniel Melàn Giraldo. 2016. The Smartphone Ensemble. Exploring mobile computer mediation in collaborative musical performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 61–64. http://doi.org/10.5281/zenodo.1175850
Abstract
Download PDF DOI
This paper reports the goals, procedures and recent activities of the Smartphone Ensemble, an academic group of musicians and designers exploring the social mediation of mobile phones in musical contexts. The SE was created in the Design and Creation program at the Caldas University in Manizales, Colombia and includes six regular members. The group intends to enhance links among musicians, and between the musicians and their audience, by leveraging the network capabilities and mobility of smartphones, and exploring the expressivity of urban space. Through the creation of pieces and interventions that are related to urban experiences, the Smartphone Ensemble envisions alternatives to the standard musical performance space. In this regard, the performances are intended as urban interventions rather than traditional concerts; they progress according to previously defined tours around the city that the group embarks on while playing.
@inproceedings{Arango2016, author = {Arango, Julian Jaramillo and Giraldo, Daniel Mel\`{a}n}, title = {The Smartphone Ensemble. Exploring mobile computer mediation in collaborative musical performance}, pages = {61--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1175850}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0013.pdf} }
Alex Hofmann, Vasileios Chatziioannou, Alexander Mayer, and Harry Hartmann. 2016. Development of Fibre Polymer Sensor Reeds for Saxophone and Clarinet. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 65–68. http://doi.org/10.5281/zenodo.1176028
Abstract
Download PDF DOI
Electronic pickup systems for acoustic instruments are often used in popular and contemporary music performances because they allow amplification and modification of a clean and direct signal. Strain gauge sensors on saxophone and clarinet reeds have been shown to be a useful tool for gaining insight into tongue articulation during performance, while also capturing the reed vibrations. In our previous design, we used a procedure with epoxy adhesive to glue the strain gauge sensors to the flat side of the synthetic single reeds. The new design integrates the sensor inside a synthetic reed, between the layers of fibre polymer and wood. This allows industrial production of sensor reeds. Sensor reeds open up new possibilities to pick up woodwind instruments and to analyse, modify, and amplify the signal. A signal-to-noise analysis of the signals from both designs showed that a sensor glued to the outside of the reed produced a cleaner signal.
@inproceedings{Hofmann2016, author = {Hofmann, Alex and Chatziioannou, Vasileios and Mayer, Alexander and Hartmann, Harry}, title = {Development of Fibre Polymer Sensor {Reed}s for Saxophone and Clarinet}, pages = {65--68}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176028}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0014.pdf} }
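Illustrative sketch (not from the paper above): the Hofmann et al. abstract reports a signal-to-noise comparison of the glued-on and embedded sensor-reed designs. One simple way to compute such an SNR figure from a recorded note segment and a separately recorded noise-floor segment is shown below in Python; the synthetic recordings and their values are placeholders, not data from the paper.

import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from a signal segment and a noise-floor segment."""
    p_signal = np.mean(np.asarray(signal, dtype=float) ** 2)
    p_noise = np.mean(np.asarray(noise, dtype=float) ** 2)
    return 10.0 * np.log10(p_signal / p_noise)

# Placeholder recordings: a sustained note and silence from each reed design.
rng = np.random.default_rng(0)
glued_note, glued_noise = rng.normal(0, 1.0, 44100), rng.normal(0, 0.01, 44100)
embedded_note, embedded_noise = rng.normal(0, 1.0, 44100), rng.normal(0, 0.05, 44100)

print("glued-on sensor SNR:  %.1f dB" % snr_db(glued_note, glued_noise))
print("embedded sensor SNR:  %.1f dB" % snr_db(embedded_note, embedded_noise))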
Ajay Kapur, Jim Murphy, Michael Darling, Eric Heep, Bruce Lott, and Ness Morris. 2016. MalletOTon and the Modulets: Modular and Extensible Musical Robots. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 69–72. http://doi.org/10.5281/zenodo.1176050
Abstract
Download PDF DOI
This paper presents two new musical robot systems and an accompanying driver electronics array. These systems are designed to allow for modular extensibility and ease of use with different instrument systems. The first system discussed is MalletOTon, a mechatronic mallet instrument player that may be re-configured to play a number of different instruments. Secondly, the Modulet mechatronic noisemakers are presented. These instruments are discrete modules that may be installed throughout a space in a wide variety of configurations. In addition to presenting the aforementioned new instruments, the Novalis system is shown. Novalis is an open-ended driver system for mechatronic instruments, designed to afford rapid deployment and modularity. Where prior mechatronic instruments are often purpose-built, the robots and robot electronics presented in this paper may be re-deployed in a wide-ranging, diverse manner. Taken as a whole, the design practices discussed in this paper go toward establishing a paradigm of modular and extensible mechatronic instrument development.
@inproceedings{Kapur2016, author = {Kapur, Ajay and Murphy, Jim and Darling, Michael and Heep, Eric and Lott, Bruce and Morris, Ness}, title = {MalletOTon and the Modulets: Modular and Extensible Musical Robots}, pages = {69--72}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176050}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0015.pdf} }
Ben Olson. 2016. Transforming 8-Bit Video Games into Musical Interfaces via Reverse Engineering and Augmentation. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 73–77. http://doi.org/10.5281/zenodo.1176100
Abstract
Download PDF DOI
Video games and music have influenced each other since the beginning of the consumer video game era. In particular the chiptune genre of music uses sounds from 8-bit video games; these sounds have even found their way into contemporary popular music. However, in this genre, game sounds are arranged using conventional musical interfaces, meaning the games themselves (their algorithms, design and interactivity) play no role in the creation of the music. This paper describes a new way of creating music with 8-bit games, by reverse engineering and augmenting them with run-time scripts. A new API, Emstrument, is presented which allows these scripts to send MIDI to music production software. The end result is game-derived musical interfaces any computer musician can use with their existing workflow. This enhances prior work in repurposing games as musical interfaces by allowing musicians to use the original games instead of having to build new versions with added musical capabilities. Several examples of both new musical instruments and dynamic interactive musical compositions using Emstrument are presented, using iconic games from the 8-bit era.
@inproceedings{Olson2016, author = {Olson, Ben}, title = {Transforming 8-Bit Video Games into Musical Interfaces via Reverse Engineering and Augmentation}, pages = {73--77}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176100}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0016.pdf} }
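Illustrative sketch (not the Emstrument API): the Olson abstract describes run-time scripts that send MIDI from reverse-engineered games to music production software. The fragment below shows only the general idea of forwarding named game events as MIDI messages using the mido library; the event names and note mapping are assumptions for illustration.

# Generic "game event to MIDI" idea, not the Emstrument API.
import mido

EVENT_NOTES = {"jump": 72, "coin": 76, "enemy_hit": 48}  # assumed mapping

def send_game_event(out, event):
    """Forward a named game event to a MIDI output as a short note."""
    note = EVENT_NOTES.get(event)
    if note is None:
        return
    out.send(mido.Message("note_on", note=note, velocity=100))
    out.send(mido.Message("note_off", note=note, velocity=0))

# Example (requires an available MIDI output port):
# with mido.open_output() as out:
#     send_game_event(out, "jump")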
Juliana Cherston, Ewan Hill, Steven Goldfarb, and Joseph Paradiso. 2016. Musician and Mega-Machine: Compositions Driven by Real-Time Particle Collision Data from the ATLAS Detector. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 78–83. http://doi.org/10.5281/zenodo.1176012
Abstract
Download PDF DOI
We present a sonification platform for generating audio driven by real-time particle collision data from the ATLAS experiment at CERN. This paper provides a description of the data-to-audio mapping interfaces supported by the project’s composition tool as well as a preliminary evaluation of the platform’s evolution to meet the aesthetic needs of vastly distinct musical styles and presentation venues. Our work has been conducted in collaboration with the ATLAS Outreach team and is part of a broad vision to better harness real-time sensor data as a canvas for artistic expression. Data-driven streaming audio can be treated as a reimagined form of live radio for which composers craft the instruments but real-time particle collisions pluck the strings.
@inproceedings{Cherston2016, author = {Cherston, Juliana and Hill, Ewan and Goldfarb, Steven and Paradiso, Joseph}, title = {Musician and Mega-Machine: Compositions Driven by Real-Time Particle Collision Data from the ATLAS Detector}, pages = {78--83}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176012}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0017.pdf} }
Anders Lind and Daniel Nylén. 2016. Mapping Everyday Objects to Digital Materiality in The Wheel Quintet: Polytempic Music and Participatory Art. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 84–89. http://doi.org/10.5281/zenodo.1176064
Abstract
Download PDF DOI
Digitalization has enabled material decoupling of sound from the physical devices traditionally used to conceive it. This paper reports an artistic exploration of novel mappings between everyday objects and digital sound. The Wheel Quintet—a novel musical instrument comprising four bicycle wheels and a skateboard—was created using off-the-shelf components and visual programming in Max/MSP. The use of everyday objects sought to enable people to quickly master the instrument, regardless of their musical backgrounds, and collectively create polytempic musical textures in a participatory art context. Applying an action research approach, the paper examines in detail two key cycles of planning, action, and analysis related to the instrument, involving an interactive museum exhibition open to the public and a concert hall performance conducted by an animated music notation system. Drawing on insights from the study, the paper contributes new knowledge concerning the creation and use of novel interfaces for music composition and performance enabled by digitalization.
@inproceedings{Lind2016, author = {Lind, Anders and Nyl\'{e}n, Daniel}, title = {Mapping Everyday Objects to Digital Materiality in The Wheel Quintet: Polytempic Music and Participatory Art}, pages = {84--89}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176064}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0018.pdf} }
Alfonso Balandra, Hironori Mitake, and Shoichi Hasegawa. 2016. Haptic Music Player—Synthetic audio-tactile stimuli generation based on the notes’ pitch and instruments’ envelope mapping. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 90–95. http://doi.org/10.5281/zenodo.1175968
Abstract
Download PDF DOI
An entertainment environment to enrich the music listening experience is proposed. This environment is composed of 3 modules: a MIDI player, a music animation and a haptic module that translates the notes played by one instrument into a resemblant vibration. To create the haptic vibration, the notes’ relative pitch positions in the song are calculated, then these positions are mapped into the haptic signals’ amplitude and frequency. Also, the envelope of the haptic signal is modified, by using an ADSR filter, to have the same envelope as the audio signal. To evaluate the perceived cross-modal similarity between users, two experiments were performed. In both, the users used the complete entertainment environment to rank the similarity between 3 different haptic signals, with triangular, square and analogue envelopes, and 4 different instruments in a classical song. The first experiment was performed with the proposed amplitude and frequency technique, while the second was performed with constant frequency and amplitude. Results show different envelope preferences among users: the square and triangular envelopes were preferred in the first experiment, while only analogue envelopes were preferred in the second. This suggests that the users’ envelope perception was masked by the changes in amplitude and frequency.
@inproceedings{Balandra2016, author = {Balandra, Alfonso and Mitake, Hironori and Hasegawa, Shoichi}, title = {Haptic Music Player---Synthetic audio-tactile stimuli generation based on the notes' pitch and instruments' envelope mapping}, pages = {90--95}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1175968}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0019.pdf} }
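Illustrative sketch (not from the paper above): the Balandra et al. abstract describes mapping a note’s relative pitch within the song to the amplitude and frequency of a haptic vibration, shaped by an ADSR envelope matched to the audio. The Python fragment below sketches that kind of mapping under assumed parameter ranges and envelope times; the actual values used in the Haptic Music Player are not given here.

import numpy as np

def relative_pitch(note, notes_in_song):
    """Position of a MIDI note within the song's pitch range, from 0.0 to 1.0."""
    lo, hi = min(notes_in_song), max(notes_in_song)
    return 0.0 if hi == lo else (note - lo) / (hi - lo)

def adsr(n, sr, a=0.01, d=0.05, s=0.6, r=0.1):
    """Simple linear ADSR envelope of n samples (segment times in seconds, assumed)."""
    na, nd, nr = int(a * sr), int(d * sr), int(r * sr)
    ns = max(n - na - nd - nr, 0)
    env = np.concatenate([np.linspace(0, 1, na, endpoint=False),
                          np.linspace(1, s, nd, endpoint=False),
                          np.full(ns, s),
                          np.linspace(s, 0, nr)])
    return env[:n]

def haptic_signal(note, notes_in_song, dur=0.5, sr=8000):
    """Map relative pitch to vibration frequency and amplitude, shaped by the ADSR."""
    pos = relative_pitch(note, notes_in_song)
    freq = 80 + pos * 170        # assumed tactile frequency range, 80-250 Hz
    amp = 0.3 + pos * 0.7        # assumed amplitude range
    t = np.arange(int(dur * sr)) / sr
    return amp * adsr(len(t), sr) * np.sin(2 * np.pi * freq * t)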
Madeline Huberth and Chryssie Nanou. 2016. Notation for 3D Motion Tracking Controllers: A Gametrak Case Study. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 96–105. http://doi.org/10.5281/zenodo.1176034
Abstract
Download PDF DOI
Notation systems are used in almost all fields, especially for the communication and expression of ideas. This paper proposes and discusses a notation system for Gametrak-based computer music instruments. The notation system’s design is informed by Western music notation and dance notation, as well as by common mappings used in laptop orchestras. It is designed to be sound-agnostic, primarily instructing the performer in their motions. While the discussion of such a notation system may be particularly timely given the growing availability of commercial 3D motion tracking controllers, the notation system may prove especially useful in the context of the Gametrak and the laptop orchestra, where score-based representation can help clarify performer interaction and serve as a teaching tool in documenting prior work.
@inproceedings{Huberth2016, author = {Huberth, Madeline and Nanou, Chryssie}, title = {Notation for {3D} Motion Tracking Controllers: A Gametrak Case Study}, pages = {96--105}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176034}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0020.pdf} }
Cem Cakmak, Anil Camci, and Angus Forbes. 2016. Networked Virtual Environments as Collaborative Music Spaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 106–111. http://doi.org/10.5281/zenodo.1176002
Abstract
Download PDF DOI
In this paper, we describe a novel multimedia system for networked musical collaboration. Our system, called Monad, offers a 3D virtual environment that can be shared by multiple participants to collaborate remotely on a musical performance. With Monad, we explore how various features of this environment in relation to game mechanics, network architecture, and audiovisual aesthetics can be used to mitigate problems inherent to networked musical performance, such as time delays, data loss, and reduced agency of users. Finally, we describe the results of a series of qualitative user studies that illustrate the effectiveness of some of our design decisions with two separate versions of Monad.
@inproceedings{Cakmak2016, author = {Cakmak, Cem and Camci, Anil and Forbes, Angus}, title = {Networked Virtual Environments as Collaborative Music Spaces}, pages = {106--111}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176002}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0022.pdf} }
Dominic Becking, Christine Steinmeier, and Philipp Kroos. 2016. Drum-Dance-Music-Machine: Construction of a Technical Toolset for Low-Threshold Access to Collaborative Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 112–117. http://doi.org/10.5281/zenodo.1175980
Abstract
Download PDF DOI
Most instruments traditionally used to teach music in early education, like xylophones or flutes, encumber children with the additional difficulty of an unfamiliar and unnatural interface. The simplest expressive interaction that even the smallest children use to make music is pounding on surfaces. Through the design of an instrument with a simple interface, like a drum, but which produces a melodic sound, children can be provided with an easy and intuitive means to produce consonance. This should then be further complemented with information from the analysis and interpretation of childlike gestures and dance moves, reflecting their natural understanding of musical structure and motion. Based on these assumptions we propose a modular and reactive system for dynamic composition with accessible interfaces, divided into distinct plugins usable in a standard digital audio workstation. This paper describes our concept and how it can facilitate access to collaborative music making for small children. A first prototypical implementation was designed and developed during the ongoing research project Drum-Dance-Music-Machine (DDMM), a cooperation with the local social welfare association AWO Hagen and the chair of musical education at the University of Applied Sciences Bielefeld.
@inproceedings{Becking2016, author = {Becking, Dominic and Steinmeier, Christine and Kroos, Philipp}, title = {Drum-Dance-Music-Machine: Construction of a Technical Toolset for Low-Threshold Access to Collaborative Musical Performance}, pages = {112--117}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1175980}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0023.pdf} }
Sasha Leitman and John Granzow. 2016. Music Maker: 3d Printing and Acoustics Curriculum. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 118–121. http://doi.org/10.5281/zenodo.1176062
Abstract
Download PDF DOI
Music Maker is a free online resource that provides files for 3D printing woodwind and brass mouthpieces and tutorials for using those mouthpieces to learn about acoustics and music. The mouthpieces are designed to fit into standard plumbing and automobile parts that can be easily purchased at home improvement and automotive stores. The result is a musical tool that can be used as simply as a set of building blocks to bridge the gap between our increasingly digital world of fabrication and the real-world materials that make up our daily lives. An increasing number of schools, libraries and community groups are purchasing 3D printers but many are still struggling to create engaging and relevant curriculum that ties into academic subjects. Making new musical instruments is a fantastic way to learn about acoustics, physics and mathematics.
@inproceedings{Leitman2016, author = {Leitman, Sasha and Granzow, John}, title = {Music Maker: 3d Printing and Acoustics Curriculum}, pages = {118--121}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176062}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0024.pdf} }
Jacob T. Sello. 2016. The Hexenkessel: A Hybrid Musical Instrument for Multimedia Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 122–131. http://doi.org/10.5281/zenodo.1176118
Abstract
Download PDF DOI
This paper introduces the Hexenkessel—an augmented musical instrument for interactive multimedia arts. The Hexenkessel is a classical timpani with its drumhead acting as a tangible user interface for expressive multimedia performances on stage.
@inproceedings{Sello2016, author = {Sello, Jacob T.}, title = {The Hexenkessel: A Hybrid Musical Instrument for Multimedia Performances}, pages = {122--131}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176118}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0025.pdf} }
Otso Lähdeoja. 2016. Active Acoustic Instruments for Electronic Chamber Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 132–136. http://doi.org/10.5281/zenodo.1176054
Abstract
Download PDF DOI
This paper presents an ongoing project for augmenting acoustic instruments with active acoustics. Active acoustics are defined as audio-rate vibration driven into the instrument’s physical structure, inducing air-borne sound output. The instrument’s acoustic sound is thus doubled by an electronic soundscape radiating from the same source. The article is centered on a case study of two guitars, one with hexaphonic sound capture and the other with a monophonic pickup. The article discusses the design, implementation, acoustics, sound capture and processing of an active acoustic instrument, as well as gestural control using the Leap Motion sensor. Extensions towards other instruments are presented, in connection with related artistic projects and ‘electronic chamber music’ aesthetics.
@inproceedings{Lahdeoja2016, author = {L\"{a}hdeoja, Otso}, title = {Active Acoustic Instruments for Electronic Chamber Music}, pages = {132--136}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176054}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0027.pdf} }
Evan Lynch and Joseph Paradiso. 2016. SensorChimes: Musical Mapping for Sensor Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 137–142. http://doi.org/10.5281/zenodo.1176074
Abstract
Download PDF DOI
We present a composition framework that facilitates novel musical mappings for large-scale distributed networks of environmental sensors. We designed and implemented ChainFlow, a library of C externals for the graphical programming language Max/MSP that provides an interface to real-time and historical data from large sensor deployments. This library, along with spatialized audio techniques, was used to create immersive musical compositions which can be presented on their own or complemented by a graphical 3D virtual world. Musical works driven by a sensor network deployed in a wetland restoration project called Tidmarsh are presented as case studies in augmented presence through musical mapping.
@inproceedings{Lynch2016, author = {Lynch, Evan and Paradiso, Joseph}, title = {SensorChimes: Musical Mapping for Sensor Networks}, pages = {137--142}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176074}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0028.pdf} }
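Illustrative sketch (ChainFlow itself is a Max/MSP library whose API is not reproduced here): the Lynch and Paradiso abstract describes mapping real-time and historical sensor-network data to musical parameters. As a generic illustration of that kind of mapping only, the Python fragment below scales a hypothetical stream of temperature readings onto MIDI pitches; the sensor values and ranges are assumptions.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamping the input."""
    value = min(max(value, in_lo), in_hi)
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def temperature_to_pitch(temp_c):
    """Assumed mapping: -10 to 35 degrees C onto MIDI notes 36 to 84."""
    return round(scale(temp_c, -10.0, 35.0, 36, 84))

readings = [3.2, 7.8, 12.5, 21.0, 28.4]  # hypothetical wetland sensor values
print([temperature_to_pitch(t) for t in readings])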
Kyosuke Nakanishi, Paul Haimes, Tetsuaki Baba, and Kumiko Kushiyama. 2016. NAKANISYNTH: An Intuitive Freehand Drawing Waveform Synthesiser Application for iOS Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 143–145. http://doi.org/10.5281/zenodo.1176086
Abstract
Download PDF DOI
NAKANISYNTH is a synthesiser application available on iOS devices that provides a simple and intuitive interface, allowing users to produce sound loops by freehand drawing sound waves and envelope curves. The interface provides a simple way of interacting: the only input required involves drawing two waveforms, meaning that users can easily produce various sounds intuitively without the need for complex manipulation. The application’s interface comprises an interchangeable ribbon and keyboard feature, plus two panels where users can edit waveforms to make sounds. This simple approach to the interface means that it is easy for users to understand the relationship between a waveform and the sound that it produces.
@inproceedings{Nakanishi2016, author = {Nakanishi, Kyosuke and Haimes, Paul and Baba, Tetsuaki and Kushiyama, Kumiko}, title = {NAKANISYNTH: An Intuitive Freehand Drawing Waveform Synthesiser Application for iOS Devices}, pages = {143--145}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176086}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0029.pdf} }
Richard Vindriis and Dale Carnegie. 2016. StrumBot—An Overview of a Strumming Guitar Robot. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 146–151. http://doi.org/10.5281/zenodo.1176135
Abstract
Download PDF DOI
StrumBot is a novel standalone six-stringed robotic guitar consisting of mechanisms designed to enable musical expressivity and minimise acoustic noise. It is desirable for less than 60 dBA of noise to be emitted at 1 m so that StrumBot can play in intimate venues such as cafés or restaurants without loud motor noises detracting from the musical experience. StrumBot improves upon previous RMIs by allowing additional expressive opportunities for a composer to utilise. StrumBot can perform slides, vibrato, muting techniques, pitch bends, pluck power variances, timbre control, complex chords and fast strumming patterns. A MIDI input allows commercial or custom controllers to operate StrumBot. Novel note allocation algorithms were created to allow a single MIDI stream of notes to be allocated across the six guitar strings. Latency measurements from MIDI input to string pluck are as low as 40 ms for a best-case strum, allowing StrumBot to accompany a live musician with minimal audible delay. A relay-based loop switcher is incorporated, allowing StrumBot to activate standard commercial guitar pedals in response to a MIDI instruction.
@inproceedings{Vindriis2016, author = {Vindriis, Richard and Carnegie, Dale}, title = {StrumBot---An Overview of a Strumming Guitar Robot}, pages = {146--151}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176135}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0030.pdf} }
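Illustrative sketch (not the StrumBot algorithm): the Vindriis and Carnegie abstract mentions note allocation algorithms that distribute a single incoming MIDI stream across six guitar strings. The fragment below shows a naive greedy allocation, assuming standard tuning and a 12-fret range per string, purely to make the allocation problem concrete.

OPEN_STRINGS = [40, 45, 50, 55, 59, 64]   # assumed standard tuning: E2 A2 D3 G3 B3 E4
FRET_RANGE = 12                           # assumed playable range per string

def allocate(note, busy):
    """Return the index of a free string that can play the note, or None."""
    candidates = [i for i, open_note in enumerate(OPEN_STRINGS)
                  if open_note <= note <= open_note + FRET_RANGE and i not in busy]
    # Prefer the string whose open pitch is closest below the note (lowest fret).
    return min(candidates, key=lambda i: note - OPEN_STRINGS[i]) if candidates else None

busy = set()
for n in [40, 45, 52, 59, 64, 67]:        # notes of an incoming chord, one at a time
    s = allocate(n, busy)
    if s is not None:
        busy.add(s)
    print("note", n, "-> string", s)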
Tim Shaw, Simon Bowen, and John Bowers. 2016. Unfoldings: Multiple Explorations of Sound and Space. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 152–157. http://doi.org/10.5281/zenodo.1176122
Abstract
Download PDF DOI
This paper describes Sound Spaces, a long-term collaborative project. Within this project we creatively investigated various environments and built a collection of artworks in response to material gathered through a number of practical field visits. Our responses were presented in numerous, idiosyncratic ways and took shape through a number of concerted making activities. The work was conducted both in and with the public, allowing participants to inform the creative decisions made throughout the project as well as to experience the building of the artworks. In this paper we report on our process and presentation, and offer alternative methods for collecting material and presenting representations of space. We describe the many responses made during the project and relate these to research concerns relevant to the NIME community. We conclude with our findings and, through the production of an annotated portfolio, offer our main emerging themes as points of discussion.
@inproceedings{Shaw2016, author = {Shaw, Tim and Bowen, Simon and Bowers, John}, title = {Unfoldings: Multiple Explorations of Sound and Space}, pages = {152--157}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176122}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0031.pdf} }
Alexandra Rieger and Spencer Topel. 2016. Driftwood: Redefining Sound Sculpture Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 158–159. http://doi.org/10.5281/zenodo.1176110
Abstract
Download PDF DOI
The Driftwood is a maneuverable sculptural instrument and controller. Tactilely, it is a micro-terrain one can explore with the hands as well as with the ears. Closed-circuit sensors, moving wooden parts and piezo microphones are discussed in the design phase alongside background and musical implementation concepts. Electronics and nature converge harmoniously in this instrument, referencing our changing world and environment. When engaging with the sonic sculpture, silent objects become audible and rest-wood is venerated. The musician interacting with Driftwood discovers that our actions intervene directly in issues relating to sustainability and the amount of value we place on the world we live in. Every scrap of wood was once a tree; Driftwood reminds us of this in a multi-sensory playing experience. The Driftwood proposes a reinterpretation of the process of music creation, awareness and expression.
@inproceedings{Rieger2016, author = {Rieger, Alexandra and Topel, Spencer}, title = {Driftwood: Redefining Sound Sculpture Controllers}, pages = {158--159}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Demonstrations}, doi = {10.5281/zenodo.1176110}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0032.pdf} }
Rebecca Kleinberger and Akito van Troyer. 2016. Dooremi: a Doorway to Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 160–161. http://doi.org/10.5281/zenodo.1176052
Abstract
Download PDF DOI
The following paper documents the prototype of a musical door that interactively plays sounds, melodies, and sound textures when in use. We took the natural interactions people have with doors—grabbing and turning the knob and pushing and pulling motions—and turned them into musical activities. The idea behind this project comes from the fact that the activity of using a door is almost always accompanied by a sound that is generally ignored by the user. We believe that this sound can be considered musically rich and expressive because each door has specific sound characteristics and each person makes it sound slightly different. By augmenting the door to create an unexpected sound, this project encourages us to listen to our daily lives with a musician’s critical ear, and reminds us of the musicality of our everyday activities.
@inproceedings{Kleinberger2016, author = {Kleinberger, Rebecca and van Troyer, Akito}, title = {Dooremi: a Doorway to Music}, pages = {160--161}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Demonstrations}, doi = {10.5281/zenodo.1176052}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0033.pdf} }
Carl Jürgen Normark, Peter Parnes, Robert Ek, and Harald Andersson. 2016. The extended clarinet. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 162–167. http://doi.org/10.5281/zenodo.1176090
Abstract
Download PDF DOI
This paper describes how a classical instrument, the clarinet, can be extended with modern technology to create a new and easy-to-use augmented instrument. The paper describes the design process, technical details and how a musician can use the instrument. The clarinet bell is extended with sensor technology in order to improve the ways the clarinet is traditionally played and to enhance the performing artist’s musical and performative expression. New ways of performing music with a clarinet also open up novel ways of composing musical pieces. The design was iterated over two versions with improved hardware and form factor, where everything is packaged into the clarinet bell. The clarinet uses electronics that wirelessly send sensor data to a computer, which processes a live audio feed via the software MAX 7 and plays it back via loudspeakers on the stage. The extended clarinet provides several ways of transforming audio and also adds several ways of making performances more visually interesting. It is shown that this way of using sensor technology in a traditional musical instrument adds new dimensions to the performance and allows creative persons to express themselves in new ways, as well as giving the audience an improved experience.
@inproceedings{Normark2016, author = {Normark, Carl J\"{u}rgen and Parnes, Peter and Ek, Robert and Andersson, Harald}, title = {The extended clarinet}, pages = {162--167}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176090}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0034.pdf} }
Yoichi Nagashima. 2016. Multi Rubbing Tactile Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 168–169. http://doi.org/10.5281/zenodo.1176084
Abstract
Download PDF DOI
This is a report on a novel tactile musical instrument called the Multi Rubbing Tactile Instrument (MRTI2015), which uses ten PAW sensors produced by the RT Corporation. Previous research focused on untouchable instruments, but this approach is fully tactile—rub and touch. The ten PAW sensors are arranged on the surface of an egg-like plastic case to fit the ten fingers grasping the instrument. The controller is an mbed (NucleoF401RE), and it communicates with the host PC via high-speed serial (115200 bps) using a MIDI-like protocol. Inside the egg-like plastic case, the instrument has eight blue LEDs which are controlled by the host in order to display the grasping nuances. The prototype contains a realtime visualizing system with chaotic graphics rendered in OpenGL. I report on the principle of the sensor and the details of realizing the new system.
@inproceedings{Nagashim2016, author = {Nagashima, Yoichi}, title = {Multi Rubbing Tactile Instrument}, pages = {168--169}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Demonstrations}, doi = {10.5281/zenodo.1176084}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0035.pdf} }
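Illustrative sketch (the actual MRTI2015 message format is not given in the abstract): the Nagashima entry above describes an mbed controller streaming sensor data to the host PC over 115200 bps serial using a MIDI-like protocol. The Python fragment below reads three-byte, MIDI-style messages with pyserial; the port name and the status/data framing are assumptions.

import serial  # pyserial

def read_messages(port="/dev/ttyACM0", baud=115200):
    """Yield (sensor, data1, data2) tuples from an assumed MIDI-style framing:
    one status byte (MSB set) followed by two 7-bit data bytes."""
    with serial.Serial(port, baud, timeout=1) as ser:
        buf = []
        while True:
            chunk = ser.read(1)
            if not chunk:
                continue
            byte = chunk[0]
            if byte & 0x80:          # status byte starts a new message
                buf = [byte]
            elif buf:
                buf.append(byte)
            if len(buf) == 3:
                status, d1, d2 = buf
                yield status & 0x0F, d1, d2
                buf = []

# for sensor, hi, lo in read_messages():
#     print(sensor, (hi << 7) | lo)    # recombine a 14-bit sensor value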
Leshao Zhang, Yongmeng Wu, and Mathieu Barthet. 2016. A Web Application for Audience Participation in Live Music Performance: The Open Symphony Use Case. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 170–175. http://doi.org/10.5281/zenodo.1176147
Abstract
Download PDF DOI
This paper presents a web-based application enabling audiences to collaboratively contribute to the creative process during live music performances. The system aims at enhancing audience engagement and creating new forms of live music experiences. Interaction between audience and performers is made possible through a client/server architecture enabling bidirectional communication of creative data. Audience members can vote for pre-determined musical attributes using a smartphone-friendly and cross-platform web application. The system gathers audience members’ votes and provides feedback through visualisations that can be tailored for specific needs. In order to support multiple performers and large audiences, automatic audience-to-performer groupings are handled by the application. The framework was applied to support live interactive musical improvisations where creative roles are shared amongst audience and performers (Open Symphony). Qualitative analyses of user surveys highlighted very positive feedback on themes such as engagement and creativity, and also identified further design challenges around the audience’s sense of control and latency.
@inproceedings{Zhang2016, author = {Zhang, Leshao and Wu, Yongmeng and Barthet, Mathieu}, title = {A Web Application for Audience Participation in Live Music Performance: The Open Symphony Use Case}, pages = {170--175}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176147}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0036.pdf} }
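Illustrative sketch (not the Open Symphony implementation): the Zhang et al. abstract describes a client/server web application in which audience members vote on musical attributes and are automatically grouped to performers. The fragment below illustrates only the aggregation step, tallying votes per performer group; the grouping rule and attribute names are assumptions.

from collections import Counter

# Hypothetical votes: (audience_member_id, chosen_musical_attribute)
votes = [(1, "calm"), (2, "dense"), (3, "calm"), (4, "loud"), (5, "calm"), (6, "dense")]

def group_to_performer(member_id, n_performers=2):
    """Assumed grouping rule: round-robin assignment by audience member id."""
    return member_id % n_performers

def tally(votes, n_performers=2):
    """Per-performer tally of the attributes voted for by that performer's group."""
    counts = {p: Counter() for p in range(n_performers)}
    for member_id, attribute in votes:
        counts[group_to_performer(member_id, n_performers)][attribute] += 1
    return counts

print(tally(votes))  # each performer could act on the winning attribute of their group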
Antonio Deusany de Carvalho Junior, Sang Won Lee, and Georg Essl. 2016. Understanding Cloud Support for the Audience Participation Concert Performance of Crowd in C[loud]. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 176–181. http://doi.org/10.5281/zenodo.1176008
Abstract
Download PDF DOI
Cloud services allow musicians and developers to build audience participation software with minimal network configuration for the audience and no need for server-side development. In this paper we discuss how a cloud service supported the audience participation music performance Crowd in C[loud], which enables audience participation on a large scale using the audience’s smartphones. We present the details of the cloud service technology and an analysis of the network transaction data from the performance. This helps us to understand the nature of cloud-based audience participation pieces based on the characteristics of a real performance, and provides cues about the technology’s scalability.
@inproceedings{CarvalhoJunior2016, author = {de Carvalho Junior, Antonio Deusany and Lee, Sang Won and Essl, Georg}, title = {Understanding Cloud Support for the Audience Participation Concert Performance of Crowd in C[loud]}, pages = {176--181}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176008}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0037.pdf} }
Ge Wang. 2016. Game Design for Expressive Mobile Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 182–187. http://doi.org/10.5281/zenodo.1176141
Abstract
Download PDF DOI
This article presents observations and strategies for designing game-like elements for expressive mobile musical interactions. The designs of several popular commercial mobile music instruments are discussed and compared, along with the different ways they integrate musical information and game-like elements. In particular, issues of designing goals, rules, and interactions are balanced with articulating expressiveness. These experiences aim to invite and engage users with game design while maintaining and encouraging open-ended musical expression and exploration. A set of observations is derived, leading to a broader design motivation and philosophy.
@inproceedings{Wang2016, author = {Wang, Ge}, title = {Game Design for Expressive Mobile Music}, pages = {182--187}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176141}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0038.pdf} }
Jan Banas, Razvan Paisa, Iakovos Vogiatzoglou, Francesco Grani, and Stefania Serafin. 2016. Design and evaluation of a gesture driven wave field synthesis auditory game. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 188–193. http://doi.org/10.5281/zenodo.1175972
Abstract
Download PDF DOI
An auditory game has been developed as part of research in wave field synthesis. In order to design and implement this game, a number of technologies were incorporated in the development process. By pairing motion capture with a WiiMote, a new dimension of movement input was achieved. We present an evaluation study in which the game was assessed.
@inproceedings{Banas2016, author = {Banas, Jan and Paisa, Razvan and Vogiatzoglou, Iakovos and Grani, Francesco and Serafin, Stefania}, title = {Design and evaluation of a gesture driven wave field synthesis auditory game}, pages = {188--193}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1175972}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0039.pdf} }
Mehmet Aydin Baytas, Tilbe Goksun, and Oguzhan Ozcan. 2016. The Perception of Live-sequenced Electronic Music via Hearing and Sight. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 194–199. http://doi.org/10.5281/zenodo.1175978
Abstract
Download PDF DOI
In this paper, we investigate how watching a live-sequenced electronic music performance, compared to merely hearing the music, contributes to spectators’ experiences of tension. We also explore the role of the performers’ effective and ancillary gestures in conveying tension, when they can be seen. To this end, we conducted an experiment where 30 participants heard, saw, or both heard and saw a live-sequenced techno music performance recording while they produced continuous judgments on their experience of tension. Eye tracking data was also recorded from participants who saw the visuals, to reveal aspects of the performance that influenced their tension judgments. We analysed the data to explore how auditory and visual components and the performer’s movements contribute to spectators’ experience of tension. Our results show that their perception of emotional intensity is consistent across hearing and sight, suggesting that gestures in live-sequencing can be a medium for expressive performance.
@inproceedings{Baytas2016, author = {Baytas, Mehmet Aydin and Goksun, Tilbe and Ozcan, Oguzhan}, title = {The Perception of Live-sequenced Electronic Music via Hearing and Sight}, pages = {194--199}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1175978}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0040.pdf} }
S. Astrid Bin, Nick Bryan-Kinns, and Andrew P. McPherson. 2016. Skip the Pre-Concert Demo: How Technical Familiarity and Musical Style Affect Audience Response. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 200–205. http://doi.org/10.5281/zenodo.1175994
Abstract
Download PDF DOI
This paper explores the roles of technical and musical familiarity in shaping audience response to digital musical instrument (DMI) performances. In an audience study conducted during an evening concert, we examined two primary questions: first, whether a deeper understanding of how a DMI works increases an audience’s enjoyment and interest in the performance; and second, given the same DMI and same performer, whether playing in a conventional (vernacular) versus an experimental musical style affects an audience’s response. We held a concert in which two DMI creator-performers each played two pieces in differing styles. Before the concert, each half of the 64-person audience was given a technical explanation of one of the instruments. Results showed that receiving an explanation increased the reported understanding of that instrument, but had no effect on either the reported level of interest or enjoyment. On the other hand, performances in experimental versus conventional style on the same instrument received widely divergent audience responses. We discuss implications of these findings for DMI design.
@inproceedings{Bin2016, author = {Bin, S. Astrid and Bryan-Kinns, Nick and McPherson, Andrew P.}, title = {Skip the Pre-Concert Demo: How Technical Familiarity and Musical Style Affect Audience Response}, pages = {200--205}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1175994}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0041.pdf} }
Jiayue Cecilia Wu, Madeline Huberth, Yoo Hsiu Yeh, and Matt Wright. 2016. Evaluating the Audience’s Perception of Real-time Gestural Control and Mapping Mechanisms in Electroacoustic Vocal Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 206–211. http://doi.org/10.5281/zenodo.1176143
Abstract
Download PDF DOI
This paper presents an empirical evaluation of a digital music instrument (DMI) for electroacoustic vocal performance, the Tibetan Singing Prayer Wheel (TSPW). Specifically, we study audience preference for the way it maps horizontal spinning gestures to vocal processing parameters. We filmed six songs with the singer using the TSPW, and created two alternative soundtracks for each song: one desynchronized, and one with the mapping inverted. Participants viewed all six songs with either the original or desynchronized soundtrack (Experiment 1), or either the original or inverted-mapping soundtrack (Experiment 2). Participants were asked several questions via questionnaire after each song. Overall, they reported higher engagement and preference for the original versions, suggesting that audiences of the TSPW prefer more highly synchronized performances, as well as more intuitive mappings, though the level of perceived performer expression only differed significantly in Experiment 1. Further, we believe that our experimental methods contribute to how DMIs can be evaluated from the perspective of the audience, a stakeholder recently noted as underrepresented.
@inproceedings{Wu2016, author = {Wu, Jiayue Cecilia and Huberth, Madeline and Yeh, Yoo Hsiu and Wright, Matt}, title = {Evaluating the Audience's Perception of Real-time Gestural Control and Mapping Mechanisms in Electroacoustic Vocal Performance}, pages = {206--211}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176143}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0042.pdf} }
Sang Won Lee, Georg Essl, and Mari Martinez. 2016. Live Writing : Writing as a Real-time Audiovisual Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 212–217. http://doi.org/10.5281/zenodo.1176060
Abstract
Download PDF DOI
This paper suggests a novel form of audiovisual performance — live writing — that transforms creative writing into a real-time performing art. The process of typing a poem on the fly is captured and augmented to create an audiovisual performance that establishes natural links among the components of typing gestures, the poem being written on the fly, and audiovisual artifacts. Live writing draws upon ideas from the tradition of live coding in which the process of programming is revealed to the audience in real-time. This paper discusses the motivation behind the idea, interaction schemes and a performance interface for such a performance practice. Our live writing performance system is enabled by a custom text editor, writing-sound mapping strategies of our choice, a poem-sonification, and temporal typography. We describe two live writing performances that take different approaches as they vary the degree of composition and improvisation in writing.
@inproceedings{Lee2016, author = {Lee, Sang Won and Essl, Georg and Martinez, Mari}, title = {Live Writing : Writing as a Real-time Audiovisual Performance}, pages = {212--217}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176060}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0043.pdf} }
Kunal Jathal and Tae-Hong Park. 2016. The HandSolo: A Hand Drum Controller for Natural Rhythm Entry and Production. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 218–223. http://doi.org/10.5281/zenodo.1176042
Abstract
Download PDF DOI
The majority of electronic percussion controllers on the market today are based on location-oriented striking techniques, resulting in a finger drumming interaction paradigm that is both fundamentally eclectic and imposingly contrived. The few controllers that allow hand-drumming techniques also invariably conform to region-based triggering design, or, in a trade-off for expressivity, end up excluding hardware connectivity options that are vital to the modern electronic rhythm producer. The HandSolo is a timbre-based drum controller that allows the use of natural hand-drumming strokes, whilst offering the same end-goal functionality that percussion controller users have come to expect over the past decade.
@inproceedings{Jathal2016, author = {Jathal, Kunal and Park, Tae-Hong}, title = {The HandSolo: A Hand Drum Controller for Natural Rhythm Entry and Production}, pages = {218--223}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176042}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0044.pdf} }
Chris Nash. 2016. The ’E’ in QWERTY: Musical Expression with Old Computer Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 224–229. http://doi.org/10.5281/zenodo.1176088
Abstract
Download PDF DOI
This paper presents a development of the ubiquitous computer keyboard to capture velocity and other continuous musical properties, in order to support more expressive interaction with music software. Building on existing ‘virtual piano’ utilities, the device is designed to provide a richer mechanism for note entry within predominantly non-realtime editing tasks, in applications where keyboard interaction is a central component of the user experience (score editors, sequencers, DAWs, trackers, live coding), and in which users draw on virtuosities in both music and computing. In the keyboard, additional hardware combines existing scan code (key press) data with accelerometer readings to create a secondary USB device, using the same cable but visible to software as a separate USB MIDI device alongside the existing USB HID functionality. This paper presents and evaluates an initial prototype, developed using an Arduino board and inexpensive sensors, and discusses design considerations and test findings in musical applications, drawing on user studies of keyboard-mediated music interaction. Without challenging more established (and expensive) performance devices, significant benefits are demonstrated in notation-mediated interaction, where the user’s focus rests with software.
@inproceedings{Nash2016, author = {Nash, Chris}, title = {The 'E' in QWERTY: Musical Expression with Old Computer Interfaces}, pages = {224--229}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176088}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0045.pdf} }
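A rough illustration of the idea described in the abstract above (not the authors’ implementation): pair a key press with an accelerometer-derived strike intensity to form a velocity-sensitive MIDI note-on. The sketch below uses Python with the mido library; the key-to-note map, the strike-force range and the read_accel_peak() helper are hypothetical placeholders for the sensor link.

import mido

KEY_TO_NOTE = {'a': 60, 's': 62, 'd': 64, 'f': 65}  # hypothetical QWERTY-row-to-MIDI-note map

def read_accel_peak():
    """Placeholder for the hardware link: peak acceleration (in g) around the key strike."""
    return 1.2

def on_key_press(key, port):
    accel = read_accel_peak()
    # Map an assumed 0.5-3.0 g strike range onto MIDI velocity 1-127.
    velocity = max(1, min(127, int(round((accel - 0.5) / 2.5 * 126)) + 1))
    note = KEY_TO_NOTE.get(key)
    if note is not None:
        port.send(mido.Message('note_on', note=note, velocity=velocity))

with mido.open_output() as port:  # default MIDI output port
    on_key_press('a', port)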
Stewart Greenhill and Cathie Travers. 2016. Focal : An Eye-Tracking Musical Expression Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 230–235. http://doi.org/10.5281/zenodo.1176022
Abstract
Download PDF DOI
We present Focal, an eye-tracking musical expression controller which allows hands-free control over audio effects and synthesis parameters during performance. A see-through head-mounted display projects virtual dials and switches into the visual field. The performer controls these with a single expression pedal, switching context by glancing at the object they wish to control. This simple interface allows for minimal physical disturbance to the instrumental musician, whilst enabling the control of many parameters otherwise only achievable with multiple foot pedalboards. We describe the development of the system, including the construction of the eye-tracking display, and the design of the musical interface. We also present a comparison of performances using Focal and conventional controllers.
@inproceedings{Greenhill2016, author = {Greenhill, Stewart and Travers, Cathie}, title = {Focal : An Eye-Tracking Musical Expression Controller}, pages = {230--235}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176022}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0046.pdf} }
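The gaze-plus-pedal interaction scheme described above can be summarised in a few lines of Python. The sketch is illustrative only and not the Focal implementation; current_gaze_target() and send_control() are hypothetical stand-ins for the eye tracker and for whatever MIDI/OSC back end carries the parameter values.

controls = {"reverb_mix": 0.2, "delay_feedback": 0.5, "filter_cutoff": 0.8}

def current_gaze_target():
    """Placeholder: name of the virtual dial the performer is currently looking at."""
    return "reverb_mix"

def send_control(name, value):
    print(f"{name} -> {value:.2f}")  # a real system would emit MIDI/OSC here

def on_pedal(value):
    """Route the single expression pedal (0.0-1.0) to whichever control is being gazed at."""
    target = current_gaze_target()
    controls[target] = value
    send_control(target, value)

on_pedal(0.65)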
Aidan Meacham, Sanjay Kannan, and Ge Wang. 2016. The Laptop Accordion. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 236–240. http://doi.org/10.5281/zenodo.1176078
Abstract
Download PDF DOI
The ‘Laptop Accordion’ co-opts the commodity laptop computer to craft an expressive, whimsical accordion-like instrument. It utilizes the opening and closing of the laptop screen as a physical metaphor for accordion bellows, and the laptop keyboard as a musical buttonboard. Motion is tracked using the laptop camera via optical flow and mapped to continuous control over dynamics, while the sound is generated in real-time. The instrument uses both skeuomorphic and abstract onscreen graphics which further reference the core mechanics of ‘squeezebox’ instruments. The laptop accordion provides several game modes, while overall offering an unconventional aesthetic experience in music making.
@inproceedings{Meacham2016, author = {Meacham, Aidan and Kannan, Sanjay and Wang, Ge}, title = {The Laptop Accordion}, pages = {236--240}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176078}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0047.pdf} }
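As a hedged sketch of the camera-based bellows idea above (not the authors’ code), the following Python/OpenCV loop estimates dense optical flow from the laptop camera and maps its mean magnitude to a 0-1 “bellows pressure” value that could drive dynamics; the scaling constant is an arbitrary assumption.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # built-in laptop camera
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    motion = np.linalg.norm(flow, axis=2).mean()   # average pixel displacement per frame
    pressure = min(1.0, motion / 10.0)             # assumed scaling into 0-1
    print(f"bellows pressure: {pressure:.2f}")     # feed this to a synthesizer's gain
    prev = gray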
Kasper Buhl Jakobsen, Marianne Graves Petersen, Majken Kirkegaard Rasmussen, Jens Emil Groenbaek, Jakob Winge, and Jeppe Stougaard. 2016. Hitmachine: Collective Musical Expressivity for Novices. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 241–246. http://doi.org/10.5281/zenodo.1176038
Abstract
Download PDF DOI
This paper presents a novel platform for expressive music making called Hitmachine. Hitmachine lets you build and play your own musical instruments from Legos and sensors and is aimed towards empowering everyone to engage in rich music making regardless of prior musical experience. The paper presents findings from a 4-day workshop where more than 150 children aged 3–13 built and played their own musical instruments. The children used different sensors for playing and performed with their instruments on stage. The findings show how age influenced the children’s musical understanding and expressivity, and give insight into important aspects to consider when designing for expressive music making for novices.
@inproceedings{Jakobsen2016, author = {Buhl Jakobsen, Kasper and Petersen, Marianne Graves and Rasmussen, Majken Kirkegaard and Groenbaek, Jens Emil and Winge, Jakob and Stougaard, Jeppe}, title = {Hitmachine: Collective Musical Expressivity for Novices}, pages = {241--246}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176038}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0048.pdf} }
Romain Michon, Julius Orion Smith III, Matthew Wright, and Chris Chafe. 2016. Augmenting the iPad: the BladeAxe. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 247–252. http://doi.org/10.5281/zenodo.1176080
Abstract
Download PDF DOI
In this paper, we present the BladeAxe: an iPad-based musical instrument leveraging the concepts of augmented mobile device and hybrid physical model controller. By being almost fully standalone, it can be used easily on stage in the context of a live performance by simply plugging it into a traditional guitar amplifier or any sound system. Its acoustical plucking system provides the performer with an extended expressive potential compared to a standard controller. After presenting an intermediate version of the BladeAxe, we describe our final design. We also introduce a similar instrument: the PlateAxe.
@inproceedings{Michon2016, author = {Michon, Romain and Smith, Julius Orion Iii and Wright, Matthew and Chafe, Chris}, title = {Augmenting the iPad: the BladeAxe}, pages = {247--252}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176080}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0049.pdf} }
Barah Héon-Morissette. 2016. Transdisciplinary Methodology: from Theory to the Stage, Creation for the SICMAP. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 253–258. http://doi.org/10.5281/zenodo.1176024
Abstract
Download PDF DOI
The author’s artistic practice as a composer and performer is transdisciplinary. Its six founding elements are the body as a vector, sound, gesture, video, physical space, and technological space. These give rise to works between music and dance, and between musical theater and multimedia, leading to a new hybrid performative practice. These works are realized using a motion capture system based on computer vision, SICMAP (Système Interactif de Captation du Mouvement en Art Performatif — Interactive Motion Capture System For The Performative Arts). In this paper, the author situates her artistic practice within the three pillars of transdisciplinary research methodology. The path taken by the performer-creator, leading to the conception of the SICMAP, is then explained through a reflection on the ‘dream instrument’. After a technical description, the SICMAP is contextualized using theoretical models: the instrumental continuum and energy continuum, the ‘dream instrument’ and the typology of the instrumental gesture. These are then applied to a new paradigm initiated by the SICMAP, the gesture-sound space, and subsequently put into practice through the creation of the work From Infinity To Within.
@inproceedings{Hnicode233onMorissette2016, author = {H\'{e}on-Morissette, Barah}, title = {Transdisciplinary Methodology: from Theory to the Stage, Creation for the {SIC}MAP}, pages = {253--258}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176024}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0050.pdf} }
Xiao Xiao, Donald Derek Haddad, Thomas Sanchez, et al. 2016. Kinéphone: Exploring the Musical Potential of an Actuated Pin-Based Shape Display. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 259–264. http://doi.org/10.5281/zenodo.1176145
Abstract
Download PDF DOI
This paper explores how an actuated pin-based shape display may serve as a platform on which to build musical instruments and controllers. We designed and prototyped three new instruments that use the shape display not only as an input device, but also as a source of acoustic sound. These cover a range of interaction paradigms to generate ambient textures, polyrhythms, and melodies. This paper first presents existing work from which we drew interactions and metaphors for our designs. We then introduce each of our instruments and the back-end software we used to prototype them. Finally, we offer reflections on some central themes of NIME, including the relationship between musician and machine.
@inproceedings{Xiao2016, author = {Xiao, Xiao and Haddad, Donald Derek and Sanchez, Thomas and van Troyer, Akito and Kleinberger, R\'{e}becca and Webb, Penny and Paradiso, Joe and Machover, Tod and Ishii, Hiroshi}, title = {Kin\'{e}phone: Exploring the Musical Potential of an Actuated Pin-Based Shape Display}, pages = {259--264}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176145}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0051.pdf} }
Si Waite. 2016. Church Belles: An Interactive System and Composition Using Real-World Metaphors. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 265–270. http://doi.org/10.5281/zenodo.1176139
Abstract
Download PDF DOI
This paper presents a brief review of current literature detailing some of the issues and trends in composition and performance with interactive music systems. Of particular interest is how musicians interact with a separate machine entity that exercises agency over the creative process. The use of real-world metaphors as a strategy for increasing audience engagement is also discussed. The composition and system Church Belles is presented, analyzed and evaluated in terms of its architecture, how it relates to existing studies of musician-machine creative interaction and how the use of a real-world metaphor can promote audience perceptions of liveness. This develops previous NIME work by offering a detailed case study of the development process of both a system and a piece for popular, non-improvisational vocal/guitar music.
@inproceedings{Waite2016, author = {Waite, Si}, title = {Church Belles: An Interactive System and Composition Using Real-World Metaphors}, pages = {265--270}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176139}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0052.pdf} }
Ireti Olowe, Giulio Moro, and Mathieu Barthet. 2016. residUUm: user mapping and performance strategies for multilayered live audiovisual generation. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 271–276. http://doi.org/10.5281/zenodo.1176098
Abstract
Download PDF DOI
We propose residUUm, an audiovisual performance tool that uses sonification to orchestrate a particle system of shapes, as an attempt to build an audiovisual user interface in which all the actions of a performer on a laptop are intended to be explicitly interpreted by the audience. We propose two approaches to performing with residUUm and discuss the methods utilized to fulfill the promise of audience-visible interaction: mapping and performance strategies applied to express audiovisual interactions with multilayered sound-image relationships. The system received positive feedback from 34 audience participants on aspects such as aesthetics and audiovisual integration, and we identified further design challenges around performance clarity and strategy. We discuss residUUm’s development objectives, modes of interaction and the impact of an audience-visible interface on the performer and observer.
@inproceedings{Olowe2016, author = {Olowe, Ireti and Moro, Giulio and Barthet, Mathieu}, title = {residUUm: user mapping and performance strategies for multilayered live audiovisual generation}, pages = {271--276}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176098}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0053.pdf} }
Kirandeep Bhumber, Nancy Lee, and Brian Topp. 2016. Pendula: An Interactive Swing Installation and Performance Environment. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 277–285. http://doi.org/10.5281/zenodo.1175992
Abstract
Download PDF DOI
This paper describes the processes involved in developing Pendula, a performance environment and interactive installation using swings, interactive video, and audio. A presentation of the project using three swings is described. Gyroscopic and accelerometer data were used in each of the setups to control audio and visual parameters. The installation was presented as both an interactive environment and as a performance instrument, with multiple public performances. Construction of the physical devices used, circuits built, and software created is covered in this paper, along with a discussion of problems and their solutions encountered during the development of Pendula.
@inproceedings{Bhumber2016, author = {Bhumber, Kirandeep and Lee, Nancy and Topp, Brian}, title = {Pendula: An Interactive Swing Installation and Performance Environment}, pages = {277--285}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1175992}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0054.pdf} }
Matthew Dabin, Terumi Narushima, Stephen Beirne, Christian Ritz, and Kraig Grady. 2016. 3D Modelling and Printing of Microtonal Flutes. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 286–290. http://doi.org/10.5281/zenodo.1176014
Abstract
Download PDF DOI
This project explores the potential for 3D modelling and printing to create customised flutes that can play music in a variety of microtonal scales. One of the challenges in the field of microtonality is that conventional musical instruments are inadequate for realising the abundance of theoretical tunings that musicians wish to investigate. This paper focuses on the development of two types of flutes, the recorder and transverse flute, with interchangeable mouthpieces. These flutes are designed to play subharmonic microtonal scales. The discussion provides an overview of the design and implementation process, including calculation methods for acoustic modelling and 3D printing technologies, as well as an evaluation of some of the difficulties encountered. Results from our 3D printed flutes suggest that whilst further refinements are necessary in our designs, 3D modelling and printing techniques offer new and valuable methods for the design and production of customised musical instruments. The long term goal of this project is to create a system in which users can specify the tuning of their instrument to generate a 3D model and have it printed on demand.
@inproceedings{Dabin2016, author = {Dabin, Matthew and Narushima, Terumi and Beirne, Stephen and Ritz, Christian and Grady, Kraig}, title = {{3D} Modelling and Printing of Microtonal Flutes}, pages = {286--290}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176014}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0056.pdf} }
Alex Hofmann, Bernt Waerstad, and Kristoffer Koch. 2016. Csound Instruments On Stage. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 291–294. http://doi.org/10.5281/zenodo.1176030
Abstract
Download PDF DOI
Low cost, credit card size computers like the Raspberry Pi allow musicians to experiment with building software-based standalone musical instruments. The COSMO Project aims to provide an easy-to-use hardware and software framework to build Csound based instruments as hardware devices. Inside the instrument, the Csound software runs on a Raspberry Pi computer connected to a custom designed interface board (COSMO-HAT) that allows potentiometers, switches, LEDs, and sensors to be connected. A classic stomp box design is used to demonstrate how Csound can be brought on stage as a stand-alone hardware effect instrument.
@inproceedings{Hofmann2016a, author = {Hofmann, Alex and Waerstad, Bernt and Koch, Kristoffer}, title = {Csound Instruments On Stage}, pages = {291--294}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176030}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0057.pdf} }
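For readers curious how such a sensor-to-Csound link might look, here is a minimal sketch under stated assumptions (it is not the COSMO firmware): a potentiometer is read on a Raspberry Pi through an MCP3008 ADC with gpiozero and forwarded over OSC to a Csound instance listening with its OSClisten opcode; the /cosmo/pot1 address and port 9000 are illustrative choices.

import time
from gpiozero import MCP3008
from pythonosc.udp_client import SimpleUDPClient

pot = MCP3008(channel=0)                       # potentiometer wired to ADC channel 0
client = SimpleUDPClient("127.0.0.1", 9000)    # Csound listening for OSC on this port

while True:
    client.send_message("/cosmo/pot1", float(pot.value))  # pot.value is 0.0-1.0
    time.sleep(0.01)                           # roughly 100 Hz control rate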
Thomas Resch and Stefan Bilbao. 2016. Controlling complex virtuel instruments—A setup with note for Max and prepared piano sound synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 295–299. http://doi.org/10.5281/zenodo.1176108
Abstract
Download PDF DOI
This paper describes a setup for embedding complex virtual instruments, such as a physical model of prepared piano sound synthesis, in the sequencing library note~ for Max. Based on the requirements of contemporary music and media arts, note~ introduces computer-aided composition techniques and graphical user interfaces for sequencing and editing into the real-time world of Max/MSP. A piano roll, a microtonal musical score and the capability to attach floating-point lists of (theoretically) arbitrary length to a single note-on event enable artists to play, edit and record compound sound synthesis with the necessary precision.
@inproceedings{Resch2016, author = {Resch, Thomas and Bilbao, Stefan}, title = {Controlling complex virtuel instruments---A setup with note~ for Max and prepared piano sound synthesis}, pages = {295--299}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176108}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0058.pdf} }
Dom Brown, Nathan Renney, Adam Stark, Chris Nash, and Tom Mitchell. 2016. Leimu: Gloveless Music Interaction Using a Wrist Mounted Leap Motion. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 300–304. http://doi.org/10.5281/zenodo.1176000
Abstract
Download PDF DOI
Camera-based motion tracking has become a popular enabling technology for gestural human-computer interaction. However, the approach suffers from several limitations which have been shown to be particularly problematic when employed within musical contexts. This paper presents Leimu, a wrist mount that couples a Leap Motion optical sensor with an inertial measurement unit to combine the benefits of wearable and camera-based motion tracking. Leimu is designed, developed and then evaluated using discourse and statistical analysis methods. The results indicate that the Leimu is an effective interface for gestural music interaction and offers improved tracking precision over Leap Motion positioned on a table top.
@inproceedings{Brown2016, author = {Brown, Dom and Renney, Nathan and Stark, Adam and Nash, Chris and Mitchell, Tom}, title = {Leimu: Gloveless Music Interaction Using a Wrist Mounted Leap Motion}, pages = {300--304}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176000}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0059.pdf} }
Esteban Gómez and Javier Jaimovich. 2016. Designing a Flexible Workflow for Complex Real-Time Interactive Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 305–309. http://doi.org/10.5281/zenodo.1176018
Abstract
Download PDF DOI
This paper presents the design of a flexible Max/MSP workflow framework built for complex real-time interactive performances. This system was developed for Emovere, an interdisciplinary piece for dance, biosignals, sound and visuals, yet it was conceived to accommodate interactive performances of a different nature and with heterogeneous technical requirements, which we believe share a common underlying structure. The work presented in this document proposes a framework that takes care of the signal input/output stages, as well as storing and recalling presets and scenes, thus allowing the user to focus on the programming of interaction models and sound synthesis or sound processing. Results are presented with Emovere as an example case, discussing the advantages and further challenges that this framework offers for other performance scenarios.
@inproceedings{Gnicode243mez2016, author = {G\'{o}mez, Esteban and Jaimovich, Javier}, title = {Designing a Flexible Workflow for Complex Real-Time Interactive Performances}, pages = {305--309}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176018}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0060.pdf} }
Christina Volioti, Sotiris Manitsaris, Eleni Katsouli, and Athanasios Manitsaris. 2016. x2Gesture: how machines could learn expressive gesture variations of expert musicians. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 310–315. http://doi.org/10.5281/zenodo.1176137
Abstract
Download PDF DOI
There is a growing interest in ‘unlocking’ the motor skills of expert musicians. Motivated by this need, the main objective of this paper is to present a new way of modeling expressive gesture variations in musical performance. For this purpose, the 3D gesture recognition engine ‘x2Gesture’ (eXpert eXpressive Gesture) has been developed, inspired by the Gesture Variation Follower, which was initially designed and developed at IRCAM in Paris and then extended at Goldsmiths College in London. x2Gesture supports both learning of musical gestures and live performing, through gesture sonification, as a unified user experience. A deeper understanding of the expressive gestural variations makes it possible to define confidence bounds for the expert's gestures, which are used during the decoding phase of the recognition. The first experiments show promising results in terms of recognition accuracy and temporal alignment between template and performed gesture, which leads to better fluidity and immediacy in gesture sonification.
@inproceedings{Volioti2016, author = {Volioti, Christina and Manitsaris, Sotiris and Katsouli, Eleni and Manitsaris, Athanasios}, title = {x2Gesture: how machines could learn expressive gesture variations of expert musicians}, pages = {310--315}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176137}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0061.pdf} }
Javier Jaimovich. 2016. Emovere: Designing Sound Interactions for Biosignals and Dancers. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 316–320. http://doi.org/10.5281/zenodo.1176036
Abstract
Download PDF DOI
This paper presents the work developed for Emovere: an interactive real-time interdisciplinary performance that measures physiological signals from dancers to drive a piece that explores and reflects around the biology of emotion. This document focuses on the design of a series of interaction modes and materials that were developed for this performance, and are believed to be a contribution for the creation of artistic projects that work with dancers and physiological signals. The paper introduces the motivation and theoretical framework behind this project, to then deliver a detailed description and analysis of four different interaction modes built to drive this performance using electromyography and electrocardiography. Readers will find a discussion of the results obtained with these designs, as well as comments on future work.
@inproceedings{Jaimovich2016, author = {Jaimovich, Javier}, title = {Emovere: Designing Sound Interactions for Biosignals and Dancers}, pages = {316--320}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176036}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0062.pdf} }
Ene Alicia Søderberg, Rasmus Emil Odgaard, Sarah Bitsch, et al. 2016. Music Aid—Towards a Collaborative Experience for Deaf and Hearing People in Creating Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 321–326. http://doi.org/10.5281/zenodo.1176112
Abstract
Download PDF DOI
This paper explores the possibility of breaking the barrier between deaf and hearing people when it comes to making music. Suggestions are presented on how deaf and hearing people can collaborate in creating music together. The research focuses on deaf people with a general interest in music as well as hearing musicians as target groups. Through reviewing different related research areas, it is found that visualization of sound along with haptic feedback can help deaf people interpret and interact with music. With this in mind, three variations of a collaborative user interface are presented, in which deaf and hearing people are meant to collaborate in creating short beats and melody sequences. Through evaluating the three prototypes with two deaf people and two hearing musicians, it is found that the target groups can collaborate to some extent in creating beats. However, in order for the target groups to create melodic sequences together in a satisfactory manner, more detailed visualization and distributed haptic output are necessary, mostly because the deaf test participants struggle to distinguish between higher pitch and timbre.
@inproceedings{Snicode248derberg2016, author = {S{\o}derberg, Ene Alicia and Odgaard, Rasmus Emil and Bitsch, Sarah and H{\o}eg-Jensen, Oliver and Christensen, Nikolaj Schildt and Poulsen, S{\o}ren Dahl and Gelineck, Steven}, title = {Music Aid---Towards a Collaborative Experience for Deaf and Hearing People in Creating Music}, pages = {321--326}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176112}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0063.pdf} }
Jeppe Veirum Larsen, Dan Overholt, and Thomas B. Moeslund. 2016. The Prospects of Musical Instruments For People with Physical Disabilities. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 327–331. http://doi.org/10.5281/zenodo.1176056
Abstract
Download PDF DOI
Many forms of enabling technologies exist today. While technologies aimed at enabling basic tasks in everyday life (locomotion, eating, etc.) are more common, musical instruments for people with disabilities can provide a chance for emotional enjoyment, as well as improve physical conditions through therapeutic use. The field of musical instruments for people with physical disabilities, however, is still an emerging area of research. In this article, we look at the current state of developments, including a survey of custom designed instruments, augmentations / modifications of existing instruments, music-supported therapy, and recent trends in the area. The overview is extrapolated to look at where the research is headed, providing insights for potential future work.
@inproceedings{Larsen2016, author = {Larsen, Jeppe Veirum and Overholt, Dan and Moeslund, Thomas B.}, title = {The Prospects of Musical Instruments For People with Physical Disabilities}, pages = {327--331}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176056}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0064.pdf} }
Christopher Benson, Bill Manaris, Seth Stoudenmier, and Timothy Ward. 2016. SoundMorpheus: A Myoelectric-Sensor Based Interface for Sound Spatialization and Shaping. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 332–337. http://doi.org/10.5281/zenodo.1175982
Abstract
Download PDF DOI
We present an innovative sound spatialization and shaping interface, called SoundMorpheus, which allows the placement of sounds in space, as well as the altering of sound characteristics, via arm movements that resemble those of a conductor. The interface displays sounds (or their attributes) to the user, who reaches for them with one or both hands, grabs them, and gently or forcefully sends them around in space, in a 360° circle. The system combines MIDI and traditional instruments with one or more myoelectric sensors. These components may be physically collocated or distributed in various locales connected via the Internet. This system also supports the performance of acousmatic and electronic music, enabling performances where the traditionally central mixing board need not be touched at all (or touched only minimally for calibration). Finally, the system may facilitate the recording of a visual score of a performance, which can be stored for later playback and additional manipulation. We present three projects that utilize SoundMorpheus and demonstrate its capabilities and potential.
@inproceedings{Benson2016, author = {Benson, Christopher and Manaris, Bill and Stoudenmier, Seth and Ward, Timothy}, title = {SoundMorpheus: A Myoelectric-Sensor Based Interface for Sound Spatialization and Shaping}, pages = {332--337}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1175982}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0065.pdf} }
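The placement of a sound on a 360° circle, as described above, can be illustrated with a generic equal-power panning sketch over a ring of loudspeakers. This is a standard technique shown here for orientation only, not the SoundMorpheus implementation; in practice the azimuth would come from the myoelectric arm-tracking layer.

import numpy as np

def ring_gains(azimuth_deg, n_speakers=8):
    """Equal-power gains for the two loudspeakers adjacent to an azimuth on a ring."""
    gains = np.zeros(n_speakers)
    spacing = 360.0 / n_speakers
    pos = (azimuth_deg % 360.0) / spacing       # fractional speaker index
    lo = int(np.floor(pos)) % n_speakers
    hi = (lo + 1) % n_speakers
    frac = pos - np.floor(pos)
    gains[lo] = np.cos(frac * np.pi / 2)        # equal-power crossfade between neighbours
    gains[hi] = np.sin(frac * np.pi / 2)
    return gains

print(np.round(ring_gains(100.0), 3))           # energy shared by speakers 2 and 3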
Gorkem Ozdemir, Anil Camci, and Angus Forbes. 2016. PORTAL: An Audiovisual Laser Performance System. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 338–343. http://doi.org/10.5281/zenodo.1176102
Abstract
Download PDF DOI
PORTAL is an interactive performance tool that uses a laser projector to visualize computer-generated audio signals. In this paper, we first offer an overview of earlier work on audiovisual and laser art that inspired the current project. We then discuss our own implementation, focusing not only on the technical issues related to the use of a laser projector in an artistic context, but also on the aesthetic considerations in dealing with the translation of sounds into visuals, and vice versa. We provide detailed descriptions of our hardware implementation, our software system, and its desktop and mobile interfaces, which are made available online. Finally, we offer the results of a user study we conducted in the form of an interactive online survey on audience perception of the relationship between analogous sounds and visuals, which was explored as part of our performance practice.
@inproceedings{Ozdemir2016, author = {Ozdemir, Gorkem and Camci, Anil and Forbes, Angus}, title = {PORTAL: An Audiovisual Laser Performance System}, pages = {338--343}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176102}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0066.pdf} }
Rikard Lindell, Koray Tahiroglu, Morten Riis, and Jennie Schaeffer. 2016. Materiality for Musical Expressions: an Approach to Interdisciplinary Syllabus Development for NIME. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 344–349. http://doi.org/10.5281/zenodo.1176066
Abstract
Download PDF DOI
We organised an eleven-day intensive course in materiality for musical expressions to explore underlying principles of New Interfaces for Musical Expression (NIME) in higher education. We grounded the course in different aspects of materiality and gathered interdisciplinary student teams from three Nordic universities. Electronic music instrument makers participated in providing the course. In eleven days the students designed and built interfaces for musical expressions, composed a piece, and performed at the Norberg electronic music festival. The students explored the relationship between technology and possible musical expression with a strong connection to culture and place. The emphasis on performance provided closure and motivated teams to move forward in their design and artistic processes. On the basis of the course we discuss an interdisciplinary NIME course syllabus, and we infer that it benefits from grounding in materiality and in place, with a strong reference to culture.
@inproceedings{Lindell2016, author = {Lindell, Rikard and Tahiroglu, Koray and Riis, Morten and Schaeffer, Jennie}, title = {Materiality for Musical Expressions: an Approach to Interdisciplinary Syllabus Development for NIME}, pages = {344--349}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176066}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0067.pdf} }
Marcelo Gimenes, Pierre-Emmanuel Largeron, and Eduardo Miranda. 2016. Frontiers: Expanding Musical Imagination With Audience Participation. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 350–354. http://doi.org/10.5281/zenodo.1176020
Abstract
Download PDF DOI
This paper introduces Performance Without Borders and Embodied iSound, two sound installations performed at the 2016 Peninsula Arts Contemporary Music Festival at Plymouth University. Both installations use smartphones to afford real-time audience participation and are driven by bespoke distributed computer systems (Sherwell and Levinsky Music, respectively). Whilst the first implements a cloud-based voting system, the second implements movement tracking and iBeacon-based indoor positioning to control the choice of soundtracks, audio synthesis, and surround sound positioning, among other parameters. The general concepts of the installations are presented, in particular the design and interactive possibilities afforded by the computer systems.
@inproceedings{Gimenes2016, author = {Gimenes, Marcelo and Largeron, Pierre-Emmanuel and Miranda, Eduardo}, title = {Frontiers: Expanding Musical Imagination With Audience Participation}, pages = {350--354}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176020}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0068.pdf} }
Kevin Schlei, Chris Burns, and Aidan Menuge. 2016. PourOver: A Sensor-Driven Generative Music Platform. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 355–358. http://doi.org/10.5281/zenodo.1176114
Abstract
Download PDF DOI
The PourOver Sensor Framework is an open iOS framework designed to connect iOS control sources (hardware sensors, user input, custom algorithms) to an audio graph’s parameters. The design of the framework, motivation, and use cases are discussed. The framework is demonstrated in an end-user friendly iOS app PourOver, in which users can run Pd patches with easy access to hardware sensors and iOS APIs.
@inproceedings{Schlei2016, author = {Schlei, Kevin and Burns, Chris and Menuge, Aidan}, title = {PourOver: A Sensor-Driven Generative Music Platform}, pages = {355--358}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176114}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0069.pdf} }
Abram Hindle. 2016. Hacking NIMEs. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 359–364. http://doi.org/10.5281/zenodo.1176026
Abstract
Download PDF DOI
NIMEs typically focus on novelty but the cost of novelty is often to ignore other non-functional requirements and concerns such as usability or security. Digital security has probably not been a concern for performers due to the duration of their performances and lack of disrespectful hackers, known as crackers, in attendance carrying the appropriate equipment and software necessary to hack a performance. Yet many modern NIMEs could be hacked from smart-phones in the audience. The lack of security hardening makes NIMEs an easy target — but a question arises: if hacking can interrupt or modify a performance couldn’t hacking itself also be performance? Thus would music hacking, live-hacking, be similar to live-coding? In this paper we discuss how NIMEs are in danger of being hacked, and yet how hacking can be an act of performance too.
@inproceedings{Hindle2016, author = {Hindle, Abram}, title = {Hacking NIMEs}, pages = {359--364}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176026}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0070.pdf} }
Sergi Jordà, Daniel Gómez-Marín, Ángel Faraldo, and Perfecto Herrera. 2016. Drumming with style: From user needs to a working prototype. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 365–370. http://doi.org/10.5281/zenodo.1176048
Abstract
Download PDF DOI
This paper presents a generative drumming agent built from the results of an extensive survey carried out with electronic music producers, in two phases. Following the techniques of user-centered interaction design, an international group of beat producers was surveyed on the possibility of using AI algorithms to help them in the beat production workflow. The analyzed results of these tests were used as design requirements for constructing a system that would indeed perform some tasks alongside the producer. The first results of this working prototype are presented with a description of the system. The prototype is a stylistic drum generator that creates new rhythmic patterns after being trained with a collection of drum tracks. Further stages of development and potential algorithms are discussed.
@inproceedings{Jordnicode2252016, author = {Jord\`{a}, Sergi and G\'{o}mez-Mar\'{i}n, Daniel and \'{A}ngel Faraldo and Herrera, Perfecto}, title = {Drumming with style: From user needs to a working prototype}, pages = {365--370}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176048}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0071.pdf} }
Oliver Bown and Sam Ferguson. 2016. A Musical Game of Bowls Using the DIADs. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 371–372. http://doi.org/10.5281/zenodo.1175998
Abstract
Download PDF DOI
We describe a project in which a game of lawn bowls was recreated using Distributed Interactive Audio Devices (DIADs), to create an interactive musical experience in the form of a game. This paper details the design of the underlying digital music system, some of the compositional and design considerations, and the technical challenges involved. We discuss future directions for our system and compositional method.
@inproceedings{Bown2016, author = {Bown, Oliver and Ferguson, Sam}, title = {A Musical Game of Bowls Using the DIADs}, pages = {371--372}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Demonstrations}, doi = {10.5281/zenodo.1175998}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0072.pdf} }
Benjamin James Eyes and Laurits Esben Jongejan. 2016. How to Stop Sound: Creating a light instrument and ‘Interruption’ a piece for the Mimerlaven, Norberg Festival 2015. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 373–374. http://doi.org/10.5281/zenodo.1176016
Abstract
Download PDF DOI
During an electronic music performance it is common to see light and sound interacting electronically in many different ways, from sound and light shows in which light reacts to sound, to generated visuals projected onto a screen behind the performer. However, we asked: what if we could convert sound to light and back again, and control sound with light? Inspired by the huge acoustic of the Mimerlaven at Norberg Festival, we built a ‘light instrument’ that allowed us to interrupt and disrupt sound using light, forming the basis of our piece ‘Interruption’.
@inproceedings{Eyes2016, author = {Eyes, Benjamin James and Jongejan, Laurits Esben}, title = {How to Stop Sound: Creating a light instrument and `Interruption' a piece for the Mimerlaven, Norberg Festival 2015.}, pages = {373--374}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Demonstrations}, doi = {10.5281/zenodo.1176016}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0073.pdf} }
Cat Hope, Stuart James, and Aaron Wyatt. 2016. Headline grabs for music: The development of the iPad score generator for ‘Loaded (NSFW).’ Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 375–376. http://doi.org/10.5281/zenodo.1176032
Abstract
Download PDF DOI
This paper-demonstration provides an overview of a generative music score adapted for the iPad by the Decibel new music ensemble. The original score ‘Loaded (NSFW)’ (2015) is by Western Australian composer Laura Jane Lowther and is scored for ensemble and electronics, commissioned for a performance in April 2015 at the Perth Institute of Contemporary Arts. It engages and develops the Decibel Score Player application, a score reader and generator for the iPad, as a tool for displaying an interactive score that requires performers to react to news headlines through musical means. The paper introduces the concept for the player, how it was developed, and how it was used in the premiere performance. The associated demonstration shows how the score appears on the iPads.
@inproceedings{Hope2016, author = {Hope, Cat and James, Stuart and Wyatt, Aaron}, title = {Headline grabs for music: The development of the iPad score generator for `Loaded (NSFW)'}, pages = {375--376}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Demonstrations}, doi = {10.5281/zenodo.1176032}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0074.pdf} }
Benjamin Carey and Andrew Johnston. 2016. Reflection On Action in NIME Research: Two Complementary Perspectives. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 377–382. http://doi.org/10.5281/zenodo.1176006
Abstract
Download PDF DOI
This paper discusses practice-based research in the context of live performance with interactive systems. We focus on two approaches, both of which are concerned with documenting, examining and reflecting on the real-world behaviours and experiences of people and artefacts involved in the creation of new works. The first approach is primarily based on reflections by an individual performer/developer (auto-ethnography) and the second on interviews and observations. The rationales for both approaches are presented along with findings from research which applied them in order to illustrate and explore the characteristics of both. Challenges, including the difficulty of balancing rigour and relevance and the risks of negatively impacting on creative practices are articulated, as are the potential benefits.
@inproceedings{Carey2016, author = {Carey, Benjamin and Johnston, Andrew}, title = {Reflection On Action in NIME Research: Two Complementary Perspectives}, pages = {377--382}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176006}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0075.pdf} }
Càrthach Ó Nuanàin, Sergi Jordà, and Perfecto Herrera. 2016. An Interactive Software Instrument for Real-time Rhythmic Concatenative Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 383–387. http://doi.org/10.5281/zenodo.1176094
Abstract
Download PDF DOI
In this paper we describe an approach for generating and visualising new rhythmic patterns from existing audio in real-time using concatenative synthesis. We introduce a graph-based model enabling novel visualisation and manipulation of new patterns that mimic the rhythmic and timbral character of an existing target seed pattern, using a separate database of palette sounds. We describe our approach, reporting on features that may be useful for describing rhythm-related units of sound and how these units can be projected into two-dimensional space for visualisation using dimensionality-reduction and clustering techniques. We conclude the paper with our qualitative appraisal of using the interface and outline scope for future work.
@inproceedings{Nuannicode225in2016, author = {Nuan\`{a}in, C\`{a}rthach \'{O} and Jord\`{a}, Sergi and Herrera, Perfecto}, title = {An Interactive Software Instrument for Real-time Rhythmic Concatenative Synthesis}, pages = {383--387}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176094}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0076.pdf} }
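As a rough illustration of the kind of pipeline this abstract outlines (describing sound units by timbral features, projecting them into two dimensions, and clustering them), the sketch below uses a toy feature set with PCA and k-means from scikit-learn. The feature choices and library calls are assumptions made for illustration, not the authors' implementation.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def describe_units(units, sr=44100):
    # Toy timbral description of each sound unit: RMS loudness and spectral centroid.
    feats = []
    for u in units:
        spectrum = np.abs(np.fft.rfft(u))
        freqs = np.fft.rfftfreq(len(u), 1.0 / sr)
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
        rms = np.sqrt(np.mean(u ** 2))
        feats.append([rms, centroid])
    return np.array(feats)

def project_and_cluster(features, n_clusters=4):
    # Project the unit descriptors to 2D for visualisation and group similar units.
    xy = PCA(n_components=2).fit_transform(features)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(xy)
    return xy, labels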
Andrew J. Milne, Steffen A. Herff, David Bulger, William A. Sethares, and Roger T. Dean. 2016. XronoMorph: Algorithmic Generation of Perfectly Balanced and Well-Formed Rhythms. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 388–393. http://doi.org/10.5281/zenodo.1176082
Abstract
Download PDF DOI
We present XronoMorph, an application for the algorithmic generation of rhythms in the context of creative composition and performance, and of musical analysis and education. XronoMorph makes use of visual and geometrical conceptualizations of rhythms, and allows the user to smoothly morph between rhythms. Sonification of the user-generated geometrical constructs is possible using a built-in sampler, VST and AU plugins, or standalone synthesizers via MIDI. The algorithms are based on two underlying mathematical principles: perfect balance and well-formedness, both of which can be derived from coefficients of the discrete Fourier transform of the rhythm. The mathematical background, musical implications, and their implementation in the software are discussed.
@inproceedings{Milne2016, author = {Milne, Andrew J. and Herff, Steffen A. and Bulger, David and Sethares, William A. and Dean, Roger T.}, title = {XronoMorph: Algorithmic Generation of Perfectly Balanced and Well-Formed Rhythms}, pages = {388--393}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176082}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0077.pdf} }
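The "perfect balance" property mentioned in the abstract has a compact numerical reading: place the onsets of a cyclic rhythm on the unit circle; the rhythm is perfectly balanced when the onset vectors sum to zero, i.e. when the first DFT coefficient of the onset pattern vanishes. A minimal sketch of that check follows; it is an illustration only, not XronoMorph's code.

import numpy as np

def balance(onsets, period):
    # Balance of a cyclic rhythm in [0, 1]; 1.0 means perfectly balanced
    # (the onset vectors on the unit circle cancel out exactly).
    angles = 2 * np.pi * np.asarray(onsets) / period
    mean_vector = np.mean(np.exp(1j * angles))  # first DFT coefficient / N
    return 1.0 - np.abs(mean_vector)

print(balance([0, 4, 8, 12, 16], 20))  # regular pentagon: ~1.0
print(balance([0, 1, 2, 3], 20))       # clustered onsets: well below 1.0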
Lindsay Vickery. 2016. Rhizomatic approaches to screen-based music notation. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 394–400. http://doi.org/10.5281/zenodo.1176133
Abstract
Download PDF DOI
The rhizome concept explored by Deleuze and Guattari has had an important influence on formal thinking in music and new media. This paper explores the development of rhizomatic musical scores that are arranged cartographically with nodal points allowing for alternate pathways to be traversed. The challenges of pre-digital exemplars of rhizomatic structure are discussed. It follows the development of concepts and technology used in the creation of five works by the author Ubahn c. 1985: the Rosenberg Variations [2012], The Last Years [2012], Sacrificial Zones [2014], detritus [2015] and trash vortex [2015]. The paper discusses the potential for the evolution of novel formal structures using rhizomatic structures.
@inproceedings{Vickery2016, author = {Vickery, Lindsay}, title = {Rhizomatic approaches to screen-based music notation}, pages = {394--400}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176133}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0078.pdf} }
Stuart James. 2016. A Multi-Point 2D Interface: Audio-Rate Signals for Controlling Complex Multi-Parametric Sound Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 401–406. http://doi.org/10.5281/zenodo.1176040
Abstract
Download PDF DOI
This paper documents a method of controlling complex sound synthesis processes such as granular synthesis, additive synthesis, timbre morphology, swarm-based spatialisation, spectral spatialisation, and timbre spatialisation via a multi-parametric 2D interface. This paper evaluates the use of audio-rate control signals for sound synthesis, and discusses approaches to de-interleaving, synchronization, and mapping. The paper also outlines a number of ways of extending the expressivity of such a control interface by coupling this with another 2D multi-parametric nodes interface and audio-rate 2D table lookup. The paper proceeds to review methods of navigating multi-parameter sets via interpolation and transformation. Some case studies are finally discussed in the paper. The author has used this method to control complex sound synthesis processes that require control data for more than a thousand parameters.
@inproceedings{James2016, author = {James, Stuart}, title = {A Multi-Point {2D} Interface: Audio-Rate Signals for Controlling Complex Multi-Parametric Sound Synthesis}, pages = {401--406}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176040}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0079.pdf} }
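One way to picture the de-interleaving mentioned in the abstract: several control streams are packed sample by sample into a single audio-rate signal, and the receiver reshapes that signal back into its component streams. The sketch below is a generic illustration of that idea under assumed framing, not the author's implementation.

import numpy as np

def interleave(control_frames):
    # Pack per-frame control values, e.g. [[x1, y1, x2, y2], ...],
    # into a single audio-rate signal by interleaving the samples.
    return np.asarray(control_frames, dtype=float).ravel()

def deinterleave(signal, n_channels):
    # Recover the separate control streams from the interleaved signal.
    return signal.reshape(-1, n_channels).T

frames = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
sig = interleave(frames)
print(deinterleave(sig, 4))  # four control streams, two samples each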
Dominik Schlienger. 2016. Acoustic Localisation for Spatial Reproduction of Moving Sound Source: Application Scenarios & Proof of Concept. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 407–412. http://doi.org/10.5281/zenodo.1176116
Abstract
Download PDF DOI
Despite the near-ubiquitous availability of interfaces for spatial interaction, standard audio spatialisation technology makes very little use of them. In fact, we find that audio technology often impedes spatial interaction. In the workshop on music, space and interaction we thus developed the idea of a real-time panning whereby a moving sound source is reproduced as a virtual source on a panning trajectory. We define a series of application scenarios where we describe in detail what functionality is required to inform an implementation. In our earlier work we showed that Acoustic Localisation (AL) can potentially provide a pervasive technique for spatially interactive audio applications. Playing through the application scenarios with AL in mind provides interesting approaches. For one scenario we show an example implementation as proof of concept.
@inproceedings{Schlienger2016, author = {Schlienger, Dominik}, title = {Acoustic Localisation for Spatial Reproduction of Moving Sound Source: Application Scenarios \& Proof of Concept}, pages = {407--412}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176116}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0080.pdf} }
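A common building block for acoustic localisation of this kind is time-difference-of-arrival estimation between microphone pairs. The following is a generic textbook sketch (cross-correlation delay estimate, far-field angle), offered only to illustrate the technique the abstract refers to; it is not the author's system.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def estimate_delay(sig_a, sig_b, sr):
    # Time by which sig_a lags sig_b, from the peak of their cross-correlation.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / sr

def direction_of_arrival(delay_seconds, mic_distance):
    # Far-field angle (radians from broadside) for a two-microphone pair.
    x = np.clip(delay_seconds * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return np.arcsin(x)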
Sean Soraghan, Alain Renaud, and Ben Supper. 2016. Towards a perceptual framework for interface design in digital environments for timbre manipulation. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 413–418. http://doi.org/10.5281/zenodo.1176129
Abstract
Download PDF DOI
Many commercial software applications for timbre creation and manipulation feature an engineering-focused, parametric layout. This paper argues the case for a perceptually motivated approach to interface design in such tools. ‘Perceptually motivated’ in this context refers to the use of common semantic timbre descriptors to influence the digital representation of timbre. A review is given of existing research into semantic descriptors of timbre, as well as corresponding acoustic features of timbre. Discussion is also given on existing interface design techniques. The perceptually motivated approach to interface design is demonstrated using an example system, which makes use of perceptually relevant mappings from acoustic timbre features to semantic timbre descriptors and visualises sounds as physical objects.
@inproceedings{Soraghan2016, author = {Soraghan, Sean and Renaud, Alain and Supper, Ben}, title = {Towards a perceptual framework for interface design in digital environments for timbre manipulation}, pages = {413--418}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176129}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0081.pdf} }
Sarah Reid, Ryan Gaston, Colin Honigman, and Ajay Kapur. 2016. Minimally Invasive Gesture Sensing Interface (MIGSI) for Trumpet. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 419–424. http://doi.org/10.5281/zenodo.1176106
Abstract
Download PDF DOI
This paper describes the design of a Minimally Invasive Gesture Sensing Interface (MIGSI) for trumpet. The interface attaches effortlessly to any B-flat or C trumpet and requires no permanent modifications to the host instrument. It was designed first and foremost with accessibility in mind, an approach that is uncommon in augmented instrument design, and seeks to strike a balance between minimal design and robust control. MIGSI uses sensor technology to capture gestural data such as valve displacement, hand tension, and instrument position, to offer extended control and expressivity to trumpet players. Several streams of continuous data are transmitted wirelessly from MIGSI to the receiving computer, where the MIGSI Mapping application (a simple graphical user interface) parses the incoming data into individually accessible variables. It is our hope that MIGSI will be adopted by trumpet players and composers, and that over time a new body of repertoire for the augmented trumpet will emerge.
@inproceedings{Reid2016, author = {Reid, Sarah and Gaston, Ryan and Honigman, Colin and Kapur, Ajay}, title = {Minimally Invasive Gesture Sensing Interface (MIGSI) for Trumpet}, pages = {419--424}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176106}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0082.pdf} }
Garth Paine. 2016. Now. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 425–426. http://doi.org/10.5281/zenodo.1176104
Abstract
Download PDF DOI
The question of sound as an experience of now, as a conduit to the quality of our belonging to the present, is challenging. Yet it is a crucial issue in discussions about ecological listening. I have come to think of sound as a viscous material, a vibrating field of energy that has texture and density and a physicality that is unlike most other media. Now suggests a desire of becoming present in the resonating sound field of our immediate environment. The energy in the field constantly modulates and drifts. I draw on voices and forces from the natural environment, humans and machines. The work seeks to draw the listeners into an inner space in which they can be both present and aware of their sonic environment and become immersed in it. Now is partly inspired by Samuel Beckett’s novel Watt, specifically Watt’s mysterious journey into the unknown.
@inproceedings{Paine2016, author = {Paine, Garth}, title = {Now}, pages = {425--426}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176104}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0083.pdf} }
R. Benjamin Shapiro, Rebecca Fiebrink, Matthew Ahrens, and Annie Kelly. 2016. BlockyTalky: A Physical and Distributed Computer Music Toolkit for Kids. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 427–432. http://doi.org/10.5281/zenodo.1176120
Abstract
Download PDF DOI
NIME research realizes a vision of performance by means of computational expression, linking body and space to sound and imagery through eclectic forms of sensing and interaction. This vision could dramatically impact computer science education, simultaneously modernizing the field and drawing in diverse new participants. We describe our work creating a NIME-inspired computer music toolkit for kids called BlockyTalky; the toolkit enables users to create networks of sensing devices and synthesizers. We offer findings from our research on student learning through programming and performance. We conclude by suggesting a number of future directions for NIME researchers interested in education.
@inproceedings{Shapiro2016, author = {Shapiro, R. Benjamin and Fiebrink, Rebecca and Ahrens, Matthew and Kelly, Annie}, title = {BlockyTalky: A Physical and Distributed Computer Music Toolkit for Kids}, pages = {427--432}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176120}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0084.pdf} }
John Bowers, John Richards, Tim Shaw, et al. 2016. One Knob To Rule Them All: Reductionist Interfaces for Expansionist Research. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 433–438. http://doi.org/10.5281/zenodo.1175996
Abstract
Download PDF DOI
This paper describes an instance of what we call ‘curated research’, a concerted thinking, making and performance activity between two research teams with a dedicated interest in the creation of experimental musical instruments and the development of new performance practices. Our work builds theoretically upon critical work in philosophy, anthropology and aesthetics, and practically upon previous explorations of strategies for facilitating rapid, collaborative, publicly-oriented making in artistic settings. We explored an orientation to making which promoted the creation of a family of instruments and performance environments that were responses to the self-consciously provocative theme of ‘One Knob To Rule Them All’. A variety of design issues were explored including: mapping, physicality, the question of control in interface design, reductionist aesthetics and design strategies, and questions of gender and power in musical culture. We discuss not only the technologies which were made but also reflect on the value of such concerted, provocatively thematised, collective making activities for addressing foundational design issues. As such, our work is intended not just as a technical and practical contribution to NIME but also a reflective provocation into how we conduct research itself in a curated critical manner.
@inproceedings{Bowers2016, author = {Bowers, John and Richards, John and Shaw, Tim and Frieze, Jim and Freeth, Ben and Topley, Sam and Spowage, Neal and Jones, Steve and Patel, Amit and Rui, Li}, title = {One Knob To Rule Them All: Reductionist Interfaces for Expansionist Research}, pages = {433--438}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1175996}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0085.pdf} }
Alexander Refsum Jensenius and Michael J. Lyons. 2016. Trends at NIME—Reflections on Editing A NIME Reader. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 439–443. http://doi.org/10.5281/zenodo.1176044
Abstract
Download PDF DOI
This paper provides an overview of the process of editing the forthcoming anthology A NIME Reader—Fifteen years of New Interfaces for Musical Expression. The selection process is presented, and we reflect on some of the trends we have observed in re-discovering the collection of more than 1200 NIME papers published throughout the fifteen-year history of the conference. An anthology is necessarily selective, and ours is no exception. As we present in this paper, the aim has been to represent the wide range of artistic, scientific, and technological approaches that characterize the NIME conference. The anthology also includes critical discourse, and through acknowledgment of the strengths and weaknesses of the NIME community, we propose activities which could further diversify and strengthen the field.
@inproceedings{Jensenius2016, author = {Jensenius, Alexander Refsum and Lyons, Michael J.}, title = {Trends at NIME---Reflections on Editing A NIME Reader}, pages = {439--443}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176044}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0086.pdf} }
Koray Tahiroglu, Juan Carlos Vasquez, and Johan Kildal. 2016. Non-intrusive Counter-actions: Maintaining Progressively Engaging Interactions for Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Queensland Conservatorium Griffith University, pp. 444–449. http://doi.org/10.5281/zenodo.1176131
Abstract
Download PDF DOI
In this paper we present the new development of a semi-autonomous response module for the NOISA system. NOISA is an interactive music system that predicts the performer’s engagement levels, learns from the performer, decides what to do and does it at the right moment. As an improvement for the above, we implemented real-time adaptive features that respond to a detailed monitoring of the performer’s engagement and to the overall sonic space, while evaluating the impact of its actions. Through these new features, the response module produces meaningful and non-intrusive counter-actions, attempting to deepen and maintain the performer’s engagement in musical interaction. In a formative study we compared our designed response module against a random control system of events, and the former performed consistently better than the latter.
@inproceedings{Tahironicode287lu2016, author = {Tahiroglu, Koray and Vasquez, Juan Carlos and Kildal, Johan}, title = {Non-intrusive Counter-actions: Maintaining Progressively Engaging Interactions for Music Performance}, pages = {444--449}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2016}, publisher = {Queensland Conservatorium Griffith University}, address = {Brisbane, Australia}, isbn = {978-1-925455-13-7}, issn = {2220-4806}, track = {Papers}, doi = {10.5281/zenodo.1176131}, url = {http://www.nime.org/proceedings/2016/nime2016_paper0087.pdf} }
2015
Chris Korda. 2015. ChordEase: A MIDI remapper for intuitive performance of non-modal music. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 322–324. http://doi.org/10.5281/zenodo.1179110
Abstract
Download PDF DOI
Improvising to non-modal chord progressions such as those found in jazz necessitates switching between the different scales implied by each chord. This work attempted to simplify improvisation by delegating the process of switching scales to a computer. An open-source software MIDI remapper called ChordEase was developed that dynamically alters the pitch of notes, in order to fit them to the chord scales of a predetermined song. ChordEase modifies the behavior of ordinary MIDI instruments, giving them new interfaces that permit non-modal music to be approached as if it were modal. Multiple instruments can be remapped simultaneously, using a variety of mapping functions, each optimized for a particular musical role. Harmonization and orchestration can also be automated. By facilitating the selection of scale tones, ChordEase enables performers to focus on other aspects of improvisation, and thus creates new possibilities for musical expression.
@inproceedings{ckorda2015, author = {Korda, Chris}, title = {ChordEase: A {MIDI} remapper for intuitive performance of non-modal music}, pages = {322--324}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179110}, url = {http://www.nime.org/proceedings/2015/nime2015_103.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/103/0103-file1.avi}, urlsuppl2 = {http://www.nime.org/proceedings/2015/103/0103-file2.avi} }
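The remapping principle described in the abstract can be reduced, for illustration, to snapping each incoming MIDI note onto the nearest tone of the scale or chord currently in force in the song. This is only a hypothetical sketch of that principle; ChordEase's actual mapping functions are richer.

def snap_to_scale(midi_note, scale_degrees, root):
    # Return the scale tone nearest to the played MIDI note.
    # scale_degrees are semitone offsets from the root, e.g. the chord
    # tones of a minor 7 chord: [0, 3, 7, 10].
    best = None
    for octave in range(-1, 10):
        for degree in scale_degrees:
            candidate = root + 12 * octave + degree
            if 0 <= candidate <= 127 and (
                    best is None or abs(candidate - midi_note) < abs(best - midi_note)):
                best = candidate
    return best

# While a D minor 7 chord is in force, a played F# (66) is pulled to F (65):
print(snap_to_scale(66, [0, 3, 7, 10], 62))  # 65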
Mohammad Akbari and Howard Cheng. 2015. claVision: Visual Automatic Piano Music Transcription. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 313–314. http://doi.org/10.5281/zenodo.1179002
Abstract
Download PDF DOI
One important problem in Musical Information Retrieval is Automatic Music Transcription, which is an automated conversion process from played music to a symbolic notation such as sheet music. Since the accuracy of previous audio-based transcription systems is not satisfactory, we propose an innovative visual-based automatic music transcription system named claVision to perform piano music transcription. Instead of processing the music audio, the system performs the transcription only from the video performance captured by a camera mounted over the piano keyboard. claVision can be used as a transcription tool, but it also has other applications such as music education. The claVision software has a very high accuracy (over 95%) and a very low latency in real-time music transcription, even under different illumination conditions.
@inproceedings{makbari2015, author = {Akbari, Mohammad and Cheng, Howard}, title = {claVision: Visual Automatic Piano Music Transcription}, pages = {313--314}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179002}, url = {http://www.nime.org/proceedings/2015/nime2015_105.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/105/0105-file1.avi} }
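As a toy illustration of visual (rather than audio) transcription, the sketch below flags a key as pressed when its image region differs sufficiently from a reference frame of the unplayed keyboard. This frame-differencing toy is an assumption made for illustration and is not claVision's actual computer-vision pipeline.

import numpy as np

def pressed_keys(frame, reference, key_regions, threshold=12.0):
    # frame, reference: greyscale images of the keyboard as 2D arrays.
    # key_regions: {midi_note: (row0, row1, col0, col1)} bounding box per key.
    notes = []
    for note, (r0, r1, c0, c1) in key_regions.items():
        region = frame[r0:r1, c0:c1].astype(float)
        rest = reference[r0:r1, c0:c1].astype(float)
        if np.mean(np.abs(region - rest)) > threshold:
            notes.append(note)  # this key looks different from its rest state
    return notes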
Jan C. Schacher, Chikashi Miyama, and Daniel Bisig. 2015. Gestural Electronic Music using Machine Learning as Generative Device. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 347–350. http://doi.org/10.5281/zenodo.1179172
Abstract
Download PDF DOI
When performing with gestural devices in combination with machine learning techniques, a mode of high-level interaction can be achieved. The methods of machine learning and pattern recognition can be re-appropriated to serve as a generative principle. The goal is not classification but reaction from the system in an interactive and autonomous manner. This investigation looks at how machine learning algorithms fit generative purposes and what independent behaviours they enable. To this end we describe artistic and technical developments made to leverage existing machine learning algorithms as generative devices and discuss their relevance to the field of gestural interaction.
@inproceedings{jschacher2015, author = {Schacher, {Jan C.} and Miyama, Chikashi and Bisig, Daniel}, title = {Gestural Electronic Music using Machine Learning as Generative Device}, pages = {347--350}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179172}, url = {http://www.nime.org/proceedings/2015/nime2015_117.pdf} }
Stefano Papetti, Sébastien Schiesser, and Martin Fröhlich. 2015. Multi-point vibrotactile feedback for an expressive musical interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 235–240. http://doi.org/10.5281/zenodo.1179152
Abstract
Download PDF DOI
This paper describes the design of a hardware/software system for rendering multi-point, localized vibrotactile feedback in a multi-touch musical interface. A prototype was developed, based on the Madrona Labs Soundplane, which was chosen because it provides easy access to multi-touch data, including force, and has an easily expandable layered construction. The proposed solution makes use of several piezo actuator discs, densely arranged in a honeycomb pattern on a thin PCB layer. Based on off-the-shelf components, custom amplifying and routing electronics were designed to drive each piezo element with standard audio signals. Features, as well as electronic and mechanical issues of the current prototype, are discussed.
@inproceedings{spapetti2015, author = {Papetti, Stefano and Schiesser, S\'ebastien and Fr\"{o}hlich, Martin}, title = {Multi-point vibrotactile feedback for an expressive musical interface}, pages = {235--240}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179152}, url = {http://www.nime.org/proceedings/2015/nime2015_118.pdf} }
David Ramsay and Joseph Paradiso. 2015. GroupLoop: A Collaborative, Network-Enabled Audio Feedback Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 251–254. http://doi.org/10.5281/zenodo.1179158
Abstract
Download PDF DOI
GroupLoop is a browser-based, collaborative audio feedback control system for musical performance. GroupLoop users send their microphone stream to other participants while simultaneously controlling the mix of other users’ streams played through their speakers. Collaborations among users can yield complex feedback loops where feedback paths overlap and interact. Users are able to shape the feedback sounds in real-time by adjusting delay, EQ, and gain, as well as manipulating the acoustics of their portion of the audio feedback path. This paper outlines the basic principles underlying GroupLoop, describes its design and feature-set, and discusses observations of GroupLoop in performances. It concludes with a look at future research and refinement.
@inproceedings{dramsay2015, author = {Ramsay, David and Paradiso, Joseph}, title = {GroupLoop: A Collaborative, Network-Enabled Audio Feedback Instrument}, pages = {251--254}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179158}, url = {http://www.nime.org/proceedings/2015/nime2015_119.pdf} }
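The behaviour of a single delayed, gain-controlled feedback path, which the abstract describes users shaping in real time, can be simulated in a few lines. The sketch below is a toy single-loop model under assumed parameters, not GroupLoop's networked browser implementation.

import numpy as np

def feedback_loop(excitation, delay_samples, gain, n_samples):
    # Output is the excitation plus its own past, fed back after a delay and gain.
    # Sustained ringing builds up as the loop gain approaches 1.
    out = np.zeros(n_samples)
    for n in range(n_samples):
        delayed = out[n - delay_samples] if n >= delay_samples else 0.0
        x = excitation[n] if n < len(excitation) else 0.0
        out[n] = x + gain * delayed
    return out

# A single impulse keeps circulating around the loop at the delay period:
y = feedback_loop(np.array([1.0]), delay_samples=100, gain=0.95, n_samples=1000)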
Kazuhiko Yamamoto and Takeo Igarashi. 2015. LiVo: Sing a Song with a Vowel Keyboard. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 205–208. http://doi.org/10.5281/zenodo.1181414
Abstract
Download PDF DOI
We propose a novel user interface that enables control of a singing voice synthesizer at a live improvisational performance. The user first registers the lyrics of a song with the system before performance, and the system builds a probabilistic model that models the possible jumps within the lyrics. During performance, the user simultaneously inputs the lyrics of a song with the left hand using a vowel keyboard and the melodies with the right hand using a standard musical keyboard. Our system searches for a portion of the registered lyrics whose vowel sequence matches the current user input using the probabilistic model, and sends the matched lyrics to the singing voice synthesizer. The vowel input keys are mapped onto a standard musical keyboard, enabling experienced keyboard players to learn the system from a standard musical score. We examine the feasibility of the system through a series of evaluations and user studies.
@inproceedings{kyamamoto2015, author = {Yamamoto, Kazuhiko and Igarashi, Takeo}, title = {LiVo: Sing a Song with a Vowel Keyboard}, pages = {205--208}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1181414}, url = {http://www.nime.org/proceedings/2015/nime2015_120.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/120/0120-file1.mp4} }
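A drastically simplified, deterministic version of the lyric lookup described above: extract the vowel of each registered syllable, then find every position whose vowel sequence matches what the performer has just typed. The real system uses a probabilistic jump model; this sketch only illustrates the matching idea, with a toy vowel extractor.

def vowel_sequence(syllables):
    # Vowel of each syllable, e.g. ["twin", "kle"] -> ["i", "e"] (toy extraction).
    vowels = "aeiou"
    return [next((ch for ch in syl if ch in vowels), "") for syl in syllables]

def match_positions(registered_vowels, typed_vowels):
    # All positions in the registered lyrics whose vowel sequence matches the input.
    n = len(typed_vowels)
    return [i for i in range(len(registered_vowels) - n + 1)
            if registered_vowels[i:i + n] == typed_vowels]

lyrics = vowel_sequence(["twin", "kle", "twin", "kle", "lit", "tle", "star"])
print(match_positions(lyrics, ["i", "e"]))  # [0, 2, 4]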
Koray Tahiroglu, Thomas Svedström, and Valtteri Wikström. 2015. Musical Engagement that is Predicated on Intentional Activity of the Performer with NOISA Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 132–135. http://doi.org/10.5281/zenodo.1179182
Abstract
Download PDF DOI
This paper presents our current research in which we study the notion of performer engagement within the variance and diversities of the intentional activities of the performer in musical interaction. We introduce a user-test study with the aim to evaluate our system’s engagement prediction capability and to understand in detail the system’s response behaviour. The quantitative results indicate that our system recognises and monitors the performer’s engagement successfully, although we found that the system’s response to maintain and deepen the performer’s engagement is perceived differently among participants. The results reported in this paper can be used to inform the design of interactive systems that enhance the quality of performer’s engagement in musical interaction with new interfaces.
@inproceedings{ktahiroglu2015, author = {Tahiroglu, Koray and Svedstr\"{o}m, Thomas and Wikstr\"{o}m, Valtteri}, title = {Musical Engagement that is Predicated on Intentional Activity of the Performer with NOISA Instruments}, pages = {132--135}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179182}, url = {http://www.nime.org/proceedings/2015/nime2015_121.pdf} }
Jason Long, Jim Murphy, Ajay Kapur, and Dale Carnegie. 2015. A Methodology for Evaluating Robotic Striking Mechanisms for Musical Contexts. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 404–407. http://doi.org/10.5281/zenodo.1179120
Abstract
Download PDF DOI
This paper presents a methodology for evaluating the performance of several types of striking mechanism commonly utilized in musical robotic percussion systems. The goal is to take steps towards standardizing methods of comparing the attributes of a range of devices to inform their design and application in various musical situations. A system for testing the latency, consistency, loudness and striking speed of these mechanisms is described and the methods are demonstrated by subjecting several new robotic percussion mechanisms to these tests. An analysis of the results of the evaluation is also presented and the advantages and disadvantages of each of the types of mechanism in various musical contexts are discussed.
@inproceedings{jlong2015, author = {Long, Jason and Murphy, Jim and Kapur, Ajay and Carnegie, Dale}, title = {A Methodology for Evaluating Robotic Striking Mechanisms for Musical Contexts}, pages = {404--407}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179120}, url = {http://www.nime.org/proceedings/2015/nime2015_130.pdf} }
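One of the measures named in the abstract, latency, reduces to timing the gap between the trigger command and the detected acoustic onset; consistency can then be read off as the spread of that latency across repeated strikes. The sketch below shows the measurement in its simplest form; the thresholding and statistics are illustrative assumptions, not the paper's protocol.

import numpy as np

def onset_time(audio, sr, threshold=0.1):
    # Time (seconds) of the first sample whose magnitude exceeds the threshold.
    above = np.abs(audio) > threshold
    return None if not above.any() else np.argmax(above) / sr

def striker_latency(trigger_time, audio, sr):
    # Latency: time from the trigger command to the detected acoustic onset.
    onset = onset_time(audio, sr)
    return None if onset is None else onset - trigger_time

def consistency(latencies):
    # Lower standard deviation across repeated strikes means a more consistent mechanism.
    return float(np.std(latencies))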
Troy Rogers, Steven Kemper, and Scott Barton. 2015. MARIE: Monochord-Aerophone Robotic Instrument Ensemble. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 408–411. http://doi.org/10.5281/zenodo.1179166
Abstract
Download PDF DOI
The Modular Electro-Acoustic Robotic Instrument System (MEARIS) represents a new type of hybrid electroacoustic-electromechanical instrument model. Monochord-Aerophone Robotic Instrument Ensemble (MARIE), the first realization of a MEARIS, is a set of interconnected monochord and cylindrical aerophone robotic musical instruments created by Expressive Machines Musical Instruments (EMMI). MARIE comprises one or more matched pairs of Automatic Monochord Instruments (AMI) and Cylindrical Aerophone Robotic Instruments (CARI). Each AMI and CARI is a self-contained, independently operable robotic instrument with an acoustic element, a control system that enables automated manipulation of this element, and an audio system that includes input and output transducers coupled to the acoustic element. Each AMI-CARI pair can also operate as an interconnected hybrid instrument, allowing for effects that have heretofore been the domain of physical modeling technologies, such as a plucked air column or blown string. Since its creation, MARIE has toured widely, performed with dozens of human instrumentalists, and has been utilized by nine composers in the realization of more than twenty new musical works.
@inproceedings{skemper2015, author = {Rogers, Troy and Kemper, Steven and Barton, Scott}, title = {MARIE: Monochord-Aerophone Robotic Instrument Ensemble}, pages = {408--411}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179166}, url = {http://www.nime.org/proceedings/2015/nime2015_141.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/141/0141-file1.mov} }
Jiffer Harriman. 2015. Pd Poems and Teaching Tools. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 331–334. http://doi.org/10.5281/zenodo.1179074
Abstract
Download PDF DOI
Music offers an intriguing context to engage children in electronics, programming and more. Over the last year we have been developing a hardware and software toolkit for music called modular-muse. Here we describe the design and goals for these tools and how they have been used in different settings to introduce children to concepts of interaction design for music and sound design. Two exploratory studies which used modular-muse are described here with different approaches: a two-day build-your-own-instrument workshop, where participants learned how to use both hardware and software concurrently to control synthesized sounds and trigger solenoids, and a middle-school music classroom, where the focus was only on programming for sound synthesis using the modular-muse Pd library. During the second study, a project called Pd Poems, a teaching progression emerged that we call Build-Play-Share-Focus, which is also described.
@inproceedings{jharriman2015, author = {Harriman, Jiffer}, title = {Pd Poems and Teaching Tools}, pages = {331--334}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179074}, url = {http://www.nime.org/proceedings/2015/nime2015_145.pdf} }
Robin Hayward. 2015. The Hayward Tuning Vine: an interface for Just Intonation. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 209–214. http://doi.org/10.5281/zenodo.1179084
Abstract
Download PDF DOI
The Hayward Tuning Vine is a software interface for exploring the system of microtonal tuning known as Just Intonation. Based ultimately on prime number relationships, harmonic space in Just Intonation is inherently multidimensional, with each prime number tracing a unique path in space. Taking this multidimensionality as its point of departure, the Tuning Vine interface assigns a unique angle and colour to each prime number, along with aligning melodic pitch height to vertical height on the computer screen. These features allow direct and intuitive interaction with Just Intonation. The inclusion of a transposition function along each prime number axis also enables potentially unlimited exploration of harmonic space within prime limit 23. Currently available as desktop software, a prototype for a hardware version has also been constructed, and future tablet app and hardware versions of the Tuning Vine are planned that will allow tangible as well as audiovisual interaction with microtonal harmonic space.
@inproceedings{rhayward2015, author = {Hayward, Robin}, title = {The Hayward Tuning Vine: an interface for Just Intonation}, pages = {209--214}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179084}, url = {http://www.nime.org/proceedings/2015/nime2015_146.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/146/0146-file1.mov} }
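The prime-based harmonic space described in the abstract has a direct numerical form: every just-intonation pitch is a product of prime powers, so it can be stored as a vector of exponents, one per prime axis. The sketch below is an illustration of that representation, not the Tuning Vine's code; it turns such a vector into a frequency ratio and a size in cents.

from math import log2

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23]  # prime limit 23, as in the abstract

def ratio_from_exponents(exponents):
    # Exponents along each prime axis -> frequency ratio, e.g. [-2, 0, 1] -> 5/4.
    r = 1.0
    for prime, exp in zip(PRIMES, exponents):
        r *= prime ** exp
    return r

def cents(ratio):
    return 1200 * log2(ratio)

just_major_third = ratio_from_exponents([-2, 0, 1])         # 5/4
print(just_major_third, round(cents(just_major_third), 1))  # 1.25 386.3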
Michael Krzyzaniak and Garth Paine. 2015. Realtime Classification of Hand-Drum Strokes. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 400–403. http://doi.org/10.5281/zenodo.1179112
Abstract
Download PDF DOI
Herein is presented a method of classifying hand-drum strokes in real-time by analyzing 50 milliseconds of audio signal as recorded by a contact-mic affixed to the body of the instrument. The classifier performs with an average accuracy of about 95% across several experiments on archetypical strokes, and 89% on uncontrived playing. A complete ANSI C implementation for OSX and Linux is available on the author’s website.
@inproceedings{mkrzyzaniak2015, author = {Krzyzaniak, Michael and Paine, Garth}, title = {Realtime Classification of Hand-Drum Strokes}, pages = {400--403}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179112}, url = {http://www.nime.org/proceedings/2015/nime2015_147.pdf} }
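A hypothetical outline of the general approach, for readers unfamiliar with it: take the 50 ms of signal following an onset, compute a few spectral descriptors, and feed them to a standard classifier. The feature set and classifier below are assumptions made for illustration; the authors' ANSI C implementation is the authoritative version.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

SR = 44100
WINDOW = int(0.050 * SR)  # 50 ms of signal after the onset, as in the abstract

def stroke_features(window):
    # A few coarse descriptors of one stroke window (illustrative only).
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), 1.0 / SR)
    energy = np.sum(spectrum) + 1e-12
    centroid = np.sum(freqs * spectrum) / energy
    rolloff = freqs[np.searchsorted(np.cumsum(spectrum), 0.85 * energy)]
    return [float(np.sqrt(np.mean(window ** 2))), float(centroid), float(rolloff)]

def train_classifier(stroke_windows, labels):
    # labels are stroke names such as "bass", "tone", "slap".
    X = [stroke_features(w) for w in stroke_windows]
    return KNeighborsClassifier(n_neighbors=3).fit(X, labels)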
Robert Van Rooyen, Andrew Schloss, and George Tzanetakis. 2015. Snare Drum Motion Capture Dataset. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 329–330. http://doi.org/10.5281/zenodo.1179168
Abstract
Download PDF DOI
Comparative studies require a baseline reference and a documented process to capture new subject data. This paper, combined with its principal reference [1], presents a definitive dataset in the context of snare drum performances, along with a procedure for data acquisition and a methodology for quantitative analysis. The dataset contains video, audio, and discrete two-dimensional motion data for forty standardized percussive rudiments.
@inproceedings{rvanrooyen2015, author = {Rooyen, Robert Van and Schloss, Andrew and Tzanetakis, George}, title = {Snare Drum Motion Capture Dataset}, pages = {329--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179168}, url = {http://www.nime.org/proceedings/2015/nime2015_148.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/148/0148-file1.mp4} }
Rhushabh Bhandari, Avinash Parnandi, Eva Shipp, Beena Ahmed, and Ricardo Gutierrez-Osuna. 2015. Music-based respiratory biofeedback in visually-demanding tasks. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 78–82. http://doi.org/10.5281/zenodo.1179030
Abstract
Download PDF DOI
Biofeedback tools generally use visualizations to display physiological information to the user. As such, these tools are incompatible with visually demanding tasks such as driving. While auditory or haptic biofeedback may be used in these cases, the additional sensory channels can increase workload or act as a nuisance to the user. A number of studies, however, have shown that music can improve mood and concentration, while also reducing aggression and boredom. Here, we propose an intervention that combines the benefits of biofeedback and music to help users regulate their stress response while performing a visual task (driving a car simulator). Our approach encourages slow breathing by adjusting the quality of the music in response to the user’s breathing rate. We evaluate the intervention on a 2×2 design with music and auditory biofeedback as independent variables. Our results indicate that our music-biofeedback intervention leads to lower arousal (reduced electrodermal activity and increased heart rate variability) than music alone, auditory biofeedback alone and a control condition.
@inproceedings{rbhandari2015, author = {Bhandari, Rhushabh and Parnandi, Avinash and Shipp, Eva and Ahmed, Beena and Gutierrez-Osuna, Ricardo}, title = {Music-based respiratory biofeedback in visually-demanding tasks}, pages = {78--82}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179030}, url = {http://www.nime.org/proceedings/2015/nime2015_149.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/149/0149-file1.mp4} }
Mikko Myllykoski, Kai Tuuri, Esa Viirret, Jukka Louhivuori, Antti Peltomaa, and Janne Kekäläinen. 2015. Prototyping hand-based wearable music education technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 182–183. http://doi.org/10.5281/zenodo.1179144
Abstract
Download PDF DOI
This paper discusses perspectives for conceptualizing and developing a hand-based wearable musical interface. Previous implementations of such interfaces have not been targeted for music-pedagogical use. We propose principles for a pedagogically oriented ‘musical hand’ and outline its development through the process of prototyping, which involves a variety of methods. The current functional prototype, a touch-based musical glove, is presented.
@inproceedings{mmyllykoski2015, author = {Myllykoski, Mikko and Tuuri, Kai and Viirret, Esa and Louhivuori, Jukka and Peltomaa, Antti and Kek\"{a}l\"{a}inen, Janne}, title = {Prototyping hand-based wearable music education technology}, pages = {182--183}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179144}, url = {http://www.nime.org/proceedings/2015/nime2015_151.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/151/0151-file1.m4v} }
Jiffer Harriman. 2015. Feedback Lapsteel : Exploring Tactile Transducers As String Actuators. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 178–179. http://doi.org/10.5281/zenodo.1179076
Abstract
Download PDF DOI
The Feedback Lap Steel is an actuated instrument which makes use of mechanical vibration of the instrument’s bridge to excite the strings. A custom bridge mounted directly to a tactile transducer enables the strings to be driven with any audio signal from a standard audio amplifier. The instrument can be played as a traditional lap steel guitar without any changes to playing technique, as well as be used to create new sounds which blur the line between acoustic and electronic through a combination of acoustic and computer-generated and controlled sounds. This introduces a new approach to string actuation using commonly available parts. This demonstration paper details the construction, uses and lessons learned in the making of the Feedback Lap Steel guitar.
@inproceedings{jharrimanb2015, author = {Harriman, Jiffer}, title = {Feedback Lapsteel : Exploring Tactile Transducers As String Actuators}, pages = {178--179}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179076}, url = {http://www.nime.org/proceedings/2015/nime2015_152.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/152/0152-file1.mp4} }
Romain Michon, Julius Orion Smith III, and Yann Orlarey. 2015. MobileFaust: a Set of Tools to Make Musical Mobile Applications with the Faust Programming Language. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 396–399. http://doi.org/10.5281/zenodo.1179140
Abstract
Download PDF DOI
This work presents a series of tools to turn Faust code into various elements ranging from fully functional applications to multi-platform libraries for real time audio signal processing on iOS and Android. Technical details about their use and function are provided along with audio latency and performance comparisons, and examples of applications.
@inproceedings{rmichon2015, author = {Michon, Romain and {Smith III}, {Julius Orion} and Orlarey, Yann}, title = {MobileFaust: a Set of Tools to Make Musical Mobile Applications with the Faust Programming Language}, pages = {396--399}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179140}, url = {http://www.nime.org/proceedings/2015/nime2015_153.pdf} }
Andrew Mercer-Taylor and Jaan Altosaar. 2015. Sonification of Fish Movement Using Pitch Mesh Pairs. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 28–29. http://doi.org/10.5281/zenodo.1179138
Abstract
Download PDF DOI
On a traditional keyboard, the actions required to play a consonant chord progression must be quite precise; accidentally strike a neighboring key, and a pleasant sonority is likely to become a jarring one. Inspired by the Tonnetz (a tonal diagram), we present a new layout of pitches defined using low-level harmonic notions. We demonstrate the potential of our system by mapping the random movements of fish in an aquarium to this layout. Qualitatively, we find that this captures the intuition behind mapping motion to sound. Similarly moving fish produce consonant chords, while fish moving in non-unison produce dissonant chords. We introduce an open source MATLAB library implementing these techniques, which can be used for sonifying multimodal streaming data.
@inproceedings{amercertaylor2015, author = {Mercer-Taylor, Andrew and Altosaar, Jaan}, title = {Sonification of Fish Movement Using Pitch Mesh Pairs}, pages = {28--29}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179138}, url = {http://www.nime.org/proceedings/2015/nime2015_155.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/155/0155-file1.mp4} }
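One way to read the abstract's pitch-mesh idea: lay pitches out on a Tonnetz-like grid where horizontal neighbours are a fifth apart and vertical neighbours a major third apart, so nearby cells form consonant triads, and quantise each fish's position in the tank to a cell. The mapping below is an assumption made for illustration, not the authors' MATLAB library.

def tonnetz_pitch(col, row, base=60):
    # One cell to the right adds a perfect fifth (7 semitones); one cell up adds
    # a major third (4 semitones), so neighbouring cells form consonant triads.
    return base + 7 * col + 4 * row

def fish_to_pitch(x, y, tank_width, tank_height, cols=5, rows=5):
    # Quantise a fish position in the tank to a cell of the pitch mesh.
    col = min(int(x / tank_width * cols), cols - 1)
    row = min(int(y / tank_height * rows), rows - 1)
    return tonnetz_pitch(col, row)

# Two fish swimming close together land on consonant neighbouring cells:
print(fish_to_pitch(0.10, 0.12, 1.0, 1.0), fish_to_pitch(0.15, 0.30, 1.0, 1.0))  # 60 64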
Hans Anderson, Kin Wah Edward Lin, Natalie Agus, and Simon Lui. 2015. Major Thirds: A Better Way to Tune Your iPad. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 365–368. http://doi.org/10.5281/zenodo.1179006
Abstract
Download PDF DOI
Many new melodic instruments use a touch-sensitive surface with notes arranged in a two-dimensional grid. Most of these arrange notes in chromatic half-steps along the horizontal axis and in intervals of fourths along the vertical axis. Although many alternatives exist, this arrangement, which resembles that of a bass guitar, is quickly becoming the de facto standard. In this study we present experimental evidence that grid-based instruments are significantly easier to play when we tune adjacent rows in Major thirds rather than fourths. We have developed a grid-based instrument as an iPad app that has sold 8,000 units since 2012. To test our proposed alternative tuning, we taught a group of twenty new users to play basic chords on our app, using both the standard tuning and our proposed alternative. Our results show that the Major thirds tuning is much easier to learn, even for users that have previous experience playing guitar.
@inproceedings{klin2015, author = {Anderson, Hans and Lin, Kin Wah Edward and Agus, Natalie and Lui, Simon}, title = {Major Thirds: A Better Way to Tune Your iPad}, pages = {365--368}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179006}, url = {http://www.nime.org/proceedings/2015/nime2015_157.pdf} }
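The arithmetic difference between the two grid tunings compared in the study is small: with rows a fourth apart, a cell sounds base + 5*row + column semitones, whereas with rows a major third apart it sounds base + 4*row + column. The sketch below is just that arithmetic, included for illustration; the base pitch and fingering shape are arbitrary.

def grid_pitch(row, col, row_interval, base=48):
    # MIDI pitch of a cell on a grid instrument.
    # row_interval = 5: the common tuning in fourths (bass-guitar-like rows).
    # row_interval = 4: the major-thirds tuning proposed in the paper.
    return base + row_interval * row + col

# The same fingering shape yields different chords under the two tunings:
shape = [(0, 0), (1, 1), (2, 0)]
print([grid_pitch(r, c, 5) for r, c in shape])  # [48, 54, 58]
print([grid_pitch(r, c, 4) for r, c in shape])  # [48, 53, 56]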
Ethan Benjamin and Jaan Altosaar. 2015. MusicMapper: Interactive 2D representations of music samples for in-browser remixing and exploration. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 325–326. http://doi.org/10.5281/zenodo.1179018
Abstract
Download PDF DOI
Much of the challenge and appeal in remixing music comes from manipulating samples. Typically, identifying distinct samples of a song requires expertise in music production software. Additionally, it is difficult to visualize similarities and differences between all samples of a song simultaneously and use this to select samples. MusicMapper is a web application that allows nonexpert users to find and visualize distinctive samples from a song without any manual intervention, and enables remixing by having users play back clusterings of such samples. This is accomplished by splitting audio from the Soundcloud API into appropriately-sized spectrograms, and applying the t-SNE algorithm to visualize these spectrograms in two dimensions. Next, we apply k-means to guide the user’s eye toward related clusters and set k=26 to enable playback of the clusters by pressing keys on an ordinary keyboard. We present the source code (https://github.com/fatsmcgee/MusicMappr) and a demo video (http://youtu.be/mvD6e1uiO8k) of the MusicMapper web application that can be run in most modern browsers.
@inproceedings{jaltosaar2015, author = {Benjamin, Ethan and Altosaar, Jaan}, title = {MusicMapper: Interactive {2D} representations of music samples for in-browser remixing and exploration}, pages = {325--326}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179018}, url = {http://www.nime.org/proceedings/2015/nime2015_161.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/161/0161-file1.mp4} }
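An offline sketch of the pipeline the abstract names: fixed-size spectrogram segments, t-SNE down to two dimensions, and k-means with k = 26 so each cluster maps to a key of an ordinary keyboard. The segment length, FFT size and library calls are assumptions; the real application runs in the browser against the Soundcloud API.

import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

def segment_spectrograms(audio, sr=44100, seg_seconds=0.5, n_fft=1024):
    # Cut the audio into fixed-size segments and flatten each magnitude spectrogram.
    seg_len = int(seg_seconds * sr)
    feats = []
    for start in range(0, len(audio) - seg_len, seg_len):
        seg = audio[start:start + seg_len]
        frames = seg[:len(seg) // n_fft * n_fft].reshape(-1, n_fft)
        feats.append(np.log1p(np.abs(np.fft.rfft(frames, axis=1))).ravel())
    return np.array(feats)

def map_samples(features):
    # 2D map of the samples plus 26 clusters, one per letter key of the keyboard.
    xy = TSNE(n_components=2, perplexity=10).fit_transform(features)
    keys = KMeans(n_clusters=26, n_init=10).fit_predict(xy)
    return xy, keys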
Javier Jaimovich and R. Benjamin Knapp. 2015. Creating Biosignal Algorithms for Musical Applications from an Extensive Physiological Database. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 1–4. http://doi.org/10.5281/zenodo.1179096
Abstract
Download PDF DOI
Previously, the design of algorithms and parameter calibration for biosignal music performances has been based on testing with a small number of individuals — in fact usually the performers themselves. This paper uses the data collected from over 4000 people to begin to create a truly robust set of algorithms for heart rate and electrodermal activity measures, as well as an understanding of how the calibration of these measures varies by individual.
@inproceedings{jjaimovich2015, author = {Jaimovich, Javier and Knapp, {R. Benjamin}}, title = {Creating Biosignal Algorithms for Musical Applications from an Extensive Physiological Database}, pages = {1--4}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179096}, url = {http://www.nime.org/proceedings/2015/nime2015_163.pdf} }
Benjamin Knichel, Holger Reckter, and Peter Kiefer. 2015. resonate – a social musical installation which integrates tangible multiuser interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 111–115. http://doi.org/10.5281/zenodo.1179108
Abstract
Download PDF DOI
Resonate was a musical installation created with a focus on interactivity and collaboration. In this paper we will focus on the design process and the different steps involved. We describe and discuss the methods to create, synchronize and combine the aspects of space, object, music and interaction for the development of resonate. The realized space-filling tangible installation allowed visitors to interact with different interaction objects and thereby change the musical expression as well as the visual response and aesthetic. After an informal quality evaluation of this installation we changed some aspects, which resulted in a more refined version that we also discuss here.
@inproceedings{bknichel2015, author = {Knichel, Benjamin and Reckter, Holger and Kiefer, Peter}, title = {resonate -- a social musical installation which integrates tangible multiuser interaction}, pages = {111--115}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179108}, url = {http://www.nime.org/proceedings/2015/nime2015_164.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/164/0164-file1.mp4} }
Rebecca Kleinberger, Gershon Dublon, Joseph A. Paradiso, and Tod Machover. 2015. PHOX Ears: A Parabolic, Head-mounted, Orientable, eXtrasensory Listening Device. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 30–31. http://doi.org/10.5281/zenodo.1179106
Abstract
Download PDF DOI
The Electronic Fox Ears helmet is a listening device that changes its wearer’s experience of hearing. A pair of head-mounted, independently articulated parabolic microphones and built-in bone conduction transducers allow the wearer to sharply direct their attention to faraway sound sources. Joysticks in each hand control the orientations of the microphones, which are mounted on servo gimbals for precise targeting. Paired with a mobile device, the helmet can function as a specialized, wearable field recording platform. Field recording and ambient sound have long been a part of electronic music; our device extends these practices by drawing on a tradition of wearable technologies and prosthetic art that blur the boundaries of human perception.
@inproceedings{gdublon2015, author = {Kleinberger, Rebecca and Dublon, Gershon and Paradiso, {Joseph A.} and Machover, Tod}, title = {PHOX Ears: A Parabolic, Head-mounted, Orientable, eXtrasensory Listening Device}, pages = {30--31}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179106}, url = {http://www.nime.org/proceedings/2015/nime2015_165.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/165/0165-file1.mp4} }
Palle Dahlstedt. 2015. Mapping Strategies and Sound Engine Design for an Augmented Hybrid Piano. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 271–276. http://doi.org/10.5281/zenodo.1179046
Abstract
Download PDF DOI
Based on a combination of novel mapping techniques and carefully designed sound engines, I present an augmented hybrid piano specifically designed for improvisation. The mapping technique, originally developed for other control interfaces but here adapted to the piano keyboard, is based on a dynamic vectorization of control parameters, allowing both wild sonic exploration and minute intimate expression. The original piano sound is used as the sole sound source, subjected to processing techniques such as virtual resonance strings, dynamic buffer shuffling, and acoustic and virtual feedback. Thanks to speaker and microphone placement, the acoustic and processed sounds interact in both directions and blend into one new instrument. This also allows for unorthodox playing (knocking, plucking, shouting). Processing parameters are controlled from the keyboard playing alone, allowing intuitive control of complex processing by ear, integrating expressive musical playing with sonic exploration. The instrument is not random, but somewhat unpredictable. This feeds into the improvisation, defining a particular idiomatics of the instrument. Hence, the instrument itself is an essential part of the musical work. Performances include concerts in the UK, Japan, Singapore, Australia and Sweden, in solos and ensembles, performed by several pianists. Variations of this hybrid instrument for digital keyboards are also presented.
@inproceedings{pdahlstedt2015, author = {Dahlstedt, Palle}, title = {Mapping Strategies and Sound Engine Design for an Augmented Hybrid Piano}, pages = {271--276}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179046}, url = {http://www.nime.org/proceedings/2015/nime2015_170.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/170/0170-file1.zip} }
Palle Dahlstedt, Per Anders Nilsson, and Gino Robair. 2015. The Bucket System — A computer mediated signalling system for group improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 317–318. http://doi.org/10.5281/zenodo.1179048
Abstract
Download PDF DOI
The Bucket System is a new system for computer-mediated ensemble improvisation, designed by improvisers for improvisers. Coming from a tradition of structured free ensemble improvisation practices (comprovisation), influenced by post-WW2 experimental music practices, it is a signaling system implemented with a set of McMillen QuNeo controllers as input and output interfaces, powered by custom software. It allows for a new kind of on-stage compositional/improvisation interaction.
@inproceedings{pdahlstedtb2015, author = {Dahlstedt, Palle and Nilsson, Per Anders and Robair, Gino}, title = {The Bucket System --- A computer mediated signalling system for group improvisation}, pages = {317--318}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179048}, url = {http://www.nime.org/proceedings/2015/nime2015_171.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/171/0171-file1.mp4} }
Simon Alexander-Adams and Michael Gurevich. 2015. A Flexible Platform for Tangible Graphic Scores. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 174–175. http://doi.org/10.5281/zenodo.1179004
Abstract
Download PDF DOI
This paper outlines the development of a versatile platform for the performance and composition of tangible graphic scores, providing technical details of the hardware and software design. The system is conceived as a touch surface facilitating modular textured plates, coupled with corresponding visual feedback.
@inproceedings{salexanderadams2015, author = {Alexander-Adams, Simon and Gurevich, Michael}, title = {A Flexible Platform for Tangible Graphic Scores}, pages = {174--175}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179004}, url = {http://www.nime.org/proceedings/2015/nime2015_172.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/172/0172-file1.mov} }
Robert Van Rooyen and George Tzanetakis. 2015. Pragmatic Drum Motion Capture System. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 339–342. http://doi.org/10.5281/zenodo.1181400
Abstract
Download PDF DOI
The ability to acquire and analyze a percussion performance in an efficient, affordable, and non-invasive manner has been made possible by a unique composite of off-the-shelf products. Through various methods of calibration and analysis, human motion as imparted on a striking implement can be tracked and correlated with traditional audio data in order to compare performances. Ultimately, conclusions can be drawn that drive pedagogical studies as well as advances in musical robots.
@inproceedings{rvanrooyenb2015, author = {{Van Rooyen}, Robert and Tzanetakis, George}, title = {Pragmatic Drum Motion Capture System}, pages = {339--342}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1181400}, url = {http://www.nime.org/proceedings/2015/nime2015_173.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/173/0173-file1.mp4} }
Qi Yang and Georg Essl. 2015. Representation-Plurality in Multi-Touch Mobile Visual Programming for Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 369–373. http://doi.org/10.5281/zenodo.1181416
Abstract
Download PDF DOI
Multi-touch mobile devices provide a fresh paradigm for interactions, as well as a platform for building rich musical applications. This paper presents a multi-touch mobile programming environment that supports the exploration of different representations in visual programming for music and audio interfaces. Using a common flow-based visual programming vocabulary, we implemented a system based on the urMus platform that explores three types of touch-based interaction representations: a text-based menu representation, a graphical icon-based representation, and a novel multi-touch gesture-based representation. We illustrate their use in interface design for musical controllers.
@inproceedings{qyang2015, author = {Yang, Qi and Essl, Georg}, title = {Representation-Plurality in Multi-Touch Mobile Visual Programming for Music}, pages = {369--373}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1181416}, url = {http://www.nime.org/proceedings/2015/nime2015_177.pdf} }
Alexander Refsum Jensenius. 2015. Microinteraction in Music/Dance Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 16–19. http://doi.org/10.5281/zenodo.1179100
Abstract
Download PDF DOI
This paper presents the scientific-artistic project Sverm, which has focused on the use of micromotion and microsound in artistic practice. Starting from standing still in silence, the artists involved have developed conceptual and experiential knowledge of microactions, microsounds and the possibilities of microinteracting with light and sound.
@inproceedings{ajensenius2015, author = {Jensenius, Alexander Refsum}, title = {Microinteraction in Music/Dance Performance}, pages = {16--19}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179100}, url = {http://www.nime.org/proceedings/2015/nime2015_178.pdf} }
Kristian Nymoen, Mari Romarheim Haugen, and Alexander Refsum Jensenius. 2015. MuMYO — Evaluating and Exploring the MYO Armband for Musical Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 215–218. http://doi.org/10.5281/zenodo.1179150
Abstract
Download PDF DOI
The MYO armband from Thalmic Labs is a complete and wireless motion and muscle sensing platform. This paper evaluates the armband’s sensors and its potential for NIME applications. This is followed by a presentation of the prototype instrument MuMYO. We conclude that, despite some shortcomings, the armband has the potential to become a new “standard” controller in the NIME community.
@inproceedings{ajenseniusb2015, author = {Nymoen, Kristian and Haugen, Mari Romarheim and Jensenius, Alexander Refsum}, title = {MuMYO --- Evaluating and Exploring the MYO Armband for Musical Interaction}, pages = {215--218}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179150}, url = {http://www.nime.org/proceedings/2015/nime2015_179.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/179/0179-file1.mov} }
Eric Sheffield and Michael Gurevich. 2015. Distributed Mechanical Actuation of Percussion Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 11–15. http://doi.org/10.5281/zenodo.1179176
Abstract
Download PDF DOI
This paper describes a system for interactive mechanically actuated percussion. Design principles regarding seamless control and retention of natural acoustic properties are established. Performance patterns on a preliminary version are examined, including the potential for cooperative and distributed performance.
@inproceedings{esheffieldb2015, author = {Sheffield, Eric and Gurevich, Michael}, title = {Distributed Mechanical Actuation of Percussion Instruments}, pages = {11--15}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179176}, url = {http://www.nime.org/proceedings/2015/nime2015_183.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/183/DistributedActuationDemo.mp4} }
Jingyin He, Ajay Kapur, and Dale Carnegie. 2015. Developing A Physical Gesture Acquisition System for Guqin Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 187–190. http://doi.org/10.5281/zenodo.1179088
Abstract
Download PDF DOI
Motion-based musical interfaces are ubiquitous. With the plethora of sensing solutions and the possibility of developing custom designs, it is important that the new musical interface has the capability to perform any number of tasks. This paper presents a theoretical framework for defining, designing, and evaluating a physical gesture acquisition system for Guqin performance. The framework is based on an iterative design process, and draws upon knowledge of Guqin performance to develop a system to determine the interaction between a Guqin player and the computer. This paper emphasizes the definition, conception, and evaluation of the acquisition system.
@inproceedings{jhe2015, author = {He, Jingyin and Kapur, Ajay and Carnegie, Dale}, title = {Developing A Physical Gesture Acquisition System for Guqin Performance}, pages = {187--190}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179088}, url = {http://www.nime.org/proceedings/2015/nime2015_184.pdf} }
Richard Graham and John Harding. 2015. SEPTAR: Audio Breakout Design for Multichannel Guitar. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 241–244. http://doi.org/10.5281/zenodo.1179070
Abstract
Download PDF DOI
Multichannel (or divided) audio pickups are becoming increasingly common in electric guitar and computer music communities. These systems allow performers to access signals for each string of their instrument independently and concurrently in real-time creative practice. This paper presents an open-source audio breakout circuit that provides independent audio outputs per string of any chordophone (stringed instrument) that is fitted with a multichannel audio pickup system. The following sections include a brief historical contextualization and discussion of the significance of multichannel audio technology in instrumental guitar music, an overview of our proposed impedance matching circuit for piezoelectric-based audio pickups, and a presentation of a new open-source PCB design (SEPTAR V2) that includes a mountable 13-pin DIN connection to improve compatibility with commercial multichannel pickup systems. The paper also includes a short summary of the potential creative applications and perceptual implications of this multichannel technology when used in creative practice.
@inproceedings{rgrahamb2015, author = {Graham, Richard and Harding, John}, title = {SEPTAR: Audio Breakout Design for Multichannel Guitar}, pages = {241--244}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179070}, url = {http://www.nime.org/proceedings/2015/nime2015_187.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/187/0187-file1.wav}, urlsuppl2 = {http://www.nime.org/proceedings/2015/187/0187-file2.wav} }
Florent Berthaut, Diego Martinez, Martin Hachet, and Sriram Subramanian. 2015. Reflets: Combining and Revealing Spaces for Musical Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 116–120. http://doi.org/10.5281/zenodo.1179028
Abstract
Download PDF DOI
We present Reflets, a mixed-reality environment for musical performances that allows for freely displaying virtual content on stage, such as 3D virtual musical interfaces or visual augmentations of instruments and performers. It relies on spectators and performers revealing virtual objects by slicing through them with body parts or objects, and on planar, slightly reflective transparent panels that combine the stage and audience spaces. In this paper, we describe the approach and implementation challenges of Reflets. We then demonstrate that it matches the requirements of musical performances. It allows for placing virtual content anywhere on large stages, even overlapping with physical elements, and provides a consistent rendering of this content for large numbers of spectators. It also preserves non-verbal communication between the audience and the performers, and is inherently engaging for the spectators. We finally show that Reflets opens up musical performance opportunities such as augmented interaction between musicians and novel techniques for 3D sound shape manipulation.
@inproceedings{fberthaut2015, author = {Berthaut, Florent and Martinez, Diego and Hachet, Martin and Subramanian, Sriram}, title = {Reflets: Combining and Revealing Spaces for Musical Performances}, pages = {116--120}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179028}, url = {http://www.nime.org/proceedings/2015/nime2015_190.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/190/0190-file1.mp4} }
Simon Lui. 2015. Generate expressive music from picture with a handmade multi-touch music table. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 374–377. http://doi.org/10.5281/zenodo.1179122
Abstract
Download PDF DOI
The multi-touch music table is a novel tabletop tangible interface for expressive musical performance. The user touches a picture projected on the table’s glass surface to perform music, and can click, drag or use various multi-touch finger gestures to perform expressively. The picture’s color, luminosity and size, together with the finger gesture and pressure, determine the music output. The table detects up to 10 finger touches with their touch pressure. We use a glass panel, a wooden stand, a mini projector, a web camera and a computer to construct this music table, so the table is highly customizable. The table generates music via a re-interpretation of the artistic components of pictures. It is a cross-modal inspiration of music from visual art on a tangible interface.
@inproceedings{slui2015, author = {Lui, Simon}, title = {Generate expressive music from picture with a handmade multi-touch music table}, pages = {374--377}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179122}, url = {http://www.nime.org/proceedings/2015/nime2015_191.pdf} }
Si Waite. 2015. Reimagining the Computer Keyboard as a Musical Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 168–169. http://doi.org/10.5281/zenodo.1179192
Abstract
Download PDF DOI
This paper discusses the use of typed text as a real-time input for interactive performance systems. A brief review of the literature discusses text-based generative systems, links between typing and playing percussion instruments, and the use of typing gestures in contemporary performance practice. The paper then documents the author’s audio-visual system that is driven by the typing of text/lyrics in real-time. It is argued that the system promotes the sensation of liveness through clear, perceptible links between the performer’s gestures, the system’s audio outputs and its visual outputs. The system also provides a novel approach to the use of generative techniques in the composition and live performance of songs. Future developments would include the use of dynamic text effects linked to sound generation and greater interaction between the human performer and the visuals.
@inproceedings{swaite2015, author = {Waite, Si}, title = {Reimagining the Computer Keyboard as a Musical Interface}, pages = {168--169}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179192}, url = {http://www.nime.org/proceedings/2015/nime2015_193.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/193/0193-file1.mov}, urlsuppl2 = {http://www.nime.org/proceedings/2015/193/0193-file2.mp4} }
Masami Hirabayashi and Kazuomi Eshima. 2015. Sense of Space: The Audience Participation Music Performance with High-Frequency Sound ID. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 58–60. http://doi.org/10.5281/zenodo.1179092
Abstract
Download PDF DOI
We performed the musical work “Sense of Space”, which uses sound IDs transmitted as high-frequency DTMF tones. The IDs are embedded in the music; audience members’ smartphones and tablets at the venue react to the IDs and then play music pieces. We consider the possibilities for novel musical experiences brought about by audience participation and by sound spreading throughout the music venue.
@inproceedings{mhirabayashi2015, author = {Hirabayashi, Masami and Eshima, Kazuomi}, title = {Sense of Space: The Audience Participation Music Performance with High-Frequency Sound ID}, pages = {58--60}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179092}, url = {http://www.nime.org/proceedings/2015/nime2015_195.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/195/0195-file1.mp4} }
Tim Shaw, Sébastien Piquemal, and John Bowers. 2015. Fields: An Exploration into the use of Mobile Devices as a Medium for Sound Diffusion. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 281–284. http://doi.org/10.5281/zenodo.1179174
Abstract
Download PDF DOI
In this paper we present Fields, a sound diffusion performance implemented with web technologies that run on the mobile devices of audience members. Both a technical system and bespoke composition, Fields allows for a range of sonic diffusions to occur, and therefore has the potential to open up new paradigms for spatialised music and media performances. The project shows how handheld technology, used as a collective array of speakers controlled live by a centralized performer, can create alternative types of participation within musical performance. Fields not only offers a new technological approach to sound diffusion, it also provides an alternative way for audiences to participate in live events, and opens up unique forms of engagement within sonic media contexts.
@inproceedings{tshaw2015, author = {Shaw, Tim and Piquemal, S\'ebastien and Bowers, John}, title = {Fields: An Exploration into the use of Mobile Devices as a Medium for Sound Diffusion}, pages = {281--284}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179174}, url = {http://www.nime.org/proceedings/2015/nime2015_196.pdf} }
Dan Ringwalt, Roger Dannenberg, and Andrew Russell. 2015. Optical Music Recognition for Interactive Score Display. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 95–98. http://doi.org/10.5281/zenodo.1179162
Abstract
Download PDF DOI
Optical music recognition (OMR) is the task of recognizing images of musical scores. In this paper, improved algorithms for the first steps of optical music recognition were developed, which facilitated bulk annotation of scanned scores for use in an interactive score display system. Creating an initial annotation by OMR and verifying by hand substantially reduced the manual effort required to process scanned scores to be used in a live performance setting.
@inproceedings{rdannenberg2015, author = {Ringwalt, Dan and Dannenberg, Roger and Russell, Andrew}, title = {Optical Music Recognition for Interactive Score Display}, pages = {95--98}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179162}, url = {http://www.nime.org/proceedings/2015/nime2015_198.pdf} }
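The entry above deals with the first steps of optical music recognition. As a purely illustrative sketch of one classic first step, staff-line detection by horizontal projection (this is not the authors' algorithm, and the coverage threshold is an arbitrary choice), a few lines of Python might read:

import numpy as np

def staff_line_rows(binary_image, coverage=0.5):
    # binary_image: 2-D array with 1 for ink and 0 for background.
    # Rows whose ink coverage exceeds `coverage` of the page width
    # are returned as candidate staff-line rows.
    row_ink = binary_image.sum(axis=1) / binary_image.shape[1]
    return np.flatnonzero(row_ink > coverage)

# Toy 10 x 20 "image" with two full-width lines at rows 3 and 7
img = np.zeros((10, 20), dtype=int)
img[3, :] = 1
img[7, :] = 1
print(staff_line_rows(img))   # -> [3 7]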
Ali Momeni. 2015. Caress: An Electro-acoustic Percussive Instrument for Caressing Sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 245–250. http://doi.org/10.5281/zenodo.1179142
Abstract
Download PDF DOI
This paper documents the development of Caress, an electroacoustic percussive instrument that blends drumming and audio synthesis in a small and portable form factor. Caress is an octophonic miniature drum-set for the fingertips that employs eight acoustically isolated piezo-microphones, coupled with eight independent signal chains that excite a unique resonance model with audio from the piezos. The hardware is designed to be robust and quickly reproducible (parametric design and machine fabrication), while the software aims to be light-weight (low-CPU requirements) and portable (multiple platforms, multiple computing architectures). Above all, the instrument aims for the level of control intimacy and tactile expressivity achieved by traditional acoustic percussive instruments, while leveraging real-time software synthesis and control to expand the sonic palette. This instrument as well as this document are dedicated to the memory of the late David Wessel, pioneering composer, performer, researcher, mentor and all-around Yoda of electroacoustic music.
@inproceedings{amomeni2015, author = {Momeni, Ali}, title = {Caress: An Electro-acoustic Percussive Instrument for Caressing Sounds}, pages = {245--250}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179142}, url = {http://www.nime.org/proceedings/2015/nime2015_199.pdf} }
Roger Dannenberg and Andrew Russell. 2015. Arrangements: Flexibly Adapting Music Data for Live Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 315–316. http://doi.org/10.5281/zenodo.1179050
Abstract
Download PDF DOI
Human-Computer Music Performance for popular music – where musical structure is important, but where musicians often decide on the spur of the moment exactly what the musical form will be – presents many challenges in making computer systems flexible and adaptable to human musicians. One particular challenge is that humans easily follow scores and chord charts, adapt these to new performance plans, and understand media locations in musical terms (beats and measures), while computer music systems often use rigid and even numerical representations that are difficult to work with. We present new formalisms and representations, and a corresponding implementation, where musical material in various media is synchronized, where musicians can quickly alter the performance order by specifying (re-)arrangements of the material, and where interfaces are supported in a natural way by music notation.
@inproceedings{rdannenbergb2015, author = {Dannenberg, Roger and Russell, Andrew}, title = {Arrangements: Flexibly Adapting Music Data for Live Performance}, pages = {315--316}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179050}, url = {http://www.nime.org/proceedings/2015/nime2015_200.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/200/0200-file1.mp4} }
Jamie Bullock and Ali Momeni. 2015. ml.lib: Robust, Cross-platform, Open-source Machine Learning for Max and Pure Data. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 265–270. http://doi.org/10.5281/zenodo.1179038
Abstract
Download PDF DOI
This paper documents the development of ml.lib: a set of open-source tools designed for employing a wide range of machine learning techniques within two popular real-time programming environments, namely Max and Pure Data. ml.lib is a cross-platform, lightweight wrapper around Nick Gillian’s Gesture Recognition Toolkit, a C++ library that includes a wide range of data processing and machine learning techniques. ml.lib adapts these techniques for real-time use within popular data-flow IDEs, allowing instrument designers and performers to integrate robust learning, classification and mapping approaches within their existing workflows. ml.lib has been carefully designed to allow users to experiment with and incorporate machine learning techniques within an interactive arts context with minimal prior knowledge. A simple, logical, consistent and scalable interface has been provided across over sixteen externals in order to maximize learnability and discoverability. A focus on portability and maintainability has enabled ml.lib to support a range of computing architectures (including ARM) and operating systems such as Mac OS, GNU/Linux and Windows, making it the most comprehensive machine learning implementation available for Max and Pure Data.
@inproceedings{amomenib2015, author = {Bullock, Jamie and Momeni, Ali}, title = {ml.lib: Robust, Cross-platform, Open-source Machine Learning for Max and Pure Data}, pages = {265--270}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179038}, url = {http://www.nime.org/proceedings/2015/nime2015_201.pdf} }
Guangyu Xia and Roger Dannenberg. 2015. Duet Interaction: Learning Musicianship for Automatic Accompaniment. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 259–264. http://doi.org/10.5281/zenodo.1179198
Abstract
Download PDF DOI
Computer music systems can interact with humans at different levels, including scores, phrases, notes, beats, and gestures. However, most current systems lack basic musicianship skills. As a consequence, the results of human-computer interaction are often far less musical than the interaction between human musicians. In this paper, we explore the possibility of learning some basic music performance skills from rehearsal data. In particular, we consider the piano duet scenario where two musicians expressively interact with each other. Our work extends previous automatic accompaniment systems. We have built an artificial pianist that can automatically improve its ability to sense and coordinate with a human pianist, learning from rehearsal experience. We describe different machine learning algorithms to learn musicianship for duet interaction, explore the properties of the learned models, such as dominant features, limits of validity, and minimal training size, and claim that a more human-like interaction is achieved.
@inproceedings{rdannenbergc2015, author = {Xia, Guangyu and Dannenberg, Roger}, title = {Duet Interaction: Learning Musicianship for Automatic Accompaniment}, pages = {259--264}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179198}, url = {http://www.nime.org/proceedings/2015/nime2015_202.pdf} }
James Leonard and Claude Cadoz. 2015. Physical Modelling Concepts for a Collection of Multisensory Virtual Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 150–155. http://doi.org/10.5281/zenodo.1179116
Abstract
Download PDF DOI
This paper discusses how haptic devices and physical modelling can be employed to design and simulate multisensory virtual musical instruments, providing the musician with joint audio, visual and haptic feedback. After briefly reviewing some of the main use-cases of haptics in Computer Music, we present GENESIS-RT, a software and hardware platform dedicated to the design and real-time haptic playing of virtual musical instruments using mass-interaction physical modelling. We discuss our approach and report on advancements in modelling various instrument categories, including physical models of percussion, plucked and bowed instruments. Finally, we comment on the constraints, challenges and new possibilities opened by modelling haptic virtual instruments with our platform, and discuss common points and differences with regard to classical Digital Musical Instruments.
@inproceedings{jleonard2015, author = {Leonard, James and Cadoz, Claude}, title = {Physical Modelling Concepts for a Collection of Multisensory Virtual Musical Instruments}, pages = {150--155}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179116}, url = {http://www.nime.org/proceedings/2015/nime2015_203.pdf} }
Jérôme Villeneuve, Claude Cadoz, and Nicolas Castagné. 2015. Visual Representation in GENESIS as a tool for Physical Modeling, Sound Synthesis and Musical Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 195–200. http://doi.org/10.5281/zenodo.1179190
Abstract
Download PDF DOI
The motivation of this paper is to highlight the importance of visual representations for artists when modeling and simulating mass-interaction physical networks in the context of sound synthesis and musical composition. GENESIS is a musician-oriented software environment for sound synthesis and musical composition. However, despite this orientation, a substantial amount of effort has been put into building a rich variety of tools based on static or dynamic visual representations of models and of abstractions of their properties. After a quick survey of these tools, we will illustrate the significant role they play in the creative process involved when going from a musical idea and exploration to the production of a complete musical piece. To that aim, our analysis will rely on the work and practice of several artists having used GENESIS in various ways.
@inproceedings{jvilleneuve2015, author = {Villeneuve, J\'er\^ome and Cadoz, Claude and Castagn\'e, Nicolas}, title = {Visual Representation in GENESIS as a tool for Physical Modeling, Sound Synthesis and Musical Composition}, pages = {195--200}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179190}, url = {http://www.nime.org/proceedings/2015/nime2015_204.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/204/0204-file1.mov} }
Jerônimo Barbosa, Filipe Calegario, João Tragtenberg, Giordano Cabral, Geber Ramalho, and Marcelo M. Wanderley. 2015. Designing DMIs for Popular Music in the Brazilian Northeast: Lessons Learned. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 277–280. http://doi.org/10.5281/zenodo.1179008
Abstract
Download PDF DOI
Regarding the design of new DMIs, it is possible to fit the majority of projects into two main cases: those developed by academic research centers, which focus on North American and European contemporary classical and experimental music; and the DIY projects, in which the luthier also plays the roles of performer and/or composer. In both cases, the design process is not focused on creating DMIs for a community with a particular culture (with established instruments, repertoire and playing styles) outside European and North American traditions. This challenge motivated our research. In this paper, we discuss lessons learned during a one-year project called Batebit. Our approach was based on Design Thinking methodology, comprising cycles of inspiration, ideation and implementation. It resulted in two new DMIs developed collaboratively with musicians from the Brazilian Northeast.
@inproceedings{fcalegario2015, author = {Barbosa, Jer\^onimo and Calegario, Filipe and Tragtenberg, Jo\~ao and Cabral, Giordano and Ramalho, Geber and Wanderley, {Marcelo M.}}, title = {Designing {DMI}s for Popular Music in the {Brazil}ian Northeast: Lessons Learned}, pages = {277--280}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179008}, url = {http://www.nime.org/proceedings/2015/nime2015_207.pdf} }
Duncan Menzies and Andrew McPherson. 2015. Highland Piping Ornament Recognition Using Dynamic Time Warping. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 50–53. http://doi.org/10.5281/zenodo.1179136
Abstract
Download PDF DOI
This work uses a custom-built digital bagpipe chanter interface to assist in the process of learning the Great Highland Bagpipe (GHB). In this paper, a new algorithm is presented for the automatic recognition and evaluation of the various ornamentation techniques that are a central aspect of traditional Highland bagpipe music. The algorithm is evaluated alongside a previously published approach, and is shown to provide a significant improvement in performance. The ornament detection facility forms part of a complete hardware and software system for use in both tuition and solo practice situations, allowing details of ornamentation errors made by the player to be provided as visual and textual feedback. The system also incorporates new functionality for the identification and description of GHB fingering errors.
@inproceedings{dmenzies2015, author = {Menzies, Duncan and McPherson, Andrew}, title = {Highland Piping Ornament Recognition Using Dynamic Time Warping}, pages = {50--53}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179136}, url = {http://www.nime.org/proceedings/2015/nime2015_208.pdf} }
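Dynamic time warping, as used in the entry above, compares a played note sequence against labelled reference templates. A minimal illustrative Python sketch of that idea (the template contents below are invented for the example and are not taken from the paper) could be:

import numpy as np

def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) DTW on 1-D sequences (e.g. MIDI-like note numbers).
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify_ornament(played, templates):
    # Return the name of the template closest to the played sequence.
    return min(templates, key=lambda name: dtw_distance(played, templates[name]))

templates = {"ornament_a": [67, 60, 62, 60], "ornament_b": [67, 60, 62, 60, 58, 62]}
print(classify_ornament([67, 61, 62, 60], templates))   # -> ornament_a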
Asbjørn Blokkum Flø and Hans Wilmers. 2015. Doppelgänger: A solenoid-based large scale sound installation. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 61–64. http://doi.org/10.5281/zenodo.1179060
Abstract
Download PDF DOI
This paper presents the sound art installation Doppelgänger. In Doppelgänger, we combine an artistic concept on a large scale with a high degree of control over timbre and dynamics. This puts great demands on the technical aspects of the work. The installation consists of seven 3.5-meter-tall objects weighing a total of 1500 kilograms. Doppelgänger transfers one soundscape into another using audio analysis, mapping, and computer-controlled acoustic sound objects. The technical realization is based on hammer mechanisms actuated by powerful solenoids, driven by a network of Arduino boards with high-power PWM outputs, and a Max patch running audio analysis and mapping. We look into the special requirements in mechanics for large-scale projects. Great care has been taken in the technical design to ensure that the resulting work is scalable both in the number of elements and in physical dimensions. This makes our findings easily applicable to other projects of a similar nature.
@inproceedings{aflo2015, author = {Fl\o, {Asbj\o rn Blokkum} and Wilmers, Hans}, title = {Doppelg{\"a}nger: A solenoid-based large scale sound installation}, pages = {61--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179060}, url = {http://www.nime.org/proceedings/2015/nime2015_212.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/212/0212-file1.mp4} }
Adrian Hazzard, Steve Benford, Alan Chamberlain, and Chris Greenhalgh. 2015. Considering musical structure in location-based experiences. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 378–381. http://doi.org/10.5281/zenodo.1179086
Abstract
Download PDF DOI
Locative music experiences are often non-linear and as such they are co-created, as the final arrangement of the music heard is guided by the movements of the user. We note an absence of principles and guidelines regarding how composers should approach the structuring of such locative soundtracks. For instance, how does one compose for a non-linear, indeterminate experience using linear pre-composed placed sounds, where fixed musical time is situated within the indeterminate time of the user’s experience? Furthermore, how does one create a soundtrack that is suitable for the location, but also functions as a coherent musical structure? We explore these questions by analyzing an existing ‘placed sound’ work from a traditional music theory perspective and in doing so reveal that some structural principles from ‘fixed’ musical forms can also support the composition of contemporary locative music experiences.
@inproceedings{ahazzard2015, author = {Hazzard, Adrian and Benford, Steve and Chamberlain, Alan and Greenhalgh, Chris}, title = {Considering musical structure in location-based experiences}, pages = {378--381}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179086}, url = {http://www.nime.org/proceedings/2015/nime2015_214.pdf} }
Basheer Tome, Donald Derek Haddad, Tod Machover, and Joseph Paradiso. 2015. MMODM: Massively Multiplayer Online Drum Machine. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 285–288. http://doi.org/10.5281/zenodo.1179184
Abstract
Download PDF DOI
Twitter has provided a social platform for everyone to enter the previously exclusive world of the internet, enriching this online social tapestry with cultural diversity and enabling revolutions. We believe this same tool can be used to also change the world of music creation. Thus we present MMODM, an online drum machine based on the Twitter streaming API, using tweets from around the world to create and perform musical sequences together in real time. Users anywhere can express 16-beat note sequences across 26 different instruments using plain text tweets on their favorite device, in real-time. Meanwhile, users on the site itself can use the graphical interface to locally DJ the rhythm, filters, and sequence blending. By harnessing this duo of website and Twitter network, MMODM enables a whole new scale of synchronous musical collaboration between users locally, remotely, across a wide variety of computing devices, and across a variety of cultures.
@inproceedings{btome2015, author = {Tome, Basheer and Haddad, {Donald Derek} and Machover, Tod and Paradiso, Joseph}, title = {MMODM: Massively Multiplayer Online Drum Machine}, pages = {285--288}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179184}, url = {http://www.nime.org/proceedings/2015/nime2015_215.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/215/0215-file1.mp4} }
Natasha Barrett. 2015. Creating tangible spatial-musical images from physical performance gestures. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 191–194. http://doi.org/10.5281/zenodo.1179014
Abstract
Download PDF DOI
Electroacoustic music has a longstanding relationship with gesture and space. This paper marks the start of a project investigating acousmatic spatial imagery, real gestural behaviour and ultimately the formation of tangible acousmatic images. These concepts are explored experimentally using motion tracking in a source-sound recording context, interactive parameter-mapping sonification in three-dimensional high-order ambisonics, composition and performance. The spatio-musical role of physical actions in relation to instrument excitation is used as a point of departure for embodying physical spatial gestures in the creative process. The work draws on how imagery for music is closely linked with imagery for music-related actions.
@inproceedings{nbarrett2015, author = {Barrett, Natasha}, title = {Creating tangible spatial-musical images from physical performance gestures}, pages = {191--194}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179014}, url = {http://www.nime.org/proceedings/2015/nime2015_216.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/216/0216-file1.zip} }
Jiffer Harriman. 2015. Start ’em Young: Digital Music Instrument for Education. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 70–73. http://doi.org/10.5281/zenodo.1179078
Abstract
Download PDF DOI
Designing and building Digital Music Instruments (DMIs) is a promising context for engaging children in technology design, with parallels to hands-on and project-based learning approaches. Looking at tools and approaches used in STEM education, we find much in common with the tools and approaches used in the creation of DMIs, as well as opportunities for future development, in particular the use of scaffolded software and hardware toolkits. Current approaches to teaching and designing DMIs within the community suggest fruitful ideas for engaging novices in authentic design activities. Hardware toolkits and programming approaches are considered in order to identify productive ways to teach technology design through building DMIs.
@inproceedings{jharrimanc2015, author = {Harriman, Jiffer}, title = {Start 'em Young: Digital Music Instrument for Education}, pages = {70--73}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179078}, url = {http://www.nime.org/proceedings/2015/nime2015_218.pdf} }
Dario Cazzani. 2015. Posture Identification of Musicians Using Non-Intrusive Low-Cost Resistive Pressure Sensors. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 54–57. http://doi.org/10.5281/zenodo.1179042
Abstract
Download PDF DOI
The following paper documents the creation of a prototype of shoe soles designed to detect various postures of standing musicians using non-intrusive pressure sensors. To this end, flexible algorithms were designed that work even with imprecise placement of the sensors, making the system easy and accessible for all potential users. At least 4 sensors are required: 2 for the front and 2 for the back; this prototype uses 6. The sensors are rather inexpensive, widening economic accessibility. For each individual musician, the algorithms are capable of “personalising” postures in order to create consistent evaluations, the results of which may include, but are not limited to, new musical interfaces, educational analysis of technique, or music controllers. In building a prototype for the algorithms, data was acquired by wiring the sensors to a data-logger. The algorithms and tests were implemented using MATLAB. After designing the algorithms, various tests were run in order to prove their reliability. These determined that the algorithms indeed work to a sufficient degree of certainty, allowing for a reliable classification of a musician’s posture or position.
@inproceedings{dcazzani2015, author = {Cazzani, Dario}, title = {Posture Identification of Musicians Using Non-Intrusive Low-Cost Resistive Pressure Sensors}, pages = {54--57}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179042}, url = {http://www.nime.org/proceedings/2015/nime2015_220.pdf} }
Zeyu Jin, Reid Oda, Adam Finkelstein, and Rebecca Fiebrink. 2015. MalLo: A Distributed Synchronized Musical Instrument Designed For Internet Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 293–298. http://doi.org/10.5281/zenodo.1179102
Abstract
Download PDF DOI
The Internet holds a lot of potential as a music listening, collaboration, and performance space. It has become commonplace to stream music and video of musical performance over the web. However, the goal of playing rhythmically synchronized music over long distances has remained elusive due to the latency inherent in networked communication. The farther apart two artists are from one another, the greater the delay. Furthermore, latency times can change abruptly with no warning. In this paper, we demonstrate that it is possible to create a distributed, synchronized musical instrument that allows performers to play together over long distances, despite latency. We describe one such instrument, MalLo, which combats latency by predicting a musician’s action before it is completed. MalLo sends information about a predicted musical note over the Internet before it is played, and synthesizes this note at a collaborator’s location at nearly the same moment it is played by the performer. MalLo also protects against latency spikes by sending the prediction data across multiple network paths, with the intention of routing around latency.
@inproceedings{roda2015, author = {Jin, Zeyu and Oda, Reid and Finkelstein, Adam and Fiebrink, Rebecca}, title = {MalLo: A Distributed Synchronized Musical Instrument Designed For Internet Performance}, pages = {293--298}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179102}, url = {http://www.nime.org/proceedings/2015/nime2015_223.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/223/0223-file1.mp4} }
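The entry above sends predicted note events ahead of their onsets. A heavily simplified, hypothetical sketch of that idea (this is not MalLo's actual protocol; the message format, host and port are made up, and roughly synchronised clocks are assumed) might be:

import json, socket, time

HOST, PORT = "127.0.0.1", 9000   # hypothetical receiver address

def send_prediction(sock, pitch, predicted_onset, confidence):
    # Send a prediction as soon as the gesture is recognised,
    # typically tens of milliseconds before the note actually sounds.
    msg = {"pitch": pitch, "onset": predicted_onset, "confidence": confidence}
    sock.sendto(json.dumps(msg).encode(), (HOST, PORT))

def receive_and_schedule(sock):
    # Receiver side: wait until the predicted onset, then trigger synthesis.
    data, _ = sock.recvfrom(1024)
    msg = json.loads(data)
    time.sleep(max(0.0, msg["onset"] - time.time()))
    print("play pitch", msg["pitch"])   # stand-in for sound synthesis

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_prediction(sender, pitch=60, predicted_onset=time.time() + 0.08, confidence=0.9)

The abstract also notes that MalLo sends predictions over multiple network paths to guard against latency spikes, which this sketch omits.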
Lauren Hayes. 2015. Enacting Musical Worlds: Common Approaches to using NIMEs within both Performance and Person-Centred Arts Practices. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 299–302. http://doi.org/10.5281/zenodo.1179082
Abstract
Download PDF DOI
Live music making can be understood as an enactive process, whereby musical experiences are created through human action. This suggests that musical worlds coevolve with their agents through repeated sensorimotor interactions with the environment (where the music is being created), and at the same time cannot be separated from their sociocultural contexts. This paper investigates this claim by exploring ways in which technology, physiology, and context are bound up within two different musical scenarios: live electronic musical performance; and person-centred arts applications of NIMEs. In this paper I outline an ethnographic and phenomenological enquiry into my experiences as both a performer of live electronic and electro-instrumental music, as well as my extensive background in working with new technologies in various therapeutic and person-centred artistic situations. This is in order to explore the sociocultural and technological contexts in which these activities take place. I propose that by understanding creative musical participation as a highly contextualised practice, we may discover that the greatest impact of rapidly developing technological resources is their ability to afford richly diverse, personalised, and embodied forms of music making. I argue that this is applicable over a wide range of musical communities.
@inproceedings{lhayes2015, author = {Hayes, Lauren}, title = {Enacting Musical Worlds: Common Approaches to using NIMEs within both Performance and Person-Centred Arts Practices}, pages = {299--302}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179082}, url = {http://www.nime.org/proceedings/2015/nime2015_227.pdf} }
Nuno N. Correia and Atau Tanaka. 2015. Prototyping Audiovisual Performance Tools: A Hackathon Approach. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 319–321. http://doi.org/10.5281/zenodo.1179044
Abstract
Download PDF DOI
We present a user-centered approach for prototyping tools for performance with procedural sound and graphics, based on a hackathon. We also present the resulting prototypes. These prototypes respond to a challenge originating from earlier stages of the research: to combine ease-of-use with expressiveness and visibility of interaction in tools for audiovisual performance. We aimed to convert sketches, resulting from an earlier brainstorming session, into functional prototypes in a short period of time. The outcomes include an open-source software base released online. The conclusions reflect on the methodology adopted and the effectiveness of the prototypes.
@inproceedings{ncorreia2015, author = {Correia, {Nuno N.} and Tanaka, Atau}, title = {Prototyping Audiovisual Performance Tools: A Hackathon Approach}, pages = {319--321}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179044}, url = {http://www.nime.org/proceedings/2015/nime2015_230.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/230/0230-file1.mp4} }
Peter Bennett, Jarrod Knibbe, Florent Berthaut, and Kirsten Cater. 2015. Resonant Bits: Controlling Digital Musical Instruments with Resonance and the Ideomotor Effect. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 176–177. http://doi.org/10.5281/zenodo.1179020
Abstract
Download PDF DOI
Resonant Bits proposes giving digital information resonant dynamic properties, requiring skill and concerted effort for interaction. This paper applies resonant interaction to musical control, exploring musical instruments that are controlled through both purposeful and subconscious resonance. We detail three exploratory prototypes, the first two illustrating the use of resonant gestures and the third focusing on the detection and use of the ideomotor (subconscious micro-movement) effect.
@inproceedings{pbennett2015, author = {Bennett, Peter and Knibbe, Jarrod and Berthaut, Florent and Cater, Kirsten}, title = {Resonant Bits: Controlling Digital Musical Instruments with Resonance and the Ideomotor Effect}, pages = {176--177}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179020}, url = {http://www.nime.org/proceedings/2015/nime2015_235.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/235/0235-file1.mp4} }
Antonio Deusany de Carvalho Junior. 2015. Indoor localization during installations using WiFi. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 40–41. http://doi.org/10.5281/zenodo.1179052
Abstract
Download PDF DOI
The position of a participant during an installation is valuable data. One may want to start a sample when someone crosses a line, or stop the music automatically whenever there is nobody inside the main area. GPS is a good solution for localization, but it usually loses its capabilities inside buildings. This paper discusses the use of Wi-Fi signal strength during an installation as an alternative to GPS.
@inproceedings{adecarvalhojunior2015, author = {de Carvalho Junior, Antonio Deusany}, title = {Indoor localization during installations using {WiFi}}, pages = {40--41}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179052}, url = {http://www.nime.org/proceedings/2015/nime2015_237.pdf} }
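The indoor-localization idea above can be sketched as a simple presence test on Wi-Fi signal strength. Everything here is an assumption for illustration: reading RSSI is platform-specific and is replaced by simulated values, and the `Player` class and threshold are hypothetical, not the author's system.

```python
# Sketch: use Wi-Fi signal strength (RSSI) as a coarse stand-in for GPS indoors.
class Player:
    """Hypothetical sampler interface used by the installation."""
    def start_sample(self, name):
        print(f"start {name}")
    def stop_all(self):
        print("stop all samples")

def inside_main_area(rssi_dbm, threshold_dbm=-55):
    """A stronger signal from the access point nearest the main area is read as
    'participant inside'; None means the network was not seen at all."""
    return rssi_dbm is not None and rssi_dbm > threshold_dbm

def update_installation(rssi_dbm, player, state):
    inside = inside_main_area(rssi_dbm)
    if inside and not state["inside"]:
        player.start_sample("entrance_drone.wav")   # someone entered the area
    elif not inside and state["inside"]:
        player.stop_all()                           # nobody left inside
    state["inside"] = inside

if __name__ == "__main__":
    player, state = Player(), {"inside": False}
    for rssi in [-80, -62, -50, -48, -70, None]:    # simulated scan readings in dBm
        update_installation(rssi, player, state)
```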
Charles Martin, Henry Gardner, and Ben Swift. 2015. Tracking Ensemble Performance on Touch-Screens with Gesture Classification and Transition Matrices. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 359–364. http://doi.org/10.5281/zenodo.1179130
Abstract
Download PDF DOI
We present and evaluate a novel interface for tracking ensemble performances on touch-screens. The system uses a Random Forest classifier to extract touch-screen gestures and transition matrix statistics. It analyses the resulting gesture-state sequences across an ensemble of performers. A series of specially designed iPad apps respond to this real-time analysis of free-form gestural performances with calculated modifications to their musical interfaces. We describe our system and evaluate it through cross-validation and profiling as well as concert experience.
@inproceedings{cmartin2015, author = {Martin, Charles and Gardner, Henry and Swift, Ben}, title = {Tracking Ensemble Performance on Touch-Screens with Gesture Classification and Transition Matrices}, pages = {359--364}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179130}, url = {http://www.nime.org/proceedings/2015/nime2015_242.pdf} }
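The two analysis stages named in the abstract above, gesture classification with a Random Forest and transition-matrix statistics over gesture-state sequences, could be prototyped along the lines of this sketch using scikit-learn. The feature vectors and gesture classes are placeholders, not the paper's.

```python
# Sketch: classify windows of touch data into gesture classes, then summarise
# each performer's gesture sequence as a row-normalised transition matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

GESTURES = ["nothing", "tap", "swipe", "swirl"]   # placeholder gesture classes

def train_classifier(feature_windows, labels):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(feature_windows, labels)
    return clf

def transition_matrix(gesture_sequence, n_states=len(GESTURES)):
    """Counts of transitions between consecutive gesture states, row-normalised."""
    m = np.zeros((n_states, n_states))
    for a, b in zip(gesture_sequence[:-1], gesture_sequence[1:]):
        m[a, b] += 1
    row_sums = m.sum(axis=1, keepdims=True)
    return np.divide(m, row_sums, out=np.zeros_like(m), where=row_sums > 0)

if __name__ == "__main__":
    X = np.random.rand(200, 6)             # placeholder touch features
    y = np.random.randint(0, 4, size=200)  # placeholder gesture labels
    clf = train_classifier(X, y)
    states = clf.predict(np.random.rand(50, 6))
    print(transition_matrix(states))
```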
Beste Filiz Yuksel, Daniel Afergan, Evan Peck, et al. 2015. BRAAHMS: A Novel Adaptive Musical Interface Based on Users’ Cognitive State. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 136–139. http://doi.org/10.5281/zenodo.1181418
Abstract
Download PDF DOI
We present a novel brain-computer interface (BCI) integrated with a musical instrument that adapts implicitly (with no extra effort from the user) to users’ changing cognitive state during musical improvisation. Most previous musical BCI systems use either a mapping of brainwaves to create audio signals or use explicit brain signals to control some aspect of the music. Such systems do not take advantage of higher-level, semantically meaningful brain data which could be used in adaptive systems without detracting from the attention of the user. We present a new type of real-time BCI that assists users in musical improvisation by adapting to users’ measured cognitive workload implicitly. Our system advances the state of the art in this area in three ways: 1) We demonstrate that cognitive workload can be classified in real-time while users play the piano using functional near-infrared spectroscopy. 2) We build a real-time, implicit system using this brain signal that musically adapts to what users are playing. 3) We demonstrate that users prefer this novel musical instrument over other conditions and report that they feel more creative.
@inproceedings{byuksel2015, author = {Yuksel, {Beste Filiz} and Afergan, Daniel and Peck, Evan and Griffin, Garth and Harrison, Lane and Chen, Nick and Chang, Remco and Jacob, Robert}, title = {BRAAHMS: A Novel Adaptive Musical Interface Based on Users' Cognitive State}, pages = {136--139}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1181418}, url = {http://www.nime.org/proceedings/2015/nime2015_243.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/243/0243-file1.mp4} }
Abram Hindle. 2015. Orchestrating Your Cloud Orchestra. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 121–125. http://doi.org/10.5281/zenodo.1179090
Abstract
Download PDF DOI
Cloud computing potentially ushers in a new era of computer music performance with exceptionally large computer music instruments consisting of 10s to 100s of virtual machines which we propose to call a ‘cloud-orchestra’. Cloud computing allows for the rapid provisioning of resources, but to deploy such a complicated and interconnected network of software synthesizers in the cloud requires a lot of manual work, system administration knowledge, and developer/operator skills. This is a barrier to computer musicians whose goal is to produce and perform music, and not to administer 100s of computers. This work discusses the issues facing cloud-orchestra deployment and offers an abstract solution and a concrete implementation. The abstract solution is to generate cloud-orchestra deployment plans by allowing computer musicians to model their network of synthesizers and to describe their resources. A model optimizer will compute near-optimal deployment plans to synchronize, deploy, and orchestrate the start-up of a complex network of synthesizers deployed to many computers. This model driven development approach frees computer musicians from much of the hassle of deployment and allocation. Computer musicians can focus on the configuration of musical components and leave the resource allocation up to the modelling software to optimize.
@inproceedings{ahindle2015, author = {Hindle, Abram}, title = {Orchestrating Your Cloud Orchestra}, pages = {121--125}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179090}, url = {http://www.nime.org/proceedings/2015/nime2015_244.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/244/0244-file1.mp4} }
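One minimal way to picture the deployment-planning step described above is greedy bin-packing of synthesizer nodes onto virtual machines by estimated CPU load. This is only an illustrative sketch with made-up node names, loads, and capacities; the paper's model optimizer computes near-optimal plans and also handles synchronization and start-up ordering.

```python
# Sketch: turn a model of synthesizer nodes into a deployment plan by greedy
# bin-packing on estimated CPU load (heaviest nodes placed first).
def plan_deployment(synths, vm_capacity=0.8):
    """synths: list of (name, estimated_cpu_fraction). Returns a list of VMs,
    each represented as a list of the synth names assigned to it."""
    vms = []  # each entry: [remaining_capacity, [assigned synth names]]
    for name, load in sorted(synths, key=lambda s: s[1], reverse=True):
        for vm in vms:
            if vm[0] >= load:
                vm[0] -= load
                vm[1].append(name)
                break
        else:
            vms.append([vm_capacity - load, [name]])
    return [names for _, names in vms]

if __name__ == "__main__":
    orchestra = [("granular", 0.5), ("fm_pad", 0.3), ("reverb_bus", 0.4), ("drums", 0.2)]
    print(plan_deployment(orchestra))
```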
Andrew Piepenbrink and Matthew Wright. 2015. The Bistable Resonator Cymbal: An Actuated Acoustic Instrument Displaying Physical Audio Effects. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 227–230. http://doi.org/10.5281/zenodo.1179154
Abstract
Download PDF DOI
We present the Bistable Resonator Cymbal, a type of actuated acoustic instrument which augments a conventional cymbal with feedback-induced resonance. The system largely employs standard, commercially-available sound reinforcement and signal processing hardware and software, and no permanent modifications to the cymbal are needed. Several types of cymbals may be used, each capable of producing a number of physical audio effects. Cymbal acoustics, implementation, stability issues, interaction behavior, and sonic results are discussed.
@inproceedings{apiepenbrink2015, author = {Piepenbrink, Andrew and Wright, Matthew}, title = {The Bistable Resonator Cymbal: An Actuated Acoustic Instrument Displaying Physical Audio Effects}, pages = {227--230}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179154}, url = {http://www.nime.org/proceedings/2015/nime2015_245.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/245/0245-file1.mov}, urlsuppl2 = {http://www.nime.org/proceedings/2015/245/0245-file2.zip} }
Andreas Bergsland and Robert Wechsler. 2015. Composing Interactive Dance Pieces for the MotionComposer, a device for Persons with Disabilities. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 20–23. http://doi.org/10.5281/zenodo.1179024
Abstract
Download PDF DOI
The authors have developed a new hardware/software device for persons with disabilities (the MotionComposer), and in the process created a number of interactive dance pieces for non-disabled professional dancers. The paper briefly describes the hardware and motion tracking software of the device before going into more detail concerning the mapping strategies and sound design applied to three interactive dance pieces. The paper concludes by discussing a particular philosophy championing transparency and intuitiveness (clear causality) in the interactive relationship, which the authors apply to both the device and to the pieces that came from it.
@inproceedings{abergsland2015, author = {Bergsland, Andreas and Wechsler, Robert}, title = {Composing Interactive Dance Pieces for the MotionComposer, a device for Persons with Disabilities}, pages = {20--23}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179024}, url = {http://www.nime.org/proceedings/2015/nime2015_246.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/246/0246-file2.mp4}, urlsuppl2 = {http://www.nime.org/proceedings/2015/246/La_Danse_II.mp4}, urlsuppl3 = {http://www.nime.org/proceedings/2015/246/SongShanMountain-SD.mp4} }
Brendan McCloskey, Brian Bridges, and Frank Lyons. 2015. Accessibility and dimensionalty: enhanced real-time creative independence for digital musicians with quadriplegic cerebral palsy. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 24–27. http://doi.org/10.5281/zenodo.1179132
Abstract
Download PDF DOI
Inclusive music activities for people with physical disabilities commonly emphasise facilitated processes, based both on constrained gestural capabilities, and on the simplicity of the available interfaces. Inclusive music processes employ consumer controllers, computer access tools and/or specialized digital musical instruments (DMIs). The first category reveals a design ethos identified by the authors as artefact multiplication – many sliders, buttons, dials and menu layers; the latter types offer ergonomic accessibility through artefact magnification. We present a prototype DMI that eschews artefact multiplication in pursuit of enhanced real time creative independence. We reconceptualise the universal click-drag interaction model via a single sensor type, which affords both binary and continuous performance control. Accessibility is optimized via a familiar interaction model and through customized ergonomics, but it is the mapping strategy that emphasizes transparency and sophistication in the hierarchical correspondences between the available gesture dimensions and expressive musical cues. Through a participatory and progressive methodology we identify an ostensibly simple targeting gesture rich in dynamic and reliable features: (1) contact location; (2) contact duration; (3) momentary force; (4) continuous force, and; (5) dyad orientation. These features are mapped onto dynamic musical cues, most notably via new mappings for vibrato and arpeggio execution.
@inproceedings{bmccloskey2015, author = {McCloskey, Brendan and Bridges, Brian and Lyons, Frank}, title = {Accessibility and dimensionalty: enhanced real-time creative independence for digital musicians with quadriplegic cerebral palsy}, pages = {24--27}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179132}, url = {http://www.nime.org/proceedings/2015/nime2015_250.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/250/0250-file1.zip} }
Ajit Nath and Samson Young. 2015. VESBALL: A ball-shaped instrument for music therapy. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 387–391. http://doi.org/10.5281/zenodo.1179146
Abstract
Download PDF DOI
In this paper the authors describe the VESBALL, a ball-shaped musical interface designed for group music therapy. Therapy sessions take the form of “musical ensembles” comprised of individuals with Autism Spectrum Disorder (ASD), typically led by one or more certified music therapists. VESBALL has been developed in close consultation with therapists, clients, and other stakeholders, and has undergone several phases of trials at a music therapy facility over a period of 6 months. VESBALL has an advantage over other related work in terms of its robustness, ease of operation and setup (for clients and therapists), sound source integration, and low cost of production. The authors hope VESBALL will positively impact the conditions of individuals with ASD, and pave the way for new research in custom-designed NIMEs for communities with specific therapeutic needs.
@inproceedings{anath2015, author = {Nath, Ajit and Young, Samson}, title = {VESBALL: A ball-shaped instrument for music therapy}, pages = {387--391}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179146}, url = {http://www.nime.org/proceedings/2015/nime2015_252.pdf} }
Simon Waloschek and Aristotelis Hadjakos. 2015. Sensors on Stage: Conquering the Requirements of Artistic Experiments and Live Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 351–354. http://doi.org/10.5281/zenodo.1179194
Abstract
Download PDF DOI
With the rapid evolution of technology, sensor aided performances and installations have gained popularity. We identified a number of important criteria for stage usage and artistic experimentation. These are partially met by existing approaches, oftentimes trading off programmability for ease of use. We propose our new sensor interface SPINE-2 that presents a comprehensive solution to these stage requirements without that trade-off.
@inproceedings{swaloschek2015, author = {Waloschek, Simon and Hadjakos, Aristotelis}, title = {Sensors on Stage: Conquering the Requirements of Artistic Experiments and Live Performances}, pages = {351--354}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179194}, url = {http://www.nime.org/proceedings/2015/nime2015_254.pdf} }
Andrew McPherson and Victor Zappi. 2015. Exposing the Scaffolding of Digital Instruments with Hardware-Software Feedback Loops. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 162–167. http://doi.org/10.5281/zenodo.1179134
Abstract
Download PDF DOI
The implementation of digital musical instruments is often opaque to the performer. Even when the relationship between action and sound is readily understandable, the internal hardware or software operations that create that relationship may be inaccessible to scrutiny or modification. This paper presents a new approach to digital instrument design which lets the performer alter and subvert the instrument’s internal operation through circuit-bending techniques. The approach uses low-latency feedback loops between software and analog hardware to expose the internal working of the instrument. Compared to the standard control voltage approach used on analog synths, alterations to the feedback loops produce distinctive and less predictable changes in behaviour with original artistic applications. This paper discusses the technical foundations of the approach, its roots in hacking and circuit bending, and case studies of its use in live performance with the D-Box hackable instrument.
@inproceedings{amcpherson2015, author = {McPherson, Andrew and Zappi, Victor}, title = {Exposing the Scaffolding of Digital Instruments with Hardware-Software Feedback Loops}, pages = {162--167}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179134}, url = {http://www.nime.org/proceedings/2015/nime2015_258.pdf} }
Tommy Feldt, Sarah Freilich, Shaun Mendonsa, Daniel Molin, and Andreas Rau. 2015. Puff, Puff, Play: A Sip-And-Puff Remote Control for Music Playback. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 34–35. http://doi.org/10.5281/zenodo.1179058
Abstract
Download PDF DOI
We introduce the Peripipe, a tangible remote control for a music player that comes in the shape of a wooden tobacco pipe. The design is based on breath control, using sips and puffs as control commands. An atmospheric pressure sensor in the Peripipe senses changes in the air pressure. Based on these changes, the pipe determines when the user performs a puff, double-puff, sip, double-sip or a long puff or long sip action, and wirelessly sends commands to a smartphone running the music player. Additionally, the Peripipe provides fumeovisual feedback, using color-illuminated smoke to display the system status. With the form factor, the materials used, the interaction through breath, and the ephemeral feedback we aim to emphasize the emotional component of listening to music that, in our eyes, is not very well reflected in traditional remote controls.
@inproceedings{arau2015, author = {Feldt, Tommy and Freilich, Sarah and Mendonsa, Shaun and Molin, Daniel and Rau, Andreas}, title = {Puff, Puff, Play: A Sip-And-Puff Remote Control for Music Playback}, pages = {34--35}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179058}, url = {http://www.nime.org/proceedings/2015/nime2015_260.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/260/0260-file1.mp4} }
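A hedged sketch of the breath-control idea described above: deviations of barometric pressure from a resting baseline are classified into puff, sip, and long variants, which are then mapped to playback commands. The thresholds, durations, and command mapping below are assumptions for illustration, not the Peripipe firmware.

```python
# Sketch: classify pressure deviations from a resting baseline into breath events.
def classify_breath(samples, baseline, threshold=30.0, long_duration=0.8):
    """samples: list of (time_s, pressure_pa). Returns 'puff', 'sip',
    'long_puff', 'long_sip' or None if no event crossed the threshold."""
    active = [(t, p) for t, p in samples if abs(p - baseline) > threshold]
    if not active:
        return None
    duration = active[-1][0] - active[0][0]
    direction = "puff" if active[0][1] > baseline else "sip"
    return f"long_{direction}" if duration >= long_duration else direction

# Assumed mapping from breath events to player commands.
COMMANDS = {"puff": "play_pause", "sip": "next_track",
            "long_puff": "volume_up", "long_sip": "volume_down"}

if __name__ == "__main__":
    event = classify_breath([(0.0, 101325), (0.5, 101380), (1.0, 101390)],
                            baseline=101325)
    print(event, "->", COMMANDS.get(event))
```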
Nicolas d’Alessandro, Joëlle Tilmanne, Ambroise Moreau, and Antonin Puleo. 2015. AirPiano: A Multi-Touch Keyboard with Hovering Control. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 255–258. http://doi.org/10.5281/zenodo.1181434
Abstract
Download PDF DOI
In this paper, we describe the prototyping of two musical interfaces that use the LeapMotion camera in conjunction with two different touch surfaces: a Wacom tablet and a transparent PVC sheet. In the Wacom use case, the camera is between the hand and the surface. In the PVC use case, the camera is under the transparent sheet and tracks the hand through it. The aim of this research is to explore hovering motion surrounding the touch interaction on the surface and include properties of such motion in the musical interaction. We present our unifying software, called AirPiano, that discretises the 3D space into ’keys’ and proposes several mapping strategies with the available dimensions. These control dimensions are mapped onto a modified HandSketch sound engine that achieves multitimbral pitch-synchronous point cloud granulation.
@inproceedings{ndalessandro2015, author = {d'Alessandro, Nicolas and Tilmanne, Jo\"{e}lle and Moreau, Ambroise and Puleo, Antonin}, title = {AirPiano: A Multi-Touch Keyboard with Hovering Control}, pages = {255--258}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1181434}, url = {http://www.nime.org/proceedings/2015/nime2015_261.pdf} }
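The "keys in the air" mapping described above might be sketched as quantizing the tracked hand's horizontal position into a key index while keeping the height above the surface as a continuous hover dimension. The coordinate ranges, key count, and hover scaling below are assumptions for illustration, not the AirPiano software.

```python
# Sketch: discretise a tracked hand position into (key index, hover amount).
def air_key(x_mm, y_mm, x_min=-200.0, x_max=200.0, n_keys=12, hover_max=150.0):
    """Returns (key_index, hover) where hover is 0.0 at the surface and 1.0 at
    the top of the tracked region, or None if the hand is outside the key strip."""
    if not (x_min <= x_mm <= x_max):
        return None
    key = min(int((x_mm - x_min) / (x_max - x_min) * n_keys), n_keys - 1)
    hover = max(0.0, min(y_mm / hover_max, 1.0))
    return key, hover

# The hover value could then drive a continuous parameter of the sound engine,
# e.g. grain density or amplitude.
print(air_key(x_mm=35.0, y_mm=40.0))
```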
Steve Benford, Adrian Hazzard, Alan Chamberlain, and Liming Xu. 2015. Augmenting a Guitar with its Digital Footprint. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 303–306. http://doi.org/10.5281/zenodo.1179016
Abstract
Download PDF DOI
We explore how to digitally augment musical instruments by connecting them to their social histories. We describe the use of Internet of Things technologies to connect an acoustic guitar to its digital footprint – a record of how it was designed, built and played. We introduce the approach of crafting interactive decorative inlay into the body of an instrument that can then be scanned using mobile devices to reveal its digital footprint. We describe the design and construction of an augmented acoustic guitar called Carolan alongside activities to build its digital footprint through documented encounters with twenty-seven players in a variety of settings. We reveal the design challenge of mapping the different surfaces of the instrument to various facets of its footprint so as to afford appropriate experiences to players, audiences and technicians. We articulate an agenda for further research on the topic of connecting instruments to their social histories, including capturing and performing digital footprints and creating personalized and legacy experiences.
@inproceedings{ahazzardb2015, author = {Benford, Steve and Hazzard, Adrian and Chamberlain, Alan and Xu, Liming}, title = {Augmenting a Guitar with its Digital Footprint}, pages = {303--306}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179016}, url = {http://www.nime.org/proceedings/2015/nime2015_264.pdf} }
Sair Sinan Kestelli. 2015. Motor Imagery: What Does It Offer for New Digital Musical Instruments? Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 107–110. http://doi.org/10.5281/zenodo.1179104
Abstract
Download PDF DOI
There has been growing interest in and research on the multisensory aspects of sound, vision, and movement, especially in the last two decades. An emerging research field related to multisensory research is ’motor imagery’, which can be defined as the mental representation of a movement without actual production of the muscle activity necessary for its execution. Emphasizing its close relationship to, and potential future use in, new digital musical instrument (DMI) practice, and reviewing the literature, this paper introduces fundamental concepts about motor imagery (MI), describes various methods of measuring MI in different configurations, and summarizes some important findings about MI from various studies. It then discusses how this research area is related to DMI practice and proposes potential uses of MI in this field.
@inproceedings{skestelli2015, author = {Kestelli, {Sair Sinan}}, title = {Motor Imagery: What Does It Offer for New Digital Musical Instruments?}, pages = {107--110}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179104}, url = {http://www.nime.org/proceedings/2015/nime2015_265.pdf} }
Mikkel Jörgensen, Aske Knudsen, Thomas Wilmot, Kasper Lund, Stefania Serafin, and Hendrik Purwins. 2015. A Mobile Music Museum Experience for Children. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 36–37. http://doi.org/10.5281/zenodo.1178997
Abstract
Download PDF DOI
An interactive music instrument museum experience for children of 10-12 years is presented. Equipped with tablet devices, the children are sent on a treasure hunt. Based on given sound samples, the participants have to identify the right musical instrument (harpsichord, double bass, viola) out of an instrument collection. As the right instrument is located, a challenge of playing an application on the tablet is initiated. This application is an interactive digital representation of the found instrument, mimicking some of its key playing techniques, using a simplified scrolling on screen musical notation. The musical performance of the participant is graded on a point scale. After completion of the challenge, the participants’ performances of the three instruments are played back simultaneously, constituting a composition. A qualitative evaluation of the application in a focus group interview with school children revealed that the children were more engaged when playing with the interactive application than when only watching a music video.
@inproceedings{hpurwins2015, author = {J\"{o}rgensen, Mikkel and Knudsen, Aske and Wilmot, Thomas and Lund, Kasper and Serafin, Stefania and Purwins, Hendrik}, title = {A Mobile Music Museum Experience for Children}, pages = {36--37}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1178997}, url = {http://www.nime.org/proceedings/2015/nime2015_267.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/267/0267-file1.mov} }
Thomas Resch. 2015. RWA – A Game Engine for Real World Audio Games. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 392–395. http://doi.org/10.5281/zenodo.1179160
Abstract
Download PDF DOI
Audio guides and (interactive) sound walks have existed for decades. Even smartphone games taking place in the real world are no longer a novelty. But due to the lack of a sufficient middleware which fulfills the requirements for creating this software genre, artists, game developers and institutions such as museums are forced to implement rather similar functionality over and over again. This paper describes the basic principles of Real World Audio (RWA), an extendable audio game engine for targeting smartphone operating systems, which rolls out all functionality for the generation of the above-mentioned software genres. It combines the ability for building location-based audio walks and -guides with the components necessary for game development. Using either the smartphone sensors or an external sensor board for head tracking and gesture recognition, RWA allows developers to create audio walks, audio adventures and audio role playing games (RPG) outside in the real world.
@inproceedings{tresch2015, author = {Resch, Thomas}, title = {RWA -- A Game Engine for Real World Audio Games}, pages = {392--395}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179160}, url = {http://www.nime.org/proceedings/2015/nime2015_269.pdf} }
Arvid Jense and Hans Leeuw. 2015. WamBam: A case study in design for an electronic musical instrument for severely intellectually disabled users. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 74–77. http://doi.org/10.5281/zenodo.1179098
Abstract
Download PDF DOI
This paper looks at the design process of the WamBam, a self-contained electronic hand-drum meant for music therapy sessions with severely intellectually disabled clients. Using co-reflection with four music therapists and literature research, design guidelines related to this specific user group and context are formed. This leads to a concept whose most relevant aspects are discussed, before describing the user studies. Finally, the plan for the redesign is discussed. The WamBam has unique possibilities to deal with the low motor skills and cognitive abilities of severely intellectually disabled users, while music therapists benefit from the greater versatility and portability of this design compared to other musical instruments. A prototype was tested with twenty users. Participants proved to be triggered positively by the WamBam, but three limiting usability issues were found. These issues were used as the fundamentals for a second prototype. Music therapists confirm the value of the WamBam for their practice.
@inproceedings{ajense2015, author = {Jense, Arvid and Leeuw, Hans}, title = {WamBam: A case study in design for an electronic musical instrument for severely intellectually disabled users}, pages = {74--77}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179098}, url = {http://www.nime.org/proceedings/2015/nime2015_270.pdf} }
Florent Berthaut, David Coyle, James Moore, and Hannah Limerick. 2015. Liveness Through the Lens of Agency and Causality. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 382–386. http://doi.org/10.5281/zenodo.1179026
Abstract
Download PDF DOI
Liveness is a well-known problem with Digital Musical Instruments (DMIs). When used in performances, DMIs provide less visual information than acoustic instruments, preventing the audience from understanding how the musicians influence the music. In this paper, we look at this issue through the lens of causality. More specifically, we investigate the attribution of causality by an external observer to a performer, relying on the theory of apparent mental causation. We suggest that the perceived causality between a performer’s gestures and the musical result is central to liveness. We present a framework for assessing attributed causality and agency to a performer, based on a psychological theory which suggests three criteria for inferred causality. These criteria then provide the basis of an experimental study investigating the effect of visual augmentations on audience’s inferred causality. The results provide insights on how the visual component of performances with DMIs impacts the audience’s causal inferences about the performer. In particular we show that visual augmentations help highlight the influence of the musician when parts of the music are automated, and help clarify complex mappings between gestures and sounds. Finally we discuss the potential wider implications for assessing liveness in the design of new musical interfaces.
@inproceedings{hlimerick2015, author = {Berthaut, Florent and Coyle, David and Moore, James and Limerick, Hannah}, title = {Liveness Through the Lens of Agency and Causality}, pages = {382--386}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179026}, url = {http://www.nime.org/proceedings/2015/nime2015_272.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/272/0272-file1.mp4} }
Dianne Verdonk. 2015. Visible Excitation Methods: Energy and Expressiveness in Electronic Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 42–43. http://doi.org/10.5281/zenodo.1179188
Abstract
Download PDF DOI
In electronic music performance, a good relationship between what is visible and what is audible can contribute to a more successful way of conveying thought or feeling. This connection can be enhanced by putting visible energy into an electronic interface or instrument. This paper discusses the advantages and implementations of visible excitation methods, and how these could reinforce the bridge, in terms of expressiveness, between the performance of acoustic and electronic instruments.
@inproceedings{dverdonk2015, author = {Verdonk, Dianne}, title = {Visible Excitation Methods: Energy and Expressiveness in Electronic Music Performance}, pages = {42--43}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179188}, url = {http://www.nime.org/proceedings/2015/nime2015_273.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/273/0273-file1.m4v}, urlsuppl2 = {http://www.nime.org/proceedings/2015/273/0273-file2.m4v} }
Jeff Snyder, Ryan Johns, Charles Avis, Gene Kogan, and Axel Kilian. 2015. Machine Yearning: An Industrial Robotic Arm as a Performance Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 184–186. http://doi.org/10.5281/zenodo.1179180
Abstract
Download PDF DOI
This paper describes a project undertaken in the Spring of 2014 that sought to create an audio-visual performance using an industrial robotic arm. Some relevant examples of previous robotic art are discussed, and the design challenges posed by the unusual situation are explored. The resulting design solutions for the sound, robotic motion, and video projection mapping involved in the piece are explained, as well as the artistic reasoning behind those solutions. Where applicable, links to open source code developed for the project are provided.
@inproceedings{snyder2015, author = {Snyder, Jeff and Johns, Ryan and Avis, Charles and Kogan, Gene and Kilian, Axel}, title = {Machine Yearning: An Industrial Robotic Arm as a Performance Instrument}, pages = {184--186}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179180}, url = {http://www.nime.org/proceedings/2015/nime2015_275.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/275/0275-file1.mp3}, urlsuppl2 = {http://www.nime.org/proceedings/2015/275/0275-file2.mp4} }
Edgar Berdahl and Denis Huber. 2015. The Haptic Hand. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 303–306. http://doi.org/10.5281/zenodo.1179022
Abstract
Download PDF DOI
The haptic hand is a greatly simplified robotic hand that is designed to mirror the human hand and provide haptic force feedback for applications in music. The fingers of the haptic hand device are laid out to align with four of the fingers of the human hand. A key is placed on each of the fingers so that a human hand can perform music by interacting with the keys. The haptic hand is distinguished from other haptic keyboards in the sense that each finger is meant to stay with a particular key. The haptic hand promotes unencumbered interaction with the keys. The user can easily position a finger over a key and press downward to activate it—the user does not need to insert his or her fingers into an unwieldy exoskeleton or set of thimbles. An example video demonstrates some musical ideas afforded by this open-source software and hardware project.
@inproceedings{eberdahl2015, author = {Berdahl, Edgar and Huber, Denis}, title = {The Haptic Hand}, pages = {303--306}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179022}, url = {http://www.nime.org/proceedings/2015/nime2015_281.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/281/0281-file1.mov}, urlsuppl2 = {http://www.nime.org/proceedings/2015/281/0281-file2.mov} }
Sang Won Lee and Georg Essl. 2015. Web-Based Temporal Typography for Musical Expression and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 65–69. http://doi.org/10.5281/zenodo.1179114
Abstract
Download PDF DOI
This paper introduces programmable text rendering that enables temporal typography in web browsers. Typing is seen not only as a dynamic but also as an interactive process, facilitating both scripted and live musical expression in various contexts such as audio-visual performance using keyboards and live coding visualization. With programmable text animation, we turn plain text into a highly audiovisual medium and a visually expressive musical interface. We describe a concrete technical realization of the concept using the Web Audio API, WebGL, and GLSL shaders. We further show a number of examples that illustrate instances of the concept in various scenarios ranging from simple textual visualization to live coding environments. Lastly, we present an audiovisual music piece that involves live writing augmented by the visualization technique.
@inproceedings{slee2015, author = {Lee, Sang Won and Essl, Georg}, title = {Web-Based Temporal Typography for Musical Expression and Performance}, pages = {65--69}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179114}, url = {http://www.nime.org/proceedings/2015/nime2015_283.pdf} }
Eric Sheffield, Sile O’Modhrain, Michael Gould, and Brent Gillespie. 2015. The Pneumatic Practice Pad. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 231–234. http://doi.org/10.5281/zenodo.1179178
Abstract
Download PDF DOI
The Pneumatic Practice Pad is a commercially available 10” practice pad that has been modified to allow for tension changes in a matter of seconds using a small electric air pump. In this paper, we examine the rebound characteristics of the Pneumatic Practice Pad at various pressure presets and compare them to a sample of acoustic drums. We also review subjective feedback from participants in a playing test.
@inproceedings{esheffield2015, author = {Sheffield, Eric and O'Modhrain, Sile and Gould, Michael and Gillespie, Brent}, title = {The Pneumatic Practice Pad}, pages = {231--234}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179178}, url = {http://www.nime.org/proceedings/2015/nime2015_286.pdf} }
William Marley and Nicholas Ward. 2015. Gestroviser: Toward Collaborative Agency in Digital Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 140–143. http://doi.org/10.5281/zenodo.1179124
Abstract
Download PDF DOI
This paper describes a software extension to the Reactable entitled Gestroviser that was developed to explore musician-machine collaboration at the control signal level. The system functions by sampling a performer's input, processing or reshaping this sampled input, and then repeatedly replaying it. The degree to which the sampled control signal is processed during replay is adjustable in real-time by the manipulation of a continuous finger slider. The reshaping algorithm uses stochastic methods commonly used for MIDI note generation from a provided dataset. The reshaped signal therefore varies in an unpredictable manner. In this way the Gestroviser is a device to capture, reshape and replay an instrumental gesture. We describe the results of initial user testing of the system and discuss possible further development.
@inproceedings{wmarley2015, author = {Marley, William and Ward, Nicholas}, title = {Gestroviser: Toward Collaborative Agency in Digital Musical Instruments.}, pages = {140--143}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179124}, url = {http://www.nime.org/proceedings/2015/nime2015_287.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/287/0287-file1.mp4} }
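The capture, reshape, and replay loop described above can be sketched as follows. A simple random-walk perturbation scaled by an `amount` control stands in for the stochastic reshaping methods used in the paper; the signal values and ranges are assumptions.

```python
# Sketch: sample a control signal, perturb it by an adjustable amount, replay it.
import random

def reshape(control_signal, amount):
    """Return a perturbed copy of the sampled control signal; amount in [0, 1]
    sets how far the replayed gesture may drift from the original."""
    out, drift = [], 0.0
    for value in control_signal:
        drift += random.uniform(-amount, amount) * 0.1
        out.append(min(1.0, max(0.0, value + drift)))
    return out

captured = [0.1, 0.3, 0.6, 0.8, 0.5, 0.2]   # sampled performer input
for _ in range(3):                           # each replay varies unpredictably
    print(reshape(captured, amount=0.5))
```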
Warren Enström, Josh Dennis, Brian Lynch, and Kevin Schlei. 2015. Musical Notation for Multi-Touch Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 83–86. http://doi.org/10.5281/zenodo.1179056
Abstract
Download PDF DOI
This paper explores the creation and testing of a new system for notating physical actions on a surface. This notation is conceptualized through the medium of, and initially tested on, multi-touch interfaces. Existing methods of notating movement are reviewed, followed by a detailed explanation of our notation. User trials were carried out in order to test how effective this notation was, the results of which are explained. An analysis of the collected data follows, as well as criticisms of the notation and testing process.
@inproceedings{kschlei2015, author = {Enstr\"{o}m, Warren and Dennis, Josh and Lynch, Brian and Schlei, Kevin}, title = {Musical Notation for Multi-Touch Interfaces}, pages = {83--86}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179056}, url = {http://www.nime.org/proceedings/2015/nime2015_289.pdf} }
Brennon Bortz, Javier Jaimovich, and R. Benjamin Knapp. 2015. Emotion in Motion: A Reimagined Framework for Biomusical/Emotional Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 44–49. http://doi.org/10.5281/zenodo.1179034
Abstract
Download PDF DOI
Our experiment, Emotion in Motion, has amassed the world’s largest database of human physiology associated with emotion in response to the presentation of various selections of musical works. What began as a doctoral research study has grown to include the emotional responses to musical experience from over ten thousand participants across the world, from installations in Dublin, New York City, Norway, Singapore, the Philippines, and Taiwan. The most recent iteration of Emotion in Motion is currently underway in Taipei City, Taiwan. Preparation for this installation provided an opportunity to reimagine the architecture of Emotion in Motion, allowing for a wider range of potential applications than was possible with the original tools that drove the experiment. Now more than an experiment, Emotion in Motion is a framework for developing myriad emotional/musical/biomusical interactions with multiple co-located or remote participants. This paper describes the development of this open-source framework and includes discussion of its various components: hardware-agnostic sensor inputs, refined physiological signal processing tools, and a public database of data collected during various instantiations of applications built on the framework. We also discuss our ongoing work with this tool, and provide the reader with other potential applications that they might realize in using Emotion in Motion.
@inproceedings{bbortz2015, author = {Bortz, Brennon and Jaimovich, Javier and Knapp, {R. Benjamin}}, title = {Emotion in Motion: A Reimagined Framework for Biomusical/Emotional Interaction}, pages = {44--49}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179034}, url = {http://www.nime.org/proceedings/2015/nime2015_291.pdf} }
Hsin-Ming Lin and Chin-Ming Lin. 2015. Harmonic Intonation Trainer: An Open Implementation in Pure Data. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 38–39. http://doi.org/10.5281/zenodo.1179118
Abstract
Download PDF DOI
Pedagogical research demonstrates theories and practices of the perception or production of melodic or harmonic “intonation”, i.e. the realization of pitch accuracy. There are software and hardware tools with various functions to help students improve intonation. Nevertheless, they still miss features that could greatly benefit users. Even worse, they are not easy to revise. Most importantly, there should be more amusing and engaging interaction between a tuning trainer and a user, in which the roles of tuner and player can be exchanged. In this research, we implement an open-source program named “Harmonic Intonation Trainer” in Pure Data. It includes most of the essential elements of a smart tuner. A user can tune their pitch while optionally hearing (through earphones) the target pitch and other harmonic intervals in respective octaves. Moreover, in its interactive accompanist mode, the user’s input pitch serves as the reference frequency; the program follows their intonation to generate corresponding harmonic intervals. Additionally, users can straightforwardly edit all parameters and patches in Pure Data. Any adoption or revision is absolutely welcome. Finally, we will initiate further research to test and inspect experimental results from student orchestras, so that future versions can be made more sophisticated.
@inproceedings{hlin2015, author = {Lin, Hsin-Ming and Lin, Chin-Ming}, title = {Harmonic Intonation Trainer: An Open Implementation in Pure Data}, pages = {38--39}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179118}, url = {http://www.nime.org/proceedings/2015/nime2015_300.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/300/0300-file1.mp4} }
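The original trainer is a Pure Data patch; the sketch below only illustrates, in Python, the interval arithmetic behind the interactive accompanist mode, in which the user's input pitch serves as the reference frequency. The just-intonation ratios chosen here are an assumption, not necessarily those used in the patch.

```python
# Sketch: generate harmonic-interval frequencies from a detected input pitch.
JUST_RATIOS = {"unison": 1/1, "major_third": 5/4, "perfect_fifth": 3/2, "octave": 2/1}

def accompaniment_frequencies(input_hz, intervals=("perfect_fifth", "octave")):
    """Given the detected input pitch in Hz, return the frequencies to synthesize
    so that the generated intervals follow the user's intonation."""
    return {name: input_hz * JUST_RATIOS[name] for name in intervals}

print(accompaniment_frequencies(220.0))   # e.g. A3 as the reference pitch
```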
Jeronimo Barbosa, Joseph Malloch, Marcelo Wanderley, and Stéphane Huot. 2015. What does ’Evaluation’ mean for the NIME community? Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 156–161. http://doi.org/10.5281/zenodo.1179010
Abstract
Download PDF DOI
Evaluation has been suggested to be one of the main trends in current NIME research. However, the meaning of the term for the community may not be as clear as it seems. In order to explore this issue, we have analyzed all papers and posters published in the proceedings of the NIME conference from 2012 to 2014. For each publication that explicitly mentioned the term evaluation, we looked for: a) What targets and stakeholders were considered? b) What goals were set? c) What criteria were used? d) What methods were used? e) How long did the evaluation last? Results show different understandings of evaluation, with little consistency regarding the usage of the word. Surprisingly, in some cases not even basic information such as goals, criteria, and methods was provided. In this paper, we attempt to provide an idea of what evaluation means for the NIME community, pushing the discussion towards how we could make better use of evaluation in NIME design and what criteria should be used for each goal.
@inproceedings{jbarbosa2015, author = {Barbosa, Jeronimo and Malloch, Joseph and Wanderley, Marcelo and Huot, St\'ephane}, title = {What does 'Evaluation' mean for the NIME community?}, pages = {156--161}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179010}, url = {http://www.nime.org/proceedings/2015/nime2015_301.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/301/0301-file1.xlsx} }
Ian Hattwick and Marcelo Wanderley. 2015. Interactive Lighting in the Pearl: Considerations and Implementation. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 201–204. http://doi.org/10.5281/zenodo.1179080
Abstract
Download PDF DOI
The Pearl is a multi-modal computer interface initially conceived as an interactive prop for a multi-artistic theatrical performance. It is a spherical hand-held wireless controller embedded with various sensor technologies and interactive lighting. The lighting was a key conceptual component in the instrument’s creation both as a theatrical prop and also as an interface for musical performance as it helps to address conceptual challenges and opportunities posed by the instrument’s spherical form. This paper begins by providing a brief description of the Pearl and its use as a spherical instrument. We then discuss mapping the Pearl both to generate sound and control its interactive lighting, and identify different strategies for its use. Strategies we identify include feedback regarding performer gesture, information about the state of the instrument, and use as an aesthetic performance component.
@inproceedings{ihattwick2015, author = {Hattwick, Ian and Wanderley, Marcelo}, title = {Interactive Lighting in the Pearl: Considerations and Implementation}, pages = {201--204}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179080}, url = {http://www.nime.org/proceedings/2015/nime2015_302.pdf} }
Richard Graham and Brian Bridges. 2015. Managing Musical Complexity with Embodied Metaphors. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 103–106. http://doi.org/10.5281/zenodo.1179066
Abstract
Download PDF DOI
This paper presents the ideas and mapping strategies behind a performance system that uses a combination of motion tracking and feature extraction tools to manage complex multichannel audio materials for real-time music composition. The use of embodied metaphors within these mappings is seen as a means of managing the complexity of a musical performance across multiple modalities. In particular, we will investigate how these mapping strategies may facilitate the creation of performance systems whose accessibility and richness are enhanced by common integrating bases. A key focus for this work is the investigation of the embodied image schema theories of Lakoff and Johnson alongside similarly embodied metaphorical models within Smalley’s influential theory of electroacoustic music (spectromorphology). These metaphors will be investigated for their use as grounding structural components and dynamics for creative practices and musical interaction design. We argue that pairing metaphorical models of forces with environmental forms may have particular significance for the design of complex mappings for digital music performance.
@inproceedings{rgraham2015, author = {Graham, Richard and Bridges, Brian}, title = {Managing Musical Complexity with Embodied Metaphors}, pages = {103--106}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179066}, url = {http://www.nime.org/proceedings/2015/nime2015_303.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/303/0303-file1.mov}, urlsuppl2 = {http://www.nime.org/proceedings/2015/303/0303-file2.wav} }
Aura Pon, Johnty Wang, Laurie Radford, and Sheelagh Carpendale. 2015. Womba: A Musical Instrument for an Unborn Child. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 87–90. http://doi.org/10.5281/zenodo.1179156
Abstract
Download PDF DOI
This paper describes the motivation and process of developing a musical instrument for an unborn child. Well-established research shows that a fetus in the womb can respond to and benefit from stimuli from the outside world. A musical instrument designed for this unique context can leverage the power of this interaction. Two prototypes were constructed and tested during separate pregnancies; we present these experiences and identify the limitations of the sensor technology. We discuss our discoveries about design considerations and challenges for such an instrument, and raise thought-provoking questions that arise from its potential applications.
@inproceedings{apon2015, author = {Pon, Aura and Wang, Johnty and Radford, Laurie and Carpendale, Sheelagh}, title = {Womba: A Musical Instrument for an Unborn Child}, pages = {87--90}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179156}, url = {http://www.nime.org/proceedings/2015/nime2015_304.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/304/0304-file1.mp4} }
Adnan Marquez-Borbon and Paul Stapleton. 2015. Fourteen Years of NIME: The Value and Meaning of ‘Community’ in Interactive Music Research. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 307–312. http://doi.org/10.5281/zenodo.1179128
Abstract
Download PDF DOI
This paper examines the notion of community as commonly employed within NIME discourses. Our aim is to clarify and define the term through the community of practice framework. We argue that through its formal use and application, the notion of community becomes a significant space for the examination of emergent musical practices that could otherwise be overlooked. This paper defines community of practice, as originally developed in the social sciences by Lave and Wenger, and applies it within the NIME context through the examination of existing communities of practice such as the laptop performance community, laptop orchestras, as well as the Satellite CCRMA and Patchblocks communities.
@inproceedings{amarquezborbon2015, author = {Marquez-Borbon, Adnan and Stapleton, Paul}, title = {Fourteen Years of NIME: The Value and Meaning of `Community' in Interactive Music Research}, pages = {307--312}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179128}, url = {http://www.nime.org/proceedings/2015/nime2015_308.pdf} }
Charles Roberts, Matthew Wright, and JoAnn Kuchera-Morin. 2015. Beyond Editing: Extended Interaction with Textual Code Fragments. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 126–131. http://doi.org/10.5281/zenodo.1179164
Abstract
Download PDF DOI
We describe research extending the interactive affordances of textual code fragments in creative coding environments. In particular we examine the potential of source code both to display the state of running processes and also to alter state using means other than traditional text editing. In contrast to previous research that has focused on the inclusion of additional interactive widgets inside or alongside text editors, our research adds a parsing stage to the runtime evaluation of code fragments and imparts additional interactive capabilities on the source code itself. After implementing various techniques in the creative coding environment Gibber, we evaluate our research through a survey on the various methods of visual feedback provided by our research. In addition to results quantifying preferences for certain techniques over others, we found near unanimous support among survey respondents for including similar techniques in other live coding environments.
@inproceedings{croberts2015, author = {Roberts, Charles and Wright, Matthew and Kuchera-Morin, JoAnn}, title = {Beyond Editing: Extended Interaction with Textual Code Fragments}, pages = {126--131}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179164}, url = {http://www.nime.org/proceedings/2015/nime2015_310.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/310/0310-file1.mov} }
Alberto Novello and Antoni Rayzhekov. 2015. A prototype for pitched gestural sonification of surfaces using two contact microphones. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 170–173. http://doi.org/10.5281/zenodo.1179148
Abstract
Download PDF DOI
We present the prototype of a hybrid instrument, which uses two contact microphones to sonify the gestures of a player on a generic surface, while a gesture localization algorithm controls the pitch of the sonified output depending on the position of the gestures. To achieve the gesture localization we use a novel approach combining attack parametrization and template matching across the two microphone channels. With this method we can correctly localize 80 ± 9% of the percussive gestures. The user can assign determined pitches to specific positions and change the pitch palette in real time. The tactile feedback characteristic of every surface opens a set of new playing strategies and possibilities specific to any chosen object. The advantages of such a system are the affordable production, flexibility of concert location, object-specific musical instruments, portability, and easy setup.
@inproceedings{anovello2015, author = {Novello, Alberto and Rayzhekov, Antoni}, title = {A prototype for pitched gestural sonification of surfaces using two contact microphones}, pages = {170--173}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179148}, url = {http://www.nime.org/proceedings/2015/nime2015_311.pdf} }
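The localization described above combines attack parametrization with template matching across the two channels. As a rough, hypothetical illustration of the underlying idea only (not the authors' algorithm), the Python sketch below estimates a tap position from the onset-time difference between two contact-microphone channels and quantizes it onto a user-assigned pitch palette; the sample rate, mic spacing, propagation speed and threshold are placeholder values.

import numpy as np

SR = 44100                # sample rate in Hz (placeholder)
MIC_DISTANCE = 0.5        # assumed distance between the two contact mics (m)
WAVE_SPEED = 500.0        # assumed propagation speed in the surface (m/s)

def onset_index(channel, threshold=0.1):
    # first sample whose absolute amplitude exceeds the threshold, or None
    hits = np.flatnonzero(np.abs(channel) > threshold)
    return int(hits[0]) if hits.size else None

def localize_tap(left, right):
    # estimate tap position (0 = left mic, 1 = right mic) from the
    # onset-time difference between the two channels
    i_l, i_r = onset_index(left), onset_index(right)
    if i_l is None or i_r is None:
        return None
    dt = (i_l - i_r) / SR                  # arrival-time difference (s)
    offset = dt * WAVE_SPEED / 2.0         # signed distance from the midpoint (m)
    return float(np.clip(0.5 + offset / MIC_DISTANCE, 0.0, 1.0))

def position_to_pitch(pos, palette=(60, 62, 64, 67, 69)):
    # quantize the normalized position onto a user-assigned MIDI pitch palette
    return palette[min(int(pos * len(palette)), len(palette) - 1)]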
Ozgur Izmirli. 2015. Framework for Exploration of Performance Space. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 99–102. http://doi.org/10.5281/zenodo.1179094
Abstract
Download PDF DOI
This paper presents a framework for the analysis and exploration of performance space. It enables the user to visualize performances in relation to other performances of the same piece based on a set of features extracted from audio. A performance space is formed from a set of performances through spectral analysis, alignment, dimensionality reduction and visualization. Operation of the system is demonstrated initially with synthetic MIDI performances and then with a case study of recorded piano performances.
@inproceedings{oizmirli2015, author = {Izmirli, Ozgur}, title = {Framework for Exploration of Performance Space}, pages = {99--102}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179094}, url = {http://www.nime.org/proceedings/2015/nime2015_312.pdf} }
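A minimal sketch of the last stages of such a pipeline, assuming librosa, scikit-learn and matplotlib are available: each performance is reduced to a feature summary and embedded in two dimensions for plotting. The alignment step described in the abstract is omitted here, and the file names are hypothetical.

import librosa
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

files = ["perf_01.wav", "perf_02.wav", "perf_03.wav"]   # hypothetical recordings of one piece

def summarize(path):
    # load one performance and summarize it as a single feature vector
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

features = np.stack([summarize(f) for f in files])
coords = PCA(n_components=2).fit_transform(features)    # a 2-D "performance space"

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y_), name in zip(coords, files):
    plt.annotate(name, (x, y_))
plt.title("Performance space (illustrative sketch)")
plt.show()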
Timothy J. Barraclough, Dale A. Carnegie, and Ajay Kapur. 2015. Musical Instrument Design Process for Mobile Technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 289–292. http://doi.org/10.5281/zenodo.1179012
Abstract
Download PDF DOI
This paper presents the iterative design process, based upon multiple rounds of user studies, that guided the design of a novel social music application, Pyxis Minor. The application was designed around the concept of democratising electronic music creation and performance. This required user studies to inform and drive the development process in order to create a novel musical interface that can be enjoyed by users regardless of prior musical training.
@inproceedings{tbarraclough2015, author = {Barraclough, {Timothy J.} and Carnegie, {Dale A.} and Kapur, Ajay}, title = {Musical Instrument Design Process for Mobile Technology}, pages = {289--292}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179012}, url = {http://www.nime.org/proceedings/2015/nime2015_313.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/313/0313-file1.mp4} }
Rhys Duindam, Diemo Schwarz, and Hans Leeuw. 2015. Tingle: A Digital Music Controller Re-Capturing the Acoustic Instrument Experience. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 219–222. http://doi.org/10.5281/zenodo.1179054
Abstract
Download PDF DOI
Tingle is a new digital music controller that attempts to recapture acoustic touch and feel, while also offering new opportunities for expressive play. Tingle resembles a pin-art toy that has been made interactive through a new sensing technology, with added haptic feedback and motion control. It pushes back, vibrates, and warps the sound in response to the musician’s nuanced input. In this article, Tingle is discussed in combination with CataRT.
@inproceedings{rduindam2015, author = {Duindam, Rhys and Schwarz, Diemo and Leeuw, Hans}, title = {Tingle: A Digital Music Controller Re-Capturing the Acoustic Instrument Experience}, pages = {219--222}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179054}, url = {http://www.nime.org/proceedings/2015/nime2015_319.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/319/0319-file1.mp4} }
Steven Gelineck, Dannie Korsgaard, and Morten Büchert. 2015. Stage- vs. Channel-strip Metaphor — Comparing Performance when Adjusting Volume and Panning of a Single Channel in a Stereo Mix. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 343–346. http://doi.org/10.5281/zenodo.1179064
Abstract
Download PDF DOI
This study compares the stage metaphor and the channel strip metaphor in terms of performance. Traditionally, music mixing consoles employ a channel strip control metaphor for adjusting parameters such as volume and panning of each track. An alternative control metaphor, the so-called stage metaphor, lets the user adjust volume and panning by positioning tracks relative to a virtual listening position. In this study, test participants are given the task of adjusting the volume and panning of one channel (in mixes consisting of three channels) in order to replicate a series of simple pre-rendered mixes. They do this using (1) a small physical mixing controller and (2) an iPad app, which implements a simple stage metaphor interface. We measure how accurately they are able to replicate mixes in terms of volume and panning and how fast they are at doing so. Results reveal that performance is surprisingly similar, and we are thus not able to detect any significant difference in performance between the two interfaces. Qualitative data, however, suggests that the stage metaphor is largely favoured for its intuitive interaction, confirming earlier studies.
@inproceedings{sgelineck2015, author = {Gelineck, Steven and Korsgaard, Dannie and B{\"u}chert, Morten}, title = {Stage- vs. Channel-strip Metaphor --- Comparing Performance when Adjusting Volume and Panning of a Single Channel in a Stereo Mix}, pages = {343--346}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179064}, url = {http://www.nime.org/proceedings/2015/nime2015_320.pdf} }
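One plausible reading of a stage-metaphor mapping, sketched in Python: a track's position relative to a virtual listening position is converted into a gain (from distance) and a pan value (from angle). The distance law and constants are illustrative assumptions, not the mapping used in the study.

import math

def stage_to_mix(track_xy, listener_xy=(0.0, 0.0), ref_dist=0.2):
    # map a track's stage position to (gain, pan):
    # gain falls off with distance from the listening position (clamped at ref_dist),
    # pan comes from the horizontal angle (-1 = hard left, +1 = hard right)
    dx = track_xy[0] - listener_xy[0]
    dy = track_xy[1] - listener_xy[1]
    dist = max(math.hypot(dx, dy), ref_dist)
    gain = ref_dist / dist
    pan = max(-1.0, min(1.0, math.atan2(dx, dy) / (math.pi / 2)))
    return gain, pan

# example: a track slightly to the right of and in front of the listener
print(stage_to_mix((0.3, 0.5)))   # roughly (0.34, 0.34)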
J. Cecilia Wu, Yoo Hsiu Yeh, Romain Michon, Nathan Weitzner, Jonathan Abel, and Matthew Wright. 2015. Tibetan Singing Prayer Wheel: A Hybrid Musical-Spiritual Instrument Using Gestural Control. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 91–94. http://doi.org/10.5281/zenodo.1179196
Abstract
Download PDF DOI
This paper presents the Tibetan Singing Prayer Wheel, a hand-held, wireless, sensor-based musical instrument with a human-computer interface that simultaneously processes vocals and synthesizes sound based on the performer’s hand gestures with a one-to-many mapping strategy. A physical model simulates the singing bowl, while a modal reverberator and a delay-and-window effect process the performer’s vocals. This system is designed for an electroacoustic vocalist interested in using a solo instrument to achieve performance goals that would normally require multiple instruments and activities.
@inproceedings{jwu2015, author = {Wu, {J. Cecilia} and Yeh, {Yoo Hsiu} and Michon, Romain and Weitzner, Nathan and Abel, Jonathan and Wright, Matthew}, title = {Tibetan Singing Prayer Wheel: A Hybrid Musical-Spiritual Instrument Using Gestural Control}, pages = {91--94}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179196}, url = {http://www.nime.org/proceedings/2015/nime2015_322.pdf} }
Ivan Franco and Marcelo Wanderley. 2015. Practical Evaluation of Synthesis Performance on the Beaglebone Black. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 223–226. http://doi.org/10.5281/zenodo.1179062
Abstract
Download PDF DOI
The proliferation of, and easy access to, a new breed of ARM-based single-board computers has promoted increased usage of these platforms in the creation of self-contained Digital Music Instruments. These directly incorporate all of the necessary processing power for tasks such as sensor signal acquisition, control data processing and audio synthesis. They can also run full Linux operating systems, through which domain-specific languages for audio computing offer a low entry barrier for the community. In computer music, the adoption of these platforms will naturally depend on their ability to withstand the demanding computing tasks associated with high-quality audio synthesis, yet there are few reports quantifying this for practical purposes. This paper presents the results of performance tests of SuperCollider running on the BeagleBone Black, a popular mid-tier single-board computer, while performing commonly used audio synthesis techniques.
@inproceedings{ifranco2015, author = {Franco, Ivan and Wanderley, Marcelo}, title = {Practical Evaluation of Synthesis Performance on the Beaglebone Black}, pages = {223--226}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179062}, url = {http://www.nime.org/proceedings/2015/nime2015_323.pdf} }
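The paper benchmarks SuperCollider synths on the BeagleBone Black itself; as a loose, platform-agnostic analogue of that kind of measurement, the NumPy sketch below times an offline additive-synthesis workload and reports how far above (or below) real time it runs on whatever machine executes it. The oscillator count and block size are arbitrary choices, not the paper's test cases.

import time
import numpy as np

SR = 44100
BLOCK = 64          # samples per block
N_OSC = 100         # sine oscillators in the test voice

def render_block(phases, freqs):
    # render one block of N_OSC summed sine oscillators
    t = np.arange(BLOCK) / SR
    block = np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)
    phases += 2 * np.pi * freqs * BLOCK / SR   # advance phases in place
    return block

freqs = np.linspace(100.0, 2000.0, N_OSC)
phases = np.zeros(N_OSC)

n_blocks = 2000
start = time.perf_counter()
for _ in range(n_blocks):
    render_block(phases, freqs)
elapsed = time.perf_counter() - start

audio_time = n_blocks * BLOCK / SR
print(f"rendered {audio_time:.2f}s of audio in {elapsed:.2f}s "
      f"(~{audio_time / elapsed:.1f}x real time for {N_OSC} oscillators)")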
Courtney Brown, Sharif Razzaque, and Garth Paine. 2015. Rawr! A Study in Sonic Skulls: Embodied Natural History. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 5–10. http://doi.org/10.5281/zenodo.1179036
Abstract
Download PDF DOI
Lambeosaurine hadrosaurs are duck-billed dinosaurs known for their large head crests, which researchers hypothesize were resonators for vocal calls. This paper describes the motivation and process of iteratively designing a musical instrument and interactive sound installation based on imagining the sounds of this extinct dinosaur. We used scientific research as a starting point to create a means of sound production and resonator, using a 3D model obtained from Computed Tomography (CT) scans of a Corythosaurus skull and an endocast of its crest and nasal passages. Users give voice to the dinosaur by blowing into a mouthpiece, exciting a larynx mechanism and resonating the sound through the hadrosaur’s full-scale nasal cavities and skull. This action allows an embodied glimpse into an ancient past. Users know the dinosaur through the controlled exhalation of their breath, how the compression of the lungs leads to a whisper or a roar.
@inproceedings{cbrown2015, author = {Brown, Courtney and Razzaque, Sharif and Paine, Garth}, title = {Rawr! A Study in Sonic Skulls: Embodied Natural History}, pages = {5--10}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179036}, url = {http://www.nime.org/proceedings/2015/nime2015_325.pdf} }
Muhammad Hafiz Wan Rosli, Karl Yerkes, Matthew Wright, et al. 2015. Ensemble Feedback Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 144–149. http://doi.org/10.5281/zenodo.1179170
Abstract
Download PDF DOI
We document results from exploring ensemble feedback in loosely-structured electroacoustic improvisations. A conceptual justification for the explorations is provided, in addition to discussion of tools and methodologies. Physical configurations of intra-ensemble feedback networks are documented, along with qualitative analysis of their effectiveness.
@inproceedings{kyerkes2015, author = {Rosli, {Muhammad Hafiz Wan} and Yerkes, Karl and Wright, Matthew and Wood, Timothy and Wolfe, Hannah and Roberts, Charlie and Haron, Anis and Estrada, {Fernando Rincon}}, title = {Ensemble Feedback Instruments}, pages = {144--149}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179170}, url = {http://www.nime.org/proceedings/2015/nime2015_329.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/329/0329-file1.mp4} }
Jeff Gregorio, David Rosen, Michael Caro, and Youngmoo E. Kim. 2015. Descriptors for Perception of Quality in Jazz Piano Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 327–328. http://doi.org/10.5281/zenodo.1179072
Abstract
Download PDF DOI
Quality assessment of jazz improvisation is a multi-faceted, high-level cognitive task routinely performed by educators in university jazz programs and other discriminating music listeners. In this pilot study, we present a novel dataset of 88 MIDI jazz piano improvisations with ratings of creativity, technical proficiency, and aesthetic appeal provided by four jazz experts, and we detail the design of a feature set that can represent some of the rhythmic, melodic, harmonic, and other expressive attributes humans recognize as salient in assessment of performance quality. Inherent subjectivity in these assessments is inevitable, yet the recognition of performance attributes by which humans perceive quality has wide applicability to related tasks in the music information retrieval (MIR) community and jazz pedagogy. Preliminary results indicate that several musicologically informed features of relatively low computational complexity perform reasonably well in predicting performance quality labels via ordinary least squares regression.
@inproceedings{jgregorio2015, author = {Gregorio, Jeff and Rosen, David and Caro, Michael and Kim, {Youngmoo E.}}, title = {Descriptors for Perception of Quality in Jazz Piano Improvisation}, pages = {327--328}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179072}, url = {http://www.nime.org/proceedings/2015/nime2015_331.pdf} }
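A minimal sketch of the prediction step, assuming scikit-learn; the feature columns and ratings below are random placeholders standing in for the paper's descriptors and expert labels, so only the ordinary-least-squares-with-cross-validation shape of the experiment is illustrated.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_performances = 88
# hypothetical per-performance descriptors (e.g. note density, syncopation,
# pitch-class entropy, dynamic range) and hypothetical 1-7 expert ratings
X = rng.normal(size=(n_performances, 4))
y = rng.uniform(1, 7, size=n_performances)

model = LinearRegression()                    # ordinary least squares
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())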
Adnan Marquez-Borbon. 2015. But Does it Float? Reflections on a Sound Art Ecological Intervention. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 335–338. http://doi.org/10.5281/zenodo.1179126
Abstract
Download PDF DOI
This paper discusses the particular aesthetic and contextual considerations that emerged from the design process of a site-specific sound art installation, the Wave Duet. It proposes that, beyond the initial motivation provided by new technologies and their artistic potential, profound artistic considerations drive the development and design of a work in unique ways. Thus, in the case of the Wave Duet, the buoys produced were prompted by investigating the relationship between sonic objects and natural phenomena. As a result, the mappings and the physical and sound designs directly reflect these concerns. Finally, it is also suggested that unintended issues may emerge during the course of development and further inform how the work is perceived in a broader sense.
@inproceedings{amarquezborbonb2015, author = {Marquez-Borbon, Adnan}, title = {But Does it Float? Reflections on a Sound Art Ecological Intervention}, pages = {335--338}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179126}, url = {http://www.nime.org/proceedings/2015/nime2015_333.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/333/0333-file1.mp4} }
Matthew Blessing and Edgar Berdahl. 2015. Textural Crossfader. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 180–181. http://doi.org/10.5281/zenodo.1179032
Abstract
Download PDF DOI
A LapBox derivative, the Textural Crossfader is a keyboard-based embedded acoustic instrument, which sits comfortably across the performer’s lap and radiates sound out of integrated stereo speakers. The performer controls the sound by manipulating the keys on a pair of mini-keyboard interfaces. A unique one-to-one mapping enables the performer to precisely crossfade among a set of looped audio wave files, creating a conveniently portable system for navigating through a complex timbre space. The axes of the timbre space can be reconfigured by replacing the wave files stored in the flash memory.
@inproceedings{mblessing2015, author = {Blessing, Matthew and Berdahl, Edgar}, title = {Textural Crossfader}, pages = {180--181}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179032}, url = {http://www.nime.org/proceedings/2015/nime2015_337.pdf}, urlsuppl1 = {http://www.nime.org/proceedings/2015/337/0337-file1.mp4}, urlsuppl2 = {http://www.nime.org/proceedings/2015/337/0337-file2.mov} }
Andres Cabrera. 2015. Serverless and Peer-to-peer distributed interfaces for musical control. Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, pp. 355–358. http://doi.org/10.5281/zenodo.1179040
Abstract
Download PDF DOI
This paper presents the concept and implementation of a decentralized, server-less and peer-to-peer network for the interchange of musical control interfaces and data using the OSC protocol. Graphical control elements that form the control interface can be freely edited and exchanged to and from any device in the network, doing away with the need for a separate server or editing application. All graphical elements representing the same parameter will have their value synchronized through the network mechanisms. Practical considerations surrounding the implementation of this idea are discussed, such as automatic layout of control and editing interfaces on mobile touch-screen devices and auto-discovery of network nodes. Finally, GoOSC, a mobile application implementing these ideas, is presented.
@inproceedings{acabrera2015, author = {Cabrera, Andres}, title = {Serverless and Peer-to-peer distributed interfaces for musical control}, pages = {355--358}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Berdahl, Edgar and Allison, Jesse}, year = {2015}, month = may, publisher = {Louisiana State University}, address = {Baton Rouge, Louisiana, USA}, issn = {2220-4806}, doi = {10.5281/zenodo.1179040}, url = {http://www.nime.org/proceedings/2015/nime2015_351.pdf} }
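A minimal sketch of the value-synchronization idea, assuming the python-osc package (the paper does not specify an implementation language): every node runs the same script, stores incoming parameter values, and forwards them to its peers so that all widgets bound to the same OSC address stay in sync. Peer discovery, interface editing and layout are omitted, and the peer addresses are hypothetical.

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

PEERS = [("192.168.0.11", 9000), ("192.168.0.12", 9000)]    # other devices on the network
clients = [SimpleUDPClient(host, port) for host, port in PEERS]
local_state = {}

def on_param(address, value):
    # store an incoming value (one float per message assumed) and forward it
    # to every peer; skip forwarding if we already hold this value, to avoid loops
    if local_state.get(address) == value:
        return
    local_state[address] = value
    for client in clients:
        client.send_message(address, value)

dispatcher = Dispatcher()
dispatcher.set_default_handler(on_param)

server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()    # every node runs this same loop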
2014
Luisa Pereira Hors. 2014. The Well-Sequenced Synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 88–89. http://doi.org/10.5281/zenodo.1178806
Abstract
Download PDF DOI
The Well-Sequenced Synthesizer is a series of sequencers that create music in dialog with the user. Through the sequencers’ physical interfaces, users can control music theory-based generative algorithms. This series, a work in progress, currently comprises three sequencers. The first one, called The Counterpointer, takes a melody input from the user and responds by generating voices based on the rules of eighteenth-century counterpoint. The second one is based on a recent treatise on harmony and counterpoint by music theorist Dmitri Tymoczko: El Ordenador lets users explore a set of features of tonality by constraining randomly generated music according to one or more of them. El Ordenador gives the user less control than The Counterpointer, but more than La Mecánica, the third sequencer in the series. La Mecánica plays back the sequences generated by El Ordenador using a punch-card reading music box mechanism. It makes the digital patterns visible and tactile, and links them back to the physical world.
@inproceedings{lpereira2014, author = {Hors, Luisa Pereira}, title = {The Well-Sequenced Synthesizer}, pages = {88--89}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178806}, url = {http://www.nime.org/proceedings/2014/nime2014_2.pdf} }
Timothy Polashek and Brad Meyer. 2014. Engravings for Prepared Snare Drum, iPad, and Computer. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 82–83. http://doi.org/10.5281/zenodo.1178907
Abstract
Download PDF DOI
This paper describes the technologies, collaborative processes, and artistic intents of the musical composition Engravings for Prepared Snare Drum, iPad, and Computer, which was composed by Timothy Polashek for percussionist Brad Meyer using a jointly created electroacoustic and interactive musical instrument. During performance, the percussionist equally manipulates and expresses through two surfaces, an iPad displaying an interactive touch screen and a snare drum augmented with various foreign objects, including a contact microphone adhered to the drumhead’s surface. A computer program created for this composition runs on a laptop computer in front of the percussionist. The software captures sound from the contact microphone and transforms this sound through audio signal processing controlled by the performer’s gestures on the iPad. The computer screen displays an animated graphic score, as well as the current states of iPad controls and audio signal processing, for the performer. Many compositional and technological approaches used in this project pay tribute to composer John Cage, since the premiere performance of Engravings for Prepared Snare Drum, iPad, and Computer took place in 2012, the centennial celebration of Cage’s birth year.
@inproceedings{ptimothy2014, author = {Polashek, Timothy and Meyer, Brad}, title = {Engravings for Prepared Snare Drum, iPad, and Computer}, pages = {82--83}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178907}, url = {http://www.nime.org/proceedings/2014/nime2014_254.pdf} }
Mo Zareei, Ajay Kapur, and Dale A. Carnegie. 2014. Rasper: a Mechatronic Noise-Intoner. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 473–478. http://doi.org/10.5281/zenodo.1178995
Abstract
Download PDF DOI
Over the past few decades, there has been an increasing number of musical instruments and works of sound art that incorporate robotics and mechatronics. This paper proposes a new approach to classifying such works and focuses on those whose ideological roots can be sought in Luigi Russolo’s noise-intoners (intonarumori). It presents a discussion of works in which mechatronics is used to investigate new sonic territories traditionally perceived as “extra-musical”, and introduces Rasper: a new mechatronic noise-intoner that features an electromechanical apparatus to create noise physically, while regulating it rhythmically and timbrally.
@inproceedings{mzareei2014, author = {Zareei, Mo and Kapur, Ajay and Carnegie, Dale A.}, title = {Rasper: a Mechatronic Noise-Intoner}, pages = {473--478}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178995}, url = {http://www.nime.org/proceedings/2014/nime2014_268.pdf} }
Chet Udell and James Paul Sain. 2014. eMersion | Sensor-controlled Electronic Music Modules & Digital Data Workstation. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 130–133. http://doi.org/10.5281/zenodo.1178971
Abstract
Download PDF DOI
In our current era, where smartphones are commonplace and buzzwords like “the internet of things,” “wearable tech,” and “augmented reality” are ubiquitous, translating performance gestures into data and intuitively mapping it to control musical/visual parameters in the realm of computing should be trivial; but it isn’t. Technical barriers still persist that limit this activity to exclusive groups capable of learning skillsets far removed from one’s musical craft. These skills include programming, soldering, microprocessors, wireless protocols, and circuit design. Those of us whose creative activity is centered in NIME have to become polyglots of many disciplines to achieve our work. In the NIME community, it’s unclear that we should even draw distinctions between ’artist’ and ’technician’, because these skillsets have become integral to our creative practice. However, what about the vast communities of musicians, composers, and artists who want to leverage sensing to take their craft into new territory with no background in circuits, soldering, embedded programming, and sensor function? eMersion, a plug-and-play, modular, wireless alternative solution for creating NIMEs will be presented. It enables one to bypass the technical hurdles listed above in favor of immediate experimentation with musical practice and wireless sensing. A unique software architecture will also be unveiled that enables one to quickly and intuitively process and map unpredictable numbers and types of wireless data streams, the Digital Data Workstation.
@inproceedings{cudell2014, author = {Udell, Chet and Sain, James Paul}, title = {eMersion | Sensor-controlled Electronic Music Modules \& Digital Data Workstation}, pages = {130--133}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178971}, url = {http://www.nime.org/proceedings/2014/nime2014_272.pdf} }
Tim Murray-Browne and Mark Plumbley. 2014. Harmonic Motion: A Toolkit for Processing Gestural Data for Interactive Sound. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 213–216. http://doi.org/10.5281/zenodo.1178887
Abstract
Download PDF DOI
We introduce Harmonic Motion, a free open source toolkit for artists, musicians and designers working with gestural data. Extracting musically useful features from captured gesture data can be challenging, with projects often requiring bespoke processing techniques developed through iterations of tweaking equations involving a number of constant values, sometimes referred to as ‘magic numbers’. Harmonic Motion provides a robust interface for rapid prototyping of patches to process gestural data and a framework through which approaches may be encapsulated, reused and shared with others. In addition, we describe our design process in which both personal experience and a survey of potential users informed a set of specific goals for the software.
@inproceedings{tmurraybrowne2014, author = {Murray-Browne, Tim and Plumbley, Mark}, title = {Harmonic Motion: A Toolkit for Processing Gestural Data for Interactive Sound}, pages = {213--216}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178887}, url = {http://www.nime.org/proceedings/2014/nime2014_273.pdf} }
Simon Lui. 2014. A Real Time Common Chord Progression Guide on the Smartphone for Jamming Pop Song on the Music Keyboard. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 98–101. http://doi.org/10.5281/zenodo.1178855
Abstract
Download PDF DOI
Pop music jamming on the keyboard requires extensive musical knowledge: the musician needs to understand and memorize the behavior of each chord in different keys. However, most simple pop music follows a common chord progression pattern, which applies across all 12 keys. We designed an app that reduces the difficulty of music jamming on the keyboard by using this pattern. The app displays the current chord as a Roman numeral and suggests the expected next chord in an easy-to-understand way on a smartphone. This work investigates the human-computer interaction perspective of music performance. We use a smartphone app as a bridge that helps musicians react faster when jamming by transforming complex music knowledge into a simple, unified and easy-to-understand format. Experimental results show that the app can help non-keyboardist musicians learn pop music jamming, and that it assists keyboardists in transposing keys and playing in keys with many sharps and flats. We will use the same interface design to guide users in playing other chord progressions, such as jazz progressions.
@inproceedings{slui2014, author = {Lui, Simon}, title = {A Real Time Common Chord Progression Guide on the Smartphone for Jamming Pop Song on the Music Keyboard}, pages = {98--101}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178855}, url = {http://www.nime.org/proceedings/2014/nime2014_275.pdf} }
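A minimal sketch of the core lookup, assuming the common I-V-vi-IV pop loop: given the current chord as a Roman numeral and the key's tonic pitch class, it names the chord root and suggests the next chord. The app's actual chord vocabulary and key handling are richer than this.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
PROGRESSION = ["I", "V", "vi", "IV"]                  # common pop loop
DEGREE_OFFSETS = {"I": 0, "IV": 5, "V": 7, "vi": 9}   # semitones above the tonic

def chord_root(key_root, numeral):
    # root note name of a Roman-numeral chord in the given key
    # (key_root is a pitch class 0-11, e.g. 0 for C, 7 for G)
    return NOTE_NAMES[(key_root + DEGREE_OFFSETS[numeral]) % 12]

def next_chord(current_numeral):
    # suggest the next chord in the common progression
    i = PROGRESSION.index(current_numeral)
    return PROGRESSION[(i + 1) % len(PROGRESSION)]

# example: currently playing V in the key of G (pitch class 7)
current = "V"
print("current:", current, chord_root(7, current))                            # D
print("next:   ", next_chord(current), chord_root(7, next_chord(current)))    # vi -> E (minor)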
Thor Magnusson. 2014. Improvising with the Threnoscope: Integrating Code, Hardware, GUI, Network, and Graphic Scores. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 19–22. http://doi.org/10.5281/zenodo.1178857
Abstract
Download PDF DOI
Live coding emphasises improvisation. It is an art practice that merges the act of musical composition and performance into a public act of projected writing. This paper introduces the Threnoscope system, which includes a live coding micro-language for drone-based microtonal composition. The paper discusses the aims and objectives of the system, elucidates the design decisions, and introduces in particular the code score feature present in the Threnoscope. The code score is a novel element in the design of live coding systems allowing for improvisation through a graphic score, rendering a visual representation of past and future events in a real-time performance. The paper demonstrates how the system’s methods can be mapped ad hoc to GUI- or hardware-based control.
@inproceedings{tmagnusson2014, author = {Magnusson, Thor}, title = {Improvising with the Threnoscope: Integrating Code, Hardware, GUI, Network, and Graphic Scores}, pages = {19--22}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178857}, url = {http://www.nime.org/proceedings/2014/nime2014_276.pdf} }
Sebastian Trump and Jamie Bullock. 2014. Orphion: A Gestural Multi-Touch Instrument for the iPad. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 159–162. http://doi.org/10.5281/zenodo.1178963
Abstract
Download PDF DOI
This paper describes the concept and design of Orphion, a new digital musical instrument based on the Apple iPad. We begin by outlining primary challenges associated with DMI design, focussing on the specific problems Orphion seeks to address such as requirements for haptic feedback from the device. Orphion achieves this by incorporating an interaction model based on tonally tuned virtual “pads” in user-configurable layouts, where the pitch and timbre associated with each pad depends on the initial point of touch, touch point size and size variation, and position after the initial touch. These parameters control a physical model for sound generation with visual feedback provided via the iPad display. We present findings from the research and development process including design revisions made in response to user testing. Finally, conclusions are made about the effectiveness of the instrument based on large-scale user feedback.
@inproceedings{strump2014, author = {Trump, Sebastian and Bullock, Jamie}, title = {Orphion: A Gestural Multi-Touch Instrument for the iPad}, pages = {159--162}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178963}, url = {http://www.nime.org/proceedings/2014/nime2014_277.pdf} }
Michael Krzyzaniak, Julie Akerly, Matthew Mosher, Muharrem Yildirim, and Garth Paine. 2014. Separation: Short Range Repulsion. Implementation of an Automated Aesthetic Synchronization System for a Dance Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 303–306. http://doi.org/10.5281/zenodo.1178841
Abstract
Download PDF DOI
This paper describes the implementation of a digital audio / visual feedback system for an extemporaneous dance performance. The system was designed to automatically synchronize aesthetically with the dancers. The performance was premiered at the Slingshot festival in Athens, Georgia on March 9, 2013.
@inproceedings{mkrzyzaniak2014, author = {Krzyzaniak, Michael and Akerly, Julie and Mosher, Matthew and Yildirim, Muharrem and Paine, Garth}, title = {Separation: Short Range Repulsion. Implementation of an Automated Aesthetic Synchronization System for a Dance Performance.}, pages = {303--306}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178841}, url = {http://www.nime.org/proceedings/2014/nime2014_279.pdf} }
Yang Kyu Lim and Woon Seung Yeo. 2014. Smartphone-based Music Conducting. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 573–576. http://doi.org/10.5281/zenodo.1178851
Abstract
Download PDF DOI
Smartphone-based music conducting is a convenient and effective approach to conducting practice that aims to overcome the practical limitations of traditional conducting practice and to provide an enhanced user experience compared to previous virtual conducting examples. This work introduces the v-Maestro, a smartphone application for music conducting. Powered by the device’s gyroscope, the v-Maestro analyzes conducting motions, allowing the user not only to control the tempo but also to simulate “cueing” for different instruments. Results from user tests show that, in spite of certain ergonomic problems, conducting practice with the v-Maestro is more satisfactory than traditional methods and that the system has strong potential as a conducting practice tool.
@inproceedings{ylim2014, author = {Lim, Yang Kyu and Yeo, Woon Seung}, title = {Smartphone-based Music Conducting}, pages = {573--576}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178851}, url = {http://www.nime.org/proceedings/2014/nime2014_281.pdf} }
Jun-qi Deng, Francis Chi Moon Lau, Ho-Cheung Ng, Yu-Kwong Kwok, Hung-Kwan Chen, and Yu-heng Liu. 2014. WIJAM: A Mobile Collaborative Improvisation Platform under Master-players Paradigm. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 407–410. http://doi.org/10.5281/zenodo.1178746
Abstract
Download PDF DOI
Music jamming is an extremely difficult task for musical novices. To extend this meaningful and highly enjoyable activity to a larger group of participants, we present WIJAM, a mobile application for an ad-hoc group of musical novices to perform improvisation along with a music master. In this “master-players” paradigm, the master offers a music backing, orchestrates the musical flow, and gives feedback to the players; the players improvise by tapping and sketching on their smartphones. We believe this paradigm can make a significant contribution by enabling a group of novices with no instrumental training to play music together with decent musical results.
@inproceedings{jdeng2014, author = {Deng, Jun-qi and Lau, Francis Chi Moon and Ng, Ho-Cheung and Kwok, Yu-Kwong and Chen, Hung-Kwan and Liu, Yu-heng}, title = {WIJAM: A Mobile Collaborative Improvisation Platform under Master-players Paradigm}, pages = {407--410}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178746}, url = {http://www.nime.org/proceedings/2014/nime2014_284.pdf} }
Tim Murray-Browne, Dom Aversano, Susanna Garcia, et al. 2014. The Cave of Sounds: An Interactive Installation Exploring How We Create Music Together. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 307–310. http://doi.org/10.5281/zenodo.1178885
Abstract
Download PDF DOI
The Cave of Sounds is an interactive sound installation made up of new musical instruments. Exploring what it means to create instruments together within the context of NIME and the maker scene, each instrument was created by an individual but with the aim of forming a part of this new ensemble over ten months, with the final installation debuting at the Barbican in London in August 2013. In this paper, we describe how ideas of prehistoric collective music making inspired and guided this participatory musical work, both in terms of how it was created and the audience experience of musical collaboration we aimed to create in the final installation. Following a detailed description of the installation itself, we reflect on the successes, lessons and future challenges of encouraging creative musical collaboration among members of an audience.
@inproceedings{tmurraybrowne12014, author = {Murray-Browne, Tim and Aversano, Dom and Garcia, Susanna and Hobbes, Wallace and Lopez, Daniel and Sendon, Tadeo and Tigas, Panagiotis and Ziemianin, Kacper and Chapman, Duncan}, title = {The Cave of Sounds: An Interactive Installation Exploring How We Create Music Together}, pages = {307--310}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178885}, url = {http://www.nime.org/proceedings/2014/nime2014_288.pdf} }
Kristian Nymoen, Sichao Song, Yngve Hafting, and Jim Torresen. 2014. Funky Sole Music: Gait Recognition and Adaptive Mapping. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 299–302. http://doi.org/10.5281/zenodo.1178895
Abstract
Download PDF DOI
We present Funky Sole Music, a musical interface employing a sole embedded with three force sensitive resistors in combination with a novel algorithm for continuous movement classification. A heuristics-based music engine has been implemented, allowing users to control high-level parameters of the musical output. This provides a greater degree of control to users without musical expertise than they get with traditional media players. By using the movement classification result not as a direct control action in itself, but as a way to change mapping spaces and musical sections, the control possibilities offered by the simple interface are greatly increased.
@inproceedings{knymoen12014, author = {Nymoen, Kristian and Song, Sichao and Hafting, Yngve and Torresen, Jim}, title = {Funky Sole Music: Gait Recognition and Adaptive Mapping}, pages = {299--302}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178895}, url = {http://www.nime.org/proceedings/2014/nime2014_289.pdf} }
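A minimal sketch of the adaptive-mapping idea: a crude classifier over a window of the three force-sensing-resistor readings selects a mapping preset, and a raw sensor value then provides continuous control inside that preset. The thresholds, classes and presets are placeholder assumptions, not the paper's classifier or music engine.

import numpy as np

MAPPINGS = {
    "standing": {"filter_cutoff": 400,  "drum_intensity": 0.2},
    "walking":  {"filter_cutoff": 1200, "drum_intensity": 0.6},
    "running":  {"filter_cutoff": 4000, "drum_intensity": 1.0},
}

def classify_gait(fsr_window):
    # very crude movement classifier: look at how much the summed foot pressure
    # fluctuates over a window of (n_frames, 3) normalized FSR readings
    total = np.asarray(fsr_window).sum(axis=1)
    activity = total.std()
    if activity < 0.05:
        return "standing"
    return "walking" if activity < 0.3 else "running"

def control_frame(fsr_window, heel_value):
    # pick the mapping space from the movement class, then use a raw sensor
    # value for continuous control inside that space
    preset = MAPPINGS[classify_gait(fsr_window)]
    return {**preset, "drum_intensity": preset["drum_intensity"] * heel_value}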
Florian Heller and Jan Borchers. 2014. Visualizing Song Structure on Timecode Vinyls. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 66–69. http://doi.org/10.5281/zenodo.1178796
Abstract
Download PDF DOI
Although it is an analog technology, the turntable is still valued by many DJs as an irreplaceable performance tool. Digital vinyl systems combine the distinct haptic nature of the analog turntable with the advantages of digital media. They use special records containing a digital timecode which is then processed by a computer and mapped to properties like playback speed and direction. These records, however, are generic and, in contrast to traditional vinyl, do not provide visual cues representing the structure of the track. We present a system that augments the timecode record with a visualization of song information such as artist, title, and track length, but also with a waveform that allows the DJ to navigate visually to a certain beat. We conducted a survey examining the acceptance of such tools in the DJ community, as well as a user study with professional DJs. The system was widely accepted as a tool in the DJ community and received very positive feedback during observational mixing sessions with four professional DJs.
@inproceedings{fheller2014, author = {Heller, Florian and Borchers, Jan}, title = {Visualizing Song Structure on Timecode Vinyls}, pages = {66--69}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178796}, url = {http://www.nime.org/proceedings/2014/nime2014_290.pdf} }
Martin Marier. 2014. Designing Mappings for the Sponge: Towards Spongistic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 525–528. http://doi.org/10.5281/zenodo.1178863
Abstract
Download PDF DOI
The development of the cushion-like musical interface called the sponge started about seven years ago. Since then, it has been used extensively to perform in various settings. The sponge itself is described, but the main focus is on the evolution of the mapping strategies that are used. The author reviews the guidelines proposed by other researchers and explains how they were concretely applied with the sponge. He concludes that no single strategy constitutes a solution to the issue of mapping and that musical compositions are complex entities that require the use of a multitude of mapping strategies in parallel. It is hoped that the mappings described, combined with new strategies, will eventually lead to the emergence of a musical language that is idiomatic to the sponge.
@inproceedings{mmarier2014, author = {Marier, Martin}, title = {Designing Mappings for the Sponge: Towards Spongistic Music}, pages = {525--528}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178863}, url = {http://www.nime.org/proceedings/2014/nime2014_292.pdf} }
Joachim Goßmann and Max Neupert. 2014. Musical Interface to Audiovisual Corpora of Arbitrary Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 151–154. http://doi.org/10.5281/zenodo.1178772
Abstract
Download PDF DOI
We present an instrument for audio-visual performance that allows the performer to recombine sounds from a collection of sampled media through concatenative synthesis. A three-dimensional distribution derived from feature analysis becomes accessible through a theremin-inspired interface, allowing the player to shift from exploration and intuitive navigation toward embodied performance on a granular level. In our example we illustrate this concept by using the audiovisual recording of an instrumental performance as a source. Our system provides an alternative interface to the musical instrument’s audiovisual corpus: as the instrument’s sound and behavior is accessed in ways that are not possible on the instrument itself, the resulting non-linear playback of the grains generates an instant remix in a cut-up aesthetic. The presented instrument is a human-computer interface that employs the structural outcome of machine analysis, accessing audiovisual corpora in the context of a musical performance.
@inproceedings{mneupert2014, author = {Go{\ss}mann, Joachim and Neupert, Max}, title = {Musical Interface to Audiovisual Corpora of Arbitrary Instruments}, pages = {151--154}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178772}, url = {http://www.nime.org/proceedings/2014/nime2014_296.pdf} }
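A minimal sketch of the grain-selection step, assuming librosa and scikit-learn: a corpus recording is sliced into grains, each grain is summarized by a feature vector, the vectors are reduced to a three-dimensional space, and a nearest-neighbour query from a (here simply given) theremin-style hand position returns the grain to play. The segmentation, feature set and video handling of the actual system are more involved, and the file name is hypothetical.

import librosa
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

y, sr = librosa.load("instrument_take.wav", sr=None)    # hypothetical corpus recording
GRAIN = 4096
grains = [y[i:i + GRAIN] for i in range(0, len(y) - GRAIN, GRAIN)]

# one MFCC summary vector per grain, reduced to a navigable 3-D space
feats = np.stack([librosa.feature.mfcc(y=g, sr=sr, n_mfcc=13).mean(axis=1) for g in grains])
space = PCA(n_components=3).fit_transform(feats)
index = NearestNeighbors(n_neighbors=1).fit(space)

def play_position(hand_xyz):
    # return the grain whose feature-space coordinates are closest to the hand position
    _, idx = index.kneighbors(np.atleast_2d(hand_xyz))
    return grains[int(idx[0, 0])]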
Ilias Bergstrom and Joan Llobera. 2014. OSC-Namespace and OSC-State: Schemata for Describing the Namespace and State of OSC-Enabled Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 311–314. http://doi.org/10.5281/zenodo.1178712
Abstract
Download PDF DOI
We introduce two complementary OSC schemata for two contexts of use. The first is for the complete description of an OSC namespace: detailing the full set of messages each OSC-enabled system can receive or send, alongside choice metadata we deem necessary to make full use of each system’s description. The second context of use is a snapshot (partial or full) of the system’s state. We also relate our proposed schemata to the current state of the art, and how using these resolves issues that were left pending with previous research.
@inproceedings{ibergstrom2014, author = {Bergstrom, Ilias and Llobera, Joan}, title = {OSC-Namespace and OSC-State: Schemata for Describing the Namespace and State of OSC-Enabled Systems}, pages = {311--314}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178712}, url = {http://www.nime.org/proceedings/2014/nime2014_300.pdf} }
Emily Robertson and Enrico Bertelli. 2014. Conductive Music: Teaching Innovative Interface Design and Composition Techniques with Open-Source Hardware. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 517–520. http://doi.org/10.5281/zenodo.1178921
Abstract
Download PDF DOI
Through examining the decisions and sequences of presenting a multi-media instrument fabrication program to students, this paper seeks to uncover practical elements of best practice and possible improvements in science and music education. The Conductive Music program incorporates public engagement principles, open-source hardware, DIY ethos, contemporary composition techniques, and educational activities for creative and analytical thinking. These activities impart positive skills through multi-media content delivery for all learning types. The program is designed to test practices for engaging at-risk young people from urban areas in the construction and performance of new electronic instruments. The goal is to open up the world of electronic music performance to a new generation of young digital artists and to replace negative social behaviours with creative outlets for expression through technology and performance. This paper highlights the key elements designed to deliver the program’s agenda and examines the ways in which these aims were realised or tested in the classroom.
@inproceedings{ebertelli2014, author = {Robertson, Emily and Bertelli, Enrico}, title = {Conductive Music: Teaching Innovative Interface Design and Composition Techniques with Open-Source Hardware}, pages = {517--520}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178921}, url = {http://www.nime.org/proceedings/2014/nime2014_301.pdf} }
Tom Mudd, Simon Holland, Paul Mulholland, and Nick Dalton. 2014. Dynamical Interactions with Electronic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 126–129. http://doi.org/10.5281/zenodo.1178881
Abstract
Download PDF DOI
This paper examines electronic instruments that are based on dynamical systems, where the behaviour of the instrument depends not only upon the immediate input to the instrument, but also on the past input. Five instruments are presented as case studies: Michel Waisvisz’ Cracklebox, Dylan Menzies’ Spiro, no-input mixing desk, the author’s Feedback Joypad, and microphone-loudspeaker feedback. Links are suggested between the sonic affordances of each instrument and the dynamical mechanisms embedded in them. This is discussed in the context of contemporary, material-oriented approaches to composition and particularly to free improvisation where elements such as unpredictability and instability are often of interest, and the process of exploration and discovery is an important part of the practice. Links are also made with the use of dynamical interactions in computer games to produce situations in which slight variations in the timing and ordering of inputs can lead to very different outcomes, encouraging similarly explorative approaches.
@inproceedings{tmudd2014, author = {Mudd, Tom and Holland, Simon and Mulholland, Paul and Dalton, Nick}, title = {Dynamical Interactions with Electronic Instruments}, pages = {126--129}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178881}, url = {http://www.nime.org/proceedings/2014/nime2014_302.pdf} }
Mason Bretan and Gil Weinberg. 2014. Chronicles of a Robotic Musical Companion. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 315–318. http://doi.org/10.5281/zenodo.1178724
Abstract
Download PDF DOI
As robots become more pervasive in the world we think about how this might influence the way in which people experience music. We introduce the concept of a "robotic musical companion" (RMC) in the form of Shimi, a smart-phone enabled five degree-of-freedom (DoF) robotic platform. We discuss experiences individuals tend to have with music as consumers and performers and explore how these experiences can be modified, aided, or improved by the inherent synergies between a human and robot. An overview of several applications developed for Shimi is provided. These applications place Shimi in various roles and enable human-robotic interactions (HRIs) that are highlighted by more personable social communications using natural language and other forms of communication.
@inproceedings{mbretan2014, author = {Bretan, Mason and Weinberg, Gil}, title = {Chronicles of a Robotic Musical Companion}, pages = {315--318}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178724}, url = {http://www.nime.org/proceedings/2014/nime2014_303.pdf} }
Stefania Serafin, Stefano Trento, Francesco Grani, Hannah Perner-Wilson, Seb Madgwick, and Tom Mitchell. 2014. Controlling Physically Based Virtual Musical Instruments Using The Gloves. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 521–524. http://doi.org/10.5281/zenodo.1178937
Abstract
Download PDF DOI
In this paper we propose an empirical method to develop mapping strategies between a gesture-based interface (the Gloves) and physically based sound synthesis models. An experiment was performed in order to investigate which kinds of gestures listeners associate with synthesised sounds produced using physical models, corresponding to three categories of sound: sustained, iterative and impulsive. The results of the experiment show that listeners perform similar gestures when controlling sounds from the different categories. We used such gestures in order to create the mapping strategy between the Gloves and the physically based synthesis engine.
@inproceedings{sserafin2014, author = {Serafin, Stefania and Trento, Stefano and Grani, Francesco and Perner-Wilson, Hannah and Madgwick, Seb and Mitchell, Tom}, title = {Controlling Physically Based Virtual Musical Instruments Using The Gloves}, pages = {521--524}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178937}, url = {http://www.nime.org/proceedings/2014/nime2014_307.pdf} }
Chet Gnegy. 2014. CollideFx: A Physics-Based Audio Effects Processor. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 427–430. http://doi.org/10.5281/zenodo.1178770
Abstract
Download PDF DOI
CollideFx is a real-time audio effects processor that integrates the physics of real objects into the parameter space of the signal chain. Much like a traditional signal chain, the user can choose a series of effects and exercise real-time control over their various parameters. In this work, we introduce a means of creating tree-like signal graphs that dynamically change their routing in response to changes in the location of the unit generators in a virtual space. Signals are rerouted using a crossfading scheme that avoids the harsh clicks and pops associated with amplitude discontinuities. The unit generators are easily controllable using a click-and-drag interface that responds using familiar physics. CollideFx brings the interactivity of a video game together with the purpose of creating interesting and complex audio effects. With little difficulty, users can craft custom effects, or alternatively, can fling a unit generator into a cluster of several others to obtain more surprising results, letting the physics engine do the decision making.
@inproceedings{cgnegy12014, author = {Gnegy, Chet}, title = {CollideFx: A Physics-Based Audio Effects Processor}, pages = {427--430}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178770}, url = {http://www.nime.org/proceedings/2014/nime2014_308.pdf} }
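The crossfading idea mentioned in the CollideFx abstract (blending the old and new signal path so that rerouting does not produce an amplitude discontinuity) can be sketched in a few lines of numpy. This is a generic equal-power crossfade for illustration, not the CollideFx implementation.

import numpy as np

def crossfade_reroute(old_output, new_output, fade_len=1024):
    """Blend two signal paths with complementary equal-power gains to
    avoid the clicks caused by an abrupt switch between them.
    Generic illustration only; not the CollideFx code."""
    n = min(len(old_output), len(new_output))
    out = new_output[:n].copy()
    fade_len = min(fade_len, n)
    t = np.linspace(0.0, np.pi / 2.0, fade_len)
    fade_out = np.cos(t)          # gain applied to the old path
    fade_in = np.sin(t)           # gain applied to the new path
    out[:fade_len] = (old_output[:fade_len] * fade_out +
                      new_output[:fade_len] * fade_in)
    return out

# Example: switch a tone from a dry path to a crudely "distorted" path.
sr = 44100
x = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
y = crossfade_reroute(x, np.tanh(4.0 * x))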
Timothy J Barraclough, Jim Murphy, and Ajay Kapur. 2014. New Open-Source Interfaces for Group Based Participatory Performance of Live Electronic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 155–158. http://doi.org/10.5281/zenodo.1178708
Abstract
Download PDF DOI
This paper describes the Modulome System, a new hardware interface set for group-based electronic music performance and installation. Taking influence from a variety of established interfaces, the Modulome is a modular controller with application-dependent use cases.
@inproceedings{tbarraclough2014, author = {Barraclough, Timothy J and Murphy, Jim and Kapur, Ajay}, title = {New Open-Source Interfaces for Group Based Participatory Performance of Live Electronic Music}, pages = {155--158}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178708}, url = {http://www.nime.org/proceedings/2014/nime2014_309.pdf} }
Otso Lähdeoja. 2014. Structure-Borne Sound and Aurally Active Spaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 319–322. http://doi.org/10.5281/zenodo.1178843
Abstract
Download PDF DOI
This paper provides a report of a research effort to transform architectural and scenographic surfaces into sound sources and use them in artistic creation. Structure-borne sound drivers are employed to induce sound into the solid surfaces, making them vibrate and emit sound. The sound waves can be perceived both via the aural (airborne diffusion) as well as the tactile (structure-borne diffusion) senses. The paper describes the main challenges encountered in the use of structure-borne sound technology, as well as the current results in overcoming them. Two completed artistic projects are presented in order to illustrate the creative possibilities enabled by the research.
@inproceedings{olahdeoja2014, author = {L\"ahdeoja, Otso}, title = {Structure-Borne Sound and Aurally Active Spaces}, pages = {319--322}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178843}, url = {http://www.nime.org/proceedings/2014/nime2014_310.pdf} }
D. J. Valtteri Wikström. 2014. Musical Composition by Regressional Mapping of Physiological Responses to Acoustic Features. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 549–552. http://doi.org/10.5281/zenodo.1178981
Abstract
Download PDF DOI
In this paper an emotionally justified approach for controlling sound with physiology is presented. Measurements of listeners’ physiology, while they are listening to recorded music of their own choosing, are used to create a regression model that predicts features extracted from music with the help of the listeners’ physiological response patterns. This information can be used as a control signal to drive musical composition and the synthesis of new sounds; an approach involving concatenative sound synthesis is suggested. An evaluation study was conducted to test the feasibility of the model. A multiple linear regression model and an artificial neural network model were evaluated against a constant regressor, or dummy model. The dummy model outperformed the other models in prediction accuracy, but the artificial neural network model achieved significant correlations between predictions and target values for many acoustic features.
@inproceedings{dwikstrom2014, author = {Wikstr\"om, D. J. Valtteri}, title = {Musical Composition by Regressional Mapping of Physiological Responses to Acoustic Features}, pages = {549--552}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178981}, url = {http://www.nime.org/proceedings/2014/nime2014_311.pdf} }
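The general modelling strategy described in the abstract above (fitting a regression from physiological measurements to acoustic features, then using predictions as a control signal) can be sketched with scikit-learn on synthetic data. The data, dimensionality and model choice here are placeholders, not the study's material.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows are analysis frames, columns are
# physiological channels (e.g. heart rate, skin conductance, ...).
physio = rng.normal(size=(500, 4))

# Synthetic acoustic-feature targets (e.g. loudness, brightness),
# made to depend weakly on the physiology plus noise.
weights = rng.normal(size=(4, 2))
acoustic = physio @ weights + 0.5 * rng.normal(size=(500, 2))

# Multiple linear regression from physiology to acoustic features.
model = LinearRegression().fit(physio, acoustic)

# Predicted acoustic features for a new physiological frame can then be
# used as a control signal for synthesis (e.g. concatenative synthesis).
new_frame = rng.normal(size=(1, 4))
print(model.predict(new_frame))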
Jason Long. 2014. The Robotic Taishogoto: A New Plug ’n Play Desktop Performance Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 479–482. http://doi.org/10.5281/zenodo.1178853
Abstract
Download PDF DOI
This paper describes the Robotic Taishogoto, a new robotic musical instrument for performance, musical installations, and educational purposes. The primary goal of its creation is to provide an easy-to-use, cost-effective, compact and integrated acoustic instrument which is fully automated and controllable via standard MIDI commands. This paper describes the technical details of its design and implementation including the mechanics, electronics and firmware. It also outlines various control methodologies and use cases for the instrument.
@inproceedings{jlong2014, author = {Long, Jason}, title = {The Robotic Taishogoto: A New Plug 'n Play Desktop Performance Instrument}, pages = {479--482}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178853}, url = {http://www.nime.org/proceedings/2014/nime2014_313.pdf} }
Paul Mathews, Ness Morris, Jim Murphy, Ajay Kapur, and Dale Carnegie. 2014. Tangle: a Flexible Framework for Performance with Advanced Robotic Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 187–190. http://doi.org/10.5281/zenodo.1178867
Abstract
Download PDF DOI
Networked musical performance using networks of computers for live performance of electronic music has evolved over a number of decades but has tended to rely upon customized and highly specialized software designed specifically for particular artistic goals. This paper presents Tangle, a flexible software framework designed to provide a basis for performance on any number of distinct instruments. The framework includes features to simplify the control of robotic instruments, such as automated latency compensation and self-testing, while being simple to extend in order to implement device-specific logic and failsafes. Tangle has been tested on two diverse systems incorporating a number of unique and complex mechatronic instruments.
@inproceedings{pmathews2014, author = {Mathews, Paul and Morris, Ness and Murphy, Jim and Kapur, Ajay and Carnegie, Dale}, title = {Tangle: a Flexible Framework for Performance with Advanced Robotic Musical Instruments}, pages = {187--190}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178867}, url = {http://www.nime.org/proceedings/2014/nime2014_314.pdf} }
Ohad Fried, Zeyu Jin, Reid Oda, and Adam Finkelstein. 2014. AudioQuilt: 2D Arrangements of Audio Samples using Metric Learning and Kernelized Sorting. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 281–286. http://doi.org/10.5281/zenodo.1178766
Abstract
Download PDF DOI
The modern musician enjoys access to a staggering number of audio samples. Composition software can ship with many gigabytes of data, and there are many more to be found online. However, conventional methods for navigating these libraries are still quite rudimentary, and often involve scrolling through alphabetical lists. We present a system for sample exploration that allows audio clips to be sorted according to user taste, and arranged in any desired 2D formation such that similar samples are located near each other. Our method relies on two advances in machine learning. First, metric learning allows the user to shape the audio feature space to match their own preferences. Second, kernelized sorting finds an optimal arrangement for the samples in 2D. We demonstrate our system with two new interfaces for exploring audio samples, and evaluate the technology qualitatively and quantitatively via a pair of user studies.
@inproceedings{ofried2014, author = {Fried, Ohad and Jin, Zeyu and Oda, Reid and Finkelstein, Adam}, title = {AudioQuilt: {2D} Arrangements of Audio Samples using Metric Learning and Kernelized Sorting}, pages = {281--286}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178766}, url = {http://www.nime.org/proceedings/2014/nime2014_315.pdf} }
Daniel Gábana Arellano and Andrew McPherson. 2014. Radear: A Tangible Spinning Music Sequencer. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 84–85. http://doi.org/10.5281/zenodo.1178704
Abstract
Download PDF DOI
This paper presents a new circular tangible interface where one or multiple users can collaborate and interact in real time by placing and moving passive wooden pucks on a transparent tabletop in order to create music. The design encourages physical intuition and visual feedback on the music being created. An arm with six optical sensors rotates beneath a transparent surface, triggering sounds based on the objects placed above. The interface’s simplicity and tangibility make it easy to learn and suitable for a broad range of users.
@inproceedings{dgabanaarellano2014, author = {Arellano, Daniel G\'abana and McPherson, Andrew}, title = {Radear: A Tangible Spinning Music Sequencer}, pages = {84--85}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178704}, url = {http://www.nime.org/proceedings/2014/nime2014_324.pdf} }
Axel Berndt, Nadia Al-Kassab, and Raimund Dachselt. 2014. TouchNoise: A Particle-based Multitouch Noise Modulation Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 323–326. http://doi.org/10.5281/zenodo.1178714
Abstract
Download PDF DOI
We present the digital musical instrument TouchNoise that is based on multitouch interaction with a particle system. It implements a novel interface concept for modulating noise spectra. Each particle represents a sine oscillator that moves through the two-dimensional frequency and stereo panning domain via Brownian motion. Its behavior can be affected by multitouch gestures allowing the shaping of the resulting sound in many different ways. Particles can be dragged, attracted, repelled, accentuated, and their autonomous behavior can be manipulated. In this paper we introduce the concepts behind this instrument, describe its implementation and discuss the sonic design space emerging from it.
@inproceedings{aberndt2014, author = {Berndt, Axel and Al-Kassab, Nadia and Dachselt, Raimund}, title = {TouchNoise: A Particle-based Multitouch Noise Modulation Interface}, pages = {323--326}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178714}, url = {http://www.nime.org/proceedings/2014/nime2014_325.pdf} }
Yoshihito Nakanishi, Seiichiro Matsumura, and Chuichi Arakawa. 2014. B.O.M.B. -Beat Of Magic Box -: Stand-Alone Synthesizer Using Wireless Synchronization System For Musical Session and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 80–81. http://doi.org/10.5281/zenodo.1178889
Abstract
Download PDF DOI
In this paper, the authors introduce a stand-alone synthesizer, “B.O.M.B. -Beat Of Magic Box –”, for electronic music sessions and live performances. “B.O.M.B.” has a wireless communication system that synchronizes musical scale and tempo (BPM) between multiple devices. In addition, participants can switch master/slave roles between performers immediately. Our primary motivation is to provide musicians and non-musicians with opportunities to experience a collaborative electronic music performance. Here, the hardware and interaction design of the device is presented. To date, numerous collaborative musical instruments have been developed in the electronic music field [1][2][3]. The authors are interested in the formation of musical sessions using stand-alone devices and in the leader/follower relationship within musical sessions. The authors specify three important requirements of instrument design for musical sessions: (1) Simple Interface: an interface that enables performers to control three sound elements (pitch, timbre, and amplitude) with simple interactions. (2) Portable Stand-alone System: a system that runs stand-alone (with sound generators, speakers, and batteries); because musical sessions can be improvised at any place and time, the authors consider portability essential in designing musical instruments for sessions. (3) Wireless Synchronization: a system that supports ensembles by automatically synchronizing tempo (BPM) and tonality between multiple devices over the air. In addition, performers can switch master/slave roles smoothly, analogous to the leader/follower relationship during a musical session. The authors gave ten live performances using this device at domestic and international events, and in these events they confirmed that the proposed wireless synchronization system worked stably, suggesting the practicality of wireless synchronization. In the future, the authors will evaluate the device in terms of its stability in multi-performer musical sessions.
@inproceedings{ynakanishi2014, author = {Nakanishi, Yoshihito and Matsumura, Seiichiro and Arakawa, Chuichi}, title = {B.O.M.B. -Beat Of Magic Box -: Stand-Alone Synthesizer Using Wireless Synchronization System For Musical Session and Performance}, pages = {80--81}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178889}, url = {http://www.nime.org/proceedings/2014/nime2014_327.pdf} }
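As a loose illustration of the kind of wireless synchronization the abstract above describes (a master device sharing tempo and scale with follower devices), here is a minimal UDP broadcast sketch in Python. The message format and port number are invented for this example and do not describe the actual B.O.M.B. protocol or hardware.

import json
import socket

PORT = 9999  # arbitrary port chosen for this sketch

def broadcast_session_state(bpm, scale_root, scale_name):
    """Master role: broadcast the shared tempo and scale to the local network."""
    msg = json.dumps({"bpm": bpm, "root": scale_root, "scale": scale_name})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg.encode("utf-8"), ("255.255.255.255", PORT))

def listen_for_session_state():
    """Slave/follower role: block until a session state arrives and adopt it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", PORT))
        data, _addr = sock.recvfrom(1024)
        return json.loads(data.decode("utf-8"))

if __name__ == "__main__":
    broadcast_session_state(bpm=120, scale_root="C", scale_name="minor pentatonic")

Swapping which device calls the broadcast function corresponds to the master/slave role change mentioned in the abstract.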
Graham Wakefield, Charlie Roberts, Matthew Wright, Timothy Wood, and Karl Yerkes. 2014. Collaborative Live-Coding with an Immersive Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 505–508. http://doi.org/10.5281/zenodo.1178975
Abstract
Download PDF DOI
We discuss live coding audio-visual worlds for large-scale virtual reality environments. We describe Alive, an instrument allowing multiple users to develop sonic and visual behaviors of agents in a virtual world, through a browserbased collaborative code interface, accessible while being immersed through spatialized audio and stereoscopic display. The interface adds terse syntax for query-based precise or stochastic selections and declarative agent manipulations, lazily-evaluated expressions for synthesis and behavior, event handling, and flexible scheduling.
@inproceedings{gwakefield2014, author = {Wakefield, Graham and Roberts, Charlie and Wright, Matthew and Wood, Timothy and Yerkes, Karl}, title = {Collaborative Live-Coding with an Immersive Instrument}, pages = {505--508}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178975}, url = {http://www.nime.org/proceedings/2014/nime2014_328.pdf} }
Sangwon Suh, Jeong-seob Lee, and Woon Seung Yeo. 2014. A Gesture Detection with Guitar Pickup and Earphones. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 90–93. http://doi.org/10.5281/zenodo.1178949
Abstract
Download PDF DOI
For the electric guitar, which plays a large role in modern pop music, the effects unit (or effector) is no longer optional. Many guitarists already ‘play’ their effects with their instrument. However, it is not easy to control these effects while playing, so many new controllers and interfaces have been devised; one example is the pedal-type effects unit, which lets players control effects with a foot while their hands are busy. Some players mount a controller on their guitars. However, our instruments are too precious to drill a hole in, and the stage is too big for a player who is just kneeling behind the pedals and turning knobs. In this paper, we design a new control system for electric guitar and bass: a gesture-based sound control system that controls electric guitar effects (such as delay time, reverberation or pitch) with the player’s hand gestures. The system utilizes a TAPIR signal to trace the player’s hand motion. A TAPIR signal is an acoustic signal that can rarely be perceived by most people, because its frequency lies between 18 kHz and 22 kHz [TAPIR article]. The system consists of a signal generator, an electric guitar and a sound processor. From the generator attached to the player’s hand, the TAPIR signal is transmitted to the magnetic pickup on the electric guitar. The player’s gesture is captured as a Doppler shift, and the processor converts this value into a sound-effect parameter. In this paper, we focus on demonstrating the signal transfer in the aforementioned system.
@inproceedings{ssuh2014, author = {Suh, Sangwon and Lee, Jeong-seob and Yeo, Woon Seung}, title = {A Gesture Detection with Guitar Pickup and Earphones}, pages = {90--93}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178949}, url = {http://www.nime.org/proceedings/2014/nime2014_333.pdf} }
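The sensing principle described in the abstract above (a near-ultrasonic signal picked up by the guitar pickup, with hand motion read as a Doppler shift) can be illustrated with a small numpy sketch that measures the deviation of the strongest spectral peak from the transmitted frequency. The pilot frequency, frame size and parameter mapping are illustrative assumptions, not the authors' implementation.

import numpy as np

SR = 44100          # sample rate
PILOT = 20000.0     # transmitted near-ultrasonic tone (Hz), illustrative value

def doppler_shift(frame, sr=SR, pilot=PILOT, band=300.0):
    """Estimate the Doppler shift of the pilot tone in one audio frame:
    find the strongest FFT bin within +/- band Hz of the pilot and
    return its deviation from the pilot frequency in Hz."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    mask = (freqs > pilot - band) & (freqs < pilot + band)
    peak = freqs[mask][np.argmax(spectrum[mask])]
    return peak - pilot

# Example: a frame containing a slightly downshifted pilot (hand moving away).
t = np.arange(4096) / SR
frame = np.sin(2 * np.pi * (PILOT - 40.0) * t)
shift = doppler_shift(frame)                              # roughly -40 Hz
effect_amount = np.clip(0.5 + shift / 200.0, 0.0, 1.0)    # map to a 0..1 parameter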
Florent Berthaut and Jarrod Knibbe. 2014. Wubbles: A Collaborative Ephemeral Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 499–500. http://doi.org/10.5281/zenodo.1178716
Abstract
Download PDF DOI
This paper presents a collaborative digital musical instrument that uses the ephemeral and physical properties of soap bubbles to explore the complexity layers and oscillating parameters of electronic (bass) music. This instrument, called Wubbles, aims at encouraging both individual and collaborative musical manipulations.
@inproceedings{fberthaut2014, author = {Berthaut, Florent and Knibbe, Jarrod}, title = {Wubbles: A Collaborative Ephemeral Musical Instrument}, pages = {499--500}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178716}, url = {http://www.nime.org/proceedings/2014/nime2014_334.pdf} }
Laurel Pardue, Dongjuan Nian, Christopher Harte, and Andrew McPherson. 2014. Low-Latency Audio Pitch Tracking: A Multi-Modal Sensor-Assisted Approach. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 54–59. http://doi.org/10.5281/zenodo.1178899
Abstract
Download PDF DOI
This paper presents a multi-modal approach to musical instrument pitch tracking combining audio and position sensor data. Finger location on a violin fingerboard is measured using resistive sensors, allowing rapid detection of approximate pitch. The initial pitch estimate is then used to restrict the search space of an audio pitch tracking algorithm. Most audio-only pitch tracking algorithms face a fundamental tradeoff between accuracy and latency, with longer analysis windows producing better pitch estimates at the cost of noticeable lag in a live performance environment. Conversely, sensor-only strategies struggle to achieve the fine pitch accuracy a human listener would expect. By combining the two approaches, high accuracy and low latency can be simultaneously achieved.
@inproceedings{lpardue2014, author = {Pardue, Laurel and Nian, Dongjuan and Harte, Christopher and McPherson, Andrew}, title = {Low-Latency Audio Pitch Tracking: A Multi-Modal Sensor-Assisted Approach}, pages = {54--59}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178899}, url = {http://www.nime.org/proceedings/2014/nime2014_336.pdf} }
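A compact numpy sketch of the general strategy in the abstract above: a coarse pitch estimate (here standing in for the fingerboard sensor reading) restricts the lag range searched by an autocorrelation pitch tracker, so a short analysis window can still resolve the pitch. The window length, tolerance and autocorrelation method are illustrative choices, not the authors' algorithm.

import numpy as np

def pitch_from_audio(frame, sr, sensor_hz, tolerance=0.1):
    """Refine a coarse pitch estimate (e.g. from fingerboard sensors) by
    searching autocorrelation lags only near that estimate.
    tolerance is the fraction of sensor_hz allowed on either side."""
    f_lo = sensor_hz * (1.0 - tolerance)
    f_hi = sensor_hz * (1.0 + tolerance)
    lag_lo = int(sr / f_hi)                      # shortest period considered
    lag_hi = int(sr / f_lo)                      # longest period considered
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_hi = min(lag_hi, len(ac) - 1)
    best = lag_lo + int(np.argmax(ac[lag_lo:lag_hi + 1]))
    return sr / best

# Example: a 440 Hz tone with the sensor reporting roughly 430 Hz.
sr = 44100
t = np.arange(1024) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
print(pitch_from_audio(tone, sr, sensor_hz=430.0))

Restricting the lag range both speeds up the search and avoids octave errors that short windows are prone to, which is the latency/accuracy trade-off the paper addresses.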
Niklas Klügel, Timo Becker, and Georg Groh. 2014. Designing Sound Collaboratively Perceptually Motivated Audio Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 327–330. http://doi.org/10.5281/zenodo.1178833
Abstract
Download PDF DOI
In this contribution, we will discuss a prototype that allows a group of users to design sound collaboratively in real time using a multi-touch tabletop. We make use of a machine learning method to generate a mapping from perceptual audio features to synthesis parameters. This mapping is then used for visualization and interaction. Finally, we discuss the results of a comparative evaluation study.
@inproceedings{nklugel12014, author = {Kl\"ugel, Niklas and Becker, Timo and Groh, Georg}, title = {Designing Sound Collaboratively Perceptually Motivated Audio Synthesis}, pages = {327--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178833}, url = {http://www.nime.org/proceedings/2014/nime2014_339.pdf} }
Yehiel Amo, Gil Zissu, Shaltiel Eloul, Eran Shlomi, Dima Schukin, and Almog Kalifa. 2014. A Max/MSP Approach for Incorporating Digital Music via Laptops in Live Performances of Music Bands. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 94–97. http://doi.org/10.5281/zenodo.1178700
Abstract
Download PDF DOI
We use the Max/MSP framework to create a reliable but flexible approach for managing live performances by music bands who rely on playing live with digital music. The approach gives any player an easy and low-cost way to apply and experiment with innovative music interfaces for live performance, without losing the professionalism required on stage. Every one to three players are plugged into a unit consisting of a standard sound card and a laptop. This unit is controlled by an interface that schedules and manages all the digital sounds made by each player (VST effects, VST instruments and ’home-made’ interactive interfaces). All of the players’ units are then remotely controlled by a conductor patch, which is in charge of synchronizing all the players and background samples in real time, as well as providing a sensitive metronome and scheduling visual enhancements. Moreover, and no less important, we can take advantage of virtual instruments and virtual effects in the Max environment to manage the mix and route the audio, providing monitors and a metronome to the players’ ears and virtual mixing via a Max/MSP patch. This almost eliminates the dependency on the venue’s equipment, so the sound quality and musical ideas can be taken directly from the studio to the stage.
@inproceedings{seloul2014, author = {Amo, Yehiel and Zissu, Gil and Eloul, Shaltiel and Shlomi, Eran and Schukin, Dima and Kalifa, Almog}, title = {A Max/MSP Approach for Incorporating Digital Music via Laptops in Live Performances of Music Bands}, pages = {94--97}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178700}, url = {http://www.nime.org/proceedings/2014/nime2014_340.pdf} }
Adriana Sa. 2014. Repurposing Video Game Software for Musical Expression: A Perceptual Approach. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 331–334. http://doi.org/10.5281/zenodo.1178925
Abstract
Download PDF DOI
The text presents a perceptual approach to instrument design and composition, and it introduces an instrument that outputs acoustic sound, digital sound, and digital image. We explore disparities between human perception and digital analysis as creative material. Because the instrument repurposes software intended to create video games, we establish a distinction between the notion of “flow” in music and gaming, questioning how it may substantiate in interaction design. Furthermore, we extrapolate from cognition/attention research to describe how the projected image creates a reactive stage scene without deviating attention from the music.
@inproceedings{asa2014, author = {Sa, Adriana}, title = {Repurposing Video Game Software for Musical Expression: A Perceptual Approach}, pages = {331--334}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178925}, url = {http://www.nime.org/proceedings/2014/nime2014_343.pdf} }
Jim Murphy, Paul Mathews, Ajay Kapur, and Dale Carnegie. 2014. Robot: Tune Yourself! Automatic Tuning for Musical Robotics. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 565–568. http://doi.org/10.5281/zenodo.1178883
Abstract
Download PDF DOI
This paper presents a method for a self-tuning procedure for musical robots capable of continuous pitch-shifting. Such a technique is useful for robots consisting of many strings: the ability to self-tune allows for long-term installation without human intervention as well as on-the-fly tuning scheme changes. The presented method consists of comparing a detuned string’s pitch at runtime to a pre-compiled table of string responses at varying tensions. The behavior of the current detuned string is interpolated from the two nearest pre-characterized neighbors, and the desired virtual fret positions are added to the interpolated model. This method allows for rapid tuning at runtime, requiring only a single string actuation to determine the pitch. After a detailed description of the self-tuning technique and implementation, the results will be evaluated on the new Swivel 2 robotic slide guitar. The paper concludes with a discussion of performance applications and ideas for subsequent work on self-tuning musical robotic systems.
@inproceedings{jmurphy2014, author = {Murphy, Jim and Mathews, Paul and Kapur, Ajay and Carnegie, Dale}, title = {Robot: Tune Yourself! Automatic Tuning for Musical Robotics}, pages = {565--568}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178883}, url = {http://www.nime.org/proceedings/2014/nime2014_345.pdf} }
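The interpolation step described in the abstract above (comparing a single pitch measurement against a pre-compiled table of string responses and interpolating between the two nearest characterized points) can be sketched as a simple table lookup with linear interpolation. The table values below are invented for the illustration, and the paper's method is more involved.

import numpy as np

# Hypothetical pre-characterized string response: actuator/tension settings
# (arbitrary units) versus the pitch measured at each setting (Hz).
settings = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
pitches  = np.array([196.0, 205.0, 214.5, 223.0, 231.0])

def setting_for_pitch(target_hz):
    """Interpolate between the two nearest characterized points to find
    the actuator setting expected to produce target_hz."""
    return float(np.interp(target_hz, pitches, settings))

def retune(current_hz, target_hz):
    """Single-measurement retune: measure the string once, then move by the
    difference between the settings predicted for current and target pitch."""
    return setting_for_pitch(target_hz) - setting_for_pitch(current_hz)

print(retune(current_hz=200.0, target_hz=220.0))   # positive value: tighten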
Adam Stark. 2014. Sound Analyser: A Plug-In for Real-Time Audio Analysis in Live Performances and Installations. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 183–186. http://doi.org/10.5281/zenodo.1178945
Abstract
Download PDF DOI
Real-time audio analysis has great potential for being used to create musically responsive applications in live performances. There have been many examples of such use, including sound-responsive visualisations, adaptive audio effects and machine musicianship. However, at present, using audio analysis algorithms in live performance requires either some detailed knowledge about the algorithms themselves, or programming, or both. Those wishing to use audio analysis in live performances may not have either of these as their strengths. Rather, they may instead wish to focus upon systems that respond to audio analysis data, such as visual projections or sound generators. In response, this paper introduces the Sound Analyser, an audio plug-in allowing users to a) select a custom set of audio analyses to be performed in real time and b) send that information via OSC so that it can easily be used by other systems to develop responsive applications for live performances and installations. A description of the system architecture and audio analysis algorithms implemented in the plug-in is presented before moving on to two case studies where the plug-in has been used in the field with artists.
@inproceedings{astark2014, author = {Stark, Adam}, title = {Sound Analyser: A Plug-In for Real-Time Audio Analysis in Live Performances and Installations}, pages = {183--186}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178945}, url = {http://www.nime.org/proceedings/2014/nime2014_348.pdf} }
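A minimal sketch of the usage pattern the plug-in enables: compute a few audio features per block and send them as OSC messages for another system (visuals, synthesis) to respond to. The example uses numpy and the python-osc package, and the OSC addresses and port are assumptions; it is not the plug-in's own code or namespace.

import numpy as np
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 9000)   # receiver address/port are assumptions

def analyse_and_send(block, sr=44100):
    """Compute two simple features for one audio block and send them over OSC.
    The addresses are invented for this sketch."""
    rms = float(np.sqrt(np.mean(block ** 2)))
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sr)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    client.send_message("/analysis/rms", rms)
    client.send_message("/analysis/centroid", centroid)

# Example block: 1024 samples of a 1 kHz tone.
t = np.arange(1024) / 44100.0
analyse_and_send(np.sin(2 * np.pi * 1000.0 * t))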
Bridget Johnson, Michael Norris, and Ajay Kapur. 2014. The Development Of Physical Spatial Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 335–338. http://doi.org/10.5281/zenodo.1178820
Abstract
Download PDF DOI
This paper introduces recent developments in the Chronus series, a family of custom controllers that afford a performer gestural interaction with surround sound systems that can be easily integrated into their personal performance systems. The controllers are built with the goal of encouraging more electronic musicians to include the creation of dynamic pantophonic fields in performance. The paper focuses on technical advances of the Chronus 2.0 prototype that extend the interface to control both radial and angular positional data, and the controllers’ ease of integration into electronic performance configurations, both for diffusion and for performance from the wider electronic music community.
@inproceedings{bjohnson2014, author = {Johnson, Bridget and Norris, Michael and Kapur, Ajay}, title = {The Development Of Physical Spatial Controllers}, pages = {335--338}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178820}, url = {http://www.nime.org/proceedings/2014/nime2014_349.pdf} }
Alexander Refsum Jensenius. 2014. To gesture or Not? An Analysis of Terminology in NIME Proceedings 2001–2013. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 217–220. http://doi.org/10.5281/zenodo.1178816
Abstract
Download PDF DOI
The term ’gesture’ has represented a buzzword in the NIME community since the beginning of its conference series. But how often is it actually used, what is it used to describe, and how does its usage here differ from its usage in other fields of study? This paper presents a linguistic analysis of the motion-related terminology used in all of the papers published in the NIME conference proceedings to date (2001-2013). The results show that ’gesture’ is in fact used in 62 % of all NIME papers, which is a significantly higher percentage than in other music conferences (ICMC and SMC), and much more frequently than it is used in the HCI and biomechanics communities. The results from a collocation analysis support the claim that ’gesture’ is used broadly in the NIME community, and indicate that it ranges from the description of concrete human motion and system control to quite metaphorical applications.
@inproceedings{ajensenius2014, author = {Jensenius, Alexander Refsum}, title = {To gesture or Not? {A}n Analysis of Terminology in {NIME} Proceedings 2001--2013}, pages = {217--220}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178816}, url = {http://www.nime.org/proceedings/2014/nime2014_351.pdf} }
Enrique Tomás and Martin Kaltenbrunner. 2014. Tangible Scores: Shaping the Inherent Instrument Score. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 609–614. http://doi.org/10.5281/zenodo.1178953
Abstract
Download PDF DOI
Tangible Scores are a new paradigm for musical instrument design with a physical configuration inspired by graphic scores. In this paper we will focus on the design aspects of this new interface as well as on some of the related technical details. Creating an intuitive, modular and expressive instrument for textural music was the primary driving force. Following these criteria, we literally incorporated a musical score onto the surface of the instrument as a way of continuously controlling several parameters of the sound synthesis. Tangible Scores are played with both hands and they can adopt multiple physical forms. Complex and expressive sound textures can be easily played over a variety of timbres, enabling precise control in a natural manner.
@inproceedings{etomas12014, author = {Tom\'as, Enrique and Kaltenbrunner, Martin}, title = {Tangible Scores: Shaping the Inherent Instrument Score}, pages = {609--614}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178953}, url = {http://www.nime.org/proceedings/2014/nime2014_352.pdf} }
Evan Morgan, Hatice Gunes, and Nick Bryan-Kinns. 2014. Instrumenting the Interaction: Affective and Psychophysiological Features of Live Collaborative Musical Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 23–28. http://doi.org/10.5281/zenodo.1178877
Abstract
Download PDF DOI
New technologies have led to the design of exciting interfaces for collaborative music making. However we still have very little understanding of the underlying affective and communicative processes which occur during such interactions. To address this issue, we carried out a pilot study where we collected continuous behavioural, physiological, and performance related measures from pairs of improvising drummers. This paper presents preliminary findings, which could be useful for the evaluation and design of user-centred collaborative interfaces for musical creativity and expression.
@inproceedings{emorgan2014, author = {Morgan, Evan and Gunes, Hatice and Bryan-Kinns, Nick}, title = {Instrumenting the Interaction: Affective and Psychophysiological Features of Live Collaborative Musical Improvisation}, pages = {23--28}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178877}, url = {http://www.nime.org/proceedings/2014/nime2014_353.pdf} }
Blake Johnston, Henry Dengate Thrush, Ajay Kapur, Jim Murphy, and Tane Moleta. 2014. Polus: The Design and Development of a New, Mechanically Bowed String Instrument Ensemble. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 557–560. http://doi.org/10.5281/zenodo.1178822
Abstract
Download PDF DOI
This paper details the creation, design, implementation and uses of a series of new mechanically bowed string instruments. These instruments have been designed with the objective of allowing for multiple parameters of musical expressivity, as well as including the physical and spatial features of the instruments to be integral aspects of their perception as instruments and sonic objects. This paper focuses on the hardware design, software implementation, and present musical uses of the ensemble.
@inproceedings{bjohnston2014, author = {Johnston, Blake and Thrush, Henry Dengate and Kapur, Ajay and Murphy, Jim and Moleta, Tane}, title = {Polus: The Design and Development of a New, Mechanically Bowed String Instrument Ensemble}, pages = {557--560}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178822}, url = {http://www.nime.org/proceedings/2014/nime2014_355.pdf} }
Ozgur Izmirli and Jake Faris. 2014. Imitation Framework for Percussion. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 483–486. http://doi.org/10.5281/zenodo.1178814
Abstract
Download PDF DOI
We present a framework for imitation of percussion performances with parameter-based learning for accurate reproduction. We constructed a robotic setup involving pull-solenoids attached to drum sticks which communicate with a computer through an Arduino microcontroller. The imitation framework allows for parameter adaptation to different mechanical constructions by learning the capabilities of the overall system being used. For the rhythmic vocabulary, we have considered regular stroke, flam and drag styles. A learning and calibration system was developed to efficiently perform grace notes for the drag rudiment as well as the single stroke and the flam rudiment. A second pre-performance process is introduced to minimize the latency difference between individual drum sticks in our mechanical setup. We also developed an off-line onset detection method to reliably recognize onsets from the microphone input. Once these pre-performance steps are taken, our setup will then listen to a human drummer’s performance pattern, analyze for onsets, loudness, and rudiment pattern, and then play back using the learned parameters for the particular system. We conducted three different evaluations of our constructed system.
@inproceedings{oizmirli2014, author = {Izmirli, Ozgur and Faris, Jake}, title = {Imitation Framework for Percussion}, pages = {483--486}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178814}, url = {http://www.nime.org/proceedings/2014/nime2014_360.pdf} }
Alon Ilsar, Mark Havryliv, and Andrew Johnston. 2014. Evaluating the Performance of a New Gestural Instrument Within an Ensemble. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 339–342. http://doi.org/10.5281/zenodo.1178812
Abstract
Download PDF DOI
This paper discusses one particular mapping for a new gestural instrument called the AirSticks. This mapping was designed to be used for improvised or rehearsed duos and restricts the performer to only utilising the sound source of one other musician playing an acoustic instrument. Several pieces with different musicians were performed and documented, musicians were observed and interviews with these musicians were transcribed. In this paper we will examine the thoughts of these musicians to gather a better understanding of how to design effective ensemble instruments of this type.
@inproceedings{ailsar2014, author = {Ilsar, Alon and Havryliv, Mark and Johnston, Andrew}, title = {Evaluating the Performance of a New Gestural Instrument Within an Ensemble}, pages = {339--342}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178812}, url = {http://www.nime.org/proceedings/2014/nime2014_363.pdf} }
Adrián Barenca and Milos Corak. 2014. The Manipuller II: Strings within a Force Sensing Ring. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 589–592. http://doi.org/10.5281/zenodo.1178706
Abstract
Download PDF DOI
The Manipuller is a musical interface based on strings and multi-dimensional force sensing. This paper presents a new architectural approach to the original interface design which has been materialized with the implementation of the Manipuller II system prototype. Besides the short paper we would like to do a poster presentation plus a demo of the new prototype where the public will be invited to play with the new musical interface.
@inproceedings{abarenca2014, author = {Barenca, Adri{\'a}n and Corak, Milos}, title = {The Manipuller II: Strings within a Force Sensing Ring}, pages = {589--592}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178706}, url = {http://www.nime.org/proceedings/2014/nime2014_364.pdf} }
Ozan Sarier. 2014. Rub Synth : A Study of Implementing Intentional Physical Difficulty Into Touch Screen Music Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 179–182. http://doi.org/10.5281/zenodo.1178931
Abstract
Download PDF DOI
In recent years many touch screen interfaces have been designed and used for musical control. When compared with their physical counterparts, current control paradigms employed in touch screen musical interfaces do not require the same level of physical labor, and this negatively affects the user experience in terms of expressivity, engagement and enjoyment. This lack of physicality can be remedied by using interaction elements which are designed for the exertion of the user. Employing intentionally difficult and inefficient interaction design can enhance the user experience by allowing greater bodily expression, kinesthetic feedback, more apparent skill acquisition, and performer satisfaction. Rub Synth is a touch screen musical instrument with an exertion interface. It was made for creating and testing exertion strategies that are possible using only 2D touch coordinates as input, and for evaluating the outcomes of implementing intentional difficulty. This paper discusses the strategies that can be employed to model effort on touch screens, the benefits of having physical difficulty, Rub Synth’s interaction design, and user experience results of using such an interface.
@inproceedings{osarier2014, author = {Sarier, Ozan}, title = {Rub Synth : A Study of Implementing Intentional Physical Difficulty Into Touch Screen Music Controllers}, pages = {179--182}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178931}, url = {http://www.nime.org/proceedings/2014/nime2014_367.pdf} }
Lawrence Fyfe, Adam Tindale, and Sheelagh Carpendale. 2014. Extending the Nexus Data Exchange Format (NDEF) Specification. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 343–346. http://doi.org/10.5281/zenodo.1178768
Abstract
Download PDF DOI
The Nexus Data Exchange Format (NDEF) is an Open Sound Control (OSC) namespace specification designed to make connection and message management tasks easier for OSC-based networked performance systems. New extensions to the NDEF namespace improve both connection and message management between OSC client and server nodes. Connection management between nodes now features human-readable labels for connections and a new message exchange for pinging connections to determine their status. Message management now has improved namespace synchronization via a message count exchange and by the ability to add, remove, and replace messages on connected nodes.
@inproceedings{lfyfe2014, author = {Fyfe, Lawrence and Tindale, Adam and Carpendale, Sheelagh}, title = {Extending the Nexus Data Exchange Format (NDEF) Specification}, pages = {343--346}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178768}, url = {http://www.nime.org/proceedings/2014/nime2014_368.pdf} }
Ian Hattwick, Preston Beebe, Zachary Hale, Marcelo Wanderley, Philippe Leroux, and Fabrice Marandola. 2014. Unsounding Objects: Audio Feature Extraction for the Control of Sound Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 597–600. http://doi.org/10.5281/zenodo.1178790
Abstract
Download PDF DOI
This paper presents results from the development of a digital musical instrument which uses audio feature extraction for the control of sound synthesis. Our implementation utilizes multi-band audio analysis to generate control signals. This technique is well-suited to instruments for which the gestural interface is intentionally weakly defined. We present a percussion instrument utilizing this technique in which the timbral characteristics of found objects are the primary source of audio for analysis.
@inproceedings{ihattwick12014, author = {Hattwick, Ian and Beebe, Preston and Hale, Zachary and Wanderley, Marcelo and Leroux, Philippe and Marandola, Fabrice}, title = {Unsounding Objects: Audio Feature Extraction for the Control of Sound Synthesis}, pages = {597--600}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178790}, url = {http://www.nime.org/proceedings/2014/nime2014_369.pdf} }
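The multi-band analysis idea in the abstract above (deriving control signals from the energy in several frequency bands of the incoming audio) can be sketched briefly with numpy; the band edges are arbitrary choices for this illustration, not those used in the instrument.

import numpy as np

def band_energies(block, sr, edges=(100, 400, 1600, 6400)):
    """Return the energy in each frequency band of one audio block.
    The band edges (Hz) are arbitrary choices for this sketch."""
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block)))) ** 2
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sr)
    bounds = (0,) + tuple(edges) + (sr / 2,)
    return [float(np.sum(spectrum[(freqs >= lo) & (freqs < hi)]))
            for lo, hi in zip(bounds[:-1], bounds[1:])]

# Each block of microphone input yields one control vector, which could
# then be mapped to the parameters of a synthesis engine.
sr = 44100
t = np.arange(2048) / sr
block = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)
print(band_energies(block, sr))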
Ian Hattwick, Joseph Malloch, and Marcelo Wanderley. 2014. Forming Shapes to Bodies: Design for Manufacturing in the Prosthetic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 443–448. http://doi.org/10.5281/zenodo.1178792
Abstract
Download PDF DOI
Moving new DMIs from the research lab to professional artistic contexts places new demands on both their design and manufacturing. Through a discussion of the Prosthetic Instruments, a family of digital musical instruments we designed for use in an interactive dance performance, we discuss four different approaches to manufacturing: artisanal, building block, rapid prototyping, and industrial. We discuss our use of these different approaches as we strove to reconcile the many conflicting constraints placed upon the instruments’ design due to their use as hypothetical prosthetic extensions to dancers’ bodies, as aesthetic objects, and as instruments used in a professional touring context. Experiences and lessons learned during the design and manufacturing process are discussed in relation both to these manufacturing approaches as well as to Bill Buxton’s concept of artist-spec design.
@inproceedings{ihattwick2014, author = {Hattwick, Ian and Malloch, Joseph and Wanderley, Marcelo}, title = {Forming Shapes to Bodies: Design for Manufacturing in the Prosthetic Instruments}, pages = {443--448}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178792}, url = {http://www.nime.org/proceedings/2014/nime2014_370.pdf} }
Chris Nash. 2014. Manhattan: End-User Programming for Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 221–226. http://doi.org/10.5281/zenodo.1178891
Abstract
Download PDF DOI
This paper explores the concept of end-user programming languages in music composition, and introduces the Manhattan system, which integrates formulas with a grid-based style of music sequencer. Following the paradigm of spreadsheets, an established model of end-user programming, Manhattan is designed to bridge the gap between traditional music editing methods (such as MIDI sequencing and typesetting) and generative and algorithmic music, seeking both to reduce the learning threshold of programming and support flexible integration of static and dynamic musical elements in a single work. Interaction draws on rudimentary knowledge of mathematics and spreadsheets to augment the sequencer notation with programming concepts such as expressions, built-in functions, variables, pointers and arrays, iteration (for loops), branching (goto), and conditional statements (if-then-else). In contrast to other programming tools, formulas emphasise the visibility of musical data (e.g. notes), rather than code, but also allow composers to interact with notated music from a more abstract perspective of musical processes. To illustrate the function and use cases of the system, several examples of traditional and generative music are provided, the latter drawing on minimalism (process-based music) as an accessible introduction to algorithmic composition. Throughout, the system and approach are evaluated using the cognitive dimensions of notations framework, together with early feedback for use by artists.
@inproceedings{cnash2014, author = {Nash, Chris}, title = {Manhattan: End-User Programming for Music}, pages = {221--226}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178891}, url = {http://www.nime.org/proceedings/2014/nime2014_371.pdf} }
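The spreadsheet analogy above can be made concrete with a toy grid in which some cells hold literal MIDI notes and others hold formulas that reference earlier cells; this is only an illustration of the formula idea, not Manhattan's actual language, syntax or built-in functions.

# Toy spreadsheet-style sequencer grid (illustrative only).
grid = {
    ("A", 1): 60,                                                  # literal MIDI note (middle C)
    ("A", 2): "cell('A', 1) + 7",                                  # formula: a fifth above A1
    ("A", 3): "cell('A', 2) + 5",                                  # formula: an octave above A1
    ("A", 4): "cell('A', 1) if step % 2 == 0 else cell('A', 3)",   # conditional formula
}

def evaluate(grid, step):
    resolved = {}
    def cell(col, row):
        return resolve((col, row))
    def resolve(key):
        if key not in resolved:
            value = grid[key]
            resolved[key] = eval(value, {"cell": cell, "step": step}) if isinstance(value, str) else value
        return resolved[key]
    return [resolve(key) for key in sorted(grid)]

for step in range(2):
    print(step, evaluate(grid, step))  # step 0: [60, 67, 72, 60]; step 1: [60, 67, 72, 72]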
Charlie Roberts, Matthew Wright, JoAnn Kuchera-Morin, and Tobias Höllerer. 2014. Rapid Creation and Publication of Digital Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 239–242. http://doi.org/10.5281/zenodo.1178919
Abstract
Download PDF DOI
We describe research enabling the rapid creation of digital musical instruments and their publication to the Internet. This research comprises both high-level abstractions for making continuous mappings between audio, interactive, and graphical elements, as well as a centralized database for storing and accessing instruments. Published instruments run in most devices capable of running a modern web browser. Notation of instrument design is optimized for readability and expressivity.
@inproceedings{croberts2014, author = {Roberts, Charlie and Wright, Matthew and Kuchera-Morin, JoAnn and H{\''o}llerer, Tobias}, title = {Rapid Creation and Publication of Digital Musical Instruments}, pages = {239--242}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178919}, url = {http://www.nime.org/proceedings/2014/nime2014_373.pdf} }
Ivica Bukvic. 2014. Pd-L2Ork Raspberry Pi Toolkit as a Comprehensive Arduino Alternative in K-12 and Production Scenarios. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 163–166. http://doi.org/10.5281/zenodo.1178726
Abstract
Download PDF DOI
The following paper showcases the new integrated Pd-L2Ork system and its K-12 educational counterpart running on Raspberry Pi hardware. A collection of new externals and abstractions in conjunction with the Modern Device LOP shield transforms the Raspberry Pi into a cost-efficient sensing hub providing Arduino-like connectivity with 10 digital I/O pins (including both software and hardware implementations of pulse width modulation) and 8 analog inputs, while offering a number of integrated features, including audio I/O, USB and Ethernet connectivity and video output.
@inproceedings{ibukvic2014, author = {Bukvic, Ivica}, title = {Pd-L2Ork Raspberry Pi Toolkit as a Comprehensive Arduino Alternative in K-12 and Production Scenarios}, pages = {163--166}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178726}, url = {http://www.nime.org/proceedings/2014/nime2014_377.pdf} }
Charles Holbrow, Elena Jessop, and Rebecca Kleinberger. 2014. Vocal Vibrations: A Multisensory Experience of the Voice. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 431–434. http://doi.org/10.5281/zenodo.1178800
Abstract
Download PDF DOI
Vocal Vibrations is a new project by the Opera of the Future group at the MIT Media Lab that seeks to engage the public in thoughtful singing and vocalizing, while exploring the relationship between human physiology and the resonant vibrations of the voice. This paper describes the motivations, the technical implementation, and the experience design of the Vocal Vibrations public installation. This installation consists of a space for reflective listening to a vocal composition (the Chapel) and an interactive space for personal vocal exploration (the Cocoon). In the interactive experience, the participant also experiences a tangible exteriorization of his voice by holding the ORB, a handheld device that translates his voice and singing into tactile vibrations. This installation encourages visitors to explore the physicality and expressivity of their voices in a rich musical context.
@inproceedings{rkleinberger2014, author = {Holbrow, Charles and Jessop, Elena and Kleinberger, Rebecca}, title = {Vocal Vibrations: A Multisensory Experience of the Voice}, pages = {431--434}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178800}, url = {http://www.nime.org/proceedings/2014/nime2014_378.pdf} }
Gershon Dublon and Joseph A. Paradiso. 2014. FingerSynth: Wearable Transducers for Exploring the Environment and Playing Music Everywhere. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 134–135. http://doi.org/10.5281/zenodo.1178754
Abstract
Download PDF DOI
We present the FingerSynth, a wearable musical instrument made up of a bracelet and set of rings that enable its player to produce sound by touching nearly any surface in their environment. Each ring contains a small, independently controlled exciter transducer commonly used for auditory bone conduction. The rings sound loudly when they touch a hard object, and are practically silent otherwise. When a wearer touches their own (or someone else’s) head, the contacted person hears the sound through bone conduction, inaudible to others. The bracelet contains a microcontroller, a set of FET transistors, an accelerometer, and a battery. The microcontroller generates a separate audio signal for each ring, switched through the FETs, and can take user input through the accelerometer in the form of taps, flicks, and other gestures. The player controls the envelope and timbre of the sound by varying the physical pressure and the angle of their finger on the surface, or by touching differently resonant surfaces. Because its sound is shaped by direct, physical contact with objects and people, the FingerSynth encourages players to experiment with the materials around them and with one another, making music with everything they touch.
@inproceedings{gdublon2014, author = {Dublon, Gershon and Paradiso, Joseph A.}, title = {FingerSynth: Wearable Transducers for Exploring the Environment and Playing Music Everywhere}, pages = {134--135}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178754}, url = {http://www.nime.org/proceedings/2014/nime2014_379.pdf} }
Fumito Hashimoto and Motoki Miura. 2014. Operating Sound Parameters Using Markov Model and Bayesian Filters in Automated Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 347–350. http://doi.org/10.5281/zenodo.1178788
Abstract
Download PDF DOI
In recent years, there has been an increase in the number of artists who make use of automated music performance in their music and live concerts. Automated music performance is a form of music production using programmed musical notes. Some artists who use automated music performance operate parameters of the sound during their performance as part of producing their music. In this paper, we focus on the music production aspects and describe a method that realizes operation of the sound parameters via computer. Further, in this study, the probability distribution of the action (i.e., variation of parameters) is obtained within the music, using Bayesian filters. The probability distribution of each piece of music is transformed by passing through a Markov model. After the probability distribution is obtained, sound parameters can be automatically controlled. We have developed a system to reproduce the musical expressions of humans and confirmed the possibilities of our method.
@inproceedings{fhashimoto2014, author = {Hashimoto, Fumito and Miura, Motoki}, title = {Operating Sound Parameters Using {Markov} Model and {Bayes}ian Filters in Automated Music Performance}, pages = {347--350}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178788}, url = {http://www.nime.org/proceedings/2014/nime2014_380.pdf} }
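A heavily simplified sketch of the idea of learning how a sound parameter tends to move and then automating it: here a first-order Markov chain over quantized parameter changes stands in for the paper's combination of Bayesian filters and a Markov model, and the example automation curve is invented.

import numpy as np

def learn_change_transitions(automation, levels=np.linspace(-0.2, 0.2, 5)):
    """Learn a first-order Markov model over quantized step-to-step parameter changes."""
    changes = np.diff(automation)
    states = np.abs(changes[:, None] - levels[None, :]).argmin(axis=1)  # quantize each change
    counts = np.ones((len(levels), len(levels)))                        # add-one smoothing
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return levels, counts / counts.sum(axis=1, keepdims=True)

def generate(levels, transitions, start=0.5, steps=64, seed=0):
    """Sample a new automation curve (values clipped to 0..1) from the learned model."""
    rng = np.random.default_rng(seed)
    state, value, out = 0, start, []
    for _ in range(steps):
        state = rng.choice(len(levels), p=transitions[state])
        value = float(np.clip(value + levels[state], 0.0, 1.0))
        out.append(value)
    return out

# Example: learn from a slow sine-shaped automation curve, then generate a new one.
example = 0.5 + 0.4 * np.sin(np.linspace(0, 6 * np.pi, 200))
levels, transitions = learn_change_transitions(example)
print(generate(levels, transitions)[:8])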
Reinhard Gupfinger and Martin Kaltenbrunner. 2014. SOUND TOSSING Audio Devices in the Context of Street Art. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 577–580. http://doi.org/10.5281/zenodo.1178778
Abstract
Download PDF DOI
Street art opens a new, broad research field in the context of urban communication and sound aesthetics in public space. The primary focus of this article is the relevance and effects of using sound technologies and audio devices to shape urban landscape and soundscape. This paper examines the process of developing an alternative type of street art that uses sound as its medium. It presents multiple audio device prototypes, which open up new opportunities for street artists and activists to contribute their messages and signs in public spaces. Furthermore, it documents different approaches to establishing this alternative urban practice within the street art and new media art field. The findings also expose a research space for sound and technical interventions in the context of street art.
@inproceedings{rgupfinger2014, author = {Gupfinger, Reinhard and Kaltenbrunner, Martin}, title = {SOUND TOSSING Audio Devices in the Context of Street Art}, pages = {577--580}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178778}, url = {http://www.nime.org/proceedings/2014/nime2014_385.pdf} }
Thomas Mitchell, Sebastian Madgwick, Simon Rankine, Geoffrey Hilton, Adrian Freed, and Andrew Nix. 2014. Making the Most of Wi-Fi: Optimisations for Robust Wireless Live Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 251–256. http://doi.org/10.5281/zenodo.1178875
Abstract
Download PDF DOI
Wireless technology is growing increasingly prevalent in the development of new interfaces for live music performance. However, with a number of different wireless technologies operating in the 2.4 GHz band, there is a high risk of interference and congestion, which has the potential to severely disrupt live performances. With its high transmission power, channel bandwidth and throughput, Wi-Fi (IEEE 802.11) presents an opportunity for highly robust wireless communications. This paper presents our preliminary work optimising the components of a Wi-Fi system for live performance scenarios. We summarise the manufacture and testing of a prototype directional antenna that is designed to maximise sensitivity to a performer’s signal while suppressing interference from elsewhere. We also propose a set of recommended Wi-Fi configurations to reduce latency and increase throughput. Practical investigations utilising these arrangements demonstrate a single x-OSC device achieving a latency of <3 ms and a distributed network of 15 devices achieving a net throughput of 4800 packets per second (320 per device), where each packet is a 104-byte OSC message containing 16 analogue input channels acquired by the device.
@inproceedings{tmitchell2014, author = {Mitchell, Thomas and Madgwick, Sebastian and Rankine, Simon and Hilton, Geoffrey and Freed, Adrian and Nix, Andrew}, title = {Making the Most of Wi-Fi: Optimisations for Robust Wireless Live Music Performance}, pages = {251--256}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178875}, url = {http://www.nime.org/proceedings/2014/nime2014_386.pdf} }
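The traffic figures in the abstract above (16 analogue channels per OSC packet, roughly 320 packets per second per device) can be emulated on the sending side with the python-osc library; the target host, port and OSC address pattern below are assumptions for illustration, not the x-OSC's documented interface.

import time
import random
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # host/port of an assumed OSC receiver

def send_frames(rate_hz=320, seconds=1):
    """Send one 16-channel analogue frame per OSC packet at roughly rate_hz packets per second."""
    period = 1.0 / rate_hz
    for _ in range(int(rate_hz * seconds)):
        frame = [random.random() for _ in range(16)]  # stand-in for sensor readings
        client.send_message("/analog", frame)         # address pattern is an assumption
        time.sleep(period)

send_frames()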
Mary Mainsbridge and Kirsty Beilharz. 2014. Body As Instrument: Performing with Gestural Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 110–113. http://doi.org/10.5281/zenodo.1178859
Abstract
Download PDF DOI
This paper explores the challenge of achieving nuanced control and physical engagement with gestural interfaces in performance. Performances with a prototype gestural performance system, Gestate, provide the basis for insights into the application of gestural systems in live contexts. These reflections stem from a performer’s perspective, outlining the experience of prototyping and performing with augmented instruments that extend vocal or instrumental technique through ancillary gestures. Successful implementation of rapidly evolving gestural technologies in real-time performance calls for new approaches to performing and musicianship, centred around a growing understanding of the body’s physical and creative potential. For musicians hoping to incorporate gestural control seamlessly into their performance practice, a balance of technical mastery and kinaesthetic awareness is needed to adapt existing systems to their own purposes. Within non-tactile systems, visual feedback mechanisms can support this process by providing explicit visual cues that compensate for the absence of haptic or tangible feedback. Experience gained through prototyping and performance can yield a deeper understanding of the broader nature of gestural control and the way in which performers inhabit their own bodies.
@inproceedings{mmainsbridge2014, author = {Mainsbridge, Mary and Beilharz, Kirsty}, title = {Body As Instrument: Performing with Gestural Interfaces}, pages = {110--113}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178859}, url = {http://www.nime.org/proceedings/2014/nime2014_393.pdf} }
Hanspeter Portner. 2014. CHIMAERA The Poly-Magneto-Phonic Theremin An Expressive Touch-Less Hall-Effect Sensor Array. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 501–504. http://doi.org/10.5281/zenodo.1178909
Abstract
Download PDF DOI
The Chimaera is a touch-less, expressive, polyphonic and electronic music controller based on magnetic field sensing. An array of hall-effect sensors and their vicinity make up a continuous 2D interaction space. The sensors are excited with Neodymium magnets worn on fingers. The device continuously tracks position and vicinity of multiple present magnets along the sensor array to produce event signals accordingly. Apart from the two positional signals, an event also carries the magnetic field polarization, a unique identifier and group association. We like to think of it as a mixed analog/digital offspring of theremin and trautonium. These general-purpose event signals are transmitted and eventually translated into musical events according to custom mappings on a host system. With its touch-less control (no friction), high update rates (2-4kHz), its quasi-continuous spatial resolution and its low latency (<1 ms), the Chimaera can react to the most subtle motions instantaneously and allows for highly dynamic and expressive play. Its open source design additionally gives the user all possibilities to further tune hardware and firmware to his or her needs. The Chimaera is network-oriented and is configured and communicated with via OSC (Open Sound Control), which makes it straightforward to integrate into any setup.
@inproceedings{hportner2014, author = {Portner, Hanspeter}, title = {CHIMAERA The Poly-Magneto-Phonic Theremin An Expressive Touch-Less Hall-Effect Sensor Array}, pages = {501--504}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178909}, url = {http://www.nime.org/proceedings/2014/nime2014_397.pdf} }
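A minimal sketch of how a host might receive and map such event signals using python-osc; the OSC address pattern and the argument layout (position, vicinity, polarity, identifier, group) are assumptions based on the abstract, not the Chimaera's documented protocol, and the note/amplitude mapping is arbitrary.

from pythonosc import dispatcher, osc_server

def handle_event(address, position, vicinity, polarity, uid, group):
    """Map one sensor-array event to musical parameters (illustrative mapping only)."""
    midi_note = 36 + position * 48                 # spread a 0..1 position over four octaves
    amplitude = max(0.0, min(1.0, vicinity))
    print(f"id={uid} group={group} note={midi_note:.1f} amp={amplitude:.2f} pol={polarity}")

disp = dispatcher.Dispatcher()
disp.map("/chimaera/event", handle_event)          # address pattern is an assumption

# Blocks and handles incoming events until interrupted.
osc_server.BlockingOSCUDPServer(("0.0.0.0", 3333), disp).serve_forever()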
Thomas Webster, Guillaume LeNost, and Martin Klang. 2014. The OWL programmable stage effects pedal: Revising the concept of the on-stage computer for live music performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 621–624. http://doi.org/10.5281/zenodo.1178979
Abstract
Download PDF DOI
This paper introduces the OWL stage effects pedal and aims to present the device within the context of Human Computer Interaction (HCI) research. The OWL is a dedicated, programmable audio device designed to provide an alternative to the use of laptop computers for bespoke audio processing on stage for music performance. By creating a software framework that allows the user to program their own code for the hardware in C++, the OWL project makes it possible to use homemade audio processing on stage without the need for a laptop running a computer music environment such as Pure Data or SuperCollider. Moving away from the general-purpose computer to a dedicated audio device means that some of the potential problems and technical complexity of performing with a laptop computer onstage can be avoided, allowing the user to focus more of their attention on the musical performance. Within the format of a traditional guitar ’stomp box’, the OWL aims to integrate seamlessly into a guitarist’s existing pedal board setup, and in this way serves as an example of a ubiquitous and tangible computing device: a programmable computer designed to fit into an existing mode of musical performance whilst being transparent in use.
@inproceedings{twebster12014, author = {Webster, Thomas and LeNost, Guillaume and Klang, Martin}, title = {The OWL programmable stage effects pedal: Revising the concept of the on-stage computer for live music performance}, pages = {621--624}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178979}, url = {http://www.nime.org/proceedings/2014/nime2014_399.pdf} }
Nicholas Ward and Giuseppe Torre. 2014. Constraining Movement as a Basis for DMI Design and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 449–454. http://doi.org/10.5281/zenodo.1178977
Abstract
Download PDF DOI
In this paper we describe the application of a movement-based design process for digital musical instruments which led to the development of a prototype DMI named the Twister. The development is described in two parts. Firstly, we consider the design of the interface or physical controller. Following this we describe the development of a specific sonic character, mapping approach and performance. In both these parts an explicit consideration of the type of movement we would like the device to engender in performance drove the design choices. By considering these two parts separately we draw attention to two different levels at which movement might be considered in the design of DMIs; at a general level of ranges of movement in the creation of the controller and a more specific, but still quite open, level in the creation of the final instrument and a particular performance. In light of the results of this process the limitations of existing representations of movement within the DMI design discourse is discussed. Further, the utility of a movement focused design approach is discussed.
@inproceedings{gtorre2014, author = {Ward, Nicholas and Torre, Giuseppe}, title = {Constraining Movement as a Basis for {DMI} Design and Performance.}, pages = {449--454}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178977}, url = {http://www.nime.org/proceedings/2014/nime2014_404.pdf} }
Matthew Davies, Adam Stark, Fabien Gouyon, and Masataka Goto. 2014. Improvasher: A Real-Time Mashup System for Live Musical Input. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 541–544. http://doi.org/10.5281/zenodo.1178744
Abstract
Download PDF DOI
In this paper we present Improvasher, a real-time musical accompaniment system which creates an automatic mashup to accompany live musical input. Improvasher is built around two music processing modules: the first, a performance following technique, makes beat-synchronous predictions of chroma features from a live musical input; the second, a music mashup system, determines the compatibility between beat-synchronous chromagrams from different pieces of music. Through the combination of these two techniques, a real-time predictive mashup can be generated, pointing towards a new form of automatic accompaniment for interactive musical performance.
@inproceedings{mdavies2014, author = {Davies, Matthew and Stark, Adam and Gouyon, Fabien and Goto, Masataka}, title = {Improvasher: A Real-Time Mashup System for Live Musical Input}, pages = {541--544}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178744}, url = {http://www.nime.org/proceedings/2014/nime2014_405.pdf} }
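One common way to score compatibility between beat-synchronous chromagrams, offered here only as a plausible stand-in for the paper's actual measure, is the best average cosine similarity over the twelve pitch-class rotations of the candidate piece:

import numpy as np

def chroma_compatibility(live_chroma, candidate_chroma):
    """Score two beat-synchronous chromagrams (shape: beats x 12).

    Returns the best mean cosine similarity over the 12 pitch-class rotations of the
    candidate, together with the rotation (in semitones) that achieves it.
    """
    def normalise(c):
        return c / (np.linalg.norm(c, axis=1, keepdims=True) + 1e-9)

    live = normalise(live_chroma)
    best_score, best_shift = -1.0, 0
    for shift in range(12):
        cand = normalise(np.roll(candidate_chroma, shift, axis=1))
        score = float(np.mean(np.sum(live * cand, axis=1)))
        if score > best_score:
            best_score, best_shift = score, shift
    return best_score, best_shift

# Example with random stand-in chroma features (32 beats each).
rng = np.random.default_rng(1)
print(chroma_compatibility(rng.random((32, 12)), rng.random((32, 12))))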
Olivier Perrotin and Christophe d’Alessandro. 2014. Visualizing Gestures in the Control of a Digital Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 605–608. http://doi.org/10.5281/zenodo.1178901
Abstract
Download PDF DOI
Conceiving digital musical instruments can be challenging in terms of spectator accessibility. Depending on the interface and the complexity of the software used as a transition between the controller and sound, a musician’s performance can be totally opaque to the audience and lose its interest. This paper examines the possibility of adding visual feedback to help the audience’s understanding and to add expressivity to the performance. It explores the various mapping organizations between controller and sound, giving different spaces of representation for the visual feedback: it can be either an amplification of the controller parameters or a representation of the related musical parameters. Different examples of visualization are presented and evaluated, taking the Cantor Digitalis as a support. The representation of musical parameters, little used compared to the representation of controllers, was well received by the audience, highlighting the musical intention of the performers.
@inproceedings{operrotin2014, author = {Perrotin, Olivier and d'Alessandro, Christophe}, title = {Visualizing Gestures in the Control of a Digital Musical Instrument}, pages = {605--608}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178901}, url = {http://www.nime.org/proceedings/2014/nime2014_406.pdf} }
Liam Donovan and Andrew McPherson. 2014. The Talking Guitar: Headstock Tracking and Mapping Strategies. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 351–354. http://doi.org/10.5281/zenodo.1178752
Abstract
Download PDF DOI
This paper presents the Talking Guitar, an electric guitar augmented with a system which tracks the position of the headstock in real time and uses that data to control the parameters of a formant-filtering effect which impresses upon the guitar sound a sense of speech. A user study is conducted with the device to establish an indication of the practicality of using headstock tracking to control effect parameters and to suggest natural and useful mapping strategies. Individual movements and gestures are evaluated in order to guide further development of the system.
@inproceedings{ldonovan2014, author = {Donovan, Liam and McPherson, Andrew}, title = {The Talking Guitar: Headstock Tracking and Mapping Strategies}, pages = {351--354}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178752}, url = {http://www.nime.org/proceedings/2014/nime2014_407.pdf} }
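A rough sketch of formant filtering driven by a single tracked parameter, assuming a normalised headstock tilt in 0..1 and two invented vowel formant sets; the paper's actual vowel data, filter design and mapping strategies are not reproduced here.

import numpy as np
from scipy import signal

# Approximate first two formant frequencies (Hz) for /a/ and /i/ (assumed values).
VOWEL_A = np.array([800.0, 1200.0])
VOWEL_I = np.array([300.0, 2300.0])

def formant_filter(audio, sr, tilt):
    """Filter audio through two resonant band-passes; tilt in 0..1 morphs /a/ -> /i/."""
    formants = (1.0 - tilt) * VOWEL_A + tilt * VOWEL_I
    out = np.zeros_like(audio)
    for f in formants:
        sos = signal.butter(2, (0.8 * f, 1.2 * f), btype="bandpass", fs=sr, output="sos")
        out += signal.sosfilt(sos, audio)
    return out

sr = 44100
guitar = np.random.randn(sr) * 0.1            # stand-in for a guitar signal
print(formant_filter(guitar, sr, tilt=0.25).shape)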
Victor Zappi and Andrew McPherson. 2014. Dimensionality and Appropriation in Digital Musical Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 455–460. http://doi.org/10.5281/zenodo.1178993
Abstract
Download PDF DOI
This paper investigates the process of appropriation in digital musical instrument performance, examining the effect of instrument complexity on the emergence of personal playing styles. Ten musicians of varying background were given a deliberately constrained musical instrument, a wooden cube containing a touch/force sensor, speaker and embedded computer. Each cube was identical in construction, but half the instruments were configured for two degrees of freedom while the other half allowed only a single degree. Each musician practiced at home and presented two performances, in which their techniques and reactions were assessed through video, sensor data logs, questionnaires and interviews. Results show that the addition of a second degree of freedom had the counterintuitive effect of reducing the exploration of the instrument’s affordances; this suggested the presence of a dominant constraint in one of the two configurations which strongly differentiated the process of appropriation across the two groups of participants.
@inproceedings{vzappi2014, author = {Zappi, Victor and McPherson, Andrew}, title = {Dimensionality and Appropriation in Digital Musical Instrument Design}, pages = {455--460}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178993}, url = {http://www.nime.org/proceedings/2014/nime2014_409.pdf} }
Clayton Mamedes, Mailis Rodrigues, Marcelo M. Wanderley, Jônatas Manzolli, Denise H. L. Garcia, and Paulo Ferreira-Lopes. 2014. Composing for DMIs Entoa, a Dedicate Piece for Intonaspacio. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 509–512. http://doi.org/10.5281/zenodo.1178861
Abstract
Download PDF DOI
Digital Musical Instruments (DMIs) have difficulties establishing themselves after their creation. A huge number of DMIs are presented every year and few of them actually remain in use. Several causes could explain this reality, among them the lack of a proper instrumental technique, the inadequacy of traditional musical notation and the non-existence of a repertoire dedicated to the instrument. In this paper we present Entoa, the first written music for Intonaspacio, a DMI we designed in our research project. We propose some strategies for mapping data from sensors to sound processing in order to accomplish an expressive performance. Entoa is divided into five different sections that correspond to five movements. For each, a different mapping is designed, introducing subtle alterations that progressively explore the ensemble of features of the instrument. The performer is then required to adapt his repertoire of gestures over the course of the piece. Indications are expressed through a gestural notation, where freedom is given to the performer to control certain parameters at specific moments in the music.
@inproceedings{mrodrigues2014, author = {Mamedes, Clayton and Rodrigues, Mailis and Wanderley, Marcelo M. and Manzolli, J{\^o}natas and Garcia, Denise H. L. and Ferreira-Lopes, Paulo}, title = {Composing for {DMI}s Entoa, a Dedicate Piece for Intonaspacio}, pages = {509--512}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178861}, url = {http://www.nime.org/proceedings/2014/nime2014_411.pdf} }
Dario Mazzanti, Victor Zappi, Darwin Caldwell, and Andrea Brogni. 2014. Augmented Stage for Participatory Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 29–34. http://doi.org/10.5281/zenodo.1178871
Abstract
Download PDF DOI
Designing a collaborative performance requires the use of paradigms and technologies which can deeply influence the experience of the whole piece. In this paper we define a set of six variables, and use them to describe and evaluate a number of platforms for participatory performances. Based on this evaluation, the Augmented Stage is introduced. This concept describes how Augmented Reality techniques can be used to superimpose a virtual environment, populated with interactive elements, onto a performance stage. The manipulation of these objects allows spectators to contribute to the visual and sonic outcome of the performance through their mobile devices, while keeping their freedom to focus on the stage. An interactive acoustic rock performance based on this concept was staged. Questionnaires distributed to the audience and the performers’ comments have been analyzed, contributing to an evaluation of the presented concept and platform through the defined variables.
@inproceedings{dmazzanti2014, author = {Mazzanti, Dario and Zappi, Victor and Caldwell, Darwin and Brogni, Andrea}, title = {Augmented Stage for Participatory Performances}, pages = {29--34}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178871}, url = {http://www.nime.org/proceedings/2014/nime2014_413.pdf} }
Robert Tubb and Simon Dixon. 2014. The Divergent Interface: Supporting Creative Exploration of Parameter Spaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 227–232. http://doi.org/10.5281/zenodo.1178967
Abstract
Download PDF DOI
This paper outlines a theoretical framework for creative technology based on two contrasting processes: divergent exploration and convergent optimisation. We claim that these two cases require different gesture-to-parameter mapping properties. Results are presented from a user experiment that motivates this theory. The experiment was conducted using a publicly available iPad app: “Sonic Zoom”. Participants were encouraged to conduct an open ended exploration of synthesis timbre using a combination of two different interfaces. The first was a standard interface with ten sliders, hypothesised to be suited to the “convergent” stage of creation. The second was a mapping of the entire 10-D combinatorial space to a 2-D surface using a space filling curve. This novel interface was intended to support the “divergent” aspect of creativity. The paths of around 250 users through both 2-D and 10-D space were logged and analysed. Both the interaction data and questionnaire results show that the different interfaces tended to be used for different aspects of sound creation, and a combination of these two navigation styles was deemed to be more useful than either individually. The study indicates that the predictable, separate parameters found in most music technology are more appropriate for convergent tasks.
@inproceedings{rtubb2014, author = {Tubb, Robert and Dixon, Simon}, title = {The Divergent Interface: Supporting Creative Exploration of Parameter Spaces}, pages = {227--232}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178967}, url = {http://www.nime.org/proceedings/2014/nime2014_415.pdf} }
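The 2-D-surface-to-10-D mapping can be illustrated with any space-filling curve; the sketch below uses a Morton (Z-order) curve and an arbitrary resolution rather than whatever curve and resolution Sonic Zoom actually uses, mapping a position along the curve to ten synthesis parameters by de-interleaving bits.

def curve_to_params(t, dims=10, bits_per_dim=3):
    """Map a position t in [0, 1) along a Morton (Z-order) curve to `dims` parameters in [0, 1)."""
    total_bits = dims * bits_per_dim
    index = int(t * (1 << total_bits))        # integer position along the 1-D curve
    coords = [0] * dims
    for bit in range(total_bits):             # de-interleave curve bits across the dimensions
        dim = bit % dims
        coords[dim] |= ((index >> bit) & 1) << (bit // dims)
    scale = float(1 << bits_per_dim)
    return [c / scale for c in coords]

# Two nearby positions on the curve and the parameter vectors they select.
print(curve_to_params(0.50000))
print(curve_to_params(0.50001))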
Stu Favilla and Sonja Pedell. 2014. Touch Screen Collaborative Music: Designing NIME for Older People with Dementia. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 35–39. http://doi.org/10.5281/zenodo.1178760
Abstract
Download PDF DOI
This paper presents new touch-screen collaborative music interaction for people with dementia. The authors argue that dementia technology has yet to focus on collaborative multi-user group musical interactions. The project aims to contribute to dementia care while addressing a significant gap in current literature. Two trials explore contrasting musical scenarios: the performance of abstract electronic music and the distributed performance of J.S. Bach’s Goldberg Variations. Findings presented in this paper demonstrate that people with dementia can successfully perform and engage in collaborative music performance activities with little or no scaffolded instruction. Further findings suggest that people with dementia can develop and retain musical performance skill over time. This paper proposes a number of guidelines and design solutions.
@inproceedings{sfavilla2014, author = {Favilla, Stu and Pedell, Sonja}, title = {Touch Screen Collaborative Music: Designing NIME for Older People with Dementia}, pages = {35--39}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178760}, url = {http://www.nime.org/proceedings/2014/nime2014_417.pdf} }
Joel Eaton, Weiwei Jin, and Eduardo Miranda. 2014. The Space Between Us. A Live Performance with Musical Score Generated via Emotional Levels Measured in EEG of One Performer and an Audience Member. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 593–596. http://doi.org/10.5281/zenodo.1178756
Abstract
Download PDF DOI
The Space Between Us is a live performance piece for vocals, piano and live electronics using a Brain-Computer Music Interface system for emotional control of the score. The system not only aims to reflect emotional states but to direct and induce emotional states through the real-time generation of the score, highlighting the potential of direct neural-emotional manipulation in live performance. The EEG of the vocalist and one audience member is measured throughout the performance and the system generates a real-time score based on mapping the emotional features within the EEG. We measure the two emotional descriptors, valence and arousal, within EEG and map the two-dimensional correlate of averaged windows to musical phrases. These pre-composed phrases contain associated emotional content based on the KTH Performance Rules System (Director Musices). The piece is in three movements, the first two are led by the emotions of each subject respectively, whilst the third movement interpolates the combined response of the performer and audience member. The system not only aims to reflect the individuals’ emotional states but also attempts to induce a shared emotional experience by drawing the two responses together. This work highlights the potential available in affecting neural-emotional manipulation within live performance and demonstrates a new approach to real-time, affectively-driven composition.
@inproceedings{jeaton2014, author = {Eaton, Joel and Jin, Weiwei and Miranda, Eduardo}, title = {The Space Between Us. A Live Performance with Musical Score Generated via Emotional Levels Measured in {EEG} of One Performer and an Audience Member}, pages = {593--596}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178756}, url = {http://www.nime.org/proceedings/2014/nime2014_418.pdf} }
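A toy sketch of mapping averaged valence/arousal windows to pre-composed phrases, selecting the phrase whose emotional tag is nearest in the 2-D plane; the phrase material, tags and window handling are invented and do not reflect the actual system or the KTH performance rules.

import numpy as np

# Pre-composed phrases tagged with (valence, arousal) in [-1, 1] x [-1, 1] (illustrative only).
PHRASES = {
    (0.8, 0.7): ["C4", "E4", "G4", "C5"],     # bright, energetic
    (0.7, -0.5): ["C4", "E4", "G4"],          # calm, positive
    (-0.6, 0.6): ["C4", "Eb4", "F#4", "A4"],  # tense
    (-0.7, -0.6): ["A3", "C4", "E4"],         # dark, subdued
}

def select_phrase(valence_window, arousal_window):
    """Average the EEG-derived windows and return the phrase with the nearest emotional tag."""
    point = np.array([np.mean(valence_window), np.mean(arousal_window)])
    nearest = min(PHRASES, key=lambda tag: np.linalg.norm(point - np.array(tag)))
    return PHRASES[nearest]

print(select_phrase([0.2, 0.4, 0.6], [0.5, 0.7, 0.9]))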
Justin Mathew, Stéphane Huot, and Alan Blum. 2014. A Morphological Analysis of Audio-Objects and their Control Methods for 3D Audio. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 415–420. http://doi.org/10.5281/zenodo.1178865
Abstract
Download PDF DOI
Recent technological improvements in audio reproduction systems increased the possibilities to spatialize sources in a listening environment. The spatialization of reproduced audio is however highly dependent on the recording technique, the rendering method, and the loudspeaker configuration. While object-based audio production has proven to reduce the dependency on loudspeaker configurations, authoring tools are still considered to be difficult to interact with in current production environments. In this paper, we investigate the issues of spatialization techniques for object-based audio production and introduce the Spatial Audio Design Spaces (SpADS) framework, that provides insights into the spatial manipulation of object-based audio. Based on interviews with professional sound engineers, this morphological analysis clarifies the relationships between recording and rendering techniques that define audio-objects for 3D speaker configurations, allowing the analysis and the design of advanced object-based controllers as well.
@inproceedings{jmathew2014, author = {Mathew, Justin and Huot, St{\'e}phane and Blum, Alan}, title = {A Morphological Analysis of Audio-Objects and their Control Methods for {3D} Audio}, pages = {415--420}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178865}, url = {http://www.nime.org/proceedings/2014/nime2014_420.pdf} }
Rob Canning. 2014. Interactive Parallax Scrolling Score Interface for Composed Networked Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 144–146. http://doi.org/10.5281/zenodo.1178728
Abstract
Download PDF DOI
This paper describes the Parallaxis Score System, part of the author’s ongoing research into the development of technological tools that foster creative interactions between improvising musicians and predefined instructional texts. The Parallaxis platform places these texts within a networked, interactive environment with a generalised set of controls in order to explore and devise ontologies of network performance. As an interactive tool involved in music production, the score system itself undergoes a functional transformation and becomes a distributed meta-instrument in its own right, independent from, yet intrinsically connected to, those instruments held by the performers.
@inproceedings{rcanning2014, author = {Canning, Rob}, title = {Interactive Parallax Scrolling Score Interface for Composed Networked Improvisation}, pages = {144--146}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178728}, url = {http://www.nime.org/proceedings/2014/nime2014_421.pdf} }
Alain Renaud, Caecilia Charbonnier, and Sylvain Chagué. 2014. 3DinMotion A Mocap Based Interface for Real Time Visualisation and Sonification of Multi-User Interactions. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 495–496. http://doi.org/10.5281/zenodo.1178915
Abstract
Download PDF DOI
This paper provides an overview of a proposed demonstration of 3DinMotion, a system using real time motion capture of one or several subjects, which can be used in interactive audiovisual pieces and network performances. The skeleton of a subject is analyzed in real time and displayed as an abstract avatar as well as sonified based on mappings and rules to make the interplay experience lively and rewarding. A series of musical pieces have been composed for the interface following cueing strategies. In addition a second display, “the prompter” guides the users through the piece. 3DinMotion has been developed from scratch and natively, leading to a system with a very low latency, making it suitable for real time music interactions. In addition, 3DinMotion is fully compatible with the OpenSoundControl (OSC) protocol, allowing expansion to commonly used musical and sound design applications.
@inproceedings{arenaud2014, author = {Renaud, Alain and Charbonnier, Caecilia and Chagu\'e, Sylvain}, title = {{3D}inMotion A Mocap Based Interface for Real Time Visualisation and Sonification of Multi-User Interactions}, pages = {495--496}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178915}, url = {http://www.nime.org/proceedings/2014/nime2014_423.pdf} }
Udit Roy, Tejaswinee Kelkar, and Bipin Indurkhya. 2014. TrAP: An Interactive System to Generate Valid Raga Phrases from Sound-Tracings. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 243–246. http://doi.org/10.5281/zenodo.1178923
Abstract
Download PDF DOI
We propose a new musical interface, TrAP (TRace-A-Phrase) for generating phrases of Hindustani Classical Music (HCM). In this system the user traces melodic phrases on a tablet interface to create phrases in a raga. We begin by analyzing tracings drawn by 28 participants, and train a classifier to categorize them into one of four melodic categories from the theory of Hindustani Music. Then we create a model based on note transitions from the raga grammar for the notes used in the singable octaves in HCM. Upon being given a new tracing, the system segments the tracing and computes a final phrase that best approximates the tracing.
@inproceedings{tkelkar2014, author = {Roy, Udit and Kelkar, Tejaswinee and Indurkhya, Bipin}, title = {TrAP: An Interactive System to Generate Valid Raga Phrases from Sound-Tracings}, pages = {243--246}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178923}, url = {http://www.nime.org/proceedings/2014/nime2014_424.pdf} }
Nick Collins and Alex McLean. 2014. Algorave: Live Performance of Algorithmic Electronic Dance Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 355–358. http://doi.org/10.5281/zenodo.1178734
Abstract
Download PDF DOI
The algorave movement has received reasonable international exposure in the last two years, including a series of concerts in Europe and beyond, and press coverage in a number of media. This paper seeks to illuminate some of the historical precedents to the scene, its primary aesthetic goals, and the divergent technological and musical approaches of representative participants. We keep in mind the novel possibilities in musical expression explored by algoravers. The scene is by no means homogeneous, and the very lack of uniformity of technique, from new live coding languages through code DJing to plug-in combination, with or without visual extension, is indicative of the flexibility of computers themselves as general information processors.
@inproceedings{ncollins2014, author = {Collins, Nick and McLean, Alex}, title = {Algorave: Live Performance of Algorithmic Electronic Dance Music}, pages = {355--358}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178734}, url = {http://www.nime.org/proceedings/2014/nime2014_426.pdf} }
John Bowers and Tim Shaw. 2014. Reappropriating Museum Collections: Performing Geology Specimens and Meterology Data as New Instruments for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 175–178. http://doi.org/10.5281/zenodo.1178720
Abstract
Download PDF DOI
In this paper we describe an artistic response to a collection of natural history museum artefacts, developed as part of a residency organised around a public participatory workshop. Drawing on a critical literature in studies of material culture, the work incorporated data sonification, image audification, field recordings and created a number of instruments for exploring geological artefacts and meterological data as aesthetic material. The residency culminated in an exhibition presented as a ’sensorium’ for the sensory exploration of museum objects. In describing the methods and thinking behind the project this paper presents an alternative approach to engaging artists and audiences with local heritage and museum archives, which draws on research in NIME and allied literatures, and which is devoted to enlivening collections as occasions for varied interpretation, appropriation and aesthetic response.
@inproceedings{jbowers12014, author = {Bowers, John and Shaw, Tim}, title = {Reappropriating Museum Collections: Performing Geology Specimens and Meterology Data as New Instruments for Musical Expression}, pages = {175--178}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178720}, url = {http://www.nime.org/proceedings/2014/nime2014_429.pdf} }
Aristotelis Hadjakos and Simon Waloschek. 2014. SPINE: A TUI Toolkit and Physical Computing Hybrid. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 625–628. http://doi.org/10.5281/zenodo.1178782
Abstract
Download PDF DOI
Physical computing platforms such as the Arduino have significantly simplified developing physical musical interfaces. However, those platforms typically target everyday programmers rather than composers and media artists. On the other hand, tangible user interface (TUI) toolkits, which provide an integrated, easy-to-use solution have not gained momentum in modern music creation. We propose a concept that hybridizes physical computing and TUI toolkit approaches. This helps to tackle typical TUI toolkit weaknesses, namely quick sensor obsolescence and limited choices. We developed a physical realization based on the idea of "universal pins", which can be configured to perform a variety of duties, making it possible to connect different sensor breakouts and modules. We evaluated our prototype by making performance measurements and conducting a user study demonstrating the feasibility of our approach.
@inproceedings{ahadjakos12014, author = {Hadjakos, Aristotelis and Waloschek, Simon}, title = {SPINE: A TUI Toolkit and Physical Computing Hybrid}, pages = {625--628}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178782}, url = {http://www.nime.org/proceedings/2014/nime2014_430.pdf} }
Owen Green. 2014. NIME, Musicality and Practice-led Methods. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 1–6. http://doi.org/10.5281/zenodo.1178776
Abstract
Download PDF DOI
To engage with questions of musicality is to invite into consideration a complex network of topics beyond the mechanics of soundful interaction with our interfaces. Drawing on the work of Born, I sketch an outline of the reach of these topics. I suggest that practice-led methods, by dint of focussing on the lived experience where many of these topics converge, may be able to serve as a useful methodological ‘glue’ for NIME by helping stimulate useful agonistic discussion on our objects of study, and map the untidy contours of contemporary practices. I contextualise this discussion by presenting two recently developed improvisation systems and drawing from these some starting suggestions for how attention to the grain of lived practice could usefully contribute to considerations for designers in terms of the pursuit of musicality and the care required in considering performances in evaluation.
@inproceedings{ogreen2014, author = {Green, Owen}, title = {NIME, Musicality and Practice-led Methods}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178776}, url = {http://www.nime.org/proceedings/2014/nime2014_434.pdf} }
Janis Sokolovskis and Andrew McPherson. 2014. Optical Measurement of Acoustic Drum Strike Locations. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 70–73. http://doi.org/10.5281/zenodo.1178943
Abstract
Download PDF DOI
This paper presents a method for locating the position of a strike on an acoustic drumhead. Near-field optical sensors were installed underneath the drumhead of a commercially available snare drum. By implementing a time difference of arrival (TDOA) algorithm, accuracy within 2 cm was achieved in approximating the location of strikes. The system can be used for drum performance analysis and timbre analysis, and can form the basis of an augmented drum performance system.
@inproceedings{jsokolovskis2014, author = {Sokolovskis, Janis and McPherson, Andrew}, title = {Optical Measurement of Acoustic Drum Strike Locations}, pages = {70--73}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178943}, url = {http://www.nime.org/proceedings/2014/nime2014_436.pdf} }
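A rough sketch of time-difference-of-arrival localisation in the spirit of the abstract above, not the authors' algorithm: pairwise arrival-time differences are estimated by cross-correlation, and a grid search picks the strike position whose predicted delays best match them; the sensor layout and drumhead wave speed are assumed values.

import numpy as np

def estimate_delay(a, b, sr):
    """Arrival-time difference t_a - t_b (seconds) between two sensor signals via cross-correlation."""
    corr = np.correlate(a, b, mode="full")
    return (corr.argmax() - (len(b) - 1)) / sr

def locate_strike(signals, sensor_xy, sr, wave_speed=100.0, grid_step=0.005, radius=0.18):
    """Grid search for the strike point (metres) that best explains the measured TDOAs."""
    measured = [estimate_delay(signals[0], s, sr) for s in signals[1:]]
    xs = np.arange(-radius, radius, grid_step)
    best, best_err = (0.0, 0.0), np.inf
    for x in xs:
        for y in xs:
            dists = [np.hypot(x - sx, y - sy) for sx, sy in sensor_xy]
            predicted = [(dists[0] - d) / wave_speed for d in dists[1:]]
            err = sum((m - p) ** 2 for m, p in zip(measured, predicted))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Tiny synthetic example: three sensors under the head, an impulse reaching them at different times.
sr = 48000
sensor_xy = [(0.0, 0.12), (-0.10, -0.06), (0.10, -0.06)]
sig = np.zeros((3, 2048))
for i, onset in enumerate((100, 130, 118)):   # arbitrary arrival samples
    sig[i, onset] = 1.0
print(locate_strike(sig, sensor_xy, sr))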
Fabio Morreale, Antonella De Angeli, and Sile O’Modhrain. 2014. Musical Interface Design: An Experience-oriented Framework. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 467–472. http://doi.org/10.5281/zenodo.1178879
Abstract
Download PDF DOI
This paper presents MINUET, a framework for musical interface design grounded in the experience of the player. MINUET aims to provide new perspectives on the design of musical interfaces, a general term used here to comprise digital musical instruments and interactive installations. The ultimate purpose is to reduce the complexity of the design space by emphasizing the experience of the player. MINUET is structured as a design process consisting of two stages: goal and specifications. The reliability of MINUET is tested through a systematic comparison with related work and through a case study. To this end, we present the design and prototyping of Hexagon, a new musical interface for learning purposes.
@inproceedings{fmorreale2014, author = {Morreale, Fabio and Angeli, Antonella De and O'Modhrain, Sile}, title = {Musical Interface Design: An Experience-oriented Framework}, pages = {467--472}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178879}, url = {http://www.nime.org/proceedings/2014/nime2014_437.pdf} }
John Bowers and Annika Haas. 2014. Hybrid Resonant Assemblages: Rethinking Instruments, Touch and Performance in New Interfaces for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 7–12. http://doi.org/10.5281/zenodo.1178718
Abstract
Download PDF DOI
This paper outlines a concept of hybrid resonant assemblages, combinations of varied materials excited by sound transducers, feeding back to themselves via digital signal processing. We ground our concept as an extension of work by David Tudor, Nicolas Collins and Bowers and Archer [NIME 2005] and draw on a variety of critical perspectives in the social sciences and philosophy to explore such assemblages as an alternative to more familiar ideas of instruments and interfaces. We lay out a conceptual framework for the exploration of hybrid resonant assemblages and describe how we have approached implementing them. Our performance experience is presented and implications for work are discussed. In the light of our work, we urge a reconsideration of the implicit norms of performance which underlie much research in NIME. In particular, drawing on the philosophical work of Jean-Luc Nancy, we commend a wider notion of touch that also recognises the performative value of withholding contact.
@inproceedings{jbowers2014, author = {Bowers, John and Haas, Annika}, title = {Hybrid Resonant Assemblages: Rethinking Instruments, Touch and Performance in New Interfaces for Musical Expression}, pages = {7--12}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178718}, url = {http://www.nime.org/proceedings/2014/nime2014_438.pdf} }
Cornelius Pöpel, Jochen Feitsch, Marco Strobel, and Christian Geiger. 2014. Design and Evaluation of a Gesture Controlled Singing Voice Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 359–362. http://doi.org/10.5281/zenodo.1178905
Abstract
Download PDF DOI
We present a system that allows users to experience singing without singing, using gesture-based interaction techniques. We designed a set of body-related interaction and multi-modal feedback techniques and developed a singing voice synthesizer system that is controlled by the user’s mouth shapes and arm gestures. Based on the adaptation of a number of digital media-related techniques such as face and body tracking, 3D rendering, singing voice synthesis and physical computing, we developed a media installation that allows users to perform an aria without actually singing and that provides the look and feel of a 20th-century opera singer’s performance. We conducted a preliminary evaluation of the system with users.
@inproceedings{cgeiger2014, author = {Pöpel, Cornelius and Feitsch, Jochen and Strobel, Marco and Geiger, Christian}, title = {Design and Evaluation of a Gesture Controlled Singing Voice Installation}, pages = {359--362}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178905}, url = {http://www.nime.org/proceedings/2014/nime2014_439.pdf} }
Duncan Williams, Peter Randall-Page, and Eduardo Miranda. 2014. Timbre morphing: near real-time hybrid synthesis in a musical installation. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 435–438. http://doi.org/10.5281/zenodo.1178983
Abstract
Download PDF DOI
This paper presents an implementation of a near real-time timbre morphing signal processing system, designed to facilitate an element of ‘liveness’ and unpredictability in a musical installation. The timbre morpher is a hybrid analysis and synthesis technique based on Spectral Modeling Synthesis (an additive and subtractive modeling technique). The musical installation forms an interactive soundtrack in response to the series of Rosso Luana marble sculptures Shapes in the Clouds, I, II, III, IV & V by artist Peter Randall-Page, exhibited at the Peninsula Arts Gallery in Devon, UK, from 1 February to 29 March 2014. The timbre morphing system is used to transform live input captured at each sculpture with a discrete microphone array, by morphing towards noisy source signals that have been associated with each sculpture as part of a pre-determined musical structure. The resulting morphed audio is then fed back to the gallery via a five-channel speaker array. Visitors are encouraged to walk freely through the installation and interact with the sound world, creating unique audio morphs based on their own movements, voices, and incidental sounds.
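As a rough illustration of spectral morphing in general (not the authors’ SMS implementation), a single frame of the live input can be blended towards a target sound by interpolating magnitude spectra while keeping the live phase. The frame size and window below are arbitrary choices.

import numpy as np

N = 1024                                   # frame size (arbitrary choice)
WINDOW = np.hanning(N)

def morph_frame(live, target, amount):
    """live, target: N-sample frames; amount 0.0 keeps the live frame,
    1.0 imposes the target magnitudes."""
    L = np.fft.rfft(live * WINDOW)
    T = np.fft.rfft(target * WINDOW)
    mag = (1.0 - amount) * np.abs(L) + amount * np.abs(T)   # blend magnitudes
    phase = np.angle(L)                                     # keep the live phase
    return np.fft.irfft(mag * np.exp(1j * phase)) * WINDOW  # overlap-add outside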
@inproceedings{dwilliams2014, author = {Williams, Duncan and Randall-Page, Peter and Miranda, Eduardo}, title = {Timbre morphing: near real-time hybrid synthesis in a musical installation}, pages = {435--438}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178983}, url = {http://www.nime.org/proceedings/2014/nime2014_440.pdf} }
Piers Titus van der Torren. 2014. Striso, a Compact Expressive Instrument Based on a New Isomorphic Note Layout. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 615–620. http://doi.org/10.5281/zenodo.1178957
Abstract
Download PDF DOI
The Striso is a new expressive music instrument with an acoustic feel, designed to be intuitive to play and playable everywhere. The sound of every note can be precisely controlled using the direction- and pressure-sensitive buttons, combined with instrument motion such as tilting or shaking. It works standalone, with an internal speaker and battery, and is meant as a self-contained instrument with its own distinct sound, but it can also be connected to a computer to control other synthesizers. The notes are arranged in an easy and systematic way, according to the new DCompose note layout that is also presented in this paper. The DCompose note layout is designed to be compact, ergonomic, easy to learn, and closely bound to the harmonic properties of the notes.
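The following sketch shows the general idea behind isomorphic layouts, in which pitch is a linear function of button coordinates so that any fingering shape keeps its interval structure wherever it is placed. The intervals used here are a common generic choice, not the DCompose layout itself.

def midi_note(col, row, base=48, col_interval=2, row_interval=5):
    """Pitch is linear in button coordinates, so chord shapes are transposable."""
    return base + col * col_interval + row * row_interval

shape = [(0, 0), (2, 0), (1, 1)]                   # a triad shape (relative offsets)
print([midi_note(c, r) for c, r in shape])         # [48, 52, 55] -> C major
print([midi_note(c + 1, r) for c, r in shape])     # [50, 54, 57] -> D major, same shape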
@inproceedings{pvandertorren12014, author = {van der Torren, Piers Titus}, title = {Striso, a Compact Expressive Instrument Based on a New Isomorphic Note Layout}, pages = {615--620}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178957}, url = {http://www.nime.org/proceedings/2014/nime2014_442.pdf} }
Anders Tveit, Hans Wilmers, Notto Thelle, Magnus Bugge, Thom Johansen, and Eskil Muan Sæther. 2014. Reunion2012: A Novel Interface for Sound Producing Actions Through the Game of Chess. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 561–564. http://doi.org/10.5281/zenodo.1178969
Abstract
Download PDF DOI
Reunion2012 is a work for electronically modified chessboard, chess players and electronic instruments. The work is based on, but also departs from, John Cage’s Reunion, which premiered at the Sightsoundsystems Festival, Toronto, 1968. In the original performance, Cage and Marcel Duchamp played chess on an electronic board constructed by Lowell Cross. The board ‘conducted’ various electronic sound sources played by Cross, Gordon Mumma, David Tudor, and David Behrman, using photoresistors fitted under the squares [1]. Reunion2012, on the other hand, uses magnetic sensors read by an Arduino. As in Cage’s Variations V, this results in a musical situation where the improvising musicians have full control over their own sound, but no control over when their sound may be heard. In addition to a concert version, this paper also describes an interactive installation based on the same hardware.
@inproceedings{mbugge2014, author = {Tveit, Anders and Wilmers, Hans and Thelle, Notto and Bugge, Magnus and Johansen, Thom and S{\ae}ther, Eskil Muan}, title = {{Reunion}2012: A Novel Interface for Sound Producing Actions Through the Game of Chess}, pages = {561--564}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178969}, url = {http://www.nime.org/proceedings/2014/nime2014_443.pdf} }
Akito van Troyer. 2014. Composing Embodied Sonic Play Experiences: Towards Acoustic Feedback Ecology. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 118–121. http://doi.org/10.5281/zenodo.1178961
Abstract
Download PDF DOI
Acoustic feedback controllers (AFCs) are typically applied to solve feedback problems evident in applications such as public address (PA) systems, hearing aids, and speech applications. Applying the techniques of AFCs to different contexts, such as musical performance, sound installations, and product design, presents a unique insight into the research of embodied sonic interfaces and environments. This paper presents techniques that use digital acoustic feedback control algorithms to augment the sonic properties of environments and discusses approaches to the design of sonically playful experiences that apply such techniques. Three experimental prototypes are described to illustrate how the techniques can be applied to versatile environments and continuous coupling of users’ audible actions with sonically augmented environments. The knowledge obtained from these prototypes has led to Acoustic Feedback Ecology System (AFES) design patterns. The paper concludes with some future research directions based on the prototypes and proposes several other potentially useful applications ranging from musical performance to everyday contexts.
@inproceedings{avantroyer2014, author = {van Troyer, Akito}, title = {Composing Embodied Sonic Play Experiences: Towards Acoustic Feedback Ecology}, pages = {118--121}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178961}, url = {http://www.nime.org/proceedings/2014/nime2014_444.pdf} }
Mark Cartwright and Bryan Pardo. 2014. SynthAssist: Querying an Audio Synthesizer by Vocal Imitation. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 363–366. http://doi.org/10.5281/zenodo.1178730
Abstract
Download PDF DOI
Programming an audio synthesizer can be a difficult task for many. However, if a user has a general idea of the sound they are trying to program, they may be able to imitate it with their voice. This paper presents SynthAssist, a system for interactively searching the synthesis space of an audio synthesizer. In this work, we present how to use the system for querying a database of audio synthesizer patches (i.e. settings/parameters) by vocal imitation and user feedback. To account for the limitations of the human voice, it uses both absolute and relative time series representations of features and relevance feedback on both the feature weights and time series to refine the query. The method presented in this paper can be used to search through large databases of previously existing “factory presets” or program a synthesizer using the data-driven approach to automatic synthesizer programming.
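One common way to compare a vocal imitation’s feature trajectory against stored patch features is a dynamic-time-warping alignment cost, sketched below. This is a generic illustration of time-series matching, not the SynthAssist algorithm; the patch-ranking usage at the end is hypothetical.

import numpy as np

def dtw_distance(query, reference):
    """query, reference: 1-D feature trajectories (e.g. pitch or loudness)."""
    n, m = len(query), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(query[i - 1] - reference[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)       # length-normalised alignment cost

# Hypothetical usage: rank stored patches by how closely their feature curve
# matches the imitation.
# ranked = sorted(patches, key=lambda p: dtw_distance(imitation, p.features))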
@inproceedings{mcartwright2014, author = {Cartwright, Mark and Pardo, Bryan}, title = {SynthAssist: Querying an Audio Synthesizer by Vocal Imitation}, pages = {363--366}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178730}, url = {http://www.nime.org/proceedings/2014/nime2014_446.pdf} }
Charles Hutchins, Holger Ballweg, Shelly Knotts, Jonas Hummel, and Antonio Roberts. 2014. Soundbeam: A Platform for Sonyfing Web Tracking. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 497–498. http://doi.org/10.5281/zenodo.1178810
Abstract
Download PDF DOI
Government spying on internet traffic has seemingly become ubiquitous. Not to be left out, the private sector tracks our online footprint via our ISP or with a little help from Facebook. Web services, such as advertisement servers and Google, track our progress as we surf the net and click on links. The Mozilla plugin Lightbeam (formerly Collusion) shows the user a visual map of every site a surfer sends data to. An interconnected web of advertisers and other (otherwise) invisible data-gatherers quickly builds up during normal usage. We propose modifying this plugin so that, as the graph builds, its state is broadcast via OSC. Members of BiLE will receive and interpret those OSC messages in SuperCollider and Pd. We will act as a translational object in a process of live sonification. The collected data is the material with which we will develop a set of music tracks based on patterns we may discover. The findings of our data collection and the developed music will be presented in the form of an audiovisual live performance. Snippets of collected text and URLs will both form the basis of our audio interpretation and be projected onto a screen, so an audience can voyeuristically experience the actions taken on their behalf by governments and advertisers. After the concert, all of the scripts and documentation related to the data collection and sharing in the piece will be posted to GitHub under a GPL license.
@inproceedings{chutchins2014, author = {Hutchins, Charles and Ballweg, Holger and Knotts, Shelly and Hummel, Jonas and Roberts, Antonio}, title = {Soundbeam: A Platform for Sonyfing Web Tracking}, pages = {497--498}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178810}, url = {http://www.nime.org/proceedings/2014/nime2014_447.pdf} }
Josep Comajuncosas and Enric Guaus. 2014. Conducting Collective Instruments : A Case Study. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 513–516. http://doi.org/10.5281/zenodo.1178736
Abstract
Download PDF DOI
Traditionally, music ensembles are led by a conductor, who is responsible for coordinating and guiding the group according to specific musical criteria. Similarly, computer ensembles resort to a conductor to keep the synchronization and structural coordination of the performance, often with the assistance of software. Achieving integration and coherence in a networked performance, however, can be challenging in certain scenarios. This is the case for configurations with a high degree of mutual interdependence and shared control. This paper focuses on design strategies for developing a software-based conductor assistant for collective instruments. We propose a novel conductor dimension space representation for collective instruments, which takes into account both social and structural features. We present a case study of a collective instrument implementing a software conductor. Finally, we discuss the implications of human and machine conduction schemes in the context of the proposed dimension space.
@inproceedings{jcomajuncosas2014, author = {Comajuncosas, Josep and Guaus, Enric}, title = {Conducting Collective Instruments : A Case Study}, pages = {513--516}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178736}, url = {http://www.nime.org/proceedings/2014/nime2014_448.pdf} }
Michael Gurevich. 2014. Distributed Control in a Mechatronic Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 487–490. http://doi.org/10.5281/zenodo.1178780
Abstract
Download PDF DOI
Drawing on concepts from systemics, cybernetics, and musical automata, this paper proposes a mechatronic, electroacoustic instrument that allows for shared control between programmed, mechanized motion and a human interactor. We suggest that such an instrument, situated somewhere between a robotic musical instrument and a passive controller, will foster the emergence of new, complex, and meaningful modes of musical interaction. In line with the methodological principles of practice as research, we describe the development and design of one such instrument, Stringtrees. The design process also reflects the notion of ambiguity as a resource in design: The instrument was endowed with a collection of sensors, controls, and actuators without a highly specific or prescriptive model for how a musician would interact with it.
@inproceedings{mgurevich12014, author = {Gurevich, Michael}, title = {Distributed Control in a Mechatronic Musical Instrument}, pages = {487--490}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178780}, url = {http://www.nime.org/proceedings/2014/nime2014_449.pdf} }
Diemo Schwarz, Pierre Alexandre Tremblay, and Alex Harker. 2014. Rich Contacts: Corpus-Based Convolution of Contact Interaction Sound for Enhanced Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 247–250. http://doi.org/10.5281/zenodo.1178935
Abstract
Download PDF DOI
We propose ways of enriching the timbral potential of gestural sonic material captured via piezo or contact microphones, through latency-free convolution of the microphone signal with grains from a sound corpus. This creates a new way to combine the sonic richness of large sound corpora, easily accessible via navigation through a timbral descriptor space, with the intuitive gestural interaction with a surface, captured by any contact microphone. We use convolution to excite the grains from the corpus via the microphone input, capturing the contact interaction sounds, which allows articulation of the corpus by hitting, scratching, or strumming a surface with various parts of the hands or objects. We also show how changes of grains have to be carefully handled, how one can smoothly interpolate between neighbouring grains, and finally evaluate the system against previous attempts.
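A heavily simplified, block-based sketch of the idea follows: choose the nearest grain in timbral descriptor space and excite it with the contact-microphone block by convolution. The paper describes latency-free convolution and smooth grain interpolation, which this naive version does not attempt; descriptor names and array shapes are assumptions.

import numpy as np
from scipy.signal import fftconvolve

def choose_grain(corpus_descriptors, target):
    """Nearest grain in timbral descriptor space (e.g. centroid, loudness)."""
    return int(np.linalg.norm(corpus_descriptors - target, axis=1).argmin())

def process_block(mic_block, grains, corpus_descriptors, target):
    """Excite the selected corpus grain with the contact-microphone block."""
    grain = grains[choose_grain(corpus_descriptors, target)]
    return fftconvolve(mic_block, grain)[: len(mic_block)]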
@inproceedings{dschwarz12014, author = {Schwarz, Diemo and Tremblay, Pierre Alexandre and Harker, Alex}, title = {Rich Contacts: Corpus-Based Convolution of Contact Interaction Sound for Enhanced Musical Expression}, pages = {247--250}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178935}, url = {http://www.nime.org/proceedings/2014/nime2014_451.pdf} }
Federico Visi, Rodrigo Schramm, and Eduardo Miranda. 2014. Use of Body Motion to Enhance Traditional Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 601–604. http://doi.org/10.5281/zenodo.1178973
Abstract
Download PDF DOI
This work describes a new approach to gesture mapping in a performance with a traditional musical instrument and live electronics based upon theories of embodied music cognition (EMC) and musical gestures. Considerations on EMC and how gestures affect the experience of music inform different mapping strategies. Our intent is to enhance the expressiveness and the liveness of performance by tracking gestures via a multimodal motion capture system and to use motion data to control several features of the music. After a review of recent research in the field, a proposed application of such theories to a performance with electric guitar and live electronics will follow, focusing both on aspects of meaning formation and motion capturing.
@inproceedings{fvisi2014, author = {Visi, Federico and Schramm, Rodrigo and Miranda, Eduardo}, title = {Use of Body Motion to Enhance Traditional Musical Instruments}, pages = {601--604}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178973}, url = {http://www.nime.org/proceedings/2014/nime2014_460.pdf} }
Jimin Jeon, Gunho Chae, Edward Jangwon Lee, and Woon Seung Yeo. 2014. TAPIR Sound Tag: An Enhanced Sonic Communication Framework for Audience Participatory Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 367–370. http://doi.org/10.5281/zenodo.1178818
Abstract
Download PDF DOI
This paper presents an enhanced sonic data communication method using TAPIR (Theoretically Audible, but Practically Inaudible Range: frequencies above 18 kHz) sound, and a software toolkit as its implementation. Using inaudible sound as a data medium, a digital data network among the audience and performer can be easily built with microphones and speakers, without requiring any additional hardware. “TAPIR Sound Tag” is a smart device framework for inaudible data communication that can be easily embedded in audience participatory performances and interactive art. With a bandwidth of 900 Hz, a high transmission rate of 200 bps can be achieved, enabling peer-to-peer or broadcast real-time data communication among smart devices. This system can be used without any advanced knowledge of signal processing or communication theory; simply specifying the carrier frequency and bandwidth with a few lines of code is enough to start data communication. Several usage scenarios of the system are also presented, such as participating in an interactive performance by adding and controlling sound, and collaborative completion of an artist’s work by the audience. We expect this framework to provide artists with a new way of audience interaction, as well as to further promote audience participation by simplifying the process: personal smart devices serve as the medium, and no additional hardware or complex settings are required.
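To make the near-ultrasonic transmission idea concrete, the sketch below encodes bits as short tone bursts just above 18 kHz, loosely matching the bit-rate and bandwidth figures quoted in the abstract. The exact tone frequencies and framing are illustrative assumptions, not the TAPIR toolkit’s scheme.

import numpy as np

FS = 44_100
BIT_RATE = 200                        # bits per second, as quoted above
F0, F1 = 18_200.0, 18_800.0           # "0"/"1" tones inside a ~900 Hz band (assumed)

def encode(bits):
    n = int(FS / BIT_RATE)            # samples per bit
    t = np.arange(n) / FS
    fade = n // 8
    env = np.ones(n)
    ramp = np.hanning(2 * fade)
    env[:fade], env[-fade:] = ramp[:fade], ramp[fade:]   # short fades avoid clicks
    return np.concatenate(
        [env * np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

burst = encode([1, 0, 1, 1, 0, 0, 1, 0])   # play back or write to a sound file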
@inproceedings{jjeon2014, author = {Jeon, Jimin and Chae, Gunho and Lee, Edward Jangwon and Yeo, Woon Seung}, title = {TAPIR Sound Tag: An Enhanced Sonic Communication Framework for Audience Participatory Performance}, pages = {367--370}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178818}, url = {http://www.nime.org/proceedings/2014/nime2014_461.pdf} }
Alvaro Sarasúa and Enric Guaus. 2014. Dynamics in Music Conducting: A Computational Comparative Study Among Subjects. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 195–200. http://doi.org/10.5281/zenodo.1178929
Abstract
Download PDF DOI
Many musical interfaces have used the musical conductor metaphor, allowing users to control the expressive aspects of a performance by imitating the gestures of conductors. In most of them, the rules to control these expressive aspects are predefined and users have to adapt to them. Other works have studied conductors’ gestures in relation to the performance of the orchestra. The goal of this study is to analyze, following the path initiated by this latter kind of work, how simple motion capture descriptors can explain the relationship between the loudness of a given performance and the way in which different subjects move when asked to impersonate the conductor of that performance. Twenty-five subjects were asked to impersonate the conductor of three classical music fragments while listening to them. The results of different linear regression models with motion capture descriptors as explanatory variables show that, by studying how descriptors correlate with loudness differently among subjects, different tendencies can be found and exploited to design models that better adjust to subjects’ expectations.
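The per-subject regression idea can be sketched as fitting loudness from a few motion-capture descriptors for each subject and then comparing the resulting weights. The descriptors and data below are hypothetical, for illustration only.

import numpy as np
from sklearn.linear_model import LinearRegression

def fit_subject(descriptors, loudness):
    """descriptors: (frames, n_features), e.g. hand speed, acceleration,
    quantity of motion; loudness: (frames,) loudness of the recording."""
    model = LinearRegression().fit(descriptors, loudness)
    return model.coef_, model.score(descriptors, loudness)   # weights, R^2

# Hypothetical data for two subjects, to show how weights can differ:
rng = np.random.default_rng(0)
for subject in range(2):
    X = rng.normal(size=(500, 3))
    y = X @ np.array([0.8, 0.1 * subject, 0.2]) + rng.normal(scale=0.1, size=500)
    print(fit_subject(X, y))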
@inproceedings{asarasua2014, author = {Saras\'ua, Alvaro and Guaus, Enric}, title = {Dynamics in Music Conducting: A Computational Comparative Study Among Subjects}, pages = {195--200}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178929}, url = {http://www.nime.org/proceedings/2014/nime2014_464.pdf} }
Kirsty Keatch. 2014. An Exploration of Peg Solitaire as a Compositional Tool. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 102–105. http://doi.org/10.5281/zenodo.1178827
Abstract
Download PDF DOI
Sounds of Solitaire is a novel interface for musical expression based on an extended peg solitaire board used as a generator of live musical composition. The classic single-player puzzle game is extended by mapping the moves of the game through a self-contained system using Arduino and Raspberry Pi, triggering both analogue and digital sound. The solitaire board, as instrument, is presented as a wood and Perspex box with the hardware inside. Ball bearings function as both solitaire pegs and switches, while a purpose-built, solenoid-controlled monochord and ball-bearing run provide the analogue sound source, which is digitally manipulated in real time according to the sequences of game moves. The creative intention of Sounds of Solitaire is that this playful approach to participation in a musical experience, through material for music making in real time, demonstrates an integrated approach to the concepts of composing, performing and listening.
@inproceedings{kkeatch2014, author = {Keatch, Kirsty}, title = {An Exploration of Peg Solitaire as a Compositional Tool}, pages = {102--105}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178827}, url = {http://www.nime.org/proceedings/2014/nime2014_466.pdf} }
Xiao Xiao, Basheer Tome, and Hiroshi Ishii. 2014. Andante: Walking Figures on the Piano Keyboard to Visualize Musical Motion. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 629–632. http://doi.org/10.5281/zenodo.1178987
Abstract
Download PDF DOI
We present Andante, a representation of music as animated characters walking along the piano keyboard that appear to play the physical keys with each step. Based on a view of music pedagogy that emphasizes expressive, full-body communication early in the learning process, Andante promotes an understanding of the music rooted in the body, taking advantage of walking as one of the most fundamental human rhythms. We describe three example visualizations on a preliminary prototype as well as applications extending our examples for practice feedback, improvisation and composition. Through our project, we reflect on some high level considerations for the NIME community.
@inproceedings{xxiao2014, author = {Xiao, Xiao and Tome, Basheer and Ishii, Hiroshi}, title = {Andante: Walking Figures on the Piano Keyboard to Visualize Musical Motion}, pages = {629--632}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178987}, url = {http://www.nime.org/proceedings/2014/nime2014_467.pdf} }
Adinda van ’t Klooster and Nick Collins. 2014. In A State: Live Emotion Detection and Visualisation for Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 545–548. http://doi.org/10.5281/zenodo.1178837
Abstract
Download PDF DOI
Emotion is a complex topic much studied in music, and it is arguably equally central to the visual arts, where it is usually discussed under the overarching label of aesthetics. This paper explores how music and the arts have incorporated the study of emotion. We then introduce the development of a live audiovisual interface entitled In A State that detects emotion from live audio (in this case a piano performance) and generates visuals and electroacoustic music in response.
@inproceedings{avantklooster2014, author = {van 't Klooster, Adinda and Collins, Nick}, title = {In A State: Live Emotion Detection and Visualisation for Music Performance}, pages = {545--548}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178837}, url = {http://www.nime.org/proceedings/2014/nime2014_469.pdf} }
Dan Overholt and Steven Gelineck. 2014. Design & Evaluation of an Accessible Hybrid Violin Platform. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 122–125. http://doi.org/10.5281/zenodo.1178897
Abstract
Download PDF DOI
We introduce and describe the initial evaluation of a new low-cost augmented violin prototype, with research focused on the user experience when playing such hybrid physical-digital instruments, and on the exploration of novel interactive performance techniques. Another goal of this work is wider platform accessibility for players, via a simple ‘do-it-yourself’ approach to the design described herein. While the hardware and software elements are open source, the build process can nonetheless require non-trivial investments of time and money, as well as basic electronics construction skills. These have been kept to a minimum wherever possible. Our initial prototype is based upon an inexpensive electric violin that is widely available online for approximately $200 USD. This serves as the starting point for construction, to which the design adds local Digital Signal Processing (DSP), gestural sensing, and sound output. Real-time DSP algorithms run on a mobile device, which also incorporates orientation/gesture sensors for parameter mapping, with the resulting sound amplified and rendered via small loudspeakers mounted on the instrument. The platform combines all necessary elements for digitally mediated interactive performance; the need for a traditional computer only arises when developing new DSP algorithms for the platform. An initial exploratory evaluation with users is presented, in which performers explore different possibilities with the proposed platform (various DSP implementations, mapping schemes, physical setups, etc.) in order to better establish the needs of the performing artist. Based on these results, future work is outlined, leading towards the development of a complete quartet of instruments.
@inproceedings{doverholt2014, author = {Overholt, Dan and Gelineck, Steven}, title = {Design \& Evaluation of an Accessible Hybrid Violin Platform}, pages = {122--125}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178897}, url = {http://www.nime.org/proceedings/2014/nime2014_470.pdf} }
Anna Xambó, Gerard Roma, Robin Laney, Chris Dobbyn, and Sergi Jordà. 2014. SoundXY4: Supporting Tabletop Collaboration and Awareness with Ambisonics Spatialisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 40–45. http://doi.org/10.5281/zenodo.1178985
Abstract
Download PDF DOI
Co-located tabletop tangible user interfaces (TUIs) for music performance are known for promoting multi-player collaboration on a shared interface, yet it is still unclear how best to support workspace awareness, in terms of understanding one’s own actions and the other group members’ actions in parallel. In this paper, we investigate the effects of providing auditory feedback using ambisonics spatialisation, aimed at informing users about the location of the tangibles on the tabletop surface, with groups of mixed musical backgrounds. Participants were asked to improvise music on SoundXY4: The Art of Noise, a tabletop system that includes sound samples inspired by Russolo’s taxonomy of noises. We compared spatialisation vs. no-spatialisation conditions, and findings suggest that, when using spatialisation, there was clearer workspace awareness and greater engagement in the musical activity as an immersive experience.
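For reference, a first-order Ambisonics (B-format) encode driven by a tangible’s tabletop position might look like the sketch below. This is the textbook horizontal-only encoding, not the SoundXY4 implementation; the mapping from table coordinates to azimuth is an assumption.

import numpy as np

def encode_bformat(mono, x, y):
    """mono: 1-D signal; x, y: tangible position relative to the table centre."""
    azimuth = np.arctan2(y, x)            # direction of the tangible
    w = mono / np.sqrt(2.0)               # omnidirectional component
    xc = mono * np.cos(azimuth)           # front-back component
    yc = mono * np.sin(azimuth)           # left-right component
    return np.stack([w, xc, yc])          # decode to the speaker ring downstream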
@inproceedings{axambo2014, author = {Xamb\'o, Anna and Roma, Gerard and Laney, Robin and Dobbyn, Chris and Jord\`a, Sergi}, title = {SoundXY4: Supporting Tabletop Collaboration and Awareness with Ambisonics Spatialisation}, pages = {40--45}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178985}, url = {http://www.nime.org/proceedings/2014/nime2014_471.pdf} }
Sergi Jordà and Sebastian Mealla. 2014. A Methodological Framework for Teaching, Evaluating and Informing NIME Design with a Focus on Mapping and Expressiveness. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 233–238. http://doi.org/10.5281/zenodo.1178824
Abstract
Download PDF DOI
The maturation process of the NIME field has brought a growing interest in teaching the design and implementation of Digital Music Instruments (DMIs), as well as in finding objective evaluation methods to assess the suitability of these outcomes. In this paper we propose a methodology for teaching NIME design and a set of tools meant to inform the design process. This approach has been applied in a master’s course focused on the exploration of expressiveness and on the role of the mapping component in the NIME creation chain, through a hands-on and self-reflective approach based on a restrictive setup consisting of smartphones and the Pd programming language. Working Groups were formed, and a 2-step DMI design process was applied, including 2 performance stages. The evaluation tools assessed both System and Performance aspects of each project, according to Listeners’ impressions after each performance. Listeners’ previous music knowledge was also considered. Through this methodology, students with different backgrounds were able to effectively engage in the NIME design process, developing working DMI prototypes according to the given requirements; the assessment tools proved to be consistent for evaluating NIME systems and performances, and informing the design process with the outcome of the evaluation showed traceable progress in the students’ outcomes.
@inproceedings{smealla2014, author = {Jord\`a, Sergi and Mealla, Sebastian}, title = {A Methodological Framework for Teaching, Evaluating and Informing NIME Design with a Focus on Mapping and Expressiveness}, pages = {233--238}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178824}, url = {http://www.nime.org/proceedings/2014/nime2014_472.pdf} }
Benjamin Taylor, Jesse Allison, William Conlin, Yemin Oh, and Daniel Holmes. 2014. Simplified Expressive Mobile Development with NexusUI, NexusUp, and NexusDrop. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 257–262. http://doi.org/10.5281/zenodo.1178951
Abstract
Download PDF DOI
Developing for mobile and multimodal platforms is more important now than ever, as smartphones and tablets proliferate and mobile device orchestras become commonplace. We detail NexusUI, a JavaScript framework that enables rapid prototyping and development of expressive multitouch electronic instrument interfaces within a web browser. Extensions of this project assist in easily creating dynamic user interfaces. NexusUI contains several novel encapsulations of creative interface objects, each accessible with one line of code. NexusUp assists in one-button duplication of Max interfaces into mobile-friendly web pages that transmit to Max automatically via Open Sound Control. NexusDrop enables drag-and-drop interface building and saves interfaces to a central Nexus database. Finally, we provide an overview of several projects made with NexusUI, including mobile instruments, art installations, sound diffusion tools, and iOS games, and describe Nexus’ possibilities as an architecture for our future Mobile App Orchestra.
@inproceedings{btaylor2014, author = {Taylor, Benjamin and Allison, Jesse and Conlin, William and Oh, Yemin and Holmes, Daniel}, title = {Simplified Expressive Mobile Development with NexusUI, NexusUp, and NexusDrop}, pages = {257--262}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178951}, url = {http://www.nime.org/proceedings/2014/nime2014_480.pdf} }
Bastiaan van Hout, Luca Giacolini, Bart Hengeveld, Mathias Funk, and Joep Frens. 2014. Experio: a Design for Novel Audience Participation in Club Settings. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 46–49. http://doi.org/10.5281/zenodo.1178808
Abstract
Download PDF DOI
Looking at modern music club settings, especially in the area of electronic music, music is consumed in a unidirectional way, from DJ or producer to the audience, with little direct means to influence and participate. In this paper we challenge this phenomenon and aim for a new bond between the audience and the DJ through the creation of an interactive dance concept: Experio. Experio allows multiple audience participants to influence the musical performance through dance, facilitated by a musical moderator using a tailored interface. This co-creation of electronic music at both novice and expert levels is a new participatory live performance approach, which is evaluated on the basis of thousands of visitors who interacted with Experio during several international exhibitions.
@inproceedings{mfunk2014, author = {van Hout, Bastiaan and Giacolini, Luca and Hengeveld, Bart and Funk, Mathias and Frens, Joep}, title = {Experio: a Design for Novel Audience Participation in Club Settings}, pages = {46--49}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178808}, url = {http://www.nime.org/proceedings/2014/nime2014_481.pdf} }
Jules Françoise, Norbert Schnell, Riccardo Borghesi, and Frédéric Bevilacqua. 2014. Probabilistic Models for Designing Motion and Sound Relationships. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 287–292. http://doi.org/10.5281/zenodo.1178764
Abstract
Download PDF DOI
We present a set of probabilistic models that support the design of movement and sound relationships in interactive sonic systems. We focus on a mapping–by–demonstration approach in which the relationships between motion and sound are defined by a machine learning model that learns from a set of user examples. We describe four probabilistic models with complementary characteristics in terms of multimodality and temporality. We illustrate the practical use of each of the four models with a prototype application for sound control built using our Max implementation.
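A minimal way to realise mapping-by-demonstration in this spirit is Gaussian Mixture Regression: fit a joint density over motion and sound features from the demonstrations, then predict sound parameters from new motion frames as a mixture of conditional Gaussian means. The sketch below illustrates that general approach only; it is not the authors’ Max implementation, and the feature dimensions are assumptions.

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def train(motion, sound, n_components=4):
    """motion: (frames, dm) descriptors, sound: (frames, ds) synth parameters,
    recorded jointly during a demonstration."""
    return GaussianMixture(n_components, covariance_type="full").fit(
        np.hstack([motion, sound]))

def predict(gmm, x, dm):
    """Expected sound parameters for one motion frame x of dimension dm."""
    weights, means, covs = gmm.weights_, gmm.means_, gmm.covariances_
    resp = np.array([w * multivariate_normal.pdf(x, m[:dm], c[:dm, :dm])
                     for w, m, c in zip(weights, means, covs)])
    resp /= resp.sum()                                    # component responsibilities
    out = np.zeros(means.shape[1] - dm)
    for r, m, c in zip(resp, means, covs):
        cond = m[dm:] + c[dm:, :dm] @ np.linalg.solve(c[:dm, :dm], x - m[:dm])
        out += r * cond                                   # conditional mean per component
    return out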
@inproceedings{jfrancoise12014, author = {Fran\c{c}oise, Jules and Schnell, Norbert and Borghesi, Riccardo and Bevilacqua, Fr\'ed\'eric}, title = {Probabilistic Models for Designing Motion and Sound Relationships}, pages = {287--292}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178764}, url = {http://www.nime.org/proceedings/2014/nime2014_482.pdf} }
Augoustinos Tsiros. 2014. Evaluating the Perceived Similarity Between Audio-Visual Features Using Corpus-Based Concatenative Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 421–426. http://doi.org/10.5281/zenodo.1178965
Abstract
Download PDF DOI
This paper presents the findings of two exploratory studies. In these studies participants performed a series of image-sound association tasks. The aim of the studies was to investigate the perceived similarity and the efficacy of two multidimensional mappings, each consisting of three audio-visual associations. The purpose of the mappings is to enable visual control of corpus-based concatenative synthesis. More specifically, the stimuli in the first study were designed to test the perceived similarity of six audio-visual associations between the two mappings using three corpora, resulting in 18 audio-visual stimuli. The corpora differ in terms of two sound characteristics: harmonic content and continuity. Data analysis revealed no significant differences in the participants’ responses between the three corpora, or between the two mappings. However, highly significant differences were revealed between the individual audio-visual association pairs. The second study investigates the effects of the mapping and the corpus on the ability of the participants to detect which image out of three similar images was used to generate six audio stimuli. The data analysis revealed significant differences in the participants’ ability to detect the correct image depending on which corpus was used. Less significant was the effect of the mapping on the success rate of the participants’ responses.
@inproceedings{atsiros12014, author = {Tsiros, Augoustinos}, title = {Evaluating the Perceived Similarity Between Audio-Visual Features Using Corpus-Based Concatenative Synthesis}, pages = {421--426}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178965}, url = {http://www.nime.org/proceedings/2014/nime2014_484.pdf} }
Jihyun Han and Nicolas Gold. 2014. Lessons Learned in Exploring the Leap Motion(TM) Sensor for Gesture-based Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 371–374. http://doi.org/10.5281/zenodo.1178784
Abstract
Download PDF DOI
The Leap Motion(TM) sensor offers fine-grained gesture recognition and hand tracking. Since its release, there have been several uses of the device for instrument design, musical interaction and expression control, documented through online video. However, there has been little formal documented investigation of the potential and challenges of the platform in this context. This paper presents lessons learned from work-in-progress on the development of musical instruments and control applications using the Leap Motion(TM) sensor. Two instruments are presented, Air-Keys and Air-Pads, and the potential for augmenting a traditional keyboard is explored. The results show that the platform is promising in this context but requires various challenges, both physical and logical, to be overcome.
@inproceedings{ngold2014, author = {Han, Jihyun and Gold, Nicolas}, title = {Lessons Learned in Exploring the Leap Motion(TM) Sensor for Gesture-based Instrument Design}, pages = {371--374}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178784}, url = {http://www.nime.org/proceedings/2014/nime2014_485.pdf} }
Jeppe Larsen, Dan Overholt, and Thomas Moeslund. 2014. The Actuated guitar: Implementation and user test on children with Hemiplegia. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 60–65. http://doi.org/10.5281/zenodo.1178845
Abstract
Download PDF DOI
People with a physical handicap are often not able to engage with and embrace the world of music on the same terms as normally functioning people. Musical instruments have been refined over the last centuries, which has made them highly specialized instruments that nearly all require at least two functioning hands. In this study we try to enable people with hemiplegia to play a real electric guitar by modifying it in a way that makes it playable for them. We developed the guitar platform to use sensors that capture the rhythmic motion of alternative, fully functioning limbs, such as a foot, knee or the head, to activate a motorized fader moving a pick back and forth across the strings. The approach employs the flexibility of a programmable digital system, which allows us to scale and map different ranges of data from various sensors to the motion of the actuator, thereby making it easier to adapt to individual users. To validate and test the instrument platform we collaborated with the Helena Elsass Center during their 2013 Summer Camp to see if we had succeeded in creating an electrical guitar that children with hemiplegia could actually play. The initial user studies showed that children with hemiplegia were able to play the actuated guitar, producing rhythmical movement across the strings that enables them to enter a world of music they so often see as closed.
@inproceedings{jlarsen2014, author = {Larsen, Jeppe and Overholt, Dan and Moeslund, Thomas}, title = {The Actuated guitar: Implementation and user test on children with Hemiplegia}, pages = {60--65}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178845}, url = {http://www.nime.org/proceedings/2014/nime2014_486.pdf} }
Thomas Resch and Matthias Krebs. 2014. A Simple Architecture for Server-based (Indoor) Audio Walks. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 269–272. http://doi.org/10.5281/zenodo.1178917
Abstract
Download PDF DOI
This paper proposes a simple architecture for creating (indoor) audio walks using a server running Max/MSP together with the external object fhnw.audiowalk.state, and smartphone clients running under either Android or iOS using LibPd. Server and smartphone clients communicate over WLAN by exchanging OSC messages. Server and client have been designed in a way that allows artists with only basic programming skills to create position-based audio walks.
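The client/server exchange can be pictured as follows. The real system uses Max/MSP on the server and LibPd on the phones; python-osc here only stands in to show the shape of the OSC traffic, and all addresses, ports and thresholds are hypothetical.

from pythonosc import udp_client
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

SERVER_IP, SERVER_PORT = "192.168.0.10", 9000     # assumed addresses and ports
CLIENT_IP, CLIENT_PORT = "192.168.0.20", 9001

# Client side: the phone reports the walker's estimated indoor position.
client = udp_client.SimpleUDPClient(SERVER_IP, SERVER_PORT)
client.send_message("/walker/position", [1, 12.4, 3.7])   # id, x, y in metres

# Server side: map positions to playback states and answer with scene cues.
reply = udp_client.SimpleUDPClient(CLIENT_IP, CLIENT_PORT)

def on_position(address, walker_id, x, y):
    if x > 10.0:                                           # walker entered "room B"
        reply.send_message("/walker/scene", [walker_id, "roomB"])

dispatcher = Dispatcher()
dispatcher.map("/walker/position", on_position)
BlockingOSCUDPServer(("0.0.0.0", SERVER_PORT), dispatcher).serve_forever()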
@inproceedings{tresch2014, author = {Resch, Thomas and Krebs, Matthias}, title = {A Simple Architecture for Server-based (Indoor) Audio Walks}, pages = {269--272}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178917}, url = {http://www.nime.org/proceedings/2014/nime2014_491.pdf} }
Shawn Trail, Duncan MacConnell, Leo Jenkins, Jeff Snyder, George Tzanetakis, and Peter Driessen. 2014. El-Lamellophone A Low-cost, DIY, Open Framework for Acoustic Lemellophone Based Hyperinstruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 537–540. http://doi.org/10.5281/zenodo.1178959
Abstract
Download PDF DOI
The El-Lamellophone (El-La) is a Lamellophone hyperinstrument incorporating electronic sensors and integrated DSP. Initial investigations have been made into digitally controlled physical actuation of the acoustic tines. An embedded Linux micro-computer supplants the laptop. A piezoelectric pickup is mounted to the underside of the body of the instrument for direct audio acquisition, providing a robust signal with little interference. The signal is used for electric sound reinforcement, creative signal processing and audio analysis developed in Puredata (Pd). This signal enters and leaves the micro-computer via stereo 1/8-inch phono jacks. Sensors provide gesture recognition, affording the performer a broader, more dynamic range of musical human-computer interaction (MHCI) over specific DSP functions. Work has been done toward electromagnetic actuation of the tines, aiming to allow performer control and sensation via both traditional Lamellophone techniques and extended playing techniques that incorporate shared human/computer control of the resulting sound. The goal is to achieve this without compromising the traditional sound production methods of the acoustic instrument, while leveraging inherent performance gestures with embedded continuous controller values essential to MHCI. The result is an intuitive, performer-designed, hybrid electro-acoustic instrument, idiomatic computer interface, and robotic acoustic instrument in one framework.
@inproceedings{strail2014, author = {Trail, Shawn and MacConnell, Duncan and Jenkins, Leo and Snyder, Jeff and Tzanetakis, George and Driessen, Peter}, title = {El-Lamellophone A Low-cost, DIY, Open Framework for Acoustic Lemellophone Based Hyperinstruments}, pages = {537--540}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178959}, url = {http://www.nime.org/proceedings/2014/nime2014_492.pdf} }
Niklas Klügel, Gerhard Hagerer, and Georg Groh. 2014. TreeQuencer: Collaborative Rhythm Sequencing A Comparative Study. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 50–53. http://doi.org/10.5281/zenodo.1178835
Abstract
Download PDF DOI
In this contribution we show three prototypical applications that allow users to collaboratively create rhythmic structures, with successively more degrees of freedom to generate rhythmic complexity. By means of a user study we analyze the impact of this on the users’ satisfaction and further compare it to data logged during the experiments, which allows us to measure the rhythmic complexity created.
@inproceedings{nklugel2014, author = {Kl\''ugel, Niklas and Hagerer, Gerhard and Groh, Georg}, title = {TreeQuencer: Collaborative Rhythm Sequencing A Comparative Study}, pages = {50--53}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178835}, url = {http://www.nime.org/proceedings/2014/nime2014_498.pdf} }
Dominik Schlienger and Sakari Tervo. 2014. Acoustic Localisation as an Alternative to Positioning Principles in Applications presented at NIME 2001-2013. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 439–442. http://doi.org/10.5281/zenodo.1178933
Abstract
Download PDF DOI
This paper provides a rationale for choosing acoustic localisation techniques as an alternative to other positioning principles in interactive locative audio applications (ILAA). By comparing the positioning technology in existing ILAAs to the expected performance of acoustic positioning systems (APS), we can evaluate whether an APS would perform equivalently in a particular application. In this paper, the titles of the NIME conference proceedings from 2001 to 2013 were searched for presentations on ILAAs using positioning technology. Over 80 relevant articles were found. For each of the systems we evaluated whether and why an APS would be a contender. The results showed that for over 73 percent of the reviewed applications, APS could provide competitive alternatives, and at very low cost.
@inproceedings{dschlienger2014, author = {Schlienger, Dominik and Tervo, Sakari}, title = {Acoustic Localisation as an Alternative to Positioning Principles in Applications presented at NIME 2001-2013}, pages = {439--442}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178933}, url = {http://www.nime.org/proceedings/2014/nime2014_501.pdf} }
Christian Faubel. 2014. Rhythm Apparatus on Overhead. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 491–494. http://doi.org/10.5281/zenodo.1180950
Abstract
Download PDF DOI
In this paper I present a robotic device that offers new ways of interaction for producing rhythmic patterns. The apparatus is placed on an overhead projector and a visual presentation of these rhythmic patterns is delivered as a shadow play. The rhythmic patterns can be manipulated by modifying the environment of the robot, through direct physical interaction with the robot, by rewiring the internal connectivity, and by adjusting internal parameters. The theory of embodied cognition provides the theoretical basis of this device. The core postulate of embodied cognition is that biological behavior can only be understood through an understanding of the real-time interactions of an organism’s nervous system, the organism’s body and the environment. On the one hand, the device illustrates this theory, because the patterns that are created depend equally on the real-time interactions of the electronics, the physical structure of the device and the environment. On the other hand, the device presents a synthesis of these ideas, and it is effectively possible to play with it at all three levels: the electronics, the physical configuration of the robot and the environment.
@inproceedings{cfaubel12014, author = {Faubel, Christian}, title = {Rhythm Apparatus on Overhead}, pages = {491--494}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1180950}, url = {http://www.nime.org/proceedings/2014/nime2014_503.pdf} }
Adrian Hazzard, Steve Benford, and Gary Burnett. 2014. You’ll Never Walk Alone: Composing Location-Based Soundtracks. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 411–414. http://doi.org/10.5281/zenodo.1178794
Abstract
Download PDF DOI
Music plays a vital role in accompanying all manner of our experiences. Soundtracks within films, video games and ceremonies possess a unique ability to enhance a narrative, suggest emotional content and mark key transitions. Moreover, soundtracks often achieve all of this without being the primary focus; on the contrary, they typically assume a supporting role. The proliferation of mobile devices increasingly leads us to listen to music while on the move, and musicians are seizing on locative technologies as a tool for creating new kinds of music that directly respond to people’s movements through space. In light of these trends, we consider the interesting question of how composers might set about creating musical soundtracks to accompany mobile experiences. What we have in mind are experiences such as guided walks, tours and even pervasive games. The novelty of our research here is in the music serving as an accompaniment to enhance a location-specific activity, much as a soundtrack does for a film. This calls for composers to take into account the key features of the experience, and its setting, to gently complement them through the music. We examine this process from a composer’s perspective by presenting ‘from the field’ an account of how they address the multifaceted challenges of designing a soundtrack for a public sculpture park. We chart a composer’s rationale as they developed a soundtrack for this site over multiple iterations of design, testing and refinement. We expose key relationships between the raw materials of music (melody, harmony, timbre, rhythm and dynamics) and those of the physical setting that enable the composer to gracefully mesh the music into the fabric of the space. The result is to propose a set of recommendations to inform the composition of mobile soundtracks that we intend to guide future practice and research.
@inproceedings{ahazzard2014, author = {Hazzard, Adrian and Benford, Steve and Burnett, Gary}, title = {You'll Never Walk Alone: Composing Location-Based Soundtracks}, pages = {411--414}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178794}, url = {http://www.nime.org/proceedings/2014/nime2014_506.pdf} }
Karl Yerkes and Matthew Wright. 2014. Twkyr: a Multitouch Waveform Looper. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 375–378. http://doi.org/10.5281/zenodo.1178989
Abstract
Download PDF DOI
Twkyr is a new interface for musical expression that emphasizes realtime manipulation, audification, and visualization of waveforms with a multitouch surface, offering different interactivity at different time scales, within the same waveform. The interactive audiovisual design of Twkyr is motivated by the need for increased parsimony and transparency in electronic musical instruments and draws from the work of Curtis Roads on time scales as qualitative musical parameters, and Edward Tufte’s “data-ink” principles for the improvement of data graphics.
@inproceedings{kyerkes2014, author = {Yerkes, Karl and Wright, Matthew}, title = {Twkyr: a Multitouch Waveform Looper}, pages = {375--378}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178989}, url = {http://www.nime.org/proceedings/2014/nime2014_508.pdf} }
Tom Mays and Francis Faber. 2014. A Notation System for the Karlax Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 553–556. http://doi.org/10.5281/zenodo.1178869
Abstract
Download PDF DOI
In this paper we expose the need to go beyond the composer/performer model of electronic instrument design and programming to encourage the transmission of compositions and the creation of a repertory via notation of repeatable performance practice. Drawing on 4 years of practice using the Karlax controller (Da Fact) as a base for new digital musical instruments, we present our notation system in detail and cite some mapping strategies and examples from two pieces in a growing repertory of chamber music compositions for electronic and acoustic instruments.
@inproceedings{tmays2014, author = {Mays, Tom and Faber, Francis}, title = {A Notation System for the Karlax Controller}, pages = {553--556}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178869}, url = {http://www.nime.org/proceedings/2014/nime2014_509.pdf} }
Alejandro Van Zandt-Escobar, Baptiste Caramiaux, and Atau Tanaka. 2014. PiaF: A Tool for Augmented Piano Performance Using Gesture Variation Following. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 167–170. http://doi.org/10.5281/zenodo.1178991
Abstract
Download PDF DOI
When performing a piece, a pianist’s interpretation is communicated both through the sound produced and through body gestures. We present PiaF (Piano Follower), a prototype for augmenting piano performance by measuring gesture variations. We survey other augmented piano projects, several of which focus on gestural recognition, and present our prototype which uses machine learning techniques for gesture classification and estimation of gesture variations in real-time. Our implementation uses the Kinect depth sensor to track body motion in space, which is used as input data. During an initial learning phase, the system is taught a set of reference gestures, or templates. During performance, the live gesture is classified in real-time, and variations with respect to the recognized template are computed. These values can then be mapped to audio processing parameters, to control digital effects which are applied to the acoustic output of the piano in real-time. We discuss initial tests using PiaF with a pianist, as well as potential applications beyond live performance, including pedagogy and embodiment of recorded performance.
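As a rough, hypothetical illustration of the classify-then-measure-variation idea summarized above (not the authors' gesture variation following implementation; function names and the fixed-length resampling are assumptions), a live gesture can be matched to its nearest stored template and the remaining deviation reported as variation values that could drive effect parameters:

    import numpy as np

    def resample(gesture, n=32):
        """Resample a (frames, dims) gesture to a fixed number of frames."""
        gesture = np.asarray(gesture, dtype=float)
        idx = np.linspace(0, len(gesture) - 1, n)
        return np.array([np.interp(idx, np.arange(len(gesture)), gesture[:, d])
                         for d in range(gesture.shape[1])]).T

    def classify_and_vary(live, templates):
        """Return (index of the best-matching template, per-dimension mean deviation)."""
        live_r = resample(live)
        dists = [np.linalg.norm(live_r - resample(t)) for t in templates]
        best = int(np.argmin(dists))
        variation = np.mean(live_r - resample(templates[best]), axis=0)
        return best, variation   # variation could be mapped to audio processing parameters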
@inproceedings{avanzandt2014, author = {Zandt-Escobar, Alejandro Van and Caramiaux, Baptiste and Tanaka, Atau}, title = {PiaF: A Tool for Augmented Piano Performance Using Gesture Variation Following}, pages = {167--170}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178991}, url = {http://www.nime.org/proceedings/2014/nime2014_511.pdf} }
Palle Dahlstedt, Patrik Karlsson, Katarina Widell, and Tony Blomdahl. 2014. YouHero Making an Expressive Concert Instrument from the GuitarHero Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 403–406. http://doi.org/10.5281/zenodo.1178742
Abstract
Download PDF DOI
The idea behind the YouHero was two-fold. First, to make an expressive instrument out of the computer game toy guitar controller from the famous game GuitarHero. With its limited amount of control parameters, this was a challenge. Second, through this instrument we wanted to provide an alternative to the view that you become a hero by perfect imitation of your idols. Instead, play yourself. You are the hero. In this paper, we describe the design of the instrument, including its novel mapping approach based on switched timbre vectors scaled by accelerometer data, unconventional sound engines and the sound and mapping editing features, including manual editing of individual vectors. The instrument is evaluated through its practical applications during the whole project, with workshops with teenagers, a set of state-funded commissions from professional composers, and the development of considerable skill by the key performers. We have also submitted a performance proposal for this project.
@inproceedings{pdahlstedt12014, author = {Dahlstedt, Palle and Karlsson, Patrik and Widell, Katarina and Blomdahl, Tony}, title = {YouHero Making an Expressive Concert Instrument from the GuitarHero Controller}, pages = {403--406}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178742}, url = {http://www.nime.org/proceedings/2014/nime2014_513.pdf} }
Luke Dahl. 2014. Triggering Sounds from Discrete Air Gestures: What Movement Feature Has the Best Timing? Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 201–206. http://doi.org/10.5281/zenodo.1178738
Abstract
Download PDF DOI
Motion sensing technologies enable musical interfaces where a performer moves their body "in the air" without manipulating or contacting a physical object. These interfaces work well when the movement and sound are smooth and continuous, but it has proven difficult to design a system which triggers discrete sounds with precision that allows for complex rhythmic performance. We conducted a study where participants perform “air-drumming” gestures in time to rhythmic sounds. These movements are recorded, and the timing of various movement features with respect to the onset of audio events is analyzed. A novel algorithm for detecting sudden changes in direction is used to find the end of the strike gesture. We find that these occur on average after the audio onset and that this timing varies with the tempo of the movement. Sharp peaks in acceleration magnitude occur before the audio onset and do not vary with tempo. These results suggest that detecting peaks in acceleration will lead to more naturally responsive air gesture instruments.
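A minimal sketch of the triggering feature the study favours, detecting local maxima in acceleration magnitude (the threshold and the assumption of a sampled 3-axis accelerometer signal are illustrative, not values from the paper):

    import numpy as np

    def detect_strikes(accel, threshold=15.0):
        """accel: (frames, 3) accelerometer samples; returns indices of magnitude peaks."""
        mag = np.linalg.norm(np.asarray(accel, dtype=float), axis=1)
        peaks = []
        for i in range(1, len(mag) - 1):
            # a local maximum above the threshold is treated as a strike trigger
            if mag[i] > threshold and mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]:
                peaks.append(i)
        return peaks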
@inproceedings{ldahl2014, author = {Dahl, Luke}, title = {Triggering Sounds from Discrete Air Gestures: What Movement Feature Has the Best Timing?}, pages = {201--206}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178738}, url = {http://www.nime.org/proceedings/2014/nime2014_514.pdf} }
Colin Honigman, Jordan Hochenbaum, and Ajay Kapur. 2014. Techniques in Swept Frequency Capacitive Sensing: An Open Source Approach. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 74–77. http://doi.org/10.5281/zenodo.1178802
Abstract
Download PDF DOI
This paper introduces a new technique for creating Swept Frequency Capacitive Sensing with open source technology for use in creating richer and more complex musical gestures. This new style of capacitive touch sensing is extremely robust compared to older versions and will allow greater implementation of gesture recognition and touch control in the development of NIMEs. Inspired by the Touché project, this paper discusses how to implement this technique using the community standard hardware Arduino instead of custom designed electronics. The technique requires only passive components and can be used to enhance the touch sensitivity of many everyday objects and even biological materials and substances such as plants, which this paper will focus on as a case study through the project known as Cultivating Frequencies. This paper will discuss different techniques of filtering data captured by this system, different methods for creating gesture recognition unique to the object being used, and the implications of this technology as it pertains to the goal of ubiquitous sensing. Furthermore, this paper will introduce a new Arduino Library, SweepingCapSense, which simplifies the coding required to implement this technique.
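On the analysis side, swept-frequency sensing yields one response curve per touch (a reading for each swept frequency), and different ways of touching the object produce differently shaped curves. As a hypothetical sketch of the gesture-recognition step only (the Arduino sweep itself is omitted, and the function names are not from the SweepingCapSense library), a stored reference curve per gesture and a nearest-neighbour comparison already give a basic classifier:

    import numpy as np

    def classify_touch(response, references):
        """response: 1-D array of readings, one per swept frequency.
        references: dict mapping gesture label -> previously recorded response curve."""
        response = np.asarray(response, dtype=float)
        best_label, best_dist = None, np.inf
        for label, ref in references.items():
            d = np.linalg.norm(response - np.asarray(ref, dtype=float))
            if d < best_dist:
                best_label, best_dist = label, d
        return best_label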
@inproceedings{chonigman2014, author = {Honigman, Colin and Hochenbaum, Jordan and Kapur, Ajay}, title = {Techniques in Swept Frequency Capacitive Sensing: An Open Source Approach}, pages = {74--77}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178802}, url = {http://www.nime.org/proceedings/2014/nime2014_515.pdf} }
Haojing Diao, Yanchao Zhou, Christopher Andrew Harte, and Nick Bryan-Kinns. 2014. Sketch-Based Musical Composition and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 569–572. http://doi.org/10.5281/zenodo.1178748
Abstract
Download PDF DOI
Sketching is a natural way for one person to convey their thoughts and intentions to another. With the recent rise of tablet-based computing, the use of sketching as a control and interaction paradigm is one that deserves exploration. In this paper we present an interactive sketch-based music composition and performance system called Drawchestra. The aim of the system is to give users an intuitive way to convey their musical ideas to a computer system with the minimum of technical training, thus enabling them to focus on the creative tasks of composition and performance. The system provides the user with a canvas upon which they may create their own instruments by sketching shapes on the tablet screen. The system recognises a certain set of shapes which it treats as virtual instruments or effects. Once recognised, these virtual instruments can then be played by the user in real time. The size of a sketched instrument shape is used to control certain parameters of the sound so the user can build complex orchestras containing many different shapes of different sizes. The sketched shapes may also be moved and resized as desired, making it possible to customise and edit the virtual orchestra as the user goes along. The system has been implemented in Python and user tests conducted using an iPad as the control surface. We report the results of the user study at the end of the paper before briefly discussing the outcome and outlining the next steps for the system design.
@inproceedings{hdiao2014, author = {Diao, Haojing and Zhou, Yanchao and Harte, Christopher Andrew and Bryan-Kinns, Nick}, title = {Sketch-Based Musical Composition and Performance}, pages = {569--572}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178748}, url = {http://www.nime.org/proceedings/2014/nime2014_517.pdf} }
Jarrod Ratcliffe. 2014. Hand and Finger Motion-Controlled Audio Mixing Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 136–139. http://doi.org/10.5281/zenodo.1178911
Abstract
Download PDF DOI
This paper presents a control surface interface for music mixing using real time computer vision. Two input sensors are considered: the Leap Motion and the Microsoft Kinect. The author presents significant design considerations, including improving of the user’s sense of depth and panorama, maintaining broad accessibility by integrating the system with Digital Audio Workstation (DAW) software, and implementing a system that is portable and affordable. To provide the user with a heightened sense of sound spatialization over the traditional channel strip, the concept of depth is addressed directly using the stage metaphor. Sound sources are represented as colored spheres in a graphical user interface to provide the user with visual feedback. Moving sources back and forward controls volume, while left to right controls panning. To provide broader accessibility, the interface is configured to control mixing within the Ableton Live DAW. The author also discusses future plans to expand functionality and evaluate the system.
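The stage metaphor described above reduces to a simple mapping from a source's position on a virtual stage to its mix parameters. A minimal sketch (coordinate ranges and the linear curves are assumptions, not taken from the paper):

    def stage_to_mix(x, depth, width=1.0, max_depth=1.0):
        """x in [-width/2, width/2] -> pan in [-1, 1]; depth in [0, max_depth] -> gain."""
        pan = max(-1.0, min(1.0, 2.0 * x / width))
        gain = max(0.0, 1.0 - depth / max_depth)  # sources moved further "upstage" get quieter
        return pan, gain

    print(stage_to_mix(0.25, 0.5))  # a source right of centre, halfway back: (0.5, 0.5)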
@inproceedings{jratcliffe2014, author = {Ratcliffe, Jarrod}, title = {Hand and Finger Motion-Controlled Audio Mixing Interface}, pages = {136--139}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178911}, url = {http://www.nime.org/proceedings/2014/nime2014_518.pdf} }
Chad McKinney. 2014. Quick Live Coding Collaboration In The Web Browser. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 379–382. http://doi.org/10.5281/zenodo.1178873
Abstract
Download PDF DOI
Despite the growing adoption of internet connectivity across the world, online collaboration remains a difficult and slow endeavor. Many languages and tools such as SuperCollider, ChucK, and Max/MSP facilitate networking and collaboration; however, these languages and tools were not created explicitly to make group performances simple and intuitive. New web standards such as Web Audio and WebGL introduce the capability for web browsers to duplicate many of the features in computer music tools. This paper introduces Lich.js, an effort to bring musicians together over the internet with minimal effort by leveraging web technologies.
@inproceedings{cmckinney2014, author = {McKinney, Chad}, title = {Quick Live Coding Collaboration In The Web Browser}, pages = {379--382}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178873}, url = {http://www.nime.org/proceedings/2014/nime2014_519.pdf} }
Shelly Knotts and Nick Collins. 2014. The Politics of Laptop Ensembles: A Survey of 160 Laptop Ensembles and their Organisational Structures. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 191–194. http://doi.org/10.5281/zenodo.1178839
Abstract
Download PDF DOI
This paper reports the results of an online survey of 160 laptop ensembles and the relative democracy of their organisational and social structures. For the purposes of this research a laptop ensemble is defined as a performing group of three or more musicians for whom the laptop is the main sound generating source and who typically perform together in the same room. The concept of democracy (i.e. governance by members of the group) has been used as a starting point to assess firstly what types of organisational structures are currently used in laptop ensembles and secondly to what extent laptop ensembles consider the implications of organisational and social structure on their musical output. To assess this I recorded a number of data points including ensemble size, whether the group has a director or conductor, use of homogeneous vs. heterogeneous hardware and software, whether they perform composed pieces or mainly improvise, the level of network interaction and whether or not the ensemble has an academic affiliation. The survey allowed me to define a scale of democracy in laptop ensembles and typical features of the most and least democratic groups. Some examples are given of democratic and autocratic activity in existing laptop ensembles. This work is part of a larger scale project investigating the effect of social structures on the musical output of laptop ensembles.
@inproceedings{sknotts2014, author = {Knotts, Shelly and Collins, Nick}, title = {The Politics of Laptop Ensembles: A Survey of 160 Laptop Ensembles and their Organisational Structures}, pages = {191--194}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178839}, url = {http://www.nime.org/proceedings/2014/nime2014_521.pdf} }
Lionel Feugère and Christophe d’Alessandro. 2014. Rule-Based Performative Synthesis of Sung Syllables. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 86–87. http://doi.org/10.5281/zenodo.1178762
Abstract
Download PDF DOI
In this demonstration, the mapping and the gestural control strategy developed in the Digitartic are presented. Digitartic is a musical instrument able to control sung syllables. Performative rule-based synthesis allows for controlling semi-consonants, plosive, fricative and nasal consonants with the same gesture, despite the structural differences in natural production of such vocal segments. A graphic pen tablet is used for capturing the gesture with a high sampling rate and resolution. This system allows for both performing various manners of articulation and having continuous control over the articulation.
@inproceedings{lfeugere2014, author = {Feug\`ere, Lionel and d'Alessandro, Christophe}, title = {Rule-Based Performative Synthesis of Sung Syllables}, pages = {86--87}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178762}, url = {http://www.nime.org/proceedings/2014/nime2014_522.pdf} }
Jiffer Harriman, Michael Theodore, Nikolaus Correll, and Hunter Ewen. 2014. endo/exo Making Art and Music with Distributed Computing. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 383–386. http://doi.org/10.5281/zenodo.1178786
Abstract
Download PDF DOI
What do new possibilities for music and art making look like in a world in which the biological and mechanical are increasingly entangled? Can a contrived environment envelop the senses to the point that one feels fully immersed in it? It was with these questions in mind that the interactive mechanical sound art installation endo/exo came into being. Through the use of networked technology the system becomes more like a self-aware organism, passing messages from node to node as cells communicate through chemical signals with their neighbors. In an artistic context, the communication network resembles, but differs from, other mechanical systems. Issues such as latency are often considered negative factors, yet they can contribute a touch of personality in this context. This paper is a reflection on these and other considerations gained from the experience of designing and constructing endo/exo as well as future implications for the Honeycomb platform as a tool for creating musical interactions within a new paradigm which allows for emergent behavior across vast physical spaces. The use of swarming and self-organization, as well as playful interaction, creates an “aliveness” in the mechanism, and renders its exploration pleasurable, intriguing and uncanny.
@inproceedings{jharriman2014, author = {Harriman, Jiffer and Theodore, Michael and Correll, Nikolaus and Ewen, Hunter}, title = {endo/exo Making Art and Music with Distributed Computing}, pages = {383--386}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178786}, url = {http://www.nime.org/proceedings/2014/nime2014_523.pdf} }
Ricky Graham and Brian Bridges. 2014. Gesture and Embodied Metaphor in Spatial Music Performance Systems Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 581–584. http://doi.org/10.5281/zenodo.1178774
Abstract
Download PDF DOI
This paper describes the theoretical underpinnings, design, and development of a hyper–instrumental performance system driven by gestural data obtained from an electric guitar. The system combines a multichannel audio feed from the guitar (which is parsed for its pitch, spectral content and note inter–onset time data to provide abstractions of sounded performance gestures) with motion tracking of the performer’s larger–scale bodily movements using a Microsoft Xbox Kinect sensor. These gestural materials are used to provide the basis for the structures of relational mappings, informed by the embodied image schema structures of Lakoff and Johnson. These theoretical perspectives are refined via larger-scale ecological-embodied structural relationships in electroacoustic music outlined in Smalley’s theory of spectromorphology, alongside the incorporation of an additional active-agential response structure through the use of the boids flocking algorithm by Reynolds to control the spatialization of outputs and other textural processes. The paper aims to advance a broadly-applicable ’performance gesture ecology’, providing a shared spatial-relational mapping (a ’basic gestural space’) which allows for creative (but still coherent) mappings from the performance gestures to the control of textural and spatial structures.
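Reynolds' boids rules mentioned above can be sketched compactly; this 2-D toy version (the weights, time step and all-neighbour averaging are illustrative assumptions, not the authors' implementation) computes one flocking update whose agent positions could then be mapped to the azimuths of spatialized outputs:

    import numpy as np

    def boids_step(pos, vel, dt=0.05, w_coh=0.5, w_ali=0.3, w_sep=1.0, sep_dist=0.2):
        """pos, vel: (n, 2) float arrays, n >= 2. Returns updated (pos, vel) after one step."""
        n = len(pos)
        new_vel = vel.copy()
        for i in range(n):
            others = np.arange(n) != i
            cohesion = pos[others].mean(axis=0) - pos[i]      # steer toward the flock centroid
            alignment = vel[others].mean(axis=0) - vel[i]     # match the average heading
            diff = pos[i] - pos[others]
            dist = np.linalg.norm(diff, axis=1, keepdims=True)
            separation = (diff / np.maximum(dist, 1e-6))[dist[:, 0] < sep_dist].sum(axis=0)
            new_vel[i] += dt * (w_coh * cohesion + w_ali * alignment + w_sep * separation)
        return pos + dt * new_vel, new_vel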
@inproceedings{rgraham2014, author = {Graham, Ricky and Bridges, Brian}, title = {Gesture and Embodied Metaphor in Spatial Music Performance Systems Design.}, pages = {581--584}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178774}, url = {http://www.nime.org/proceedings/2014/nime2014_526.pdf} }
Chris Kiefer. 2014. Musical Instrument Mapping Design with Echo State Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 293–298. http://doi.org/10.5281/zenodo.1178829
Abstract
Download PDF DOI
Echo State Networks (ESNs), a form of recurrent neural network developed in the field of Reservoir Computing, show significant potential for use as a tool in the design of mappings for digital musical instruments. They have, however, seldom been used in this area, so this paper explores their possible uses. This project contributes a new open source library, which was developed to allow ESNs to run in the Pure Data dataflow environment. Several use cases were explored, focusing on addressing current issues in mapping research. ESNs were found to work successfully in scenarios of pattern classification, multiparametric control, explorative mapping and the design of nonlinearities and uncontrol. Un-trained behaviours are proposed, as augmentations to the conventional reservoir system that allow the player to introduce potentially interesting non-linearities and uncontrol into the reservoir. Interactive evolution style controls are proposed as strategies to help design these behaviours, which are otherwise dependent on arbitrary parameters. A study on sound classification shows that ESNs can reliably differentiate between two drum sounds, and also generalise to other similar input. Following evaluation of the use cases, heuristics are proposed to aid the use of ESNs in computer music scenarios.
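The core of an echo state network is small enough to sketch directly; the following generic minimal reservoir with a ridge-regression readout is only an illustration of the technique (hyperparameters are arbitrary and it is unrelated to the Pure Data library the paper contributes):

    import numpy as np

    class TinyESN:
        def __init__(self, n_in, n_res=100, n_out=2, spectral_radius=0.9, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
            W = rng.uniform(-0.5, 0.5, (n_res, n_res))
            # rescale the random recurrent weights to the desired spectral radius
            self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
            self.W_out = np.zeros((n_out, n_res))
            self.x = np.zeros(n_res)

        def step(self, u):
            """Feed one input vector u (length n_in) and return the current output."""
            self.x = np.tanh(self.W_in @ u + self.W @ self.x)
            return self.W_out @ self.x

        def train(self, inputs, targets, ridge=1e-6):
            """Fit the linear readout by ridge regression on collected reservoir states."""
            states = []
            for u in inputs:
                self.step(u)
                states.append(self.x.copy())
            X, Y = np.array(states), np.asarray(targets, dtype=float)
            self.W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y).T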
@inproceedings{ckiefer2014, author = {Kiefer, Chris}, title = {Musical Instrument Mapping Design with Echo State Networks}, pages = {293--298}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178829}, url = {http://www.nime.org/proceedings/2014/nime2014_530.pdf} }
Bradley Strylowski, Jesse Allison, and Jesse Guessford. 2014. Pitch Canvas: Touchscreen Based Mobile Music Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 171–174. http://doi.org/10.5281/zenodo.1178947
Abstract
Download PDF DOI
Mobile music applications are typically quite limiting to musicians, as they either attempt to mimic non-touch screen interfaces or do not offer enough control. Pitch Canvas is a musical interface that was built specifically for the touchscreen. Pitches are laid out in a hexagonal pattern that allow for easy scale, chord, and arpeggiation patterns. Notes are played by touch, but are sustained through continuous movement. Pitch bends can be achieved by passing through the space between the notes. Its current implementation runs only on Apple iPad tablet computers using libPd to convert user interaction into audio. An iPad overlay offers physical feedback for the circles as well as the pitch bend area between the circles. A performable version of the application has been built, though several active developments allow alternative sonic interpretation of the gestures, enhanced visual response to user interaction, and the ability to control the instrument with multiple devices.
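A hexagonal pitch layout of the kind described can be expressed as a mapping from axial grid coordinates to note numbers; this sketch is only an illustration of the idea, and the interval steps chosen here are assumptions rather than the app's actual layout:

    def hex_to_midi(q, r, base=60, step_q=2, step_r=5):
        """Map axial hex-grid coordinates (q, r) to a MIDI note number:
        one axis moves by a whole tone, the other by a fourth."""
        return base + step_q * q + step_r * r

    print(hex_to_midi(0, 0), hex_to_midi(1, 0), hex_to_midi(0, 1))  # 60 62 65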
@inproceedings{jallison2014, author = {Strylowski, Bradley and Allison, Jesse and Guessford, Jesse}, title = {Pitch Canvas: Touchscreen Based Mobile Music Instrument}, pages = {171--174}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178947}, url = {http://www.nime.org/proceedings/2014/nime2014_533.pdf} }
Palle Dahlstedt. 2014. Circle Squared and Circle Keys Performing on and with an Unstable Live Algorithm for the Disklavier. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 114–117. http://doi.org/10.5281/zenodo.1178740
Abstract
Download PDF DOI
Two related versions of an unstable live algorithm for the Disklavier player piano are presented. The underlying generative feedback system consists of four virtual musicians, listening to each other in a circular configuration. There is no temporal form, and all parameters of the system are controlled by the performer through an intricate but direct mapping, in an attempt to combine the experienced musician’s physical control of gesture and phrasing, with the structural complexities and richness of generative music. In the first version, Circle Squared, the interface is an array of pressure sensors, and the performer performs on the system without participating directly, like a puppet master. In the second version, control parameters are derived directly from playing on the same piano that performs the output of the system. Here, the performer both plays with and on the system in an intricate dance with the unpredictable output of the unstable virtual ensemble. The underlying mapping strategies are presented, together with the structure of the generative system. Experiences from a series of performances are discussed, primarily from the perspective of the improvising musician.
@inproceedings{pdahlstedt2014, author = {Dahlstedt, Palle}, title = {Circle Squared and Circle Keys Performing on and with an Unstable Live Algorithm for the Disklavier}, pages = {114--117}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178740}, url = {http://www.nime.org/proceedings/2014/nime2014_534.pdf} }
Daniel Tormoen, Florian Thalmann, and Guerino Mazzola. 2014. The Composing Hand: Musical Creation with Leap Motion and the BigBang Rubette. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 207–212. http://doi.org/10.5281/zenodo.1178955
Abstract
Download PDF DOI
This paper introduces an extension of the Rubato Composer software’s BigBang rubette module for gestural composition. The extension enables composers and improvisers to operate BigBang using the Leap Motion controller, which uses two cameras to detect hand motions in three-dimensional space. The low latency and high precision of the device make it a good fit for BigBang’s functionality, which is based on immediate visual and auditive feedback. With the new extensions, users can define an infinite variety of musical objects, such as oscillators, pitches, chord progressions, or frequency modulators, in real-time and transform them in order to generate more complex musical structures on any level of abstraction.
@inproceedings{fthalmann2014, author = {Tormoen, Daniel and Thalmann, Florian and Mazzola, Guerino}, title = {The Composing Hand: Musical Creation with Leap Motion and the BigBang Rubette}, pages = {207--212}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178955}, url = {http://www.nime.org/proceedings/2014/nime2014_536.pdf} }
Oliver Bown, Renick Bell, and Adam Parkinson. 2014. Examining the Perception of Liveness and Activity in Laptop Music: Listeners’ Inference about what the Performer is Doing from the Audio Alone. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 13–18. http://doi.org/10.5281/zenodo.1178722
Abstract
Download PDF DOI
Audiences of live laptop music frequently express dismay at the opacity of performer activity and question how “live” performances actually are. Yet motionless laptop performers endure as musical spectacles from clubs to concert halls, suggesting that for many this is a non-issue. Understanding these perceptions might help performers better achieve their intentions, inform interface design within the NIME field and help develop theories of liveness and performance. To this end, a study of listeners’ perception of liveness and performer control in laptop performance was carried out, in which listeners examined several short audio-only excerpts of laptop performances and answered questions about their perception of the performance: what they thought was happening and its sense of liveness. Our results suggest that audiences are likely to associate liveness with perceived performer activity such as improvisation and the audibility of gestures, whereas perceptions of generative material, backing tracks, or other preconceived material do not appear to inhibit perceptions of liveness.
@inproceedings{obown2014, author = {Bown, Oliver and Bell, Renick and Parkinson, Adam}, title = {Examining the Perception of Liveness and Activity in Laptop Music: Listeners' Inference about what the Performer is Doing from the Audio Alone}, pages = {13--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178722}, url = {http://www.nime.org/proceedings/2014/nime2014_538.pdf} }
Jeff Snyder and Danny Ryan. 2014. The Birl: An Electronic Wind Instrument Based on an Artificial Neural Network Parameter Mapping Structure. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 585–588. http://doi.org/10.5281/zenodo.1178939
Abstract
Download PDF DOI
This paper discusses the Birl, an electronic wind instrument developed by the authors. It uses artificial neural nets to apply machine learning to the mapping of fingering systems and embouchure position. The design features of the instrument are described, and the machine learning mapping strategy is discussed.
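As a toy illustration of the neural-network mapping idea only (using scikit-learn rather than the authors' implementation, with invented fingering data): binary key states are regressed to a continuous pitch value, so unseen "in-between" fingerings interpolate smoothly.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # invented training data: 4 key states -> pitch in semitones above a reference
    fingerings = np.array([[0, 0, 0, 0], [1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [1, 1, 1, 1]])
    pitches = np.array([12.0, 10.0, 9.0, 7.0, 5.0])

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    net.fit(fingerings, pitches)
    print(net.predict([[1, 1, 0.5, 0]]))  # a partially covered key interpolates between fingerings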
@inproceedings{jsnyder12014, author = {Snyder, Jeff and Ryan, Danny}, title = {The Birl: An Electronic Wind Instrument Based on an Artificial Neural Network Parameter Mapping Structure}, pages = {585--588}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178939}, url = {http://www.nime.org/proceedings/2014/nime2014_540.pdf} }
Abram Hindle. 2014. CloudOrch: A Portable SoundCard in the Cloud. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 277–280. http://doi.org/10.5281/zenodo.1178798
Abstract
Download PDF DOI
One problem with live computer music performance is the transport of computers to a venue and the subsequent setup of the computers used in playing and rendering music. The more computers involved, the longer the setup and tear-down of a performance. Each computer adds power and cabling requirements that the venue must accommodate. Cloud computing can change all of this by simplifying the setup of many (10s, 100s) of machines at the click of a button. But there is a catch: the cloud is not physically near you, and you cannot run an audio cable to the cloud. The audio from a computer music instrument in the cloud needs to be streamed back to the performer and listeners. There are many solutions for streaming audio over networks and the internet, but most of them suffer from high latency, heavy buffering, or proprietary/non-portable clients. In this paper we propose a portable cloud-friendly method of streaming, almost a cloud soundcard, whereby performers can use mobile devices (Android, iOS, laptops) to stream audio from the cloud with far lower latency than technologies like Icecast. This technology enables near-realtime control over powerful computer music networks, enabling performers to travel light and perform live with more computers than ever before.
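As a highly simplified, hypothetical sketch of pushing rendered audio from a cloud instance toward a client (plain UDP from Python's standard library, not the streaming stack the paper proposes; the address and block size are placeholders): the sender transmits fixed-size float32 blocks, which a receiving device would buffer and play as they arrive.

    import socket
    import numpy as np

    BLOCK = 256                      # samples per packet (illustrative)
    ADDR = ("127.0.0.1", 9000)       # placeholder; replace with the listening device

    def send_block(sock, samples):
        """Send one block of float32 audio samples as a single UDP datagram."""
        sock.sendto(np.asarray(samples, dtype=np.float32).tobytes(), ADDR)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    t = np.arange(BLOCK) / 44100.0
    send_block(sock, 0.2 * np.sin(2 * np.pi * 440.0 * t))   # one test block of a 440 Hz tone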
@inproceedings{ahindle12014, author = {Hindle, Abram}, title = {CloudOrch: A Portable SoundCard in the Cloud}, pages = {277--280}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178798}, url = {http://www.nime.org/proceedings/2014/nime2014_541.pdf} }
Jeff Snyder and Avneesh Sarwate. 2014. Mobile Device Percussion Parade. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 147–150. http://doi.org/10.5281/zenodo.1178941
Abstract
Download PDF DOI
In this paper, we present the “Mobile Marching Band” (MMB) as a new mode of musical performance with mobile computing devices. We define an MMB to be, at its most general, any ensemble utilizing mobile computation that can travel as it performs, with the performance being independent of its location. We will discuss the affordances and limitations of mobile-based instrument design and performance, specifically within the context of a “moving” ensemble. We will also discuss the use of a Mobile Marching Band as an educational tool. Finally, we will explore our implementation of a Mobile Parade, a digital Brazilian samba ensemble.
@inproceedings{jsnyder2014, author = {Snyder, Jeff and Sarwate, Avneesh}, title = {Mobile Device Percussion Parade}, pages = {147--150}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178941}, url = {http://www.nime.org/proceedings/2014/nime2014_542.pdf} }
Navid Navab, Doug Van Nort, and Sha Xin Wei. 2014. A Material Computation Perspective on Audio Mosaicing and Gestural Conditioning. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 387–390. http://doi.org/10.5281/zenodo.1178893
Abstract
Download PDF DOI
This paper discusses an approach to instrument conception that is based on a careful consideration of the coupling of tactile and sonic gestural action both into and out of the performance system. To this end we propose a design approach that not only considers the materiality of the instrument, but that leverages it as a central part of the conception of the sonic quality, the control structuring and what generally falls under the umbrella of "mapping" design. As we will discuss, this extended computational matter-centric view is of benefit towards holistically understanding an “instrument’s” gestural engagement, as it is realized through physical material, sonic gestural matter and felt human engagement. We present instrumental systems that have arisen as a result of this approach to instrument design.
@inproceedings{dvannort2014, author = {Navab, Navid and Nort, Doug Van and Wei, Sha Xin}, title = {A Material Computation Perspective on Audio Mosaicing and Gestural Conditioning}, pages = {387--390}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178893}, url = {http://www.nime.org/proceedings/2014/nime2014_544.pdf} }
Sang Won Lee, Georg Essl, and Z. Morley Mao. 2014. Distributing Mobile Music Applications for Audience Participation Using Mobile Ad-hoc Network (MANET). Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 533–536. http://doi.org/10.5281/zenodo.1178849
Abstract
Download PDF DOI
This work introduces a way to distribute mobile applications using mobile ad-hoc network in the context of audience participation. The goal is to minimize user configuration so that the process is highly accessible for casual smartphone users. The prototype mobile applications utilize WiFiDirect and Service Discovery Protocol to distribute code. With the aid of these two technologies, the prototype system requires no infrastructure and minimum user configuration.
@inproceedings{slee12014, author = {Lee, Sang Won and Essl, Georg and Mao, Z. Morley}, title = {Distributing Mobile Music Applications for Audience Participation Using Mobile Ad-hoc Network ({MANET})}, pages = {533--536}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178849}, url = {http://www.nime.org/proceedings/2014/nime2014_546.pdf} }
Hyung Suk Kim, Jorge Herrera, and Ge Wang. 2014. Ping-Pong: Musically Discovering Locations. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 273–276. http://doi.org/10.5281/zenodo.1178831
Abstract
Download PDF DOI
A recently developed system that uses pitched sounds to discover relative 3D positions of a group of devices located in the same physical space is described. The measurements are coordinated over an IP network in a decentralized manner, while the actual measurements are carried out by measuring the time-of-flight of the notes played by different devices. Approaches to sonify the discovery process are discussed. A specific instantiation of the system is described in detail. The melody is specified in the form of a score, available to every device in the network. The system performs the melody by playing different notes consecutively on different devices, keeping a consistent timing, while carrying out the inter-device measurements necessary to discover the geometrical configuration of the devices in the physical space.
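The geometry-recovery step can be sketched independently of the acoustic measurement: each time-of-flight converts to a distance (roughly 343 m/s times the flight time), and a matrix of pairwise distances can be turned into relative coordinates with classical multidimensional scaling. This is a generic illustration of that step, not the authors' algorithm:

    import numpy as np

    def relative_positions(distances, dims=3):
        """distances: symmetric (n, n) matrix of pairwise distances in metres.
        Returns (n, dims) coordinates, defined up to rotation/reflection (classical MDS)."""
        D2 = np.asarray(distances, dtype=float) ** 2
        n = D2.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n          # double-centering matrix
        B = -0.5 * J @ D2 @ J
        vals, vecs = np.linalg.eigh(B)
        order = np.argsort(vals)[::-1][:dims]        # keep the largest eigenvalues
        return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))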
@inproceedings{jherrera2014, author = {Kim, Hyung Suk and Herrera, Jorge and Wang, Ge}, title = {Ping-Pong: Musically Discovering Locations}, pages = {273--276}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178831}, url = {http://www.nime.org/proceedings/2014/nime2014_550.pdf} }
Edgar Berdahl. 2014. How to Make Embedded Acoustic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 140–143. http://doi.org/10.5281/zenodo.1178710
Abstract
Download PDF DOI
An embedded acoustic instrument is an embedded musical instrument that provides a direct acoustic output. This paper describes how to make embedded acoustic instruments using laser cutting for digital fabrication. Several tips are given for improving the acoustic quality including: employing maximally stiff material, placing loudspeaker drivers in the corners of enclosure faces, increasing the stiffness of “loudspeaker” faces by doubling their thickness, choosing side-lengths with non-integer ratios, and incorporating bracing. Various versions of an open design of the “LapBox” are provided to help community members replicate and extend the work. A procedure is suggested for testing and optimizing the acoustic quality.
@inproceedings{eberdahl2014, author = {Berdahl, Edgar}, title = {How to Make Embedded Acoustic Instruments}, pages = {140--143}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178710}, url = {http://www.nime.org/proceedings/2014/nime2014_551.pdf} }
Carlos Dominguez. 2014. 16-CdS: A Surface Controller for the Simultaneous Manipulation of Multiple Analog Components. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 78–79. http://doi.org/10.5281/zenodo.1178750
Abstract
Download PDF DOI
This paper presents a project that discusses a brief history of artistic systems that use photoresistors (light-dependent resistors) and results in the construction of an interface and performance controller. The controller combines an Arduino microcontroller with a grid of photoresistors set into a slab of wood covered with a thin acrylic sheet. A brief background on past uses of these components for music and film composition and instrument-building introduces a few different implementations and performance contexts for the controller. Topics such as implementation, construction, and performance possibilities (including electroacoustic and audio-visual performance) of the controller are also discussed.
@inproceedings{cdominguez2014, author = {Dominguez, Carlos}, title = {16-{CdS}: A Surface Controller for the Simultaneous Manipulation of Multiple Analog Components}, pages = {78--79}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178750}, url = {http://www.nime.org/proceedings/2014/nime2014_552.pdf} }
Sang Won Lee and Georg Essl. 2014. Communication, Control, and State Sharing in Collaborative Live Coding. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 263–268. http://doi.org/10.5281/zenodo.1178847
Abstract
Download PDF DOI
In the setting of collaborative live coding, a number of issues emerge: (1) need for communication, (2) issues of conflicts in sharing program state space, and (3) remote control of code execution. In this paper, we propose solutions to these problems. In the recent extension of UrMus, a programming environment for mobile music application development, we introduce a paradigm of shared and individual namespaces to safeguard against conflicts in parallel coding activities. We also develop a live variable view that communicates live changes in state among live coders, networked performers, and the audience. Lastly, we integrate collaborative aspects of program execution into a built-in live chat, which enables not only communication with others, but also distributed execution of code.
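The shared-versus-individual namespace idea can be illustrated conceptually (this is not UrMus code; the class and method names are invented): each coder resolves names against a private dictionary first and falls back to the shared one, so parallel edits only collide when a name is explicitly published.

    class CoderNamespace:
        """Per-coder view onto a shared state space."""
        def __init__(self, shared):
            self.shared = shared     # dict visible to every live coder
            self.private = {}        # dict visible only to this coder

        def get(self, name):
            return self.private.get(name, self.shared.get(name))

        def set(self, name, value, publish=False):
            # only explicitly published names can conflict with other coders
            (self.shared if publish else self.private)[name] = value

    shared = {}
    alice, bob = CoderNamespace(shared), CoderNamespace(shared)
    alice.set("tempo", 120, publish=True)
    bob.set("tempo", 90)                           # private override, no conflict
    print(alice.get("tempo"), bob.get("tempo"))    # 120 90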
@inproceedings{slee2014, author = {Lee, Sang Won and Essl, Georg}, title = {Communication, Control, and State Sharing in Collaborative Live Coding}, pages = {263--268}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178847}, url = {http://www.nime.org/proceedings/2014/nime2014_554.pdf} }
Regina Collecchia, Dan Somen, and Kevin McElroy. 2014. The Siren Organ. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 391–394. http://doi.org/10.5281/zenodo.1178732
Abstract
Download PDF DOI
Sirens evoke images of alarm, public service, war, and forthcoming air raids. Outside of the music of Edgard Varèse, sirens have rarely been framed as musical instruments. By connecting air hoses to spinning disks with evenly-spaced perforations, the siren timbre is translated musically. Polyphony gives our instrument an organ-like personality: keys are mapped to different frequencies and the pressure applied to them determines volume. The siren organ can produce a large range of sounds both timbrally and dynamically. In addition to a siren timbre, the instrument produces similar sounds to a harmonica. Portability, robustness, and electronic stability are all areas for improvement.
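The pitch of each siren voice follows directly from the disk geometry: the fundamental frequency equals the number of perforations passing the air jet per second. A one-line check (the numbers are illustrative, not the instrument's actual disks):

    holes = 44                       # evenly spaced perforations on one ring of the disk
    rotations_per_second = 10.0      # disk speed
    print(holes * rotations_per_second)   # fundamental of the siren tone: 440.0 Hz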
@inproceedings{rcollecchia2014, author = {Collecchia, Regina and Somen, Dan and McElroy, Kevin}, title = {The Siren Organ}, pages = {391--394}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178732}, url = {http://www.nime.org/proceedings/2014/nime2014_558.pdf} }
David Rector and Spencer Topel. 2014. Internally Actuated Drums for Expressive Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 395–398. http://doi.org/10.5281/zenodo.1178913
Abstract
Download PDF DOI
Actuated instruments are a growing area of activity for research and composition, yet there has been little focus on membrane-based instruments. This paper describes a novel design for an internally actuated drum based on the mechanical principles of a loudspeaker. Implementation is described in detail; in particular, two modes of actuation, a moving-coil electromagnet and a moving-magnet design, are described. We evaluate the drum using a synthesized frequency sweep, and find that the instrument has a broad frequency response and exhibits qualities of both a drum and speaker.
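A frequency-sweep evaluation of this kind typically uses a standard synthesized exponential sine sweep; a minimal sketch (the sweep range, duration and sample rate are assumptions, not the authors' test conditions):

    import numpy as np

    def log_sweep(f_start=20.0, f_end=20000.0, duration=5.0, sr=44100):
        """Exponential sine sweep, useful for measuring a driver's frequency response."""
        t = np.arange(int(duration * sr)) / sr
        k = np.log(f_end / f_start)
        phase = 2 * np.pi * f_start * duration / k * (np.exp(t / duration * k) - 1.0)
        return np.sin(phase)

    sweep = log_sweep()   # excite the drum with this signal, record the output, compare spectra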
@inproceedings{drector2014, author = {Rector, David and Topel, Spencer}, title = {Internally Actuated Drums for Expressive Performance}, pages = {395--398}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178913}, url = {http://www.nime.org/proceedings/2014/nime2014_559.pdf} }
Spencer Salazar and Ge Wang. 2014. Auraglyph: Handwritten Computer Music Composition and Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 106–109. http://doi.org/10.5281/zenodo.1178927
Abstract
Download PDF DOI
Effective software interaction design must consider all of the capabilities and limitations of the platform for which it is developed. To this end, we propose a new model for computer music system design on touchscreen devices, combining both pen/stylus input and multitouch gestures. Such a model surpasses the barrier of touchscreen-based keyboard input, preserving the primary interaction of touch and direct manipulation throughout the development of a complex musical program. We have implemented an iPad software application utilizing these principles, called “Auraglyph.” Auraglyph offers a number of fundamental audio processing and control operators, as well as facilities for structured input and output. All of these software objects are created, parameterized, and interconnected via stylus and touch input. Underlying this application is an advanced handwriting recognition framework, LipiTk, which can be trained to recognize both alphanumeric characters and arbitrary figures, shapes, and patterns.
@inproceedings{ssalazar2014, author = {Salazar, Spencer and Wang, Ge}, title = {Auraglyph: Handwritten Computer Music Composition and Design}, pages = {106--109}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178927}, url = {http://www.nime.org/proceedings/2014/nime2014_560.pdf} }
Anthony Hornof. 2014. The Prospects For Eye-Controlled Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 461–466. http://doi.org/10.5281/zenodo.1178804
Abstract
Download PDF DOI
Although new sensor devices and data streams are increasingly used for musical expression, and although eye-tracking devices have become increasingly cost-effective and prevalent in research and as a means of communication for people with severe motor impairments, eye-controlled musical expression nonetheless remains somewhat elusive and minimally explored. This paper (a) identifies a number of fundamental human eye movement capabilities and constraints which determine in part what can and cannot be musically expressed with eye movements, (b) reviews prior work on eye-controlled musical expression, and (c) analyzes and provides a taxonomy of what has been done, and what will need to be addressed in future eye-controlled musical instruments. The fundamental human constraints and processes that govern eye movements create a challenge for eye-controlled music in that the instrument needs to be designed to motivate or at least permit specific unique visual goals, each of which when accomplished must then be mapped, using the eye tracker and some sort of sound generator, to different musical outcomes. The control of the musical instrument is less direct than if it were played with muscles that can be controlled in a more direct manner, such as the muscles in the hands.
@inproceedings{ahornof2014, author = {Hornof, Anthony}, title = {The Prospects For Eye-Controlled Musical Performance}, pages = {461--466}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178804}, url = {http://www.nime.org/proceedings/2014/nime2014_562.pdf} }
Adam Place, Liam Lacey, and Thomas Mitchell. 2014. AlphaSphere from Prototype to Product. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 399–402. http://doi.org/10.5281/zenodo.1178903
Abstract
Download PDF DOI
This paper explores the design process of the AlphaSphere, an experimental new musical instrument that has transitioned into scale production and international distribution. Initially, the design intentions and engineering processes are covered. The paper continues by briefly evaluating the user testing process and outlining the ergonomics, communication protocol and software of the device. The paper closes by questioning what it takes to evaluate success as a musical instrument.
@inproceedings{aplace2014, author = {Place, Adam and Lacey, Liam and Mitchell, Thomas}, title = {AlphaSphere from Prototype to Product}, pages = {399--402}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178903}, url = {http://www.nime.org/proceedings/2014/nime2014_568.pdf} }
Anders-Petter Andersson, Birgitta Cappelen, and Fredrik Olofsson. 2014. Designing Sound for Recreation and Well-Being. Proceedings of the International Conference on New Interfaces for Musical Expression, Goldsmiths, University of London, pp. 529–532. http://doi.org/10.5281/zenodo.1178702
Abstract
Download PDF DOI
In this paper we explore how we compose sound for an interactive, tangible and mobile interface, where the goal is to improve health and well-being for families with children with disabilities. We describe the composition process: how we decompose linear beat-based and vocal sound material and recompose it with real-time audio synthesis and composition rules into interactive Scenes. These Scenes make it possible for the user to select, explore and recreate different “sound worlds” with the tangible interface as an instrument; create and play with it as a friend; improvise and create; or relax with it as ambient sounding furniture. We continue by discussing a user story of how the Scenes are recreated by amateur users, persons with severe disabilities and family members, improvising with the mobile tangibles. We discuss composition techniques for mixing sound, tangible-physical and lighting elements in the Scenes. Based on observations we explore how a diverse audience in the family and at school can recreate and improvise their own sound experience and play together with others. We conclude by discussing the possible impact of our findings for the NIME community: how the techniques of decomposing, recomposing and recreating sound, based on a relational perspective, could contribute to the design of new instruments for musical expression.
@inproceedings{aandersson2014, author = {Andersson, Anders-Petter and Cappelen, Birgitta and Olofsson, Fredrik}, title = {Designing Sound for Recreation and Well-Being}, pages = {529--532}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2014}, month = jun, publisher = {Goldsmiths, University of London}, address = {London, United Kingdom}, issn = {2220-4806}, doi = {10.5281/zenodo.1178702}, url = {http://www.nime.org/proceedings/2014/nime2014_572.pdf} }
2013
Yoshihito Nakanishi, Seiichiro Matsumura, and Chuichi Arakawa. 2013. POWDER BOX: An Interactive Device with Sensor Based Replaceable Interface For Musical Session. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 373–376. http://doi.org/10.5281/zenodo.1178620
Abstract
Download PDF DOI
In this paper, the authors introduce an interactive device, “POWDER BOX”, for use by novices in musical sessions. “POWDER BOX” is equipped with sensor-based replaceable interfaces, which enable participants to discover and select their favorite playing styles of musical instruments during a musical session. In addition, it has a wireless communication function that synchronizes musical scale and BPM between multiple devices. To date, various kinds of “inventive” electronic musical instruments have been created in the field of Computer Music. The authors are interested in formations of musical sessions, aiming for a balance between simple interaction and musical expression. This study focuses on the development of performance playing styles. Musicians occasionally change their playing styles (e.g., guitar plucking style) during a musical session. Generally, it is difficult for nonmusicians to achieve this kind of smooth change, depending on their level of skill acquisition. However, acquiring these skills is essential to enjoying musical sessions. Here, the authors attempted to develop a device that supports nonmusicians on this point using replaceable interfaces. The authors expected that changing interfaces would bring a similar effect as a skillful player changing playing style. This research aims to establish an environment in which nonmusicians and musicians share their individual musical ideas easily. Here, the interaction design and configuration of the “POWDER BOX” are presented.
@inproceedings{Nakanishi2013, author = {Nakanishi, Yoshihito and Matsumura, Seiichiro and Arakawa, Chuichi}, title = {{POWDER} {BOX}: An Interactive Device with Sensor Based Replaceable Interface For Musical Session}, pages = {373--376}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178620}, url = {http://www.nime.org/proceedings/2013/nime2013_101.pdf}, keywords = {Musical instrument, synthesizer, replaceable interface, sensors} }
Wolfgang Fohl and Malte Nogalski. 2013. A Gesture Control Interface for a Wave Field Synthesis System. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 341–346. http://doi.org/10.5281/zenodo.1178522
Abstract
Download PDF DOI
This paper presents the design and implementation of a gesture control interface for a wave field synthesis system. The user’s motion is tracked by an IR-camera-based tracking system. The developed connecting software processes the tracker data to modify the positions of the virtual sound sources of the wave field synthesis system. Due to the modular design of the software, the actions triggered by the gestures may easily be modified. Three elementary gestures were designed and implemented: select/deselect, circular movement and radial movement. The guidelines for gesture design and detection are presented, and the user experiences are discussed.
@inproceedings{Fohl2013, author = {Fohl, Wolfgang and Nogalski, Malte}, title = {A Gesture Control Interface for a Wave Field Synthesis System}, pages = {341--346}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178522}, url = {http://www.nime.org/proceedings/2013/nime2013_106.pdf}, keywords = {Wave field synthesis, gesture control} }
Gregory Burlet and Ichiro Fujinaga. 2013. Stompboxes: Kicking the Habit. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 41–44. http://doi.org/10.5281/zenodo.1178488
Abstract
Download PDF DOI
Sensor-based gesture recognition is investigated as a possible solution to the problem of managing an overwhelming number of audio effects in live guitar performances. A real-time gesture recognition system, which automatically toggles digital audio effects according to gestural information captured by an accelerometer attached to the body of a guitar, is presented. To supplement the several predefined gestures provided by the recognition system, personalized gestures may be trained by the user. Upon successful recognition of a gesture, the corresponding audio effects are applied to the guitar signal and visual feedback is provided to the user. An evaluation of the system yielded 86% accuracy for user-independent recognition and 99% accuracy for user-dependent recognition, on average.
@inproceedings{Burlet2013, author = {Burlet, Gregory and Fujinaga, Ichiro}, title = {Stompboxes: Kicking the Habit}, pages = {41--44}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178488}, url = {http://www.nime.org/proceedings/2013/nime2013_109.pdf}, keywords = {Augmented instrument, gesture recognition, accelerometer, pattern recognition, performance practice} }
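As a rough illustration of the approach described in the Stompboxes abstract above (not the authors' implementation), the following Python sketch toggles an effect flag when a simple spike is detected in an accelerometer window; the threshold, window shape and effect name are invented for the example.

import numpy as np

def detect_spike(window, threshold=2.5):
    # window: (N, 3) array of recent accelerometer samples (x, y, z)
    magnitude = np.linalg.norm(window, axis=1)
    return magnitude.max() - magnitude.mean() > threshold

effect_on = False

def on_sensor_window(window):
    # Toggle a (hypothetical) distortion effect when a gesture-like spike occurs.
    global effect_on
    if detect_spike(window):
        effect_on = not effect_on
        print("distortion", "on" if effect_on else "off")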
Alexander Refsum Jensenius. 2013. Kinectofon: Performing with Shapes in Planes. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 196–197. http://doi.org/10.5281/zenodo.1178564
Abstract
Download PDF DOI
The paper presents the Kinectofon, an instrument for creating sounds through free-hand interaction in a 3D space. The instrument is based on the RGB and depth image streams retrieved from a Microsoft Kinect sensor device. These two image streams are used to create different types of motiongrams, which, in turn, are used as the source material for a sonification process based on inverse FFT. The instrument is intuitive to play, allowing the performer to create sound by "touching" a virtual sound wall.
@inproceedings{Jensenius2013, author = {Jensenius, Alexander Refsum}, title = {Kinectofon: Performing with Shapes in Planes}, pages = {196--197}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178564}, url = {http://www.nime.org/proceedings/2013/nime2013_110.pdf}, keywords = {Kinect, motiongram, sonification, video analysis} }
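A minimal sketch of the inverse-FFT sonification idea mentioned in the Kinectofon abstract, under the assumption that one motiongram column is treated as a magnitude spectrum; the interpolation length and random phase are choices made for this example, not taken from the paper.

import numpy as np

def sonify_column(column, n_fft=2048):
    # column: 1-D array of motiongram pixel intensities (0-255) for one frame
    mags = np.interp(np.linspace(0, len(column) - 1, n_fft // 2 + 1),
                     np.arange(len(column)), column / 255.0)
    phases = np.random.uniform(0, 2 * np.pi, mags.shape)   # arbitrary phase
    spectrum = mags * np.exp(1j * phases)
    return np.fft.irfft(spectrum, n=n_fft)                  # one short grain of audio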
Ohad Fried and Rebecca Fiebrink. 2013. Cross-modal Sound Mapping Using Deep Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 531–534. http://doi.org/10.5281/zenodo.1178528
Abstract
Download PDF DOI
We present a method for automatic feature extraction and cross-modal mapping using deep learning. Our system uses stacked autoencoders to learn a layered feature representation of the data. Feature vectors from two (or more) different domains are mapped to each other, effectively creating a cross-modal mapping. Our system can either run fully unsupervised, or it can use high-level labeling to fine-tune the mapping according to a user’s needs. We show several applications for our method, mapping sound to or from images or gestures. We evaluate system performance both in standalone inference tasks and in cross-modal mappings.
@inproceedings{Fried2013, author = {Fried, Ohad and Fiebrink, Rebecca}, title = {Cross-modal Sound Mapping Using Deep Learning}, pages = {531--534}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178528}, url = {http://www.nime.org/proceedings/2013/nime2013_111.pdf}, keywords = {Deep learning, feature learning, mapping, gestural control} }
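As a simplified stand-in for the cross-modal mapping described above (the paper uses stacked autoencoders; this example substitutes a small scikit-learn regressor and random placeholder data, so it only illustrates the mapping idea, not the authors' method):

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_gesture = rng.normal(size=(200, 12))    # placeholder gesture feature vectors
Y_sound = rng.normal(size=(200, 20))      # placeholder sound feature vectors

mapper = MLPRegressor(hidden_layer_sizes=(64, 16, 64), max_iter=500)
mapper.fit(X_gesture, Y_sound)            # learn a gesture-to-sound feature mapping
mapped = mapper.predict(X_gesture[:1])    # map one "new" gesture to sound features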
Ajay Kapur, Dae Hong Kim, Raakhi Kapur, and Kisoon Eom. 2013. New Interfaces for Traditional Korean Music and Dance. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 45–48. http://doi.org/10.5281/zenodo.1178576
Abstract
Download PDF DOI
This paper describes the creation of new interfaces that extend traditional Korean music and dance. Specifically, this research resulted in the design of the eHaegum (Korean bowed instrument), eJanggu (Korean drum), and ZiOm wearable interfaces. The paper describes the process of making these new interfaces as well as how they have been used to create new music and forms of digital art making that blend traditional practice with modern techniques.
@inproceedings{Kapur2013, author = {Kapur, Ajay and Kim, Dae Hong and Kapur, Raakhi and Eom, Kisoon}, title = {New Interfaces for Traditional Korean Music and Dance}, pages = {45--48}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178576}, url = {http://www.nime.org/proceedings/2013/nime2013_113.pdf}, keywords = {Hyperinstrument, Korean interface design, wearable sensors, dance controllers, bowed controllers, drum controllers} }
Edward Zhang. 2013. KIB: Simplifying Gestural Instrument Creation Using Widgets. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 519–524. http://doi.org/10.5281/zenodo.1178698
Abstract
Download PDF DOI
The Microsoft Kinect is a popular and versatile input device for musical interfaces. However, using the Kinect for such interfaces requires not only significant programming experience, but also the use of complex geometry or machine learning techniques to translate joint positions into higher-level gestures. We created the Kinect Instrument Builder (KIB) to address these difficulties by structuring gestural interfaces as combinations of gestural widgets. KIB allows the user to design an instrument by configuring gestural primitives, each with a set of simple but attractive visual feedback elements. After designing an instrument on KIB’s web interface, users can play the instrument on KIB’s performance interface, which displays visualizations and transmits OSC messages to other applications for sound synthesis or further remapping.
@inproceedings{Zhang2013, author = {Zhang, Edward}, title = {KIB: Simplifying Gestural Instrument Creation Using Widgets}, pages = {519--524}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178698}, url = {http://www.nime.org/proceedings/2013/nime2013_114.pdf}, keywords = {Kinect, gesture, widgets, OSC, mapping} }
Jordan Hochenbaum and Ajay Kapur. 2013. Toward The Future Practice Room: Empowering Musical Pedagogy through Hyperinstruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 307–312. http://doi.org/10.5281/zenodo.1178552
Abstract
Download PDF DOI
Music education is a rich subject with many approaches and methodologies that have developed over hundreds of years. More than ever, technology plays important roles at many levels of a musician’s practice. This paper begins to explore some of the ways in which technology developed out of the NIME community (specifically hyperinstruments) can inform a musician’s daily practice, through short- and long-term metrics tracking and data visualization.
@inproceedings{Hochenbaum2013, author = {Hochenbaum, Jordan and Kapur, Ajay}, title = {Toward The Future Practice Room: Empowering Musical Pedagogy through Hyperinstruments}, pages = {307--312}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178552}, url = {http://www.nime.org/proceedings/2013/nime2013_116.pdf}, keywords = {Hyperinstruments, Pedagogy, Metrics, Ezither, Practice Room} }
Romain Michon, Myles Borins, and David Meisenholder. 2013. The Black Box. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 464–465. http://doi.org/10.5281/zenodo.1178612
Abstract
Download PDF DOI
Black Box is a site-based installation that allows users to create unique sounds through physical interaction. The installation consists of a geodesic dome, surround sound speakers, and a custom instrument suspended from the apex of the dome. Audience members entering the space are able to create sound by striking or rubbing the cube, and are able to control a delay system by moving the cube within the space.
@inproceedings{Michon2013, author = {Michon, Romain and Borins, Myles and Meisenholder, David}, title = {The Black Box}, pages = {464--465}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178612}, url = {http://www.nime.org/proceedings/2013/nime2013_117.pdf}, keywords = {Satellite CCRMA, Beagleboard, PureData, Faust, Embedded-Linux, Open Sound Control} }
Hongchan Choi and Jonathan Berger. 2013. WAAX: Web Audio API eXtension. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 499–502. http://doi.org/10.5281/zenodo.1178494
Abstract
Download PDF DOI
The advent of the Web Audio API in 2011 marked a significant advance for web-based music systems by enabling real-time sound synthesis on web browsers simply by writing JavaScript code. While this powerful functionality has arrived, there is a yet unaddressed need for an extension to the API to fully reveal its potential. To meet this need, a JavaScript library dubbed WAAX was created to facilitate music and audio programming based on the Web Audio API, bypassing underlying tasks and augmenting useful features. In this paper, we describe common issues in web audio programming, illustrate how WAAX can speed up development, and discuss future developments.
@inproceedings{Choi2013, author = {Choi, Hongchan and Berger, Jonathan}, title = {WAAX: Web Audio {API} eXtension}, pages = {499--502}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178494}, url = {http://www.nime.org/proceedings/2013/nime2013_119.pdf}, keywords = {Web Audio API, Chrome, JavaScript, web-based music system, collaborative music making, audience participation} }
Takayuki Hamano, Tomasz Rutkowski, Hiroko Terasawa, Kazuo Okanoya, and Kiyoshi Furukawa. 2013. Generating an Integrated Musical Expression with a Brain–Computer Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 49–54. http://doi.org/10.5281/zenodo.1178542
Abstract
Download PDF DOI
Electroencephalography (EEG) has been used to generate music for over 40 years, but the most recent developments in brain–computer interfaces (BCI) allow greater control and more flexible expression for using new musical instruments with EEG. We developed a real-time musical performance system using BCI technology and sonification techniques to generate imagined musical chords with organically fluctuating timbre. We aim to emulate the expressivity of traditional acoustic instruments. The BCI part of the system extracts patterns from the neural activity while a performer imagines a score of music. The sonification part of the system captures non-stationary changes in the brain waves and reflects them in the timbre by additive synthesis. In this paper, we discuss the conceptual design, system development, and the performance of this instrument.
@inproceedings{Hamano2013, author = {Hamano, Takayuki and Rutkowski, Tomasz and Terasawa, Hiroko and Okanoya, Kazuo and Furukawa, Kiyoshi}, title = {Generating an Integrated Musical Expression with a Brain--Computer Interface}, pages = {49--54}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178542}, url = {http://www.nime.org/proceedings/2013/nime2013_120.pdf}, keywords = {Brain-computer interface (BCI), qualitative and quantitative information, classification, sonification} }
Charles Martin. 2013. Performing with a Mobile Computer System for Vibraphone. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 377–380. http://doi.org/10.5281/zenodo.1178602
Abstract
Download PDF DOI
This paper describes the development of an Apple iPhone based mobile computer system for vibraphone and its use in a series of the author’s performance projects in 2011 and 2012. This artistic research was motivated by a desire to develop an alternative to laptop computers for the author’s existing percussion and computer performance practice. The aims were to develop a light, compact and flexible system using mobile devices that would allow computer music to infiltrate solo and ensemble performance situations where it is difficult to use a laptop computer. The project began with a system that brought computer elements to Nordlig Vinter, a suite of percussion duos, using an iPhone, RjDj, Pure Data and a home-made pickup system. This process was documented with video recordings and analysed using ethnographic methods. The mobile computer music setup proved to be elegant and convenient in performance situations with very little time and space to set up, as well as in performance classes and workshops. The simple mobile system encouraged experimentation, and the platforms used enabled sharing with a wider audience.
@inproceedings{Martin2013, author = {Martin, Charles}, title = {Performing with a Mobile Computer System for Vibraphone}, pages = {377--380}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178602}, url = {http://www.nime.org/proceedings/2013/nime2013_121.pdf}, keywords = {percussion, mobile computer music, Apple iOS, collaborative performance practice, ethnography, artistic research} }
Alex McLean, EunJoo Shin, and Kia Ng. 2013. Paralinguistic Microphone. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 381–384. http://doi.org/10.5281/zenodo.1178608
Abstract
Download PDF DOI
The human vocal tract is considered for its sonorous qualities in carrying prosodic information, which implicates vision in the perceptual processes of speech. These considerations are put in the context of previous work in NIME, forming the background for the introduction of two sound installations: “Microphone”, which uses a camera and computer vision to translate mouth shapes to sounds, and “Microphone II”, a work-in-progress, which adds physical modelling synthesis as a sound source, and visualisation of mouth movements.
@inproceedings{McLean2013, author = {McLean, Alex and Shin, EunJoo and Ng, Kia}, title = {Paralinguistic Microphone}, pages = {381--384}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178608}, url = {http://www.nime.org/proceedings/2013/nime2013_122.pdf}, keywords = {face tracking, computer vision, installation, microphone} }
Daniel Bisig and Sébastien Schiesser. 2013. Coral – a Physical and Haptic Extension of a Swarm Simulation. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 385–388. http://doi.org/10.5281/zenodo.1178482
Abstract
Download PDF DOI
This paper presents a proof-of-concept implementation of an interface entitled Coral. The interface serves as a physical and haptic extension of a simulated complex system, which will be employed as an intermediate mechanism for the creation of generative music and imagery. The paper discusses the motivation and conceptual context that underlie the implementation, describes its technical realisation and presents some first interaction experiments. The paper focuses on the following two aspects: the interrelation between the physical and virtual behaviours and properties of the interface and simulation, and the capability of the interface to enable an intuitive and tangible exploration of this hybrid dynamical system.
@inproceedings{Bisig2013, author = {Bisig, Daniel and Schiesser, S{\'e}bastien}, title = {Coral -- a Physical and Haptic Extension of a Swarm Simulation}, pages = {385--388}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178482}, url = {http://www.nime.org/proceedings/2013/nime2013_126.pdf}, keywords = {haptic interface, swarm simulation, generative art} }
Jan C. Schacher. 2013. Hybrid Musicianship — Teaching Gestural Interaction with Traditional and Digital Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 55–60. http://doi.org/10.5281/zenodo.1178656
Abstract
Download PDF DOI
This article documents a class that teaches gestural interaction and juxtaposes traditional instrumental skills with digital musical instrument concepts. In order to show the principles and reflections that informed the choices made in developing this syllabus, fundamental elements of an instrument-body relationship and the perceptual import of sensori-motor integration are investigated. The methods used to let participants learn in practical experimental settings are discussed, showing a way to conceptualise and experience the entire workflow from instrumental sound to electronic transformations by blending gestural interaction with digital musical instrument techniques and traditional instrumental playing skills. The technical interfaces and software that were deployed are explained, focussing on the interactive potential offered by each solution. In an attempt to summarise and evaluate the impact of this course, a number of insights relating to this specific pedagogical situation are put forward. Finally, concrete examples of interactive situations that were developed by the participants are shown in order to demonstrate the validity of this approach.
@inproceedings{Schacher2013, author = {Schacher, Jan C.}, title = {Hybrid Musicianship --- Teaching Gestural Interaction with Traditional and Digital Instruments}, pages = {55--60}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178656}, url = {http://www.nime.org/proceedings/2013/nime2013_127.pdf}, keywords = {gestural interaction, digital musical instruments, pedagogy, mapping, enactive approach} }
Jackie, Yi Tang Chui, Mubarak Marafa, Samson, and Ka Fai Young. 2013. SoloTouch: A Capacitive Touch Controller with Lick-based Note Selector. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 389–393. http://doi.org/10.5281/zenodo.1178560
Abstract
Download PDF DOI
SoloTouch is a guitar-inspired, pocket-sized controller system that consists of a capacitive touch trigger and a lick-based note selector. The touch trigger allows an intuitive way to play both velocity-sensitive notes and vibrato expressively using only one finger. The lick-based note selector is an original concept that provides the player an easy way to play expressive melodic lines by combining pre-programmed “licks” without the need to learn the actual notes. The two-part controller is primarily used as a basic MIDI controller for playing MIDI-controlled virtual instruments normally played by keyboard controllers. The controller is targeted towards novice musicians; players without prior musical training could play musical and expressive solos, suitable for improvised jamming along with modern popular music.
@inproceedings{Jackie2013, author = {Jackie and Chui, Yi Tang and Marafa, Mubarak and Samson and Young, Ka Fai}, title = {SoloTouch: A Capacitive Touch Controller with Lick-based Note Selector}, pages = {389--393}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178560}, url = {http://www.nime.org/proceedings/2013/nime2013_130.pdf}, keywords = {Capacitive touch controller, automated note selector, virtual instrument MIDI controller, novice musicians.} }
Parag Kumar Mital and Mick Grierson. 2013. Mining Unlabeled Electronic Music Databases through 3D Interactive Visualization of Latent Component Relationships. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 227–232. http://doi.org/10.5281/zenodo.1178614
Abstract
Download PDF DOI
We present an interactive content-based MIR environment specifically designed to aid in the exploration of databases of experimental electronic music, particularly in cases where little or no metadata exist. In recent years, several rare archives of early experimental electronic music have become available. The Daphne Oram Collection contains one such archive, consisting of approximately 120 hours of 1/4 inch tape recordings and representing a period dating from circa 1957. This collection is recognized as an important musicological resource, representing aspects of the evolution of electronic music practices, including early tape editing methods, experimental synthesis techniques and composition. However, it is extremely challenging to derive meaningful information from this dataset, primarily for three reasons. First, the dataset is very large. Second, there is limited metadata — some titles, track lists, and occasional handwritten notes exist, but where this is true, the reliability of the annotations is unknown. Finally, and most significantly, as this is a collection of early experimental electronic music, the sonic characteristics of the material are often not consistent with traditional musical information. In other words, there is no score, no known instrumentation, and often no recognizable acoustic source. We present a method for the construction of a frequency component dictionary derived from the collection via Probabilistic Latent Component Analysis (PLCA), and demonstrate how an interactive 3D visualization of the relationships between the PLCA-derived dictionary and the archive is facilitating researchers’ understanding of the data.
@inproceedings{Mital2013, author = {Mital, Parag Kumar and Grierson, Mick}, title = {Mining Unlabeled Electronic Music Databases through {3D} Interactive Visualization of Latent Component Relationships}, pages = {227--232}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178614}, url = {http://www.nime.org/proceedings/2013/nime2013_132.pdf}, keywords = {mir, plca, mfcc, 3d browser, daphne oram, content-based information retrieval, interactive visualization} }
Dae Ryong Hong and Woon Seung Yeo. 2013. Laptap: Laptop Computer as a Musical Instrument using Audio Feedback. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 233–236. http://doi.org/10.5281/zenodo.1178554
Abstract
Download PDF DOI
Laptap is a laptop-based, real-time sound synthesis/control system for music and multimedia performance. The system produces unique sounds by positive audio feedback between the on-board microphone and the speaker of a laptop computer. Users can make a variety of sounds by touching the laptop computer in several different ways, and control their timbre with gestures of the other hand above the microphone and the speaker to manipulate the characteristics of the acoustic feedback path. We introduce the basic concept of this audio feedback system, describe its features for sound generation and manipulation, and discuss the result of an experimental performance. Finally, we suggest some relevant research topics that might follow in the future.
@inproceedings{Hong2013, author = {Hong, Dae Ryong and Yeo, Woon Seung}, title = {Laptap: Laptop Computer as a Musical Instrument using Audio Feedback}, pages = {233--236}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178554}, url = {http://www.nime.org/proceedings/2013/nime2013_137.pdf}, keywords = {Laptop music, laptop computer, audio feedback, hand gesture, gestural control, musical mapping, audio visualization, musical notation} }
Danielle Bragg. 2013. Synchronous Data Flow Modeling for DMIs. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 237–242. http://doi.org/10.5281/zenodo.1178486
Abstract
Download PDF DOI
This paper presents a graph-theoretic model that supports the design and analysis of data flow within digital musical instruments (DMIs). The state of the art in DMI design fails to provide any standards for the scheduling of computations within a DMI’s data flow. It does not provide a theoretical framework within which we can analyze different scheduling protocols and their impact on the DMI’s performance. Indeed, the mapping between the DMI’s sensory inputs and sonic outputs is classically treated as a black box. DMI designers and builders are forced to design and schedule the flow of data through this black box on their own. Improper design of the data flow can produce undesirable results, ranging from overflowing buffers that cause system crashes to misaligned sensory data that result in strange or disordered sonic events. In this paper, we attempt to remedy this problem by providing a framework for the design and analysis of the DMI data flow. We also provide a scheduling algorithm built upon that framework that guarantees desirable properties for the resulting DMI.
@inproceedings{Bragg2013, author = {Bragg, Danielle}, title = {Synchronous Data Flow Modeling for {DMI}s}, pages = {237--242}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178486}, url = {http://www.nime.org/proceedings/2013/nime2013_139.pdf}, keywords = {DMI design, data flow, mapping function} }
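To make the scheduling idea in the abstract above concrete, here is a minimal sketch (assumptions: the DMI data flow is an acyclic dependency graph; the node names are invented) that derives one valid computation order with Python's standard-library topological sorter; the paper's own model and guarantees are considerably richer.

from graphlib import TopologicalSorter

# Each node lists the nodes whose output it consumes (sensors feed mappings feed synthesis).
dataflow = {
    "smooth": {"accelerometer"},
    "map_pitch": {"smooth"},
    "map_amp": {"pressure"},
    "synth": {"map_pitch", "map_amp"},
}
schedule = list(TopologicalSorter(dataflow).static_order())
print(schedule)  # one dependency-respecting order, ending in 'synth'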
Lionel Feugère and Christophe d’Alessandro. 2013. Digitartic: bi-manual gestural control of articulation in performative singing synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 331–336. http://doi.org/10.5281/zenodo.1178520
Abstract
Download PDF DOI
Digitartic, a system for bi-manual gestural control of Vowel-Consonant-Vowel performative singing synthesis, is presented. This system is an extension of a real-time gesture-controlled vowel singing instrument developed in the Max/MSP language. In addition to pitch, vowels and voice strength control, Digitartic is designed for gestural control of articulation parameters for a wide set of consonants, including various places and manners of articulation. The phases of articulation between two phonemes are continuously controlled and can be driven in real time without noticeable delay, at any stage of the synthetic phoneme production. Thus, as in natural singing, very accurate rhythmic patterns are produced and adapted while playing with other musicians. The instrument features two (augmented) pen tablets for controlling voice production: one dealing with the glottal source and vowels, the second one with consonant/vowel articulation. The results show very natural consonant and vowel synthesis. Virtual choral practice confirms the effectiveness of Digitartic as an expressive musical instrument.
@inproceedings{Feugere2013, author = {Feug{\`e}re, Lionel and d'Alessandro, Christophe}, title = {Digitartic: bi-manual gestural control of articulation in performative singing synthesis}, pages = {331--336}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178520}, url = {http://www.nime.org/proceedings/2013/nime2013_143.pdf}, keywords = {singing voice synthesis, gestural control, syllabic synthesis, articulation, formants synthesis} }
Jan C. Schacher. 2013. The Quarterstaff, a Gestural Sensor Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 535–540. http://doi.org/10.5281/zenodo.1178658
Abstract
Download PDF DOI
This article describes the motivations and reflections that led to the development of a gestural sensor instrument called the Quarterstaff. In an iterative design and fabrication process, several versions of this interface were built, tested and evaluated in performances. A detailed explanation of the design choices concerning the shape but also the sensing capabilities of the instrument illustrates the emphasis on establishing an ‘enactive’ instrumental relationship. A musical practice for this type of instrument is shown by discussing the methods used in the exploration of the gestural potential of the interface and the strategies deployed for the development of mappings and compositions. Finally, to gain more information about how this instrument compares with similar designs, two dimension-space analyses are made that show a clear positioning in relation to instruments that precede the Quarterstaff.
@inproceedings{Schacher2013a, author = {Schacher, Jan C.}, title = {The Quarterstaff, a Gestural Sensor Instrument}, pages = {535--540}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178658}, url = {http://www.nime.org/proceedings/2013/nime2013_144.pdf}, keywords = {Gestural sensor interface, instrument design, body-object relation, composition and performance practice, dimension space analysis} }
Alessandro Altavilla, Baptiste Caramiaux, and Atau Tanaka. 2013. Towards Gestural Sonic Affordances. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 61–64. http://doi.org/10.5281/zenodo.1178463
Abstract
Download PDF DOI
We present a study that explores the affordance evoked by sound and sound-gesture mappings. In order to do this, we make use of a sensor system with a minimal form factor in a user study that minimizes cultural association. The present study focuses on understanding how participants describe sounds and gestures produced while playing designed sonic interaction mappings. This approach seeks to move from object-centric affordance towards investigating embodied gestural sonic affordances.
@inproceedings{Altavilla2013, author = {Altavilla, Alessandro and Caramiaux, Baptiste and Tanaka, Atau}, title = {Towards Gestural Sonic Affordances}, pages = {61--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178463}, url = {http://www.nime.org/proceedings/2013/nime2013_145.pdf}, keywords = {Gestural embodiment of sound, Affordances, Mapping} }
Mark Cerqueira, Spencer Salazar, and Ge Wang. 2013. SoundCraft: Transducing StarCraft 2. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 243–247. http://doi.org/10.5281/zenodo.1178492
Abstract
Download PDF DOI
SoundCraft is a framework that enables real-time data gathering from a StarCraft 2 game to external software applications, allowing for musical interpretation of the game’s internal structure and strategies in novel ways. While players battle each other for victory within the game world, a custom StarCraft 2 map collects and writes out data about players’ decision-making, performance, and current focus on the map. This data is parsed and transmitted over Open Sound Control (OSC) in real time, becoming the source for the soundscape that accompanies the player’s game. Using SoundCraft, we have composed a musical work for two StarCraft 2 players, entitled GG Music. This paper details the technical and aesthetic development of SoundCraft, including data collection and sonic mapping. Please see the attached video file for a performance of GG Music using the SoundCraft framework.
@inproceedings{Cerqueira2013, author = {Cerqueira, Mark and Salazar, Spencer and Wang, Ge}, title = {SoundCraft: Transducing StarCraft 2}, pages = {243--247}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178492}, url = {http://www.nime.org/proceedings/2013/nime2013_146.pdf}, keywords = {interactive sonification, interactive game music, StarCraft 2} }
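The OSC transmission step described above can be pictured with a short Python sketch; the address scheme, port and statistics below are assumptions for illustration, not SoundCraft's actual schema, and the python-osc package is used as the transport.

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)   # e.g. a SuperCollider or Max listener

def send_game_state(player, minerals, apm, camera_xy):
    # Forward a few per-player statistics as individual OSC messages.
    client.send_message(f"/sc2/{player}/minerals", minerals)
    client.send_message(f"/sc2/{player}/apm", apm)
    client.send_message(f"/sc2/{player}/camera", list(camera_xy))

send_game_state("player1", minerals=350, apm=142, camera_xy=(64.0, 98.5))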
Xin Fan and Georg Essl. 2013. Air Violin: A Body-centric Style Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 122–123. http://doi.org/10.5281/zenodo.1178512
Abstract
Download PDF DOI
We show how body-centric sensing can be integrated into a musical interface to enable more flexible gestural control. We present a barehanded, body-centric interaction paradigm where users are able to interact in a spontaneous way through performing gestures. The paradigm employs a wearable camera and a see-through display to enable flexible interaction in the 3D space. We designed and implemented a prototype called Air Violin, a virtual musical instrument using a depth camera, to demonstrate the proposed interaction paradigm. We describe the design and implementation details.
@inproceedings{Fan2013, author = {Fan, Xin and Essl, Georg}, title = {Air Violin: A Body-centric Style Musical Instrument}, pages = {122--123}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178512}, url = {http://www.nime.org/proceedings/2013/nime2013_149.pdf}, keywords = {NIME, musical instrument, interaction, gesture, Kinect} }
Johnty Wang, Nicolas d’Alessandro, Aura Pon, and Sidney Fels. 2013. PENny: An Extremely Low-Cost Pressure-Sensitive Stylus for Existing Capacitive Touchscreens. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST. http://doi.org/10.5281/zenodo.1178686
Abstract
Download PDF DOI
By building a wired passive stylus we have added pressure sensitivity to existing capacitive touch screen devices for less than $10 in materials, about 1/10th the cost of existing solutions. The stylus makes use of the built-in audio interface that is available on most smartphones and tablets on the market today. Limitations of the device include the physical constraint of wires, the occupation of one audio input and output channel, and increased latency equal to the period of at least one audio buffer duration. The stylus has been demonstrated in two cases thus far: a visual musical score drawing and a singing synthesis application.
@inproceedings{Wang2013, author = {Wang, Johnty and d'Alessandro, Nicolas and Pon, Aura and Fels, Sidney}, title = {PENny: An Extremely Low-Cost Pressure-Sensitive Stylus for Existing Capacitive Touchscreens}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178686}, url = {http://www.nime.org/proceedings/2013/nime2013_150.pdf}, keywords = {input interfaces, touch screens, tablets, pressure-sensitive, low-cost} }
Andrew Johnston. 2013. Fluid Simulation as Full Body Audio-Visual Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 132–135. http://doi.org/10.5281/zenodo.1178572
Abstract
Download PDF DOI
This paper describes an audio-visual performance system based on real-time fluid simulation. The aim is to provide a rich environment for works which blur the boundaries between dance and instrumental performance – and sound and visuals – while maintaining transparency for audiences and new performers. The system uses infra-red motion tracking to allow performers to manipulate a real-time fluid simulation, which in turn provides control data for computer-generated audio and visuals. It also provides a control and configuration system which allows the behaviour of the interactive system to be changed over time, enabling the structure within which interactions take place to be ‘composed’.
@inproceedings{Johnston2013, author = {Johnston, Andrew}, title = {Fluid Simulation as Full Body Audio-Visual Instrument}, pages = {132--135}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178572}, url = {http://www.nime.org/proceedings/2013/nime2013_151.pdf}, keywords = {performance, dance, fluid simulation, composition} }
Yuan-Yi Fan and Myles Sciotto. 2013. BioSync: An Informed Participatory Interface for Audience Dynamics and Audiovisual Content Co-creation using Mobile PPG and EEG. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 248–251. http://doi.org/10.5281/zenodo.1178514
Abstract
Download PDF DOI
The BioSync interface presented in this paper merges the heart-rate-based paradigm with the brain-wave-based paradigm into one mobile unit which is scalable for large-audience real-time applications. The goal of BioSync is to provide a hybrid interface which uses audience biometric responses for audience participation techniques. To provide an affordable and scalable solution, BioSync collects the user’s heart rate via mobile phone pulse oximetry and the EEG data via Bluetooth communication with the off-the-shelf MindWave Mobile hardware. Various interfaces have been designed and implemented in the development of audience participation techniques and systems. In the design and concept of BioSync, we first summarize recent interface research for audience participation within the NIME-related context, followed by the outline of the BioSync methodology and interface design. We then present a technique for dynamic tempo control based on audience biometric responses and an early prototype of a mobile dual-channel pulse oximetry and EEG bi-directional interface for iOS devices (BioSync). Finally, we present discussions and ideas for future applications, as well as plans for a series of experiments which investigate if temporal parameters of an audience’s physiological metrics encourage crowd synchronization during a live event or performance, a characteristic which we see as having great potential in the creation of future live musical and audiovisual performance applications.
@inproceedings{Fan2013a, author = {Fan, Yuan-Yi and Sciotto, Myles}, title = {BioSync: An Informed Participatory Interface for Audience Dynamics and Audiovisual Content Co-creation using Mobile PPG and {EEG}}, pages = {248--251}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178514}, url = {http://www.nime.org/proceedings/2013/nime2013_152.pdf}, keywords = {Mobile, Biometrics, Synchronous Interaction, Social, Audience, Experience} }
Qi Yang and Georg Essl. 2013. Visual Associations in Augmented Keyboard Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 252–255. http://doi.org/10.5281/zenodo.1178694
Abstract
Download PDF DOI
What is the function of visuals in the design of an augmented keyboard performance device with projection? We address this question by thinking through the impact of choices made in three examples on notions of locus of attention, visual anticipation and causal gestalt, to articulate a space of design choices. Visuals can emphasize and deemphasize aspects of performance and help clarify the role input has in the performance. We suggest that this process might help thinking through visual feedback design in NIMEs with respect to the performer or the audience.
@inproceedings{Yang2013, author = {Yang, Qi and Essl, Georg}, title = {Visual Associations in Augmented Keyboard Performance}, pages = {252--255}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178694}, url = {http://www.nime.org/proceedings/2013/nime2013_156.pdf}, keywords = {Visual feedback, interaction, NIME, musical instrument, interaction, augmented keyboard, gesture, Kinect} }
Miles Thorogood and Philippe Pasquier. 2013. Impress: A Machine Learning Approach to Soundscape Affect Classification for a Music Performance Environment. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 256–260. http://doi.org/10.5281/zenodo.1178674
Abstract
Download PDF DOI
Soundscape composition in improvisation and performance contexts involves many processes that can become overwhelming for a performer, impacting the quality of the composition. One important task is evaluating the mood of a composition for evoking accurate associations and memories of a soundscape. A new system that uses supervised machine learning is presented for the acquisition and real-time feedback of soundscape affect. A model of soundscape mood is created by users entering evaluations of audio environments using a mobile device. The same device then provides feedback to the user on the predicted mood of other audio environments. We used a feature vector of Total Loudness and MFCC extracted from an audio signal to build multiple regression models. The evaluation of the system shows the tool is effective in predicting soundscape affect.
@inproceedings{Thorogood2013, author = {Thorogood, Miles and Pasquier, Philippe}, title = {Impress: A Machine Learning Approach to Soundscape Affect Classification for a Music Performance Environment}, pages = {256--260}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178674}, url = {http://www.nime.org/proceedings/2013/nime2013_157.pdf}, keywords = {soundscape, performance, machine learning, audio features, affect grid} }
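A rough sketch of the feature-and-regression pipeline outlined in the abstract above, under stated assumptions: librosa and scikit-learn as the toolchain, RMS energy as a crude stand-in for total loudness, and invented file names with placeholder affect-grid ratings.

import numpy as np
import librosa
from sklearn.linear_model import LinearRegression

def soundscape_features(path):
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    rms = librosa.feature.rms(y=y).mean()       # crude loudness proxy
    return np.append(mfcc, rms)

paths = ["scene1.wav", "scene2.wav", "scene3.wav"]          # hypothetical recordings
ratings = np.array([[0.2, 0.7], [-0.4, 0.1], [0.6, -0.3]])  # placeholder valence/arousal
X = np.array([soundscape_features(p) for p in paths])
model = LinearRegression().fit(X, ratings)                  # predicts affect for new audio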
Gibeom Park and Kyogu Lee. 2013. Sound Spray — can-shaped sound effect device. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 65–68. http://doi.org/10.5281/zenodo.1178634
Abstract
Download PDF DOI
In this paper, we present a sound effect device applicable to the spray paint art process. To investigate its applicability, we designed a prototype with a form not far off that of traditional spray cans, using an Arduino and various sensors. Through testing the prototype, we verified the elements that would be necessary to apply our newly designed device to real spray paint art activities, and we explored the possibility of various musical expressions by expanding the functions of the designed device.
@inproceedings{Park2013a, author = {Park, Gibeom and Lee, Kyogu}, title = {Sound Spray --- can-shaped sound effect device}, pages = {65--68}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178634}, url = {http://www.nime.org/proceedings/2013/nime2013_158.pdf}, keywords = {Sound effect device, Spray paint art, Arduino, Pure Data} }
Hayami Tobise, Yoshinari Takegawa, Tsutomu Terada, and Masahiko Tsukamoto. 2013. Construction of a System for Recognizing Touch of Strings for Guitar. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 261–266. http://doi.org/10.5281/zenodo.1178676
Abstract
Download PDF DOI
In guitar performance, fingering is an important and complicated factor. In particular, the fingering of the left hand comprises various relationships between the finger and the string, such as a finger touching the strings, a finger pressing the strings, and a finger releasing the strings. Recognition of the precise fingering of the left hand can be applied to a self-learning support system that is able to detect strings being muted by a finger and that transcribes music automatically, including the details of fingering techniques. Therefore, the goal of our study is the construction of a system for recognizing the touch of strings for the guitar. We propose a method for recognizing the touch of strings based on the conductive characteristics of strings and frets. We develop a prototype system and evaluate its effectiveness. Furthermore, we propose an application which utilizes our system.
@inproceedings{Tobise2013, author = {Tobise, Hayami and Takegawa, Yoshinari and Terada, Tsutomu and Tsukamoto, Masahiko}, title = {Construction of a System for Recognizing Touch of Strings for Guitar}, pages = {261--266}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178676}, url = {http://www.nime.org/proceedings/2013/nime2013_159.pdf}, keywords = {Guitar, Touched strings, Fingering recognition} }
Tomohiro Tokunaga and Michael J. Lyons. 2013. Enactive Mandala: Audio-visualizing Brain Waves. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 118–119. http://doi.org/10.5281/zenodo.1178678
Abstract
Download PDF DOI
We are exploring the design and implementation of artificial expressions: kinetic audio-visual representations of real-time physiological data which reflect emotional and cognitive state. In this work we demonstrate a prototype, the Enactive Mandala, which maps real-time EEG signals to modulate ambient music and animated visual music. The design draws inspiration from the visual music of the Whitney brothers as well as traditional meditative practices. Transparent real-time audio-visual feedback of brainwave qualities supports intuitive insight into the connection between thoughts and physiological states. Our method is constructive: by linking physiology with a dynamic audio-visual display, and embedding the human-machine system in the social contexts that arise in real-time play, we hope to seed new, as yet unknown, forms of non-verbal communication, or “artificial expressions”.
@inproceedings{Tokunaga2013, author = {Tokunaga, Tomohiro and Lyons, Michael J.}, title = {Enactive Mandala: Audio-visualizing Brain Waves}, pages = {118--119}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178678}, url = {http://www.nime.org/proceedings/2013/nime2013_16.pdf}, keywords = {Brain-computer Interfaces, BCI, EEG, Sonification, Visualization, Artificial Expressions, NIME, Visual Music} }
Russell Eric Dobda. 2013. Applied and Proposed Installations with Silent Disco Headphones for Multi-Elemental Creative Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 69–72. http://doi.org/10.5281/zenodo.1178502
Abstract
Download PDF DOI
Breaking musical and creative expression into elements, layers, and formulas, we explore how live listeners create unique sonic experiences from a palette of these elements and their interactions. To bring us to present-day creative applications, a social and historical overview of silent disco is presented. The advantages of this active listening interface are outlined through the author’s expressions requiring discrete elements, such as binaural beats, 3D audio effects, and multiple live music acts in the same space. Events and prototypes, as well as hardware and software proposals, for live multi-listener manipulation of multi-elemental sound and music are presented. Examples in audio production, sound healing, music composition, tempo phasing, and spatial audio illustrate the applications.
@inproceedings{Dobda2013, author = {Dobda, Russell Eric}, title = {Applied and Proposed Installations with Silent Disco Headphones for Multi-Elemental Creative Expression}, pages = {69--72}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178502}, url = {http://www.nime.org/proceedings/2013/nime2013_161.pdf}, keywords = {wireless headphones, music production, silent disco, headphone concert, binaural beats, multi-track audio, active music listening, sound healing, mobile clubbing, smart-phone apps} }
Kameron Christopher, Jingyin He, Raakhi Kapur, and Ajay Kapur. 2013. Kontrol: Hand Gesture Recognition for Music and Dance Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 267–270. http://doi.org/10.5281/zenodo.1178496
Abstract
Download PDF DOI
This paper describes Kontrol, a new hand interface that extends the intuitive control of electronic music to traditional instrumentalists and dancers. The goal of the authors has been to provide users with a device that is capable of detecting the highly intricate and expressive gestures of the master performer, in order for that information to be interpreted and used for control of electronic music. This paper discusses related devices, the architecture of Kontrol, its potential as a gesture recognition device, and several performance applications.
@inproceedings{Christopher2013, author = {Christopher, Kameron and He, Jingyin and Kapur, Raakhi and Kapur, Ajay}, title = {Kontrol: Hand Gesture Recognition for Music and Dance Interaction}, pages = {267--270}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178496}, url = {http://www.nime.org/proceedings/2013/nime2013_164.pdf}, keywords = {Hand controller, computational ethnomusicology, dance interface, conducting interface, Wekinator, wearable sensors} }
Yoon Chung Han, Byeong-jun Han, and Matthew Wright. 2013. Digiti Sonus: Advanced Interactive Fingerprint Sonification Using Visual Feature Analysis. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 136–141. http://doi.org/10.5281/zenodo.1178548
Abstract
Download PDF DOI
This paper presents a framework that transforms fingerprint patterns into audio. We describe Digiti Sonus, an interactive installation performing fingerprint sonification and visualization, including novel techniques for representing user-intended fingerprint expression as audio parameters. In order to enable personalized sonification and broaden the timbre of sound, the installation employs sound synthesis based on various visual feature analysis such as minutiae extraction, area, angle, and push pressure of fingerprints. The sonification results are discussed and the diverse timbres of sound retrieved from different fingerprints are compared.
@inproceedings{Han2013a, author = {Han, Yoon Chung and Han, Byeong-jun and Wright, Matthew}, title = {Digiti Sonus: Advanced Interactive Fingerprint Sonification Using Visual Feature Analysis}, pages = {136--141}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178548}, url = {http://www.nime.org/proceedings/2013/nime2013_170.pdf}, keywords = {Fingerprint, Fingerprint sonification, interactive sonification, sound synthesis, biometric data} }
Olivier Perrotin and Christophe d’Alessandro. 2013. Adaptive mapping for improved pitch accuracy on touch user interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 186–189. http://doi.org/10.5281/zenodo.1178640
Abstract
Download PDF DOI
Touch user interfaces such as touchpads or pen tablets are often used for continuous pitch control in synthesis devices. Usually, pitch is set at the contact point on the interface, thus introducing possible pitch inaccuracies at the note onset. This paper proposes a new algorithm, based on an adaptive attraction mapping, for improving initial pitch accuracy with touch user interfaces with continuous control. At each new contact on the interface, the algorithm adjusts the mapping to produce the most likely targeted note of the scale in the vicinity of the contact point. Then, pitch remains continuously adjustable as long as the contact is maintained, allowing for vibrato, portamento and other subtle melodic control. The results of experiments comparing the users' pitch accuracy with and without the help of the algorithm show that such a correction enables players to be sharply in tune at the moment of contact with the interface, regardless of the musical background of the player. Therefore, the dynamic mapping algorithm allows for a clean and accurate attack when playing touch user interfaces for controlling continuous pitch instruments such as voice synthesizers.
@inproceedings{Perrotin2013, author = {Perrotin, Olivier and d'Alessandro, Christophe}, title = {Adaptive mapping for improved pitch accuracy on touch user interfaces}, pages = {186--189}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178640}, url = {http://www.nime.org/proceedings/2013/nime2013_178.pdf}, keywords = {Sound synthesis control, touch user interfaces, pen tablet, automatic correction, accuracy, precision} }
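The following minimal Python sketch illustrates the kind of adaptive attraction mapping the abstract above describes: at touch onset the raw pitch is pulled to the nearest scale note, and subsequent finger motion stays continuous. It is not the authors' published algorithm; the C-major scale, the position-to-pitch mapping and all numeric values are assumptions made for illustration.

# Hypothetical sketch (not the authors' published algorithm): snap the pitch at
# touch onset to the nearest scale note, then track finger motion continuously.

def position_to_raw_pitch(x, semitones_per_unit=12.0, origin_midi=60.0):
    """Map a 1-D touch position to a raw (fractional) MIDI pitch value."""
    return origin_midi + x * semitones_per_unit

class AdaptiveAttractionMapper:
    def __init__(self, scale=(0, 2, 4, 5, 7, 9, 11)):
        self.scale = scale      # pitch classes of the target scale (C major here)
        self.offset = 0.0       # correction applied for the current contact

    def _nearest_scale_note(self, midi):
        candidates = [octave * 12 + pc
                      for octave in range(11) for pc in self.scale]
        return min(candidates, key=lambda n: abs(n - midi))

    def on_contact(self, x):
        """New touch: adjust the mapping so the onset lands on a scale note."""
        raw = position_to_raw_pitch(x)
        target = self._nearest_scale_note(raw)
        self.offset = target - raw
        return float(target)

    def on_move(self, x):
        """While the contact is held, pitch stays continuous (vibrato, portamento)."""
        return position_to_raw_pitch(x) + self.offset

mapper = AdaptiveAttractionMapper()
print(mapper.on_contact(0.37))   # onset snapped to the nearest scale note
print(mapper.on_move(0.39))      # subsequent motion tracked continuously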
Fumitaka Kikukawa, Sojiro Ishihara, Masato Soga, and Hirokazu Taki. 2013. Development of A Learning Environment for Playing Erhu by Diagnosis and Advice regarding Finger Position on Strings. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 271–276. http://doi.org/10.5281/zenodo.1178580
Abstract
Download PDF DOI
So far, there have been few studies of bowed string instruments because many parameters are involved in acquiring the skills and it is difficult to measure these parameters. Therefore, the aim of this paper is to propose a design of a learning environment for a novice learner to acquire accurate finger position skills. To achieve this aim, we developed a learning environment which can diagnose a learner's finger position and give the learner advice by using magnetic position sensors. The system shows three windows: a finger position window for visualization of finger position, a score window for diagnosing finger position along the score, and a command prompt window for showing the state of the system and advice. Finally, we evaluated the system in an experiment. The experimental group improved accuracy values for finger positions and also improved the accuracy of the pitches of sounds compared with the control group. These results show significant differences.
@inproceedings{Kikukawa2013, author = {Kikukawa, Fumitaka and Ishihara, Sojiro and Soga, Masato and Taki, Hirokazu}, title = {Development of A Learning Environment for Playing Erhu by Diagnosis and Advice regarding Finger Position on Strings}, pages = {271--276}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178580}, url = {http://www.nime.org/proceedings/2013/nime2013_181.pdf}, keywords = {Magnetic Position Sensors, String Instruments, Skill, Learning Environment, Finger Position} }
Brennon Bortz, Aki Ishida, Ivica Ico Bukvic, and R. Benjamin Knapp. 2013. Lantern Field: Exploring Participatory Design of a Communal, Spatially Responsive Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 73–78. http://doi.org/10.5281/zenodo.1178484
Abstract
Download PDF DOI
Mountains and Valleys (an anonymous name for confidentiality) is a communal, site-specific installation that takes shape as a spatially-responsive audio-visual field. The public participates in the creation of the installation, resulting in shared ownership of the work between both the artists and participants. Furthermore, the installation takes new shape in each realization, both to incorporate the constraints and affordances of each specific site, as well as to address the lessons learned from the previous iteration. This paper describes the development and execution of Mountains and Valleys over its most recent version, with an eye toward the next iteration at a prestigious art museum during a national festival in Washington, D.C.
@inproceedings{Bortz2013, author = {Bortz, Brennon and Ishida, Aki and Bukvic, Ivica Ico and Knapp, R. Benjamin}, title = {Lantern Field: Exploring Participatory Design of a Communal, Spatially Responsive Installation}, pages = {73--78}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178484}, url = {http://www.nime.org/proceedings/2013/nime2013_192.pdf}, keywords = {Participatory creation, communal interaction, fields, interactive installation, Japanese lanterns} }
Edmar Soria and Roberto Morales-Manzanares. 2013. Multidimensional sound spatialization by means of chaotic dynamical systems. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 79–83. http://doi.org/10.5281/zenodo.1178664
Abstract
Download PDF DOI
This work presents a general framework method for creating spatialization systems focused on electroacoustic and acousmatic music performance and creation. Although we used the logistic equation as orbit generator, any dynamical system could be suitable. The main idea lies in generating vectors of Rn with entries from data series of different orbits from a specific dynamical system. Such vectors will be called system vectors. Our proposal is to create ordered paths between those points or system vectors using the Splines Quark library by Felix, which allows us to generate smooth curves joining the points. Finally, interpolating that result with a fixed sample value, we are able to obtain specific and independent multidimensional panning trajectories for each speaker array and for any number of sound sources. Our contribution is intended to be at the very root of the compositional process, giving the creator a method for exploring new ways of spatial sound placement over time for a wide range of speaker arrangements. The advantage of using controlled chaotic dynamical systems like the logistic equation lies in the fact that the composer can freely and consciously choose between stable or irregular behaviour for the orbits that will generate his/her panning trajectories. Besides, with the use of isometries, it is possible to generate different related orbits with one single evaluation of the system. The use of the spline method in SuperCollider allows the possibility of joining and relating those values from orbits into a well defined and coherent general system. Further research will include controlling synthesis parameters in the same way we created panning trajectories.
@inproceedings{Soria2013, author = {Soria, Edmar and Morales-Manzanares, Roberto}, title = {Multidimensional sound spatialization by means of chaotic dynamical systems}, pages = {79--83}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178664}, url = {http://www.nime.org/proceedings/2013/nime2013_195.pdf}, keywords = {NIME, spatialization, dynamical systems, chaos} }
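A small Python sketch of the general idea described above: points sampled from logistic-map orbits are treated as system vectors and joined into a panning trajectory. The authors work in SuperCollider with a spline library; the plain linear interpolation and all parameter values below are assumptions made for illustration.

# Minimal sketch of the idea (assumed details: linear interpolation stands in
# for the SuperCollider spline library mentioned by the authors).
import numpy as np

def logistic_orbit(r, x0, n):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = r * x * (1.0 - x)
        xs[k] = x
    return xs

def panning_trajectory(r=3.7, x0=0.5, n_points=16, samples_per_segment=64):
    """Build a 2-D panning path from points sampled on two related orbits."""
    # Two orbits with slightly different seeds give the x and y coordinates
    # of the "system vectors" (points in R^2).
    px = logistic_orbit(r, x0, n_points)
    py = logistic_orbit(r, x0 + 1e-3, n_points)
    path = []
    for i in range(n_points - 1):
        t = np.linspace(0.0, 1.0, samples_per_segment, endpoint=False)
        path.append(np.column_stack([
            px[i] + t * (px[i + 1] - px[i]),
            py[i] + t * (py[i + 1] - py[i]),
        ]))
    return np.vstack(path)   # sequence of (x, y) positions in the unit square

traj = panning_trajectory()
print(traj.shape)            # (960, 2) control points for a moving source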
Ulysse Rosselet and Alain Renaud. 2013. Jam On: a new interface for web-based collective music performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 394–399. http://doi.org/10.5281/zenodo.1178650
Abstract
Download PDF DOI
This paper presents the musical interaction aspects of the design and development of a web-based interactive music collaboration system called Jam On. Following a design science approach, this system is being built according to principles taken from usability engineering and human computer interaction (HCI). The goal of the system is to allow people with little to no musical background to play a song collaboratively. The musicians control the musical content and structure of the song thanks to an interface relying on the free inking metaphor. One contribution of this interface is that it displays musical patterns of different lengths in the same space. The design of Jam On is based on a list of performance criteria aimed at ensuring the musicality of the performance and the interactivity of the technical system. The paper compares two alternative interfaces used for the system and explores the various stages of the design process aimed at making the system as musical and interactive as possible.
@inproceedings{Rosselet2013, author = {Rosselet, Ulysse and Renaud, Alain}, title = {Jam On: a new interface for web-based collective music performance}, pages = {394--399}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178650}, url = {http://www.nime.org/proceedings/2013/nime2013_196.pdf}, keywords = {Networked performance, interface design, mapping, web-based music application} }
Chi-Hsia Lai and Till Bovermann. 2013. Audience Experience in Sound Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 170–173. http://doi.org/10.5281/zenodo.1178590
Abstract
Download PDF DOI
This paper presents observations from investigating audience experience in a practice-based research project on live sound performance with electronics. In seeking to understand the communication flow and the engagement between performer and audience in this particular performance context, we designed an experiment that involved the following steps: (a) performing WOSAWIP at a new media festival, (b) conducting a qualitative research study with audience members and (c) analyzing the data for new insights.
@inproceedings{Lai2013, author = {Lai, Chi-Hsia and Bovermann, Till}, title = {Audience Experience in Sound Performance}, pages = {170--173}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178590}, url = {http://www.nime.org/proceedings/2013/nime2013_197.pdf}, keywords = {Audience Experience Study, Live Performance, Evaluation, Research Methods} }
Steve Everett. 2013. Sonifying Chemical Evolution. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 277–278. http://doi.org/10.5281/zenodo.1178508
Abstract
Download PDF DOI
This presentation-demonstration discusses the creation of FIRST LIFE, a 75-minute mixed media performance for string quartet, live audio processing, live motion capture video, and audience participation utilizing stochastic models of chemical data provided by Martha Grover's Research Group at the School of Chemical and Biomolecular Engineering at Georgia Institute of Technology. Each section of this work is constructed from contingent outcomes drawn from biochemical research exploring possible early Earth formations of organic compounds. Audio-video excerpts of the composition will be played during the presentation. Max patches for sonification and for generating stochastic processes will be demonstrated as well.
@inproceedings{Everett2013, author = {Everett, Steve}, title = {Sonifying Chemical Evolution}, pages = {277--278}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178508}, url = {http://www.nime.org/proceedings/2013/nime2013_198.pdf}, keywords = {Data-driven composition, sonification, live electronics-video} }
Chad McKinney and Nick Collins. 2013. An Interactive 3D Network Music Space. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 400–405. http://doi.org/10.5281/zenodo.1178606
Abstract
Download PDF DOI
In this paper we present Shoggoth, a 3D graphics based program for performing network music. In Shoggoth, users utilize video game style controls to navigate and manipulate a grid of malleable height maps. Sequences can be created by defining paths through the maps which trigger and modulate audio playback. With respect to a context of computer music performance, and specific problems in network music, design goals and technical challenges are outlined. The system is evaluated through established taxonomies for describing interfaces, followed by an enumeration of the merits of 3D graphics in networked performance. In discussing proposed improvements to Shoggoth, design suggestions for other developers and network musicians are drawn out.
@inproceedings{McKinney2013, author = {McKinney, Chad and Collins, Nick}, title = {An Interactive {3D} Network Music Space}, pages = {400--405}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178606}, url = {http://www.nime.org/proceedings/2013/nime2013_199.pdf}, keywords = {3D, Generative, Network, Environment} }
Sam Ferguson, Aengus Martin, and Andrew Johnston. 2013. A corpus-based method for controlling guitar feedback. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 541–546. http://doi.org/10.5281/zenodo.1178518
Abstract
Download PDF DOI
Feedback created by guitars and amplifiers is difficult to use in musical settings – parameters such as pitch and loudness are hard to specify precisely by fretting a string or by holding the guitar near an amplifier. This research investigates methods for controlling the level and pitch of the feedback produced by a guitar and amplifier, which are based on incorporating corpus-based control into the system. Two parameters are used to define the control parameter space – a simple automatic gain control system to control the output level, and a band-pass filter frequency for controlling the pitch of the feedback. This control parameter space is mapped to a corpus of sounds created by these parameters and recorded, and these sounds are analysed using software created for concatenative synthesis. Following this process, the descriptors taken from the analysis can be used to select control parameters from the feedback system.
@inproceedings{Ferguson2013, author = {Ferguson, Sam and Martin, Aengus and Johnston, Andrew}, title = {A corpus-based method for controlling guitar feedback}, pages = {541--546}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178518}, url = {http://www.nime.org/proceedings/2013/nime2013_200.pdf} }
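The sketch below illustrates the corpus-lookup step described above: each entry stores the control parameters that produced a recorded feedback sound along with descriptors analysed from that recording, and a requested target descriptor returns the nearest entry's parameters. The descriptor names, weights and corpus values are invented for illustration and are not taken from the paper.

# Hypothetical sketch of corpus-based control: each corpus entry stores the
# control parameters (gain target, band-pass centre frequency) that produced a
# recorded feedback sound, plus audio descriptors analysed from that recording.
import math

corpus = [
    # (gain_target, bandpass_hz, descriptors {pitch in Hz, loudness in dB})
    (0.6, 196.0, {"pitch": 196.0, "loudness": -18.0}),
    (0.8, 247.0, {"pitch": 247.0, "loudness": -12.0}),
    (0.7, 330.0, {"pitch": 330.0, "loudness": -15.0}),
]

def select_controls(target, weights={"pitch": 1.0, "loudness": 0.2}):
    """Return the control parameters whose analysed sound best matches target."""
    def distance(desc):
        return math.sqrt(sum(weights[k] * (desc[k] - target[k]) ** 2
                             for k in target))
    gain, freq, _ = min(corpus, key=lambda entry: distance(entry[2]))
    return gain, freq

print(select_controls({"pitch": 250.0, "loudness": -13.0}))  # -> (0.8, 247.0)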
Toshihiro KITA and Naotoshi Osaka. 2013. Providing a feeling of other remote learners’ presence in an online learning environment via realtime sonification of Moodle access log. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 198–199. http://doi.org/10.5281/zenodo.1178584
Abstract
Download PDF DOI
When people learn using Web-based educational resources such as an LMS (Learning Management System) or other e-learning related systems, they are sitting in front of their own computer at home and are often physically isolated from other online learners. In some courses they typically get in touch online with each other for doing particular group work assignments, but most of the time they must do their own learning tasks alone. In other courses only individual assignments and quizzes are provided, so the learners are alone all the time from the beginning until the end of the course. In order to maintain the learners' motivation, it helps to feel other learners doing the same learning activities and belonging to the same course. Communicating formally or informally with other learners via Social Networking Services or similar is one way for learners to get such a feeling, though in a way it might sometimes disturb their learning. Sonification of the access log of the e-learning system could be another, indirect way to provide such a feeling.
@inproceedings{KITA2013, author = {KITA, Toshihiro and Osaka, Naotoshi}, title = {Providing a feeling of other remote learners' presence in an online learning environment via realtime sonification of Moodle access log}, pages = {198--199}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178584}, url = {http://www.nime.org/proceedings/2013/nime2013_203.pdf}, keywords = {e-learning, online learners, Moodle, Csound, realtime sonification, OSC (Open Sound Control)} }
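As a rough illustration of the approach described above, the sketch below tails an access log and forwards each new line as an OSC message to a sound engine. The paper itself uses Csound; the OSC address, port and mapping here are assumptions, and the sketch relies on the third-party python-osc package.

# Illustrative sketch only: the message format and addresses are assumptions.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)   # assumed synth listening port

def tail_access_log(path):
    """Yield new lines appended to a Moodle-style access log."""
    with open(path) as f:
        f.seek(0, 2)                           # start at the end of the file
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(0.5)

for entry in tail_access_log("moodle_access.log"):
    fields = entry.split()
    if not fields:
        continue
    user_hash = hash(fields[0]) % 12           # crude per-user pitch class
    client.send_message("/moodle/event", [user_hash, entry])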
Steven Gelineck, Dan Overholt, Morten Büchert, and Jesper Andersen. 2013. Towards an Interface for Music Mixing based on Smart Tangibles and Multitouch. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 180–185. http://doi.org/10.5281/zenodo.1178532
Abstract
Download PDF DOI
This paper presents continued work towards the development of an interface for music mixing targeted towards expert sound technicians and producers. The mixing interface uses a stage metaphor mapping scheme where audio channels are represented as digital widgets on a 2D surface. These can be controlled by multi-touch or by smart tangibles, which are tangible blocks with embedded sensors. The smart tangibles developed for this interface are able to sense how they are grasped by the user. The paper presents the design of the mixing interface including the smart tangible, as well as a preliminary user study involving a hands-on focus group session where 5 different control technologies are contrasted and discussed. Preliminary findings suggest that smart tangibles were preferred, but that an optimal interface would include a combination of touch, smart tangibles and an extra function control tangible for extending the functionality of the smart tangibles. Finally, the interface should incorporate both an edit and a mix mode, the latter displaying very limited visual feedback in order to force users to focus their attention on listening instead of the interface.
@inproceedings{Gelineck2013, author = {Gelineck, Steven and Overholt, Dan and B{\"u}chert, Morten and Andersen, Jesper}, title = {Towards an Interface for Music Mixing based on Smart Tangibles and Multitouch}, pages = {180--185}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178532}, url = {http://www.nime.org/proceedings/2013/nime2013_206.pdf}, keywords = {music mixing, tangibles, smart objects, multi-touch, control surface, graspables, physical-digital interface, tangible user interface, wireless sensing, sketching in hardware} }
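A hedged illustration of a stage-metaphor mapping like the one described above, in which a channel widget's 2D position determines its gain and pan. The listener position, gain law and pan law are assumptions made for illustration, not details taken from the paper.

# Illustrative "stage metaphor" mapping (details assumed, not from the paper):
# a channel widget's 2-D position sets its gain (distance from the listener)
# and pan (horizontal offset from the listener).
import math

LISTENER = (0.5, 0.0)   # assumed listener position at the front-centre of the stage

def stage_mapping(x, y):
    """Map a widget position in the unit square to (gain, pan in [-1, 1])."""
    dx, dy = x - LISTENER[0], y - LISTENER[1]
    distance = math.hypot(dx, dy)
    gain = 1.0 / (1.0 + 3.0 * distance)        # closer widgets are louder
    pan = max(-1.0, min(1.0, 2.0 * dx))        # left/right of the listener
    return gain, pan

print(stage_mapping(0.5, 0.1))   # near centre stage: loud, centred
print(stage_mapping(0.9, 0.8))   # far stage-right: quieter, panned right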
Will W. W. Tang, Stephen Chan, Grace Ngai, and Hong-va Leong. 2013. Computer Assisted Melo-rhythmic Generation of Traditional Chinese Music from Ink Brush Calligraphy. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 84–89. http://doi.org/10.5281/zenodo.1178668
Abstract
Download PDF DOI
CalliMusic is a system developed for users to generate traditional Chinese music by writing Chinese ink brush calligraphy, turning the long-believed strong linkage between the two art forms with rich histories into reality. In addition to traditional calligraphy writing instruments (brush, ink and paper), a camera is the only addition needed to convert the motion of the ink brush into musical notes through a variety of mappings such as human-inspired, statistical and a hybrid. The design of the system, including details of each mapping and research issues encountered, is discussed. A user study of system performance suggests that the result is quite encouraging. The technique is, obviously, applicable to other related art forms with a wide range of applications.
@inproceedings{Tang2013, author = {Tang, Will W. W. and Chan, Stephen and Ngai, Grace and Leong, Hong-va}, title = {Computer Assisted Melo-rhythmic Generation of Traditional Chinese Music from Ink Brush Calligraphy}, pages = {84--89}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178668}, url = {http://www.nime.org/proceedings/2013/nime2013_208.pdf}, keywords = {Chinese Calligraphy, Chinese Music, Assisted Music Generation} }
Shoken Kaneko. 2013. A Function-Oriented Interface for Music Education and Musical Expressions: “the Sound Wheel.” Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 202–205. http://doi.org/10.5281/zenodo.1178574
Abstract
Download PDF DOI
In this paper, a function-oriented musical interface, named the “sound wheel”, is presented. This interface is designed to manipulate musical functions like pitch class sets, tonal centers and scale degrees, rather than the “musical surface”, i.e. the individual notes with concrete note heights. The sound wheel has an interface summarizing harmony theory, and the playing actions have explicit correspondence with musical functions. Easy usability is realized by semi-automatizing the conversion process from musical functions into the musical surface. Thus, the player can use this interface while concentrating on the harmonic structure, without having his attention caught by manipulating the musical surface. Subjective evaluation indicated the effectiveness of this interface as a tool helpful for understanding music theory. Because of such features, this interface can be used for education and interactive training in tonal music theory.
@inproceedings{Kaneko2013, author = {Kaneko, Shoken}, title = {A Function-Oriented Interface for Music Education and Musical Expressions: ``the Sound Wheel''}, pages = {202--205}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178574}, url = {http://www.nime.org/proceedings/2013/nime2013_21.pdf}, keywords = {Music education, Interactive tonal music generation} }
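The sketch below illustrates the kind of function-to-surface conversion described above, turning a tonal centre and a scale degree into concrete MIDI notes so that the player manipulates harmonic functions rather than individual notes. It is not the authors' implementation; restricting the example to diatonic triads in a major key is an assumption made for illustration.

# Sketch of the general idea (not the authors' implementation): convert a
# harmonic function (tonal centre + scale degree) into concrete MIDI notes.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]

def degree_to_triad(tonal_center_midi, degree):
    """Build the diatonic triad on a 1-based scale degree of a major key."""
    root = degree - 1
    pcs = [MAJOR_SCALE[(root + step) % 7] + 12 * ((root + step) // 7)
           for step in (0, 2, 4)]
    return [tonal_center_midi + pc for pc in pcs]

print(degree_to_triad(60, 1))   # I in C major -> [60, 64, 67]
print(degree_to_triad(60, 5))   # V in C major -> [67, 71, 74]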
Anders-Petter Andersson and Birgitta Cappelen. 2013. Designing Empowering Vocal and Tangible Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 406–412. http://doi.org/10.5281/zenodo.1178465
Abstract
Download PDF DOI
Our voice and body are important parts of our self-experience, and communication and relational possibilities. They gradually become more important for Interaction Design, due to increased development of tangible interaction and mobile communication. In this paper we present and discuss our work with voice and tangible interaction in our ongoing research project XXXXX. The goal is to improve health for families, adults and children with disabilities through use of collaborative, musical, tangible media. We build on the use of voice in Music Therapy and on a humanistic health approach. Our challenge is to design vocal and tangible interactive media that through use reduce isolation and passivity and increase empowerment for the users. We use sound recognition, generative sound synthesis, vibrations and cross-media techniques, to create rhythms, melodies and harmonic chords to stimulate body-voice connections, positive emotions and structures for actions.
@inproceedings{Andersson2013, author = {Andersson, Anders-Petter and Cappelen, Birgitta}, title = {Designing Empowering Vocal and Tangible Interaction}, pages = {406--412}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178465}, url = {http://www.nime.org/proceedings/2013/nime2013_210.pdf}, keywords = {Vocal Interaction, Tangible Interaction, Music & Health, Voice, Empowerment, Music Therapy, Resource-Oriented} }
Maria Astrinaki, Nicolas d’Alessandro, Loïc Reboursière, Alexis Moinet, and Thierry Dutoit. 2013. MAGE 2.0: New Features and its Application in the Development of a Talking Guitar. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 547–550. http://doi.org/10.5281/zenodo.1178467
Abstract
Download PDF DOI
This paper describes the recent progress in our approach to generate performative and controllable speech. The goal of the performative HMM-based speech and singing synthesis library, called Mage, is to have the ability to generate natural sounding speech with arbitrary speaker's voice characteristics, speaking styles and expressions, and at the same time to have accurate reactive user control over all the available production levels. Mage allows the user to arbitrarily change between voices, control speaking style or vocal identity, manipulate voice characteristics or alter the targeted context on-the-fly, while maintaining the naturalness and intelligibility of the output. To achieve these controls, it was essential to redesign and improve the initial library. This paper focuses on the improvements of the architectural design and the additional user controls, and provides an overview of a prototype where a guitar is used to reactively control the generation of a synthetic voice at various levels.
@inproceedings{Astrinaki2013, author = {Astrinaki, Maria and d'Alessandro, Nicolas and Reboursi{\`e}re, Lo{\"\i}c and Moinet, Alexis and Dutoit, Thierry}, title = {MAGE 2.0: New Features and its Application in the Development of a Talking Guitar}, pages = {547--550}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178467}, url = {http://www.nime.org/proceedings/2013/nime2013_214.pdf}, keywords = {speech synthesis, augmented guitar, hexaphonic guitar} }
Sang Won Lee and Georg Essl. 2013. Live Coding The Mobile Music Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 493–498. http://doi.org/10.5281/zenodo.1178592
Abstract
Download PDF DOI
We introduce a form of networked music performance where a performer plays a mobile music instrument while it is being implemented on the fly by a live coder. This setup poses a set of challenges in performing a music instrument which changes over time, and we suggest design guidelines such as making a smooth transition, varying adoption of change, and sharing information between the pair of performers. A proof-of-concept instrument is implemented on a mobile device using UrMus, applying the suggested guidelines. We hope that this model will expand the scope of live coding to distributed interactive systems, drawing on existing performance ideas from NIMEs.
@inproceedings{Lee2013a, author = {Lee, Sang Won and Essl, Georg}, title = {Live Coding The Mobile Music Instrument}, pages = {493--498}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178592}, url = {http://www.nime.org/proceedings/2013/nime2013_216.pdf}, keywords = {live coding, network music, on-the-fly instrument, mobile music} }
Jaeseong You and Red Wierenga. 2013. Remix_Dance 3: Improvisatory Sound Displacing on Touch Screen-Based Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 124–127. http://doi.org/10.5281/zenodo.1178696
Abstract
Download PDF DOI
Remix_Dance Music 3 is a four-channel quasi-fixed media piece that can be improvised by a single player operating the Max/MSP-based controller on a tablet such as an iPad. Within the fixed time limit of six minutes, the performer can freely (de)activate and displace the eighty-seven precomposed audio files that are simultaneously running, generating a sonic structure to one's liking out of the given network of musical possibilities. The interface is designed to invite an integral musical structuring, particularly in the dimensions of performatively underexplored (but still sonically viable) parameters that are largely based on MPEG-7 audio descriptors.
@inproceedings{You2013, author = {You, Jaeseong and Wierenga, Red}, title = {Remix_Dance 3: Improvisatory Sound Displacing on Touch Screen-Based Interface}, pages = {124--127}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178696}, url = {http://www.nime.org/proceedings/2013/nime2013_219.pdf}, keywords = {Novel controllers, interface for musical expression, musical mapping strategy, music cognition, music perception, MPEG-7} }
2013. A Drawing-Based Digital Music Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 499–502. http://doi.org/10.5281/zenodo.1178566
Abstract
Download PDF DOI
This paper presents an innovative digital musical instrument, the Illusio, based on an augmented multi-touch interface that combines a traditional multi-touch surface and a device similar to a guitar pedal. Illusio allows users to perform by drawing and by associating the sketches with live loops. These loops are manipulated based on a concept called hierarchical live looping, which extends traditional live looping through the use of a musical tree, in which any music operation applied to a given node affects all its children nodes. Finally, we evaluate the instrument considering the performer and the audience, which are two of the most important stakeholders involved in the use, conception, and perception of a musical device. The results achieved are encouraging and led to useful insights about how to improve instrument features, performance and usability.
@inproceedings{Barbosa2013, author = {}, title = {A Drawing-Based Digital Music Instrument}, pages = {499--502}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178566}, url = {http://www.nime.org/proceedings/2013/nime2013_220.pdf}, keywords = {Digital musical instruments, augmented multi-touch, hierarchical live looping, interaction techniques, evaluation methodology} }
Avneesh Sarwate and Rebecca Fiebrink. 2013. Variator: A Creativity Support Tool for Music Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 279–282. http://doi.org/10.5281/zenodo.1178654
Abstract
Download PDF DOI
The Variator is a compositional assistance tool that allows users to quickly produce and experiment with variations on musical objects, such as chords, melodies, and chord progressions. The transformations performed by the Variator can range from standard counterpoint transformations (inversion, retrograde, transposition) to more complicated custom transformations, and the system is built to encourage the writing of custom transformations. This paper explores the design decisions involved in creating a compositional assistance tool, describes the Variator interface and a preliminary set of implemented transformation functions, analyzes the results of the evaluations of a prototype system, and lays out future plans for expanding upon that system, both as a stand-alone application and as the basis for an open source/collaborative community where users can implement and share their own transformation functions.
@inproceedings{Sarwate2013, author = {Sarwate, Avneesh and Fiebrink, Rebecca}, title = {Variator: A Creativity Support Tool for Music Composition}, pages = {279--282}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178654}, url = {http://www.nime.org/proceedings/2013/nime2013_224.pdf}, keywords = {Composition assistance tool, computer-aided composition, social composition} }
Mick Grierson and Chris Kiefer. 2013. NoiseBear: A Malleable Wireless Controller Designed In Participation with Disabled Children. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST. http://doi.org/10.5281/zenodo.1178536
Abstract
Download PDF DOI
NoiseBear is a wireless malleable controller designed for, and in participation with, physically and cognitively disabled children. The aim of the project was to produce a musical controller that was robust, and flexible enough to be used in a wide range of interactive scenarios in participatory design workshops. NoiseBear demonstrates an open-ended system for designing wireless malleable controllers in different shapes. It uses pressure-sensitive material made from conductive thread and polyester cushion stuffing, to give the feel of a soft toy. The sensor networks with other devices using the Bluetooth Low Energy protocol, running on a BlueGiga BLE112 chip, which contains an embedded 8051 processor that manages the sensor. NoiseBear has undergone an initial formative evaluation in a workshop session with four autistic children, and continues to evolve in a series of participatory design workshops. The evaluation showed that the controller could be engaging for the children to use, and highlighted some technical limitations of the design. Solutions to these limitations are discussed, along with plans for future design iterations.
@inproceedings{Grierson2013, author = {Grierson, Mick and Kiefer, Chris}, title = {NoiseBear: A Malleable Wireless Controller Designed In Participation with Disabled Children}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178536}, url = {http://www.nime.org/proceedings/2013/nime2013_227.pdf}, keywords = {malleable controllers, assistive technology, multiparametric mapping} }
kazuhiro jo. 2013. cutting record — a record without (or with) prior acoustic information. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 283–286. http://doi.org/10.5281/zenodo.1178578
Abstract
Download PDF DOI
In this paper, we present a method to produce analog records with standard vector graphics software (i.e. Adobe Illustrator) and two different types of cutting machines: laser cutter and paper cutter. The method enables us to engrave wave forms on the surface of diverse materials such as paper, wood, acrylic, and leather, without or with prior acoustic information (i.e. digital audio data). The results can be played with standard record players. We present the method with its technical specification and explain our initial trials with two performances and a workshop. The work examines the role of musical reproduction in the age of personal fabrication. A video of the performance: http://www.youtube.com/watch?v=vbCLe06P7j0
@inproceedings{jo2013, author = {kazuhiro jo}, title = {cutting record --- a record without (or with) prior acoustic information}, pages = {283--286}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178578}, url = {http://www.nime.org/proceedings/2013/nime2013_228.pdf}, keywords = {Analog Record, Personal Fabrication, Media Archaeology} }
Niklas Klügel and Georg Groh. 2013. Towards Mapping Timbre to Emotional Affect. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 525–530. http://doi.org/10.5281/zenodo.1178586
Abstract
Download PDF DOI
Controlling the timbre generated by an audio synthesizer in a goal-oriented way requires a profound understanding of the synthesizer's manifold structural parameters. Especially shaping timbre expressively to communicate emotional affect requires expertise. Therefore, novices in particular may not be able to adequately control timbre in view of articulating the wealth of affects musically. In this context, the focus of this paper is the development of a model that can represent a relationship between timbre and an expected emotional affect. The results of the evaluation of the presented model are encouraging, which supports its use in steering or augmenting the control of the audio synthesis. We explicitly envision this paper as a contribution to the field of Synthesis by Analysis in the broader sense, albeit being potentially suitable to other related domains.
@inproceedings{Klugel2013, author = {Kl{\"u}gel, Niklas and Groh, Georg}, title = {Towards Mapping Timbre to Emotional Affect}, pages = {525--530}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178586}, url = {http://www.nime.org/proceedings/2013/nime2013_23.pdf}, keywords = {Emotional affect, Timbre, Machine Learning, Deep Belief Networks, Analysis by Synthesis} }
Shawn Greenlee. 2013. Graphic Waveshaping. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 287–290. http://doi.org/10.5281/zenodo.1178534
Abstract
Download PDF DOI
In the design of recent systems, I have advanced techniques that position graphic synthesis methods in the context of solo, improvisational performance. Here, the primary interfaces for musical action are prepared works on paper, scanned by digital video cameras which in turn pass image data on to software for analysis and interpretation as sound synthesis and signal processing procedures. The focus of this paper is on one of these techniques, a process I describe as graphic waveshaping. A discussion of graphic waveshaping in basic form and as utilized in my performance work, (title omitted), is offered. In the latter case, the performer's objective is to guide the interpretation of images as sound, constantly tuning and retuning the conversion while selecting and scanning images from a large catalog. Due to the erratic nature of the system and the precondition that image to sound relationships are unfixed, the performance situation is replete with the discovery of new sounds and the circumstances that bring them into play. Graphic waveshaping may be understood as non-linear distortion synthesis with time-varying transfer functions stemming from visual scan lines. As a form of graphic synthesis, visual images function as motivations for sound generation. There is a strategy applied for creating one out of the other. However, counter to compositionally oriented forms of graphic synthesis where one may assign image characteristics to musical parameters such as pitches, durations, dynamics, etc., graphic waveshaping is foremost a processing technique, as it distorts incoming signals according to graphically derived transfer functions. As such, it may also be understood as an audio effect; one that in my implementations is particularly feedback dependent, oriented towards shaping the erratic behavior of synthesis patches written in Max/MSP/Jitter. Used in this manner, graphic waveshaping elicits an emergent system behavior conditioned by visual features.
@inproceedings{Greenlee2013, author = {Greenlee, Shawn}, title = {Graphic Waveshaping}, pages = {287--290}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178534}, url = {http://www.nime.org/proceedings/2013/nime2013_232.pdf}, keywords = {Graphic waveshaping, graphic synthesis, waveshaping synthesis, graphic sound, drawn sound} }
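A minimal Python sketch of graphic waveshaping as described above, using one image scan line as a waveshaping transfer function that distorts an incoming signal. Treating the scan line as an 8-bit lookup table rescaled to [-1, 1] is an assumption about the details, not the author's implementation (which runs in Max/MSP/Jitter).

# Minimal sketch of the idea: one grayscale scan line becomes a lookup
# transfer function; the input signal is distorted by table lookup.
import numpy as np

def scanline_to_transfer(scanline_u8):
    """Turn one row of 8-bit pixel values into a waveshaping transfer function."""
    return scanline_u8.astype(np.float64) / 127.5 - 1.0

def waveshape(signal, transfer):
    """Distort a [-1, 1] signal by table lookup into the transfer function."""
    idx = np.clip((signal + 1.0) * 0.5 * (len(transfer) - 1), 0, len(transfer) - 1)
    return transfer[idx.astype(int)]

# A sine input distorted by a scan line (random values stand in for an image row).
t = np.linspace(0, 1, 44100, endpoint=False)
sine = np.sin(2 * np.pi * 220 * t)
scanline = np.random.randint(0, 256, size=512, dtype=np.uint8)
out = waveshape(sine, scanline_to_transfer(scanline))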
Tae Hong Park and Oriol Nieto. 2013. Fortissimo: Force-Feedback for Mobile Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 291–294. http://doi.org/10.5281/zenodo.1178638
Abstract
Download PDF DOI
In this paper we present a highly expressive, robust, and easy-to-build system that provides force-feedback interaction for mobile computing devices (MCD). Our system, which we call Fortissimo (ff), utilizes standard built-in accelerometer measurements in conjunction with generic foam padding that can be easily placed under a device to render an expressive force-feedback performance setup. Fortissimo allows for musically expressive user-interaction with added force-feedback, which is integral for any musical controller – a feature that is absent for touchscreen-centric MCDs. This paper details ff core concepts, hardware and software designs, and expressivity of musical features.
@inproceedings{Park2013c, author = {Park, Tae Hong and Nieto, Oriol}, title = {Fortissimo: Force-Feedback for Mobile Devices}, pages = {291--294}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178638}, url = {http://www.nime.org/proceedings/2013/nime2013_233.pdf}, keywords = {force-feedback, expression, mobile computing devices, mobile music} }
Jeffrey Scott, Mickey Moorhead, Justin Chapman, Ryan Schwabe, and Youngmoo E. Kim. 2013. Personalized Song Interaction Using a Multi Touch Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 417–420. http://doi.org/10.5281/zenodo.1178660
Abstract
Download PDF DOI
Digital music technology has transformed the listener experience and created new avenues for creative interaction and expression within the musical domain. The barrier to music creation, distribution and collaboration has been reduced, leading to entirely new ecosystems of musical experience. Software editing tools such as digital audio workstations (DAW) allow nearly limitless manipulation of source audio into new sonic elements and textures and have promoted a culture of recycling and repurposing of content via mashups and remixes. We present a multi-touch application that allows a user to customize their listening experience by blending various versions of a song in real time.
@inproceedings{Scott2013, author = {Scott, Jeffrey and Moorhead, Mickey and Chapman, Justin and Schwabe, Ryan and Kim, Youngmoo E.}, title = {Personalized Song Interaction Using a Multi Touch Interface}, pages = {417--420}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178660}, url = {http://www.nime.org/proceedings/2013/nime2013_234.pdf}, keywords = {Multi-track, Multi-touch, Mobile devices, Interactive media} }
Alyssa Batula, Manu Colacot, David Grunberg, and Youngmoo Kim. 2013. Using Audio and Haptic Feedback to Improve Pitched Percussive Instrument Performance in Humanoids. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 295–300. http://doi.org/10.5281/zenodo.1178472
Abstract
Download PDF DOI
We present a system which allows an adult-sized humanoid to determine whether or not it is correctly playing a pitched percussive instrument to produce a desired sound. As human musicians utilize sensory feedback to determine if they are successfully using their instruments to generate certain pitches, robot performers should be capable of the same feat. We present a note classification algorithm that uses auditory and haptic feedback to decide if a note was well- or poorly-struck. This system is demonstrated using Hubo, an adult-sized humanoid, which has been enabled to actuate pitched pipes using mallets. We show that, with this system, Hubo is able to determine whether or not a note was played correctly.
@inproceedings{Batula2013, author = {Batula, Alyssa and Colacot, Manu and Grunberg, David and Kim, Youngmoo}, title = {Using Audio and Haptic Feedback to Improve Pitched Percussive Instrument Performance in Humanoids}, pages = {295--300}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178472}, url = {http://www.nime.org/proceedings/2013/nime2013_235.pdf}, keywords = {Musical robots, humanoids, auditory feedback, haptic feedback} }
Jim Torresen, Yngve Hafting, and Kristian Nymoen. 2013. A New Wi-Fi based Platform for Wireless Sensor Data Collection. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 337–340. http://doi.org/10.5281/zenodo.1178680
Abstract
Download PDF DOI
A custom designed WLAN (Wireless Local Area Network) based sensor interface is presented in this paper. It is aimed at wirelessly interfacing a large variety of sensors to supplement the built-in sensors in smart phones and media players. The target application area is the collection of human-related motion and condition data to be applied in musical applications. The interface is based on commercially available units and allows for up to nine sensors. The benefit of using WLAN based communication is a high data rate with low latency. Our experiments show that the average transmission time is less than 2 ms for a single sensor. Further, it is operational for a whole day without battery recharging.
@inproceedings{Torresen2013, author = {Torresen, Jim and Hafting, Yngve and Nymoen, Kristian}, title = {A New Wi-Fi based Platform for Wireless Sensor Data Collection}, pages = {337--340}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178680}, url = {http://www.nime.org/proceedings/2013/nime2013_236.pdf}, keywords = {wireless communication, sensor data collection, WLAN, Arduino} }
Ståle A. Skogstad. 2013. Filtering Motion Capture Data for Real-Time Applications. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 142–147. http://doi.org/10.5281/zenodo.1178662
Abstract
Download PDF DOI
In this paper we present some custom designed filters for real-time motion capture applications. Our target application is so-called motion controllers, i.e. systems that interpret hand motion for musical interaction. In earlier research we found effective methods to design nearly optimal filters for real-time applications. However, to be able to design suitable filters for our target application, it is necessary to establish the typical frequency content of the motion capture data we want to filter. This will in turn allow us to determine a reasonable cutoff frequency for the filters. We have therefore conducted an experiment in which we recorded the hand motion of 20 subjects. The frequency spectra of these data, together with a method similar to the residual analysis method, were then used to determine reasonable cutoff frequencies. Based on this experiment, we propose three cutoff frequencies for different scenarios and filtering needs: 5, 10 and 15 Hz, which correspond to heavy, medium and light filtering respectively. Finally, we propose a range of real-time filters applicable to motion controllers. In particular, low-pass filters and low-pass differentiators of degrees one and two, which in our experience are the most useful filters for our target application.
@inproceedings{Skogstad2013, author = {Skogstad, St{\aa}le A.}, title = {Filtering Motion Capture Data for Real-Time Applications}, pages = {142--147}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178662}, url = {http://www.nime.org/proceedings/2013/nime2013_238.pdf} }
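The entry above proposes cutoff frequencies of 5, 10 and 15 Hz for heavy, medium and light filtering of motion capture data. The sketch below shows the general real-time filtering pattern with a plain one-pole low-pass; it is not the authors' near-optimal filter design, and the 100 Hz sensor rate is an assumption.

# Not the authors' filter designs: a plain one-pole low-pass, parameterised by
# the cutoff frequencies they propose, to show the sample-by-sample pattern.
import math

class OnePoleLowpass:
    def __init__(self, cutoff_hz, sample_rate_hz=100.0):
        # Coefficient for y[n] = y[n-1] + a * (x[n] - y[n-1])
        self.a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate_hz)
        self.y = 0.0

    def process(self, x):
        self.y += self.a * (x - self.y)
        return self.y

heavy, medium, light = (OnePoleLowpass(fc) for fc in (5.0, 10.0, 15.0))
for sample in [0.0, 0.8, 1.0, 0.9, 1.1, 1.0]:   # incoming mocap coordinate stream
    print(heavy.process(sample), medium.process(sample), light.process(sample))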
Rebecca Kleinberger. 2013. PAMDI Music Box: Primarily Analogico-Mechanical, Digitally Iterated Music Box. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 19–20. http://doi.org/10.5281/zenodo.1178588
Abstract
Download PDF DOI
PAMDI is an electromechanical music controller based on an expansion of the common metal music box. Our system enables an augmentation of the musical properties by adding different musical channels triggered and parameterized by natural gestures during the “performance”. All the channels are generated from the original melody recorded once at the start. To capture and treat the different expressive parameters, both natural and intentional, our platform is composed of a metallic structure supporting sensors. The measured values are processed by an Arduino system that finally sends the results by serial communication to a Max/MSP patch for signal treatment and modification. We will explain how our embedded instrument aims to bring a certain awareness to the player of the mapping and the potential musical freedom of the very specific, and not that much automatic, instrument that is a music box. We will also address how our design tackles the different questions of mapping, ergonomics and expressiveness while choosing the controller modalities and the parameters to be sensed.
@inproceedings{Kleinberger2013, author = {Kleinberger, Rebecca}, title = {{PAM}DI Music Box: Primarily Analogico-Mechanical, Digitally Iterated Music Box}, pages = {19--20}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178588}, url = {http://www.nime.org/proceedings/2013/nime2013_24.pdf}, keywords = {Tangible interface, musical controller, music box, mechanical and electronic coupling, mapping.} }
Andrew McPherson. 2013. Portable Measurement and Mapping of Continuous Piano Gesture. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 152–157. http://doi.org/10.5281/zenodo.1178610
Abstract
Download PDF DOI
This paper presents a portable optical measurement system for capturing continuous key motion on any piano. Very few concert venues have MIDI-enabled pianos, and many performers depend on the versatile but discontinued Moog PianoBar to provide MIDI from a conventional acoustic instrument. The scanner hardware presented in this paper addresses the growing need for alternative solutions while surpassing existing systems in the level of detail measured. Continuous key position on both black and white keys is gathered at a 1 kHz sample rate. Software extracts traditional and novel features of keyboard touch from each note, which can be flexibly mapped to sound using MIDI or Open Sound Control. RGB LEDs provide rich visual feedback to assist the performer in interacting with more complex sound mapping arrangements. An application is presented to the magnetic resonator piano, an electromagnetically-augmented acoustic grand piano which is performed using continuous key position measurements.
@inproceedings{McPherson2013, author = {McPherson, Andrew}, title = {Portable Measurement and Mapping of Continuous Piano Gesture}, pages = {152--157}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178610}, url = {http://www.nime.org/proceedings/2013/nime2013_240.pdf}, keywords = {Piano, keyboard, optical sensing, gesture sensing, visual feedback, mapping, magnetic resonator piano} }
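For context, the sketch below shows one way a continuous key position could be forwarded as Open Sound Control messages. The address pattern, port and the python-osc library are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: sending continuous key position over OSC.
# The address pattern and port are assumptions for illustration.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)  # e.g. a synth listening locally

def send_key_position(midi_note: int, position: float) -> None:
    """Send a normalized key position (0.0 = key up, 1.0 = fully depressed)."""
    client.send_message(f"/key/{midi_note}/position", position)

# One frame of a 1 kHz key scan might then be forwarded like this:
send_key_position(60, 0.42)
```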
Sam Tarakajian, David Zicarelli, and Joshua Clayton. 2013. Mira: Liveness in iPad Controllers for Max/MSP. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 421–426. http://doi.org/10.5281/zenodo.1178670
Abstract
Download PDF DOI
Mira is an iPad app for controlling Max patchers in real time with minimal configuration. This submission includes a paper describing Mira’s design and implementation, as well as a demo showing how Mira works with Max. The Mira iPad app discovers open Max patchers automatically using the Bonjour protocol, connects to them over WiFi and negotiates a description of the Max patcher. As objects change position and appearance, Mira makes sure that the interface on the iPad is kept up to date. Mira eliminates the need for an explicit mapping step between the interface and the system being controlled. The user is never asked to input an IP address, nor to configure the mapping between interface objects on the iPad and those in the Max patcher. So the prototyping composer is free to rapidly configure and reconfigure the interface.
@inproceedings{Tarakajian2013, author = {Tarakajian, Sam and Zicarelli, David and Clayton, Joshua}, title = {Mira: Liveness in iPad Controllers for Max/MSP}, pages = {421--426}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178670}, url = {http://www.nime.org/proceedings/2013/nime2013_241.pdf}, keywords = {NIME, Max/MSP/Jitter, Mira, ipad, osc, bonjour, zeroconf} }
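The automatic discovery described above relies on Bonjour (zeroconf/mDNS). As a rough illustration of that mechanism only, the sketch below browses the local network for a service using the python-zeroconf library; the service type is a made-up placeholder, since Mira's actual registration is not given in the abstract.

```python
# Hedged sketch of Bonjour/zeroconf service discovery, the mechanism Mira
# uses to find open Max patchers. The service type below is a placeholder,
# not Mira's actual registration.
import time
from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

class PatcherListener(ServiceListener):
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print(f"found {name} at {info.parsed_addresses()[0]}:{info.port}")

    def remove_service(self, zc, type_, name):
        print(f"{name} went away")

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_example-patcher._tcp.local.", PatcherListener())
time.sleep(5)  # browse for a few seconds
zc.close()
```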
Taehun Kim and Stefan Weinzierl. 2013. Modelling Gestures in Music Performance with Statistical Latent-State Models. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 427–430. http://doi.org/10.5281/zenodo.1178582
Abstract
Download PDF DOI
We discuss how to model "gestures" in music performance with statistical latent-state models. A music performance can be described with compositional and expressive properties varying over time. In those property changes we often observe particular patterns, and such a pattern can be understood as a "gesture", since it serves as a medium transferring specific emotions. Assuming a finite number of latent states on each property's value changes, we can describe those gestures with statistical latent-state models and train them with unsupervised learning algorithms. In addition, model entropy provides a measure of the different effects of each property on the gesture implementation. Test results on several real performances indicate that the trained models can capture the structure of gestures observed in the given performances and detect their boundaries. The entropy-based measure was informative for understanding the effectiveness of each property on the gesture implementation. Test results on large corpora indicate that our model has potential for further improvement.
@inproceedings{Kim2013, author = {Kim, Taehun and Weinzierl, Stefan}, title = {Modelling Gestures in Music Performance with Statistical Latent-State Models}, pages = {427--430}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178582}, url = {http://www.nime.org/proceedings/2013/nime2013_244.pdf}, keywords = {Musical gestures, performance analysis, unsupervised machine learning} }
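The abstract does not name a specific model, but hidden Markov models are a common choice of statistical latent-state model; the sketch below fits one to a sequence of expressive property values using hmmlearn, purely as an illustration of the general approach (the library, the number of states and the synthetic data are assumptions).

```python
# Illustrative sketch: fitting a latent-state model (here a Gaussian HMM)
# to a time series of expressive properties, e.g. tempo and loudness per beat.
# hmmlearn and the choice of 4 states are assumptions, not the paper's setup.
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic stand-in for per-beat (tempo, loudness) observations.
rng = np.random.default_rng(0)
observations = rng.normal(size=(200, 2))

model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(observations)               # unsupervised training
states = model.predict(observations)  # most likely latent state per beat

# Boundaries of "gestures" can be read off where the latent state changes.
boundaries = np.flatnonzero(np.diff(states)) + 1
print(boundaries)
```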
Laurel Pardue and William Sebastian. 2013. Hand-Controller for Combined Tactile Control and Motion Tracking. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 90–93. http://doi.org/10.5281/zenodo.1178630
Abstract
Download PDF DOI
The Hand Controller is a new interface designed to enable a performer to achieve detailed control of audio and visual parameters through a tangible interface combined with motion tracking of the hands to capture large-scale physical movement. Such movement empowers an expressive dynamic for both performer and audience. However, tracking movements in free space is notoriously difficult for virtuosic performance. The lack of tactile feedback leads to difficulty learning the repeated muscle movements required for precise control. In comparison, the hands have shown an impressive ability to master complex motor tasks through feel. The Hand Controller uses both modes of interaction. Electromagnetic field tracking enables 6D hand motion tracking, while two options provide tactile interaction: a set of tracks that provide linear positioning and applied finger pressure, or a set of trumpet-like slider keys that provide continuous data describing key depth. Thumbs actuate additional pressure-sensitive buttons. The two haptic interfaces are mounted to a comfortable hand grip that allows a significant range of reach, and lets pressure be applied without restricting the hand movement that is highly desirable in expressive motion.
@inproceedings{Pardue2013, author = {Pardue, Laurel and Sebastian, William}, title = {Hand-Controller for Combined Tactile Control and Motion Tracking}, pages = {90--93}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178630}, url = {http://www.nime.org/proceedings/2013/nime2013_245.pdf}, keywords = {hand, interface, free gesture, force sensing resistor, new musical instrument, tactile feedback, position tracking} }
Antonius Wiriadjaja. 2013. Gamelan Sampul: Laptop Sleeve Gamelan. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 469–470. http://doi.org/10.5281/zenodo.1178688
Abstract
Download PDF DOI
The Gamelan Sampul is a laptop sleeve with embedded circuitry that allows users to practice playing Javanese gamelan instruments without a full set of instruments. It is part of a larger project that aims to develop a set of portable and mobile tools for learning, recording and performing classical Javanese gamelan music. The accessibility of a portable Javanese gamelan set introduces the musical genre to audiences who have never experienced this traditional music before, passing down long-established customs to future generations. But it also raises the question of what is and what isn’t appropriate to the musical tradition. The Gamelan Sampul attempts to introduce new technology to traditional folk music while staying sensitive to cultural needs.
@inproceedings{Wiriadjaja2013, author = {Wiriadjaja, Antonius}, title = {Gamelan Sampul: Laptop Sleeve Gamelan}, pages = {469--470}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178688}, url = {http://www.nime.org/proceedings/2013/nime2013_246.pdf}, keywords = {Physical computing, product design, traditional folk arts, gamelan} }
Laurel Pardue and Andrew McPherson. 2013. Near-Field Optical Reflective Sensing for Bow Tracking. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 363–368. http://doi.org/10.5281/zenodo.1178628
Abstract
Download PDF DOI
This paper explores the potential of near-field optical reflective sensing for musical instrument gesture capture. Near-field optical sensors are inexpensive, portable and non-intrusive, and their high spatial and temporal resolution makes them ideal for tracking the finer motions of instrumental performance. The paper discusses general optical sensor performance with detailed investigations of three sensor models. An application is presented to violin bow position tracking using reflective sensors mounted on the stick. Bow tracking remains a difficult task, and many existing solutions are expensive, bulky, or offer limited temporal resolution. Initial results indicate that bow position and pressure can be derived from optical measurements of the hair-string distance, and that similar techniques may be used to measure bow tilt.
@inproceedings{Pardue2013a, author = {Pardue, Laurel and McPherson, Andrew}, title = {Near-Field Optical Reflective Sensing for Bow Tracking}, pages = {363--368}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178628}, url = {http://www.nime.org/proceedings/2013/nime2013_247.pdf}, keywords = {optical sensor, reflectance, LED, photodiode, phototransistor, violin, bow tracking, gesture, near-field sensing} }
Qian Liu, Yoon Chung Han, JoAnn Kuchera-Morin, and Matthew Wright. 2013. Cloud Bridge: a Data-driven Immersive Audio-Visual Software Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 431–436. http://doi.org/10.5281/zenodo.1178596
Abstract
Download PDF DOI
Cloud Bridge is an immersive interactive audiovisual software interface for both data exploration and artistic creation. It explores how information can be sonified and visualized to facilitate findings, and eventually become interactive musical compositions. Cloud Bridge functions as a multi-user, multimodal instrument. The data represents the history of items checked out by patrons of the Seattle Public Library. A single user or a group of users functioning as a performance ensemble participate in the piece by interactively querying the database using iOS devices. Each device is associated with a unique timbre and color for contributing to the piece, which appears on large shared screens and a surround-sound system for all participants and observers. Cloud Bridge leads to a new media interactive interface utilizing audio synthesis, visualization and real-time interaction.
@inproceedings{Liu2013, author = {Liu, Qian and Han, Yoon Chung and Kuchera-Morin, JoAnn and Wright, Matthew}, title = {Cloud Bridge: a Data-driven Immersive Audio-Visual Software Interface}, pages = {431--436}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178596}, url = {http://www.nime.org/proceedings/2013/nime2013_250.pdf}, keywords = {Data Sonification, Data Visualization, Sonification, User Interface, Sonic Interaction Design, Open Sound Control} }
Michael Everman and Colby Leider. 2013. Toward DMI Evaluation Using Crowd-Sourced Tagging Techniques. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 437–440. http://doi.org/10.5281/zenodo.1178510
Abstract
Download PDF DOI
Few formal methods exist for evaluating digital musical instruments (DMIs). This paper proposes a novel method of DMI evaluation using crowd-sourced tagging. One of the challenges in devising such methods is that the evaluation of a musical instrument is an inherently qualitative task. While previously proposed methods have focused on quantitative methods and largely ignored the qualitative aspects of the task, tagging is well suited to this and is already used to classify things such as websites and musical genres. These, like DMIs, do not lend themselves to simple categorization or parameterization. Using the social tagging method, participating individuals assign descriptive labels, or tags, to a DMI. A DMI can then be evaluated by analyzing the tags associated with it. Metrics can be generated from the tags assigned to the instrument, and comparisons made to other instruments. This can give the designer valuable insight into where the strengths of the design lie and where improvements may be needed. A prototype system for testing the method is proposed in the paper and is currently being implemented as part of an ongoing DMI evaluation project. It is expected that results from the prototype will be available to report by the time of the conference in May.
@inproceedings{Everman2013, author = {Everman, Michael and Leider, Colby}, title = {Toward {DMI} Evaluation Using Crowd-Sourced Tagging Techniques}, pages = {437--440}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178510}, url = {http://www.nime.org/proceedings/2013/nime2013_251.pdf}, keywords = {Evaluation, tagging, digital musical instrument} }
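As a toy illustration of the kind of metric such a tagging system could produce, the snippet below turns tag counts for two instruments into vectors and compares them with cosine similarity. The tags and the choice of metric are assumptions, since the paper does not specify them.

```python
# Toy sketch: comparing two DMIs by the tags users assigned to them.
# The tag sets and the cosine-similarity metric are illustrative assumptions.
from collections import Counter
from math import sqrt

tags_a = Counter({"expressive": 12, "intuitive": 8, "fragile": 3})
tags_b = Counter({"expressive": 5, "loud": 9, "intuitive": 2})

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two tag-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

print(f"tag-profile similarity: {cosine_similarity(tags_a, tags_b):.2f}")
```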
Sangbong Nam. 2013. Musical Poi (mPoi). Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 148–151. http://doi.org/10.5281/zenodo.1178622
Abstract
Download PDF DOI
This paper describes the Musical Poi (mPoi), a unique sensor-based musical instrument rooted in the ancient art of poi spinning. The trajectory of circular motion drawn by the performer and the momentum of the mPoi instrument are converted into an energetic and vibrant sound, creating a spiritual and meditative soundscape that opens up the aura and clears thought forms away. The mPoi project and its concepts are introduced first, and then its interaction with a performer is discussed. The mPoi project seeks to develop a prototype for a set of mobile musical instruments based on electronic motion sensors and circuit boards. This technology is installed in an egg-shaped structure and allows communication between a performer and the mPoi instrument. The principal motivation for the mPoi project has been a desire to develop an extensible interface that will support Poi performance, a style of performance art originating with the Maori people of New Zealand that involves swinging tethered weights through a variety of rhythmical and geometric patterns. As an extension of the body and an expansion of the movement, the mPoi utilizes the creative performance of Poi to make spatial and spiritual sound and music. The aims of the mPoi project are: to create a prototype of the mPoi instrument that includes a circuit board connecting the instrument to a sensor; to develop software, including the programming of the circuit board and the sound generation; and to make a new artistic expression that refines the captured sound into musical notes. The creative part of the project is to design a unique method to translate the performer’s gesture into sound. A unique algorithm was developed to extract features of the swing motion and translate them into various patterns of sound.
@inproceedings{Nam2013, author = {Nam, Sangbong}, title = {Musical Poi (mPoi)}, pages = {148--151}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178622}, url = {http://www.nime.org/proceedings/2013/nime2013_254.pdf}, keywords = {mPoi, Musical Poi, Jwibulnori, Poi, sensor-based musical instrument} }
Reid Oda, Adam Finkelstein, and Rebecca Fiebrink. 2013. Towards Note-Level Prediction for Networked Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 94–97. http://doi.org/10.5281/zenodo.1178624
Abstract
Download PDF DOI
The Internet allows musicians and other artists to collaborate remotely. However, network latency presents a fundamental challenge for remote collaborators who need to coordinate and respond to each other’s performance in real time. In this paper, we investigate the viability of predicting percussion hits before they have occurred, so that information about the predicted drum hit can be sent over a network, and the sound can be synthesized at a receiver’s location at approximately the same moment the hit occurs at the sender’s location. Such a system would allow two percussionists to play in perfect synchrony despite the delays caused by computer networks. To investigate the feasibility of such an approach, we record vibraphone mallet strikes with a high-speed camera and track the mallet head position. We show that 30 ms before the strike occurs, it is possible to predict strike time and velocity with acceptable accuracy. Our method fits a second-order polynomial to the data to produce a strike time prediction that is within the bounds of perceptual synchrony, and a velocity estimate that will enable the sound pressure level of the synthesized strike to be accurate within 3 dB.
@inproceedings{Oda2013, author = {Oda, Reid and Finkelstein, Adam and Fiebrink, Rebecca}, title = {Towards Note-Level Prediction for Networked Music Performance}, pages = {94--97}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178624}, url = {http://www.nime.org/proceedings/2013/nime2013_258.pdf}, keywords = {Networked performance, prediction, computer vision} }
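The prediction step described above amounts to extrapolating the tracked mallet trajectory. The sketch below fits a second-order polynomial to recent mallet-height samples and solves for the moment the mallet reaches the bar; the frame rate, window length and units are assumptions for illustration, not values from the paper.

```python
# Sketch of the second-order extrapolation idea: fit a parabola to recent
# mallet-height samples and predict when the mallet will reach the bar.
# Frame rate, window length and units are assumptions for illustration.
import numpy as np

FPS = 1000.0                       # assumed high-speed camera frame rate
t = np.arange(10) / FPS            # timestamps of the last 10 tracked frames
height = 20.0 - 900.0 * t**2       # synthetic mallet height above the bar (mm)

coeffs = np.polyfit(t, height, 2)          # fit a*t^2 + b*t + c
roots = np.roots(coeffs)                   # times at which the fitted height is 0
real_roots = roots[np.isreal(roots)].real  # keep only real-valued crossing times
future = real_roots[real_roots > t[-1]]    # the strike must lie in the future

if future.size:
    strike_time = future.min()
    # Velocity at impact is the derivative of the fitted polynomial.
    velocity = np.polyval(np.polyder(coeffs), strike_time)
    print(f"strike predicted in {(strike_time - t[-1]) * 1000:.1f} ms, "
          f"velocity {velocity:.0f} mm/s")
```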
Leonardo Jenkins, Shawn Trail, George Tzanetakis, Peter Driessen, and Wyatt Page. 2013. An Easily Removable, wireless Optical Sensing System (EROSS) for the Trumpet. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 352–357. http://doi.org/10.5281/zenodo.1178562
Abstract
Download PDF DOI
This paper presents a minimally-invasive, wireless optical sensor system for use with any conventional piston valve acoustic trumpet. It is designed to be easy to install and remove by any trumpeter. Our goal is to offer the extended control afforded by hyperinstruments without the hard-to-reverse or irreversible invasive modifications that are typically used for adding digital sensing capabilities. We utilize optical sensors to track the continuous position displacement values of the three trumpet valves. These values are transmitted wirelessly and can be used by an external controller. The hardware has been designed to be reconfigurable by having the housing 3D printed so that the dimensions can be adjusted for any particular trumpet model. The result is a low cost, low power, easily replicable sensor solution that offers any trumpeter the ability to augment their own existing trumpet without compromising the instrument’s structure or playing technique. The extended digital control afforded by our system is interwoven with the natural playing gestures of an acoustic trumpet. We believe that this seamless integration is critical for enabling effective and musical human-computer interaction.
@inproceedings{Jenkins2013, author = {Jenkins, Leonardo and Trail, Shawn and Tzanetakis, George and Driessen, Peter and Page, Wyatt}, title = {An Easily Removable, wireless Optical Sensing System (EROSS) for the Trumpet}, pages = {352--357}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178562}, url = {http://www.nime.org/proceedings/2013/nime2013_261.pdf}, keywords = {hyperinstrument, trumpet, minimally-invasive, gesture sensing, wireless, I2C} }
Adrian Freed, John MacCallum, and Sam Mansfield. 2013. “Old” is the New “New”: a Fingerboard Case Study in Recrudescence as a NIME Development Strategy. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 441–445. http://doi.org/10.5281/zenodo.1178524
Abstract
Download PDF DOI
This paper positively addresses the problem that most NIME devices are ephemeral, lasting long enough to signal academic and technical prowess but rarely longer than a few musical performances. We offer a case study showing that longevity of use depends on stabilizing the interface and innovating the implementation to maintain the required performance of the controller for the player.
@inproceedings{Freed2013a, author = {Freed, Adrian and MacCallum, John and Mansfield, Sam}, title = {``Old'' is the New ``New'': a Fingerboard Case Study in Recrudescence as a NIME Development Strategy}, pages = {441--445}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178524}, url = {http://www.nime.org/proceedings/2013/nime2013_265.pdf}, keywords = {Fingerboard controller, Best practices, Recrudescence, Organology, Unobtainium} }
Adrian Freed, John MacCallum, and David Wessel. 2013. Agile Interface Development using OSC Expressions and Process Migration. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 347–351. http://doi.org/10.5281/zenodo.1178526
Abstract
Download PDF DOI
We describe “o.expr”, an expression language for dynamic, object- and agent-oriented computation of gesture signal processing workflows using OSC bundles. We illustrate the use of o.expr for a range of gesture processing tasks, showing how stateless programming and homoiconicity simplify application development and provide support for heterogeneous computational networks.
@inproceedings{Freed2013, author = {Freed, Adrian and MacCallum, John and Wessel, David}, title = {Agile Interface Development using OSC Expressions and Process Migration}, pages = {347--351}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178526}, url = {http://www.nime.org/proceedings/2013/nime2013_266.pdf}, keywords = {Gesture Signal Processing, Open Sound Control, Functional Programming, Homoiconicity, Process Migration.} }
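o.expr's own syntax is not reproduced here. As a rough analogy only, the snippet below applies a stateless transformation to an OSC-style bundle represented as a plain Python dictionary, which conveys the flavor of bundle-oriented, side-effect-free processing the paper describes; the address names and scaling are made up for the example.

```python
# Rough analogy only: a stateless, functional transformation of an
# OSC-style bundle (here just a dict of address/value pairs). This is not
# o.expr syntax, merely an illustration of bundle-oriented processing.
def scale_accelerometer(bundle: dict) -> dict:
    """Return a new bundle with accelerometer values normalized to roughly [-1, 1]."""
    out = dict(bundle)  # never mutate the input: processing stays stateless
    out["/accel/normalized"] = [v / 512.0 for v in bundle["/accel/raw"]]
    return out

incoming = {"/accel/raw": [256, -128, 512], "/button/1": 1}
print(scale_accelerometer(incoming))
```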
Rob Hamilton. 2013. Sonifying Game-Space Choreographies With UDKOSC. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 446–449. http://doi.org/10.5281/zenodo.1178544
Abstract
Download PDF DOI
With a nod towards digital puppetry and game-based film genres such as machinima, recent additions to UDKOSC offer an Open Sound Control (OSC) control layer for external control over both third-person “pawn” entities and camera controllers in fully rendered game-space. Real-time OSC input, driven by algorithmic process or parsed from a human-readable timed scripting syntax, allows users to shape choreographies of gesture, in this case actor motion and action, as well as an audience’s view into the game-space environment. As UDKOSC outputs real-time coordinate and action data generated by UDK pawns and players with OSC, individual as well as aggregate virtual actor gesture and motion can be leveraged as a driver for both creative and procedural/adaptive gaming music and audio concerns.
@inproceedings{Hamilton2013, author = {Hamilton, Rob}, title = {Sonifying Game-Space Choreographies With UDKOSC}, pages = {446--449}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178544}, url = {http://www.nime.org/proceedings/2013/nime2013_268.pdf}, keywords = {procedural music, procedural audio, interactive sonification, game music, Open Sound Control} }
David John. 2013. Updating the Classifications of Mobile Music Projects. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 301–306. http://doi.org/10.5281/zenodo.1178568
Abstract
Download PDF DOI
This paper reviews the mobile music projects that have been presented at NIME in the past ten years in order to assess whether changes in technology have affected the activities of mobile music research. An overview of mobile music projects is presented using categories that describe the main activities: projects that explore the influence of and make use of location; applications that share audio or promote collaborative composition; interaction using wearable devices; the use of mobile phones as performance devices; and projects that explore HCI design issues. The relative activity between different types of activity is assessed in order to identify trends. The classification into technological, social or geographic investigations showed an overwhelming bias towards the technological, followed by social investigations. An alternative classification into survey, product, or artifact reveals an increase in the number of products described, with a corresponding decline in the number of surveys and artistic projects. The increase in technical papers appears to be due to an enthusiasm to make use of the increased capability of mobile phones, although there are signs that the initial interest has already peaked, and researchers are again interested in exploring technologies and artistic expression beyond existing mobile phones.
@inproceedings{John2013, author = {John, David}, title = {Updating the Classifications of Mobile Music Projects}, pages = {301--306}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178568}, url = {http://www.nime.org/proceedings/2013/nime2013_273.pdf}, keywords = {Mobile Music, interactive music, proximity sensing, wearable devices, mobile phone performance, interaction design} }
Thomas Walther, Damir Ismailović, and Bernd Brügge. 2013. Rocking the Keys with a Multi-Touch Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 98–101. http://doi.org/10.5281/zenodo.1178684
Abstract
Download PDF DOI
Although multi-touch user interfaces have become a widespread form of human-computer interaction in many technical areas, they haven’t found their way into live performances of musicians and keyboarders yet. In this paper, we present a novel multi-touch interface method aimed at professional keyboard players. The method, which is inspired by computer trackpads, allows controlling up to ten continuous parameters of a keyboard with one hand, without requiring the user to look at the touch area — a significant improvement over traditional keyboard input controls. We discuss optimizations needed to make our interface reliable, and show in an evaluation with four keyboarders of different skill level that this method is both intuitive and powerful, and allows users to more quickly alter the sound of their keyboard than they could with current input solutions.
@inproceedings{Walther2013, author = {Walther, Thomas and Ismailovi{\'c}, Damir and Br{\"u}gge, Bernd}, title = {Rocking the Keys with a Multi-Touch Interface}, pages = {98--101}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178684}, url = {http://www.nime.org/proceedings/2013/nime2013_275.pdf}, keywords = {multi-touch, mobile, keyboard, interface} }
Edgar Berdahl, Spencer Salazar, and Myles Borins. 2013. Embedded Networking and Hardware-Accelerated Graphics with Satellite CCRMA. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 325–330. http://doi.org/10.5281/zenodo.1178476
Abstract
Download PDF DOI
Satellite CCRMA is a platform for making embedded musical instruments and embedded installations. The project aims to help prototypes live longer by providing a complete prototyping platform in a single, small, and stand-alone embedded form factor. A set of scripts makes it easier for artists and beginning technical students to access powerful features, while advanced users enjoy the flexibility of the open-source software and open-source hardware platform. This paper focuses primarily on the networking capabilities of Satellite CCRMA and new software for enabling interactive control of the hardware-accelerated graphical output. In addition, some results are presented from robustness tests alongside specific advice and software support for increasing the lifespan of the flash memory.
@inproceedings{Berdahl2013, author = {Berdahl, Edgar and Salazar, Spencer and Borins, Myles}, title = {Embedded Networking and Hardware-Accelerated Graphics with Satellite CCRMA}, pages = {325--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178476}, url = {http://www.nime.org/proceedings/2013/nime2013_277.pdf}, keywords = {Satellite CCRMA, embedded musical instruments, embedded installations, Node.js, Interface.js, hardware-accelerated graphics, OpenGLES, SimpleGraphicsOSC, union file system, write endurance} }
Xiao Xiao, Anna Pereira, and Hiroshi Ishii. 2013. Conjuring the Recorded Pianist: A New Medium to Experience Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 7–12. http://doi.org/10.5281/zenodo.1178692
Abstract
Download PDF DOI
The body channels rich layers of information when playing music, from intricate manipulations of the instrument to vivid personifications of expression. But when music is captured and replayed across distance and time, the performer’s body is too often trapped behind a small screen or absent entirely. This paper introduces an interface to conjure the recorded performer by combining the moving keys of a player piano with life-sized projection of the pianist’s hands and upper body. Inspired by reflections on a lacquered grand piano, our interface evokes the sense that the virtual pianist is playing the physically moving keys. Through our interface, we explore the question of how to viscerally simulate a performer’s presence to create immersive experiences. We discuss design choices, outline a space of usage scenarios and report reactions from users.
@inproceedings{Xiao2013, author = {Xiao, Xiao and Pereira, Anna and Ishii, Hiroshi}, title = {Conjuring the Recorded Pianist: A New Medium to Experience Musical Performance}, pages = {7--12}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178692}, url = {http://www.nime.org/proceedings/2013/nime2013_28.pdf}, keywords = {piano performance, musical expressivity, body language, recorded music, player piano, augmented reality, embodiment} }
Ben Taylor and Jesse Allison. 2013. Plum St: Live Digital Storytelling with Remote Browsers. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 477–478. http://doi.org/10.5281/zenodo.1178672
Abstract
Download PDF DOI
What is the place for Internet Art within the paradigm of remote music performance? In this paper, we discuss techniques for live audiovisual storytelling through the Web browsers of remote viewers. We focus on the incorporation of socket technology to create a real-time link between performer and audience, enabling manipulation of Web media directly within each audience member’s browser. Finally, we describe Plum Street, an online multimedia performance, and suggest that by involving remote performance and appropriating Web media such as Google Maps, social media, and Web Audio into the work, we can tell stories in a way that more accurately addresses modern life and holistically fulfills the Web browser’s capabilities as a contemporary performance instrument.
@inproceedings{Taylor2013, author = {Taylor, Ben and Allison, Jesse}, title = {Plum St: Live Digital Storytelling with Remote Browsers}, pages = {477--478}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178672}, url = {http://www.nime.org/proceedings/2013/nime2013_281.pdf}, keywords = {Remote Performance, Network Music, Internet Art, Storytelling} }
Charles Roberts, Graham Wakefield, and Matthew Wright. 2013. The Web Browser As Synthesizer And Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 313–318. http://doi.org/10.5281/zenodo.1178648
Abstract
Download PDF DOI
Web technologies provide an incredible opportunity to present new musical interfaces to new audiences. Applications written in JavaScript and designed to run in the browser offer remarkable performance, mobile/desktop portability and longevity due to standardization. Our research examines the use and potential of native web technologies for musical expression. We introduce two libraries towards this end: Gibberish.js, a heavily optimized audio DSP library, and Interface.js, a GUI toolkit that works with mouse, touch and motion events. Together these libraries provide a complete system for defining musical instruments that can be used in both desktop and mobile browsers. Interface.js also enables control of remote synthesis applications by including an application that translates the socket protocol used by browsers into both MIDI and OSC messages.
@inproceedings{Roberts2013a, author = {Roberts, Charles and Wakefield, Graham and Wright, Matthew}, title = {The Web Browser As Synthesizer And Interface}, pages = {313--318}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178648}, url = {http://www.nime.org/proceedings/2013/nime2013_282.pdf}, keywords = {mobile devices, javascript, browser-based NIMEs, web audio, websockets} }
Tobias Grosshauser and Gerhard Tröster. 2013. Finger Position and Pressure Sensing Techniques for String and Keyboard Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 479–484. http://doi.org/10.5281/zenodo.1178538
Abstract
Download PDF DOI
Several new technologies to capture motion, gesture and forces for analyses of musical instrument players have been developed in recent years. In research and for augmented instruments, one parameter is underrepresented so far: the finger position and pressure applied by the musician while playing the musical instrument. In this paper we show a flexible linear-potentiometer and force-sensitive-resistor (FSR) based solution for position, pressure and force sensing between the contact point of the fingers and the musical instrument. A flexible matrix printed circuit board (PCB) is fixed on a piano key. We further introduce linear-potentiometer-based left-hand finger position sensing for string instruments, integrated into a violin and a guitar fingerboard. Several calibration and measurement scenarios are shown. The violin sensor was evaluated with 13 music students regarding playability and robustness of the system. The main focus was the integration of the sensors into these two traditional musical instruments as unobtrusively as possible, to keep the natural haptic playing sensation. The musicians playing the violin in different performance situations reported good playability and no differences in haptic sensation while playing. Based on interviews after testing it in a conventional keyboard, the piano sensor is rated quite unobtrusive too, but it still evokes a different haptic sensation.
@inproceedings{Grosshauser2013, author = {Grosshauser, Tobias and Tr{\"o}ster, Gerhard}, title = {Finger Position and Pressure Sensing Techniques for String and Keyboard Instruments}, pages = {479--484}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178538}, url = {http://www.nime.org/proceedings/2013/nime2013_286.pdf}, keywords = {Sensor, Piano, Violin, Guitar, Position, Pressure, Keyboard} }
Jesse Allison, Yemin Oh, and Benjamin Taylor. 2013. NEXUS: Collaborative Performance for the Masses, Handling Instrument Interface Distribution through the Web. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 1–6. http://doi.org/10.5281/zenodo.1178461
Abstract
Download PDF DOI
Distributed performance systems present many challenges to the artist in managing performance information, distribution and coordination of interfaces to many users, and cross-platform support to provide a reasonable level of interaction to the widest possible user base. Now that many features of HTML5 are implemented, powerful browser-based interfaces can be utilized for distribution across a variety of static and mobile devices. The author proposes leveraging the power of a web application to handle distribution of user interfaces and passing interactions via OSC to and from realtime audio/video processing software. Interfaces developed in this fashion can reach potential performers by distributing a unique user interface to any device with a browser anywhere in the world.
@inproceedings{Allison2013, author = {Allison, Jesse and Oh, Yemin and Taylor, Benjamin}, title = {NEXUS: Collaborative Performance for the Masses, Handling Instrument Interface Distribution through the Web}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178461}, url = {http://www.nime.org/proceedings/2013/nime2013_287.pdf}, keywords = {NIME, distributed performance systems, Ruby on Rails, collaborative performance, distributed instruments, distributed interface, HTML5, browser based interface} }
Stefano Baldan, Amalia De Götzen, and Stefania Serafin. 2013. Sonic Tennis: a rhythmic interaction game for mobile devices. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 200–201. http://doi.org/10.5281/zenodo.1178470
Abstract
Download PDF DOI
This paper presents an audio-based tennis simulation game for mobile devices, which uses motion input and non-verbal audio feedback as exclusive means of interaction. Players have to listen carefully to the provided auditory clues, like racquet hits and ball bounces, rhythmically synchronizing their movements in order to keep the ball in play. The device can be swung freely and act as a full-fledged motion-based controller, as the game does not rely at all on visual feedback and the device display can thus be ignored. The game aims to be entertaining but also effective for educational purposes, such as ear training or improvement of the sense of timing, and enjoyable by both visually impaired and sighted users.
@inproceedings{Baldan2013, author = {Baldan, Stefano and G{\"o}tzen, Amalia De and Serafin, Stefania}, title = {Sonic Tennis: a rhythmic interaction game for mobile devices}, pages = {200--201}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178470}, url = {http://www.nime.org/proceedings/2013/nime2013_288.pdf}, keywords = {Audio game, mobile devices, sonic interaction design, rhythmic interaction, motion-based} }
Sang Won Lee and Jason Freeman. 2013. echobo : Audience Participation Using The Mobile Music Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 450–455. http://doi.org/10.5281/zenodo.1178594
Abstract
Download PDF DOI
This work aims at a music piece for large-scale audience participation using mobile phones as musical instruments at a music performance. Utilizing the ubiquity of smartphones, we attempted to accomplish audience engagement by crafting an accessible musical instrument with which the audience can be a part of the performance. Drawing on lessons learnt from creative works in mobile music, audience participation, and networked instruments, a mobile musical instrument application was developed so that audience members can download the app at the concert, play the instrument instantly, interact with other audience members, and contribute to the music with sound generated from their mobile phones. The post-survey results indicate that the instrument was easy to use, and the audience felt connected to the music and other musicians.
@inproceedings{Lee2013, author = {Lee, Sang Won and Freeman, Jason}, title = {echobo : Audience Participation Using The Mobile Music Instrument}, pages = {450--455}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178594}, url = {http://www.nime.org/proceedings/2013/nime2013_291.pdf}, keywords = {mobile music, audience participation, networked instrument} }
Stefano Trento and Stefania Serafin. 2013. Flag beat: a novel interface for rhythmic musical expression for kids. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 456–459. http://doi.org/10.5281/zenodo.1178682
Abstract
Download PDF DOI
This paper describes the development of a prototype of a sonic toy for pre-school kids. The device, which is a modified version of a football ratchet, is based on the spinning gesture and allows kids to experience four different types of auditory feedback. These algorithms let a kid play with music rhythm, generate a continuous sound feedback and control the pitch of a piece of music. An evaluation test of the device was performed with fourteen kids in a kindergarten. Results and observations showed that kids preferred the algorithms based on the exploration of music rhythm and on pitch shifting.
@inproceedings{Trento2013, author = {Trento, Stefano and Serafin, Stefania}, title = {Flag beat: a novel interface for rhythmic musical expression for kids}, pages = {456--459}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178682}, url = {http://www.nime.org/proceedings/2013/nime2013_295.pdf}, keywords = {Sonic toy, children, auditory feedback.} }
Adam Place, Liam Lacey, and Thomas Mitchell. 2013. AlphaSphere. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 491–492. http://doi.org/10.5281/zenodo.1178642
Abstract
Download PDF DOI
The AlphaSphere is an electronic musical instrument featuring a series of tactile, pressure-sensitive touch pads arranged in a spherical form. It is designed to offer a new playing style, while allowing for the expressive real-time modulation of sound available in electronic-based music. It is also designed to be programmable, enabling the flexibility to map a series of different notational arrangements to the pad-based interface. The AlphaSphere functions as an HID, MIDI and OSC device, which connects to a computer and/or independent MIDI device, and its control messages can be mapped through the AlphaLive software. Our primary motivations for creating the AlphaSphere are to design an expressive music interface which can exploit the sound palette of synthesizers in a design which allows for the mapping of notational arrangements.
@inproceedings{Place2013, author = {Place, Adam and Lacey, Liam and Mitchell, Thomas}, title = {AlphaSphere}, pages = {491--492}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178642}, url = {http://www.nime.org/proceedings/2013/nime2013_300.pdf}, keywords = {AlphaSphere, MIDI, HID, polyphonic aftertouch, open source} }
Charles Roberts, Angus Forbes, and Tobias Höllerer. 2013. Enabling Multimodal Mobile Interfaces for Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 102–105. http://doi.org/10.5281/zenodo.1178646
Abstract
Download PDF DOI
We present research that extends the scope of the mobile application Control, a prototyping environment for defining multimodal interfaces that control real-time artistic and musical performances. Control allows users to rapidly create interfaces employing a variety of modalities, including: speech recognition, computer vision, musical feature extraction, touchscreen widgets, and inertial sensor data. Information from these modalities can be transmitted wirelessly to remote applications. Interfaces are declared using JSON and can be extended with JavaScript to add complex behaviors, including the concurrent fusion of multimodal signals. By simplifying the creation of interfaces via these simple markup files, Control allows musicians and artists to make novel applications that use and combine both discrete and continuous data from the wide range of sensors available on commodity mobile devices.
@inproceedings{Roberts2013, author = {Roberts, Charles and Forbes, Angus and H{\"o}llerer, Tobias}, title = {Enabling Multimodal Mobile Interfaces for Musical Performance}, pages = {102--105}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178646}, url = {http://www.nime.org/proceedings/2013/nime2013_303.pdf}, keywords = {Music, mobile, multimodal, interaction} }
Aristotelis Hadjakos and Tobias Grosshauser. 2013. Motion and Synchronization Analysis of Musical Ensembles with the Kinect. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 106–110. http://doi.org/10.5281/zenodo.1178540
Abstract
Download PDF DOI
Music ensembles have to synchronize themselves with very high precision in order to achieve the desired musical results. For that purpose the musicians do not only rely on their auditory perception but also perceive and interpret the movements and gestures of their ensemble colleagues. In this paper we present a Kinect-based method to analyze ensemble play based on head tracking. We discuss first experimental results with a violin duo performance.
@inproceedings{Hadjakos2013, author = {Hadjakos, Aristotelis and Grosshauser, Tobias}, title = {Motion and Synchronization Analysis of Musical Ensembles with the Kinect}, pages = {106--110}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178540}, url = {http://www.nime.org/proceedings/2013/nime2013_304.pdf}, keywords = {Kinect, Ensemble, Synchronization, Strings, Functional Data Analysis, Cross-Correlogram} }
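The cross-correlogram analysis mentioned in the keywords can be illustrated with a small lag-estimation sketch: given two head-motion signals, cross-correlation gives the offset at which they align best. The frame rate and the signals below are synthetic assumptions, not data from the study.

```python
# Illustrative sketch: estimating the timing offset between two performers'
# head-motion signals with cross-correlation. Frame rate and data are synthetic.
import numpy as np

FS = 30.0                              # assumed Kinect frame rate (Hz)
t = np.arange(0, 10, 1 / FS)
lead = np.sin(2 * np.pi * 0.5 * t)     # first violinist's head sway
follow = np.roll(lead, 3)              # second violinist, lagging by 3 frames

# Remove the means, then find the lag that maximizes the cross-correlation.
corr = np.correlate(follow - follow.mean(), lead - lead.mean(), mode="full")
lag_frames = corr.argmax() - (len(lead) - 1)
print(f"estimated lag: {lag_frames} frames ({lag_frames / FS * 1000:.0f} ms)")
```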
Saebyul Park, Seonghoon Ban, Dae Ryong Hong, and Woon Seung Yeo. 2013. Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 111–114. http://doi.org/10.5281/zenodo.1178636
Abstract
Download PDF DOI
SSN (Sound Surfing Network) is a performance system that provides a new musical experience by incorporating mobile phone-based spatial sound control into collaborative music performance. SSN enables both the performer and the audience to manipulate the spatial distribution of sound using the smartphones of the audience as a distributed speaker system. Proposing a new perspective on the social aspect of music appreciation, SSN provides a new possibility for mobile music performances in the context of interactive audience collaboration as well as sound spatialization.
@inproceedings{Park2013b, author = {Park, Saebyul and Ban, Seonghoon and Hong, Dae Ryong and Yeo, Woon Seung}, title = {Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration}, pages = {111--114}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178636}, url = {http://www.nime.org/proceedings/2013/nime2013_305.pdf}, keywords = {Mobile music, smartphone, audience participation, spatial sound control, digital performance} }
Ryan McGee. 2013. VOSIS: a Multi-touch Image Sonification Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 460–463. http://doi.org/10.5281/zenodo.1178604
Abstract
Download PDF DOI
VOSIS is an interactive image sonification interface that creates complex wavetables by raster scanning greyscale image pixel data. Using a multi-touch screen to play image regions of unique frequency content, rather than a linear scale of frequencies, it becomes a unique performance tool for experimental and visual music. A number of image filters controlled by multi-touch gestures add variation to the sound palette. On a mobile device, parameters controlled by the accelerometer add another layer of expressivity to the resulting audio-visual montages.
@inproceedings{McGee2013, author = {McGee, Ryan}, title = {VOSIS: a Multi-touch Image Sonification Interface}, pages = {460--463}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178604}, url = {http://www.nime.org/proceedings/2013/nime2013_310.pdf}, keywords = {image sonification, multi-touch, visual music} }
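The raster-scanning idea can be sketched compactly: a greyscale image region is flattened row by row into a single-cycle wavetable. The image source, region size and normalization below are assumptions for illustration, not VOSIS internals.

```python
# Sketch of raster-scan image sonification: flatten a greyscale image region
# row by row into a wavetable. Region size and normalization are assumptions.
import numpy as np

def region_to_wavetable(image: np.ndarray, x: int, y: int, size: int = 64) -> np.ndarray:
    """Turn a size-by-size greyscale region into one waveform cycle in [-1, 1]."""
    region = image[y:y + size, x:x + size].astype(np.float64)
    wave = region.flatten()                               # raster scan: row by row
    wave = (wave - wave.min()) / max(np.ptp(wave), 1e-9)  # normalize to [0, 1]
    return wave * 2.0 - 1.0                               # center around zero

# Synthetic 256x256 greyscale "image"; in VOSIS the region would come from touch.
image = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
wavetable = region_to_wavetable(image, x=100, y=80)
print(wavetable.shape)  # (4096,) samples forming one wavetable cycle
```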
Lode Hoste and Beat Signer. 2013. Expressive Control of Indirect Augmented Reality During Live Music Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 13–18. http://doi.org/10.5281/zenodo.1178558
Abstract
Download PDF DOI
Nowadays many music artists rely on visualisations and light shows to enhance and augment their live performances. However, the visualisation and triggering of lights is normally scripted in advance and synchronised with the concert, severely limiting the artist’s freedom for improvisation, expression and ad-hoc adaptation of their show. These scripts result in performances where the technology forces the artist and their music to stay in synchronisation with the pre-programmed environment. We argue that these limitations can be overcome based on emerging non-invasive tracking technologies in combination with an advanced gesture recognition engine. We present a solution that uses explicit gestures and implicit dance moves to control the visual augmentation of a live music performance. We further illustrate how our framework overcomes existing limitations of gesture classification systems by delivering a precise recognition solution based on a single gesture sample in combination with expert knowledge. The presented solution enables a more dynamic and spontaneous performance and, when combined with indirect augmented reality, results in a more intense interaction between the artist and their audience.
@inproceedings{Hoste2013, author = {Hoste, Lode and Signer, Beat}, title = {Expressive Control of Indirect Augmented Reality During Live Music Performances}, pages = {13--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178558}, url = {http://www.nime.org/proceedings/2013/nime2013_32.pdf}, keywords = {Expressive control, augmented reality, live music performance, 3D gesture recognition, Kinect, declarative language} }
Jim Murphy, James McVay, Ajay Kapur, and Dale Carnegie. 2013. Designing and Building Expressive Robotic Guitars. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 557–562. http://doi.org/10.5281/zenodo.1178618
Abstract
Download PDF DOI
This paper provides a history of robotic guitars and bass guitars as well as a discussion of the design, construction, and evaluation of two new robotic instruments. Throughout the paper, a focus is made on different techniques to extend the expressivity of robotic guitars. Swivel and MechBass, two new robots, are built and discussed. Construction techniques of likely interest to other musical roboticists are included. These robots use a variety of techniques, both new and inspired by prior work, to afford composers and performers with the ability to precisely control pitch and plucking parameters. Both new robots are evaluated to test their precision, repeatability, and speed. The paper closes with a discussion of the compositional and performative implications of such levels of control, and how it might affect humans who wish to interface with the systems.
@inproceedings{Murphy2013, author = {Murphy, Jim and McVay, James and Kapur, Ajay and Carnegie, Dale}, title = {Designing and Building Expressive Robotic Guitars}, pages = {557--562}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178618}, url = {http://www.nime.org/proceedings/2013/nime2013_36.pdf}, keywords = {musical robotics, kinetic sculpture, mechatronics} }
Erfan Abdi Dezfouli and Edwin van der Heide. 2013. Notesaaz: a new controller and performance idiom. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 115–117. http://doi.org/10.5281/zenodo.1178498
Abstract
Download PDF DOI
Notesaaz is both a new physical interface meant for musical performance and a proposal for a three-stage process where the controller is used to navigate within a graphical score that in turn controls the sound generation. It can be seen as a dynamic and understandable way of using dynamic mapping between the sensor input and the sound generation. Furthermore, by presenting the graphical score to both the performer and the audience, a new engagement of the audience can be established.
@inproceedings{Dezfouli2013, author = {Dezfouli, Erfan Abdi and van der Heide, Edwin}, title = {Notesaaz: a new controller and performance idiom}, pages = {115--117}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178498}, url = {http://www.nime.org/proceedings/2013/nime2013_4.pdf}, keywords = {musical instrument, custom controller, gestural input, dynamic score} }
Anton Fuhrmann, Johannes Kretz, and Peter Burwik. 2013. Multi Sensor Tracking for Live Sound Transformation. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 358–362. http://doi.org/10.5281/zenodo.1178530
Abstract
Download PDF DOI
This paper demonstrates how to use multiple Kinect(TM) sensors to map a performer’s motion to music. We merge skeleton data streams from multiple sensors to compensate for occlusions of the performer. The skeleton joint positions drive the performance via open sound control data. We discuss how to register the different sensors to each other, how to smoothly merge the resulting data streams, and how to map position data in a general framework to the live electronics applied to a chamber music ensemble.
@inproceedings{Fuhrmann2013, author = {Fuhrmann, Anton and Kretz, Johannes and Burwik, Peter}, title = {Multi Sensor Tracking for Live Sound Transformation}, pages = {358--362}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178530}, url = {http://www.nime.org/proceedings/2013/nime2013_44.pdf}, keywords = {kinect, multi sensor, sensor fusion, open sound control, motion tracking, parameter mapping, live electronics} }
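As a rough illustration of the sensor-fusion step the abstract above describes, the sketch below registers each Kinect's joint positions into a common coordinate frame and takes a confidence-weighted average, so joints occluded from one sensor are filled in by the others. The per-sensor 4x4 registration transforms and per-joint confidences are assumed inputs standing in for whatever the authors' pipeline provides; OSC output is omitted.

```python
# Sketch of multi-sensor skeleton merging (assumptions: each sensor has been
# registered to a common frame by a 4x4 homogeneous transform, and reports a
# per-joint tracking confidence in [0, 1]).
import numpy as np

def to_common_frame(joints_xyz: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 registration transform to an (N, 3) array of joint positions."""
    homog = np.hstack([joints_xyz, np.ones((len(joints_xyz), 1))])
    return (homog @ T.T)[:, :3]

def merge_skeletons(skeletons, confidences):
    """Confidence-weighted average of registered skeletons; joints occluded for
    one sensor (confidence 0) are filled in by the remaining sensors."""
    skeletons = np.stack(skeletons)           # (num_sensors, N, 3)
    w = np.stack(confidences)[..., None]      # (num_sensors, N, 1)
    return (skeletons * w).sum(axis=0) / np.clip(w.sum(axis=0), 1e-6, None)
```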
Tom Mudd. 2013. Feeling for Sound: Mapping Sonic Data to Haptic Perceptions. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 369–372. http://doi.org/10.5281/zenodo.1293003
Abstract
Download PDF DOI
This paper presents a system for exploring different dimensions of a sound through the use of haptic feedback. The Novint Falcon force feedback interface is used to scan through soundfiles as a subject moves their hand horizontally from left to right, and to relay information about volume, frequency content, noisiness, or potentially any analysable parameter back to the subject through forces acting on their hand. General practicalities of mapping sonic elements to physical forces are considered, such as the problem of representing detailed data through vague physical sensation, approaches to applying forces to the hand that do not interfere with the smooth operation of the device, and the relative merits of discrete and continuous mappings. Three approaches to generating the force vector are discussed: 1) the use of simulated detents to identify areas of an audio parameter over a certain threshold, 2) applying friction proportional to the level of the audio parameter along the axis of movement, and 3) creating forces perpendicular to the subject’s hand movements. Presentation of audio information in this manner could be beneficial for ‘pre-feeling’ as a method for selecting material to play during a live performance, assisting visually impaired audio engineers, and as a general augmentation of standard audio editing environments.
@inproceedings{Mudd2013, author = {Mudd, Tom}, title = {Feeling for Sound: Mapping Sonic Data to Haptic Perceptions}, pages = {369--372}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1293003}, url = {http://www.nime.org/proceedings/2013/nime2013_46.pdf}, keywords = {Haptics, force feedback, mapping, human-computer interaction} }
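Two of the three force-generation strategies listed in the abstract above (friction proportional to an audio parameter along the axis of movement, and simulated detents over above-threshold regions) can be sketched as simple 1D force functions. This is an illustration under assumed units and gain constants, not the paper's Novint Falcon code.

```python
# Hedged sketch of two force-mapping strategies; gains and thresholds are
# illustrative values, and param_curve is an analysed audio feature sampled
# along the length of the soundfile, normalised to [0, 1].
import numpy as np

def friction_force(x: float, velocity: float, param_curve: np.ndarray, k=2.0) -> float:
    """Opposing force proportional to the audio parameter at hand position x in [0, 1]."""
    level = param_curve[int(x * (len(param_curve) - 1))]
    return -k * level * np.sign(velocity)

def detent_force(x: float, param_curve: np.ndarray, threshold=0.7, k=5.0) -> float:
    """Spring force pulling the hand toward the centre of the above-threshold
    region it is currently inside (zero outside such regions)."""
    idx = int(x * (len(param_curve) - 1))
    if param_curve[idx] < threshold:
        return 0.0
    lo = hi = idx
    while lo > 0 and param_curve[lo - 1] >= threshold:
        lo -= 1
    while hi < len(param_curve) - 1 and param_curve[hi + 1] >= threshold:
        hi += 1
    centre = (lo + hi) / 2 / (len(param_curve) - 1)
    return k * (centre - x)
```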
Matan Ben-Asher and Colby Leider. 2013. Toward an Emotionally Intelligent Piano: Real-Time Emotion Detection and Performer Feedback via Kinesthetic Sensing in Piano Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 21–24. http://doi.org/10.5281/zenodo.1178474
Abstract
Download PDF DOI
A system is presented for detecting common gestures, musical intentions and emotions of pianists in real-time using only kinesthetic data retrieved by wireless motion sensors. The algorithm can detect common Western musical structures such as chords, arpeggios, scales, and trills as well as musically intended emotions: cheerful, mournful, vigorous, dreamy, lyrical, and humorous, completely and solely based on low-sample-rate motion sensor data. The algorithm can be trained per performer in real-time or can work based on previous training sets. The system maps the emotions to a color set and presents them as a flowing emotional spectrum on the background of a piano roll. This acts as a feedback mechanism for emotional expression as well as an interactive display of the music. The system was trained and tested on a number of pianists and it classified structures and emotions with promising results of up to 92% accuracy.
@inproceedings{BenAsher2013, author = {Ben-Asher, Matan and Leider, Colby}, title = {Toward an Emotionally Intelligent Piano: Real-Time Emotion Detection and Performer Feedback via Kinesthetic Sensing in Piano Performance}, pages = {21--24}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178474}, url = {http://www.nime.org/proceedings/2013/nime2013_48.pdf}, keywords = {Motion Sensors, IMUs, Expressive Piano Performance, Machine Learning, Computer Music, Music and Emotion} }
Dimitri Diakopoulos and Ajay Kapur. 2013. Netpixl: Towards a New Paradigm for Networked Application Development. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 206–209. http://doi.org/10.5281/zenodo.1178500
Abstract
Download PDF DOI
Netpixl is a new micro-toolkit built to network devices within interactive installations and environments. Using a familiar client-server model, Netpixl centrally wraps an important aspect of ubiquitous computing: real-time messaging. In the context of sound and music computing, the role of Netpixl is to fluidly integrate endpoints like OSC and MIDI within a larger multi-user system. This paper considers useful design principles that may be applied to toolkits like Netpixl while also emphasizing recent approaches to application development via HTML5 and Javascript, highlighting an evolution in networked creative computing.
@inproceedings{Diakopoulos2013, author = {Diakopoulos, Dimitri and Kapur, Ajay}, title = {Netpixl: Towards a New Paradigm for Networked Application Development}, pages = {206--209}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178500}, url = {http://www.nime.org/proceedings/2013/nime2013_49.pdf}, keywords = {networking, ubiquitious computing, toolkits, html5} }
Stefano Fasciani and Lonce Wyse. 2013. A Self-Organizing Gesture Map for a Voice-Controlled Instrument Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 507–512. http://doi.org/10.5281/zenodo.4582292
Abstract
Download PDF DOI
Mapping gestures to digital musical instrument parameters is not trivial when the dimensionality of the sensor-captured data is high and the model relating the gesture to sensor data is unknown. In these cases, a front-end processing system for extracting gestural information embedded in the sensor data is essential. In this paper we propose an unsupervised offline method that learns how to reduce and map the gestural data to a generic instrument parameter control space. We make an unconventional use of the Self-Organizing Maps to obtain only a geometrical transformation of the gestural data, while dimensionality reduction is handled separately. We introduce a novel training procedure to overcome two main Self-Organizing Maps limitations which otherwise corrupt the interface usability. As evaluation, we apply this method to our existing Voice-Controlled Interface for musical instruments, obtaining sensible usability improvements.
@inproceedings{Fasciani2013, author = {Fasciani, Stefano and Wyse, Lonce}, title = {A Self-Organizing Gesture Map for a Voice-Controlled Instrument Interface}, pages = {507--512}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.4582292}, url = {http://www.nime.org/proceedings/2013/nime2013_50.pdf}, keywords = {Self-Organizing Maps, Gestural Controller, Multi Dimensional Control, Unsupervised Gesture Mapping, Voice Control} }
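For readers unfamiliar with Self-Organizing Maps, the sketch below shows a generic, plain-numpy SOM training loop and how a gesture frame is mapped to 2D control coordinates via its best-matching unit. It deliberately omits the paper's contributions (the modified training procedure and the separate dimensionality-reduction stage); grid size, learning rate, and neighbourhood decay are arbitrary example values.

```python
# Compact, generic SOM training loop; included only to make the mapping idea
# concrete, not a reproduction of the paper's method.
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((grid[0], grid[1], data.shape[1]))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit for this training sample
        d = ((weights - x) ** 2).sum(axis=2)
        by, bx = np.unravel_index(d.argmin(), d.shape)
        # exponentially decaying learning rate and neighbourhood radius
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

def map_gesture(sample, weights):
    """Map a gesture frame to 2D control coordinates: its best-matching unit."""
    d = ((weights - sample) ** 2).sum(axis=2)
    return np.unravel_index(d.argmin(), d.shape)
```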
Florent Berthaut, Mark T. Marshall, Sriram Subramanian, and Martin Hachet. 2013. Rouages: Revealing the Mechanisms of Digital Musical Instruments to the Audience. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 164–169. http://doi.org/10.5281/zenodo.1178478
Abstract
Download PDF DOI
Digital musical instruments bring new possibilities for musical performance. They are also more complex for the audience to understand, due to the diversity of their components and the magical aspect of the musicians’ actions when compared to acoustic instruments. This complexity results in a loss of liveness and possibly a poor experience for the audience. Our approach, called Rouages, is based on a mixed-reality display system and a 3D visualization application. It reveals the mechanisms of digital musical instruments by amplifying musicians’ gestures with virtual extensions of the sensors, by representing the sound components with 3D shapes and specific behaviors and by showing the impact of musicians’ gestures on these components. We believe that Rouages opens up new perspectives to help instrument makers and musicians improve audience experience with their digital musical instruments.
@inproceedings{Berthaut2013, author = {Berthaut, Florent and Marshall, Mark T. and Subramanian, Sriram and Hachet, Martin}, title = {Rouages: Revealing the Mechanisms of Digital Musical Instruments to the Audience}, pages = {164--169}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178478}, url = {http://www.nime.org/proceedings/2013/nime2013_51.pdf}, keywords = {rouages, digital musical instruments, mappings, 3D interface, mixed-reality,} }
Thomas Resch. 2013. note~ for Max — An extension for Max/MSP for Media Arts & music. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 210–212. http://doi.org/10.5281/zenodo.1178644
Abstract
Download PDF DOI
note~ for Max consists of four objects for the software Max/MSP which allow sequencing in floating-point resolution and provide a Graphical User Interface and a Scripting Interface for generating events within a timeline. Due to the complete integration into Max/MSP, it is possible to control almost every type of client, such as other software, audio and video, or external hardware, with note~, or to control note~ itself from other software and hardware.
@inproceedings{Resch2013, author = {Resch, Thomas}, title = {note~ for Max --- An extension for Max/MSP for Media Arts \& music}, pages = {210--212}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178644}, url = {http://www.nime.org/proceedings/2013/nime2013_57.pdf}, keywords = {Max/MSP, composing, timeline, GUI, sequencing, score, notation.} }
Yoonchang Han, Sejun Kwon, Kibeom Lee, and Kyogu Lee. 2013. A Musical Performance Evaluation System for Beginner Musician based on Real-time Score Following. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 120–121. http://doi.org/10.5281/zenodo.1178546
Abstract
Download PDF DOI
This paper proposes a musical performance feedback system based on real-time audio-score alignment for musical instrument education of beginner musicians. In the proposed system, we do not make use of symbolic data such as MIDI, but acquire a real-time audio input from the on-board microphone of a smartphone. Then, the system finds the onset and pitch of each note from the signal, to align this information with the ground truth musical score. Real-time alignment allows the system to evaluate whether the user played the correct note or not, regardless of its timing, which enables users to play at their own speed, as playing at the same tempo as the original musical score is problematic for beginners. As an output of evaluation, the system notifies the user about which part they are currently performing, and which notes were played incorrectly.
@inproceedings{Han2013, author = {Han, Yoonchang and Kwon, Sejun and Lee, Kibeom and Lee, Kyogu}, title = {A Musical Performance Evaluation System for Beginner Musician based on Real-time Score Following}, pages = {120--121}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178546}, url = {http://www.nime.org/proceedings/2013/nime2013_60.pdf}, keywords = {Music performance analysis, Music education, Real-time score following} }
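The timing-independent evaluation described in the abstract above can be illustrated with a toy alignment routine: detected note pitches are checked greedily against the expected score sequence, so correctness is judged regardless of tempo. Onset and pitch detection, and the actual alignment algorithm used by the authors, are not reproduced here.

```python
# Illustrative, timing-independent note checking (not the paper's algorithm):
# pitches are MIDI note numbers; timing is ignored so the learner can play at
# any tempo.
def evaluate(detected_pitches, score_pitches):
    """Return (current position in score, list of (index, expected, played) errors)."""
    position, errors = 0, []
    for played in detected_pitches:
        if position >= len(score_pitches):
            break
        expected = score_pitches[position]
        if played != expected:
            errors.append((position, expected, played))
        position += 1   # advance regardless, so the learner keeps their place
    return position, errors

# e.g. evaluate([60, 62, 65, 65], [60, 62, 64, 65]) -> (4, [(2, 64, 65)])
```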
Abram Hindle. 2013. SWARMED: Captive Portals, Mobile Devices, and Audience Participation in Multi-User Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 174–179. http://doi.org/10.5281/zenodo.1178550
Abstract
Download PDF DOI
Audience participation in computer music has long been limited by resources such as sensor technology or the material goods necessary to share such an instrument. A recent paradigm is to take advantage of the incredible popularity of the smart-phone, a pocket sized computer, and other mobile devices, to provide the audience an interface into a computer music instrument. In this paper we discuss a method of sharing a computer music instrument’s interface with an audience to allow them to interact via their smartphone. We propose a method that is relatively cross-platform and device-agnostic, yet still allows for a rich user-interactive experience. By emulating a captive-portal or hotspot we reduce the adoptability issues and configuration problems facing performers and their audience. We share our experiences with this system, as well as an implementation of the system itself.
@inproceedings{Hindle2013, author = {Hindle, Abram}, title = {{SW}ARMED: Captive Portals, Mobile Devices, and Audience Participation in Multi-User Music Performance}, pages = {174--179}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178550}, url = {http://www.nime.org/proceedings/2013/nime2013_62.pdf}, keywords = {Wifi, Smartphone, Audience Interaction, Adoption, Captive Portal, Multi-User, Hotspot} }
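The underlying pattern of SWARMED (audience phones load a locally served web interface and post control values back to the performance machine) can be sketched with a tiny web server. This is not the authors' implementation and leaves out the captive-portal and hotspot configuration the paper focuses on; it assumes Flask and a separate synthesis process that consumes the queue.

```python
# Bare-bones sketch of the audience-participation pattern: one slider per
# phone, values queued for the synthesis thread. Captive-portal setup omitted.
from queue import Queue
from flask import Flask, request

app = Flask(__name__)
control_events = Queue()   # consumed by the audio/synthesis process

@app.route("/")
def interface():
    # Minimal phone UI: a single slider that reports its value back to the server.
    return ('<input type="range" min="0" max="127" '
            'oninput="fetch(\'/control?value=\' + this.value)">')

@app.route("/control")
def control():
    control_events.put(int(request.args.get("value", 0)))
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)   # audience joins the hotspot and browses here
```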
Brett Park and David Gerhard. 2013. Rainboard and Musix: Building dynamic isomorphic interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 319–324. http://doi.org/10.5281/zenodo.1178632
Abstract
Download PDF DOI
Since Euler’s development of the Tonnetz in 1739, musicians, composers and instrument designers have been fascinated with the concept of musical isomorphism, the idea that by arranging tones by their harmonic relationships rather than by their physical properties, the common shapes of musical constructs will appear, facilitating learning and new ways of exploring harmonic spaces. The construction of isomorphic instruments, beyond limited square isomorphisms present in many stringed instruments, has been a challenge in the past for two reasons: The first problem, that of re-arranging note actuators from their sounding elements, has been solved by digital instrument design. The second, more conceptual problem, is that only a single isomorphism can be designed for any one instrument, requiring the instrument designer (as well as composer and performer) to "lock in" to a single isomorphism, or to have a different instrument for each isomorphism in order to experiment. Musix (an iOS application) and Rainboard (a physical device) are two new musical instruments built to overcome this and other limitations of existing isomorphic instruments. Musix was developed to allow experimentation with a wide variety of different isomorphic layouts, to assess the advantages and disadvantages of each. The Rainboard consists of a hexagonal array of arcade buttons embedded with RGB-LEDs, which are used to indicate characteristics of the isomorphism currently in use on the Rainboard. The creation of these two instruments/experimentation platforms allows for isomorphic layouts to be explored in ways that are not possible with existing instruments.
@inproceedings{Park2013, author = {Park, Brett and Gerhard, David}, title = {Rainboard and Musix: Building dynamic isomorphic interfaces}, pages = {319--324}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178632}, url = {http://www.nime.org/proceedings/2013/nime2013_65.pdf}, keywords = {isomorphic, mobile application, hexagon, keyboard} }
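The defining property of the isomorphic layouts that Musix and Rainboard let users explore is that a button's pitch depends only on its grid offsets along two interval axes, so every chord or scale shape is transposition-invariant. The sketch below illustrates this with example interval pairs; the specific axis values for any named layout should be checked against the paper.

```python
# Minimal sketch of an isomorphic grid-to-pitch mapping. Interval pairs are
# example values (semitones per column, semitones per row), not definitive
# definitions of the named layouts.
LAYOUTS = {
    "harmonic_table_like": (4, 7),
    "wicki_hayden_like":   (2, 7),
}

def pitch_at(col, row, layout="harmonic_table_like", base_midi=60):
    """MIDI pitch at a grid position: base pitch plus the two axis intervals."""
    dx, dy = LAYOUTS[layout]
    return base_midi + col * dx + row * dy

# The same shape yields the same chord type anywhere on the grid:
# pitch_at(0, 0), pitch_at(1, 0), pitch_at(0, 1) -> 60, 64, 67 (C major)
# pitch_at(2, 0), pitch_at(3, 0), pitch_at(2, 1) -> 68, 72, 75 (A-flat major)
```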
Dalia El-Shimy and Jeremy R. Cooperstock. 2013. Reactive Environment for Network Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 158–163. http://doi.org/10.5281/zenodo.1178506
Abstract
Download PDF DOI
For a number of years, musicians in different locations have been able to perform with one another over a network as though present on the same stage. However, rather than attempt to re-create an environment for Network Music Performance (NMP) that mimics co-present performance as closely as possible, we propose focusing on providing musicians with additional controls that can help increase the level of interaction between them. To this end, we have developed a reactive environment for distributed performance that provides participants dynamic, real-time control over several aspects of their performance, enabling them to change volume levels and experience exaggerated stereo panning. In addition, our reactive environment reinforces a feeling of a “shared space” between musicians. It differs most notably from standard ventures into the design of novel musical interfaces and installations in its reliance on user-centric methodologies borrowed from the field of Human-Computer Interaction (HCI). Not only does this research enable us to closely examine the communicative aspects of performance, it also allows us to explore new interpretations of the network as a performance space. This paper describes the motivation and background behind our project, the work that has been undertaken towards its realization and the future steps that have yet to be explored.
@inproceedings{ElShimy2013, author = {El-Shimy, Dalia and Cooperstock, Jeremy R.}, title = {Reactive Environment for Network Music Performance}, pages = {158--163}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178506}, url = {http://www.nime.org/proceedings/2013/nime2013_66.pdf} }
Bridget Johnson and Ajay Kapur. 2013. Multi-Touch Interfaces for Phantom Source Positioning in Live Sound Diffusion. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 213–216. http://doi.org/10.5281/zenodo.1178570
Abstract
Download PDF DOI
This paper presents a new technique for interface-driven diffusion performance. Details outlining the development of a new tabletop surface-based performance interface, named tactile.space, are discussed. User interface and amplitude panning processes employed in the creation of tactile.space are focused upon, and are followed by a user study-based evaluation of the interface. It is hoped that the techniques described in this paper afford performers and composers an enhanced level of creative expression in the diffusion performance practice. This paper introduces and evaluates tactile.space, a multi-touch performance interface for diffusion built on the BrickTable. It describes how tactile.space implements Vector Base Amplitude Panning to achieve real-time source positioning. The final section of this paper presents the findings of a user study that was conducted by those who performed with the interface, evaluating the interface as a performance tool with a focus on the increased creative expression the interface affords, and directly comparing it to the traditional diffusion user interface.
@inproceedings{Johnson2013, author = {Johnson, Bridget and Kapur, Ajay}, title = {MULTI-TOUCH INTERFACES FOR PHANTOM SOURCE POSITIONING IN LIVE SOUND DIFFUSION}, pages = {213--216}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178570}, url = {http://www.nime.org/proceedings/2013/nime2013_75.pdf}, keywords = {Multi touch, diffusion, VBAP, tabletop surface} }
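tactile.space positions sources with Vector Base Amplitude Panning. The core of pairwise (2D) VBAP is solving for the gains that express the source direction in the basis of the two surrounding loudspeaker directions, then normalising for constant power. A minimal sketch, with speaker and source positions given as azimuths in degrees, follows; it illustrates the standard VBAP calculation rather than the tactile.space code.

```python
# Pairwise (2D) VBAP gain calculation: express the source direction as a
# combination of the two surrounding speaker direction vectors.
import numpy as np

def vbap_pair_gains(source_az_deg, spk1_az_deg, spk2_az_deg):
    """Return normalised gains (g1, g2) for a source between two loudspeakers."""
    def unit(az):
        a = np.radians(az)
        return np.array([np.cos(a), np.sin(a)])
    L = np.column_stack([unit(spk1_az_deg), unit(spk2_az_deg)])  # speaker basis
    g = np.linalg.solve(L, unit(source_az_deg))                  # solve p = L @ g
    return g / np.linalg.norm(g)                                 # constant-power normalisation

# vbap_pair_gains(15, 0, 30) -> roughly equal gains on both speakers
```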
Simon Lui. 2013. A Compact Spectrum-Assisted Human Beatboxing Reinforcement Learning Tool On Smartphone. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 25–28. http://doi.org/10.5281/zenodo.1178600
Abstract
Download PDF DOI
Music is expressive and hard to be described by words. Learning music is therefore not a straightforward task, especially for vocal music such as human beatboxing. People usually learn beatboxing in the traditional way of imitating audio samples without steps and instructions. Spectrogram contains a lot of information about audio, but it is too complicated to be understood in real-time. Reinforcement learning is a psychological method, which makes use of reward and/or punishment as stimulus to train the decision-making process of human. We propose a novel music learning approach based on the reinforcement learning method, which makes use of compact and easy-to-read spectrum information as visual clue to assist human beatboxing learning on smartphone. Experimental result shows that the visual information is easy to understand in real-time, which improves the effectiveness of beatboxing self-learning.
@inproceedings{Lui2013, author = {Lui, Simon}, title = {A Compact Spectrum-Assisted Human Beatboxing Reinforcement Learning Tool On Smartphone}, pages = {25--28}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178600}, url = {http://www.nime.org/proceedings/2013/nime2013_79.pdf}, keywords = {Audio analysis, music learning tool, reinforcement learning, smartphone app, audio information retrieval.} }
Kenneth W.K. Lo, Chi Kin Lau, Michael Xuelin Huang, Wai Wa Tang, Grace Ngai, and Stephen C.F. Chan. 2013. Mobile DJ: a Tangible, Mobile Platform for Active and Collaborative Music Listening. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 217–222. http://doi.org/10.5281/zenodo.1178598
Abstract
Download PDF DOI
Mobile DJ is a music-listening system that allows multiple users to interact and collaboratively contribute to a single song over a social network. Active listening through a tangible interface facilitates users to manipulate musical effects, such as incorporating chords or “scratching” the record. A communication and interaction server further enables multiple users to connect over the Internet and collaborate and interact through their music. User tests indicate that the device is successful at facilitating user immersion into the active listening experience, and that users enjoy the added sensory input as well as the novel way of interacting with the music and each other.
@inproceedings{Lo2013, author = {Lo, Kenneth W.K. and Lau, Chi Kin and Huang, Michael Xuelin and Tang, Wai Wa and Ngai, Grace and Chan, Stephen C.F.}, title = {Mobile DJ: a Tangible, Mobile Platform for Active and Collaborative Music Listening}, pages = {217--222}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178598}, url = {http://www.nime.org/proceedings/2013/nime2013_81.pdf}, keywords = {Mobile, music, interaction design, tangible user interface} }
Baptiste Caramiaux and Atau Tanaka. 2013. Machine Learning of Musical Gestures. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 513–518. http://doi.org/10.5281/zenodo.1178490
Abstract
Download PDF DOI
We present an overview of machine learning (ML) techniques and their application in interactive music and new digital instruments design. We first give to the non-specialist reader an introduction to two ML tasks, classification and regression, that are particularly relevant for gestural interaction. We then present a review of the literature in current NIME research that uses ML in musical gesture analysis and gestural sound control. We describe the ways in which machine learning is useful for creating expressive musical interaction, and in turn why live music performance presents a pertinent and challenging use case for machine learning.
@inproceedings{Caramiaux2013, author = {Caramiaux, Baptiste and Tanaka, Atau}, title = {Machine Learning of Musical Gestures}, pages = {513--518}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178490}, url = {http://www.nime.org/proceedings/2013/nime2013_84.pdf}, keywords = {Machine Learning, Data mining, Musical Expression, Musical Gestures, Analysis, Control, Gesture, Sound} }
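The two ML tasks this overview distinguishes, classification (which gesture is this?) and regression (how much of a continuous control?), can be shown side by side with scikit-learn on synthetic stand-in features. This toy example is only meant to make the distinction concrete, not to reflect any system surveyed in the paper.

```python
# Toy illustration: gesture classification vs. regression of a continuous
# synthesis parameter. Features and targets are synthetic stand-ins for real
# sensor data.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                 # e.g. 6 accelerometer/EMG features per frame
gesture_labels = (X[:, 0] > 0).astype(int)    # two gesture classes (synthetic)
filter_cutoff = 0.5 * X[:, 1] + 0.1           # a continuous control parameter (synthetic)

classifier = SVC().fit(X, gesture_labels)              # classification: which gesture?
regressor = LinearRegression().fit(X, filter_cutoff)   # regression: how much?

frame = rng.normal(size=(1, 6))
print(classifier.predict(frame), regressor.predict(frame))
```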
Jieun Oh and Ge Wang. 2013. LOLOL: Laugh Out Loud On Laptop. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 190–195. http://doi.org/10.5281/zenodo.1178626
Abstract
Download PDF DOI
Significant progress in the domains of speech- and singing-synthesis has enhanced communicative potential of machines. To make computers more vocally expressive, however, we need a deeper understanding of how nonlinguistic social signals are patterned and perceived. In this paper, we focus on laughter expressions: how a phrase of vocalized notes that we call “laughter” may be modeled and performed to implicate nuanced meaning imbued in the acoustic signal. In designing our model, we emphasize (1) using high-level descriptors as control parameters, (2) enabling real-time performable laughter, and (3) prioritizing expressiveness over realism. We present an interactive system implemented in ChucK that allows users to systematically play with the musical ingredients of laughter. A crowd sourced study on the perception of synthesized laughter showed that our model is capable of generating a range of laughter types, suggesting an exciting potential for expressive laughter synthesis.
@inproceedings{Oh2013, author = {Oh, Jieun and Wang, Ge}, title = {LOLOL: Laugh Out Loud On Laptop}, pages = {190--195}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178626}, url = {http://www.nime.org/proceedings/2013/nime2013_86.pdf}, keywords = {laughter, vocalization, synthesis model, real-time controller, interface for musical expression} }
Marco Donnarumma, Baptiste Caramiaux, and Atau Tanaka. 2013. Muscular Interactions. Combining EMG and MMG sensing for musical practice. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 128–131. http://doi.org/10.5281/zenodo.1178504
Abstract
Download PDF DOI
We present the first combined use of the electromyogram (EMG) and mechanomyogram (MMG), two biosignals that result from muscular activity, for interactive music applications. We exploit differences between these two signals, as reported in the biomedical literature, to create bi-modal sonification and sound synthesis mappings that allow performers to distinguish the two components in a single complex arm gesture. We study non-expert players’ ability to articulate the different modalities. Results show that purposely designed gestures and mapping techniques enable novices to rapidly learn to independently control the two biosignals.
@inproceedings{Donnarumma2013, author = {Donnarumma, Marco and Caramiaux, Baptiste and Tanaka, Atau}, title = {Muscular Interactions. Combining {EMG} and MMG sensing for musical practice}, pages = {128--131}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178504}, url = {http://www.nime.org/proceedings/2013/nime2013_90.pdf}, keywords = {NIME, sensorimotor system, EMG, MMG, biosignal, multimodal, mapping} }
Colin Honigman, Andrew Walton, and Ajay Kapur. 2013. The Third Room: A 3D Virtual Music Framework. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 29–34. http://doi.org/10.5281/zenodo.1178556
Abstract
Download PDF DOI
This paper describes a new framework for music creation using 3D audio and visual techniques. It describes the Third Room, which uses a Kinect to place users in a virtual environment to interact with new instruments for musical expression. Users can also interact with smart objects, including the Ember (modified mbira digital interface) and the Fluid (a wireless six degrees of freedom and touch controller). This project also includes new techniques for 3D audio connected to a 3D virtual space using multi-channel speakers and distributed robotic instruments.
@inproceedings{Honigman2013, author = {Honigman, Colin and Walton, Andrew and Kapur, Ajay}, title = {The Third Room: A {3D} Virtual Music Framework}, pages = {29--34}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178556}, url = {http://www.nime.org/proceedings/2013/nime2013_92.pdf}, keywords = {Kinect Camera, Third Space, Interface, Virtual Reality, Natural Interaction, Robotics, Arduino} }
KatieAnna E Wolf and Rebecca Fiebrink. 2013. SonNet: A Code Interface for Sonifying Computer Network Data. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 503–506. http://doi.org/10.5281/zenodo.1178690
Abstract
Download PDF DOI
As any computer user employs the Internet to accomplish everyday activities, a flow of data packets moves across the network, forming their own patterns in response to his or her actions. Artists and sound designers who are interested in accessing that data to make music must currently possess low-level knowledge of Internet protocols and spend significant effort working with low-level networking code. We have created SonNet, a new software tool that lowers these practical barriers to experimenting and composing with network data. SonNet executes packet-sniffing and network connection state analysis automatically, and it includes an easy-to-use ChucK object that can be instantiated, customized, and queried from a user’s own code. In this paper, we present the design and implementation of the SonNet system, and we discuss a pilot evaluation of the system with computer music composers. We also discuss compositional applications of SonNet and illustrate the use of the system in an example composition.
@inproceedings{Wolf2013, author = {Wolf, KatieAnna E and Fiebrink, Rebecca}, title = {SonNet: A Code Interface for Sonifying Computer Network Data}, pages = {503--506}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178690}, url = {http://www.nime.org/proceedings/2013/nime2013_94.pdf}, keywords = {Sonification, network data, compositional tools} }
Koray Tahiroğlu, Nuno N. Correia, and Miguel Espada. 2013. PESI Extended System: In Space, On Body, with 3 Musicians. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 35–40. http://doi.org/10.5281/zenodo.1178666
Abstract
Download PDF DOI
This paper introduces a novel collaborative environment (PESI) in which performers are not only free to move and interact with each other but where their social interactions contribute to the sonic outcome. PESI system is designed for co-located collaboration and provides embodied and spatial opportunities for musical exploration. To evaluate PESI with skilled musicians, a user-test jam session was conducted. Musicians’ comments indicate that the system facilitates group interaction finely to bring up further intentions to musical ideas. Results from our user-test jam session indicate that, through some modification of the ‘in-space’ response to the improvisation, and through more intuitive interactions with the ‘on-body’ mobile instruments, we could make the collaborative music activity a more engaging and active experience. Despite being only user-tested once with musicians, the group interview has raised fruitful discussions on the precise details of the system components. Furthermore, the paradigms of musical interaction and social actions in group activities need to be questioned when we seek design requirements for such a collaborative environment. We introduced a system that we believe can open up new ways of musical exploration in group music activity with a number of musicians. The system brings up the affordances of accessible technologies while creating opportunities for novel design applications to be explored. Our research proposes further development of the system, focusing on movement behavior in long-term interaction between performers. We plan to implement this version and evaluate design and implementation with distinct skilled musicians.
@inproceedings{Tahiroglu2013, author = {Tahiro{\u g}lu, Koray and Correia, Nuno N. and Espada, Miguel}, title = {PESI Extended System: In Space, On Body, with 3 Musicians}, pages = {35--40}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178666}, url = {http://www.nime.org/proceedings/2013/nime2013_97.pdf}, keywords = {Affordances, collaboration, social interaction, mobile music, extended system, NIME} }
Mayank Sanganeria and Kurt Werner. 2013. GrainProc: a real-time granular synthesis interface for live performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Graduate School of Culture Technology, KAIST, pp. 223–226. http://doi.org/10.5281/zenodo.1178652
Abstract
Download PDF DOI
GrainProc is a touchscreen interface for real-time granular synthesis designed for live performance. The user provides a real-time audio input (electric guitar, for example) as a granularization source and controls various synthesis parameters with their fingers or toes. The control parameters are designed to give the user access to intuitive and expressive live granular manipulations.
@inproceedings{Sanganeria2013, author = {Sanganeria, Mayank and Werner, Kurt}, title = {GrainProc: a real-time granular synthesis interface for live performance}, pages = {223--226}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2013}, month = may, publisher = {Graduate School of Culture Technology, KAIST}, address = {Daejeon, Republic of Korea}, issn = {2220-4806}, doi = {10.5281/zenodo.1178652}, url = {http://www.nime.org/proceedings/2013/nime2013_99.pdf}, keywords = {Granular synthesis, touch screen interface, toe control, real-time, CCRMA} }
2012
Jim Murphy, Ajay Kapur, and Dale Carnegie. 2012. Better Drumming Through Calibration: Techniques for Pre-Performance Robotic Percussion Optimization. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180545
Abstract
Download PDF DOI
A problem with many contemporary musical robotic percussion systems lies in the fact that solenoids fail to respond linearly to linear increases in input velocity. This nonlinearity forces performers to individually tailor their compositions to specific robotic drummers. To address this problem, we introduce a method of pre-performance calibration using metaheuristic search techniques. A variety of such techniques are introduced and evaluated and the results of the optimized solenoid-based percussion systems are presented and compared with output from non-calibrated systems.
@inproceedings{Murphy2012, author = {Murphy, Jim and Kapur, Ajay and Carnegie, Dale}, title = {Better Drumming Through Calibration: Techniques for Pre-Performance Robotic Percussion Optimization}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180545}, url = {http://www.nime.org/proceedings/2012/nime2012_100.pdf}, keywords = {musical robotics, human-robot interaction} }
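The calibration problem described above (choosing actuation values so that output loudness rises linearly with input velocity) can be mocked up as a small search. The sketch below uses a made-up saturating response model and plain random search in place of measured loudness and the paper's metaheuristics; it only illustrates the shape of the optimisation.

```python
# Toy pre-performance calibration: find a velocity-to-pulse-width lookup table
# that makes a (simulated) solenoid's loudness respond linearly to velocity.
# The response model and search are stand-ins, not the paper's method.
import numpy as np

rng = np.random.default_rng(1)
TARGET = np.linspace(0.1, 0.9, 8)           # desired linear loudness ramp

def simulated_loudness(pulse_ms):
    """Fake, saturating solenoid response (placeholder for a real measurement)."""
    return 1.0 - np.exp(-pulse_ms / 4.0)

def calibration_error(table_ms):
    return np.abs(simulated_loudness(table_ms) - TARGET).sum()

best = rng.uniform(0.5, 15.0, size=8)
for _ in range(5000):                        # random-search stand-in for a metaheuristic
    candidate = np.clip(best + rng.normal(0, 0.5, size=8), 0.1, 20.0)
    if calibration_error(candidate) < calibration_error(best):
        best = candidate

print(np.round(best, 2))   # pulse widths giving an approximately linear loudness ramp
```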
N. Cameron Britt, Jeff Snyder, and Andrew McPherson. 2012. The EMvibe: An Electromagnetically Actuated Vibraphone. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178221
Abstract
Download PDF DOI
The EMvibe is an augmented vibraphone that allows for continuous control over the amplitude and spectrum of individual notes. The system uses electromagnetic actuators to induce vibrations in the vibraphone’s aluminum tone bars. The tone bars and the electromagnetic actuators are coupled via neodymium magnets affixed to each bar. The acoustic properties of the vibraphone allowed us to develop a very simple, low-cost and powerful amplification solution that requires no heat sinking. The physical design is meant to be portable and robust, and the system can be easily installed on any vibraphone without interfering with normal performance techniques. The system supports multiple interfacing solutions, affording the performer and composer the ability to interact with the EMvibe in different ways depending on the musical context.
@inproceedings{Britt2012, author = {Britt, N. Cameron and Snyder, Jeff and McPherson, Andrew}, title = {The EMvibe: An Electromagnetically Actuated Vibraphone}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178221}, url = {http://www.nime.org/proceedings/2012/nime2012_101.pdf}, keywords = {Vibraphone, augmented instrument, electromagnetic actuation} }
William Brent. 2012. The Gesturally Extended Piano. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178219
Abstract
Download PDF DOI
This paper introduces the Gesturally Extended Piano—an augmented instrument controller that relies on information drawn from performer motion tracking in order to control real-time audiovisual processing and synthesis. Specifically, the positions, heights, velocities, and relative distances and angles of points on the hands and forearms are followed. Technical details and installation of the tracking system are covered, as well as strategies for interpreting and mapping the resulting data in relation to synthesis parameters. Design factors surrounding mapping choices and the interrelation between mapped parameters are also considered.
@inproceedings{Brent2012, author = {Brent, William}, title = {The Gesturally Extended Piano}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178219}, url = {http://www.nime.org/proceedings/2012/nime2012_102.pdf}, keywords = {Augmented instruments, controllers, motion tracking, mapping} }
Lonce Wyse, Suranga Nanayakkara, Paul Seekings, Sim Heng Ong, and Elizabeth Taylor. 2012. Palm-area sensitivity to vibrotactile stimuli above 1 kHz. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178453
Abstract
Download PDF DOI
The upper limit of frequency sensitivity for vibrotactile stimulation of the fingers and hand is commonly accepted as 1 kHz. However, during the course of our research to develop a full-hand vibrotactile musical communication device for the hearing-impaired, we repeatedly found evidence suggesting sensitivity to higher frequencies. Most of the studies on which vibrotactile sensitivity are based have been conducted using sine tones delivered by point-contact actuators. The current study was designed to investigate vibrotactile sensitivity using complex signals and full, open-hand contact with a flat vibrating surface representing more natural environmental conditions. Sensitivity to frequencies considerably higher than previously reported was demonstrated for all the signal types tested. Furthermore, complex signals seem to be more easily detected than sine tones, especially at low frequencies. Our findings are applicable to a general understanding of sensory physiology, and to the development of new vibrotactile display devices for music and other applications.
@inproceedings{Wyse2012, author = {Wyse, Lonce and Nanayakkara, Suranga and Seekings, Paul and Ong, Sim Heng and Taylor, Elizabeth}, title = {Palm-area sensitivity to vibrotactile stimuli above 1~{kHz}}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178453}, url = {http://www.nime.org/proceedings/2012/nime2012_105.pdf}, keywords = {Haptic Sensitivity, Hearing-impaired, Vibrotactile Threshold} }
Roberto Pugliese, Koray Tahiroglu, Callum Goddard, and James Nesfield. 2012. Augmenting human-human interaction in mobile group improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180573
Abstract
Download PDF DOI
In this paper strategies for augmenting the social dimension of collaborative music making, in particular in the form of bodily and situated interaction are presented. Mobile instruments are extended by means of relational descriptors democratically controlled by the group and mapped to sound parameters. A qualitative evaluation approach is described and a user test with participants playing in groups of three conducted. The results of the analysis show core-categories such as familiarity with instrument and situation, shift of focus in activity, family of interactions and different categories of the experience emerging from the interviews. Our evaluation shows the suitability of our approach but also the need for iterating on our design on the basis of the perspectives brought forth by the users. This latter observation confirms the importance of conducting a thorough interview session followed by data analysis on the line of grounded theory.
@inproceedings{Pugliese2012, author = {Pugliese, Roberto and Tahiroglu, Koray and Goddard, Callum and Nesfield, James}, title = {Augmenting human-human interaction in mobile group improvisation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180573}, url = {http://www.nime.org/proceedings/2012/nime2012_108.pdf}, keywords = {Collaborative music making, evaluation methods, mobile music, human-human interaction.} }
Benjamin R. Oliver, Rachel M. van Besouw, and David R. Nicholls. 2012. The ‘Interactive Music Awareness Program’ (IMAP) for Cochlear Implant Users. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180557
Abstract
Download PDF DOI
There is some evidence that structured training can benefit cochlear implant (CI) users’ appraisal of music as well as their music perception abilities. There are currently very limited music training resources available for CI users to explore. This demonstration will introduce delegates to the ‘Interactive Music Awareness Program’ (IMAP) for cochlear implant users, which was developed in response to the need for a client-centered, structured, interactive, creative, open-ended, educational and challenging music (re)habilitation resource.
@inproceedings{Oliver2012, author = {Oliver, Benjamin R. and van Besouw, Rachel M. and Nicholls, David R.}, title = {The `Interactive Music Awareness Program' (IMAP) for Cochlear Implant Users}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180557}, url = {http://www.nime.org/proceedings/2012/nime2012_109.pdf}, keywords = {music, cochlear implants, perception, rehabilitation, auditory training, interactive learning, client-centred software} }
Martin Piñeyro. 2012. Electric Slide Organistrum. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180571
Abstract
Download PDF DOI
The Electric Slide Organistrum (Figure 1) is an acoustic stringed instrument played through a video capture system. The vibration of the instrument string is generated electromagnetically and the pitch variation is achieved by movements carried out by the player in front of a video camera. This instrument results from integrating an ancient technique for the production of sounds, the vibration of a string on a soundbox, with current human-computer interaction technology such as motion detection.
@inproceedings{Pineyro2012, author = {Pi{\~n}eyro, Martin}, title = {Electric Slide Organistrum}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180571}, url = {http://www.nime.org/proceedings/2012/nime2012_114.pdf}, keywords = {Gestural Interface, eBow, Pickup, Bowed string, Electromagnetic actuation} }
Andrew McPherson. 2012. Techniques and Circuits for Electromagnetic Instrument Actuation. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180533
Abstract
Download PDF DOI
There is growing interest in the field of augmented musical instruments, which extend traditional acoustic instruments using new sensors and actuators. Several designs use electromagnetic actuation to induce vibrations in the acoustic mechanism, manipulating the traditional sound of the instrument without external speakers. This paper presents techniques and guidelines for the use of electromagnetic actuation in augmented instruments, including actuator design and selection, interfacing with the instrument, and circuits for driving the actuators. The material in this paper forms the basis of the magnetic resonator piano, an electromagnetically-augmented acoustic grand piano now in its second design iteration. In addition to discussing applications to the piano, this paper aims to provide a toolbox to accelerate the design of new hybrid acoustic-electronic instruments.
@inproceedings{McPherson2012a, author = {McPherson, Andrew}, title = {Techniques and Circuits for Electromagnetic Instrument Actuation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180533}, url = {http://www.nime.org/proceedings/2012/nime2012_117.pdf}, keywords = {augmented instruments, electromagnetic actuation, circuit design, hardware} }
Sidharth Subramanian, Jason Freeman, and Scott McCoid. 2012. LOLbot: Machine Musicianship in Laptop Ensembles. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178425
Abstract
Download PDF DOI
This paper describes a recent addition to LOLC, a text-based environment for collaborative improvisation for laptop ensembles, incorporating a machine musician that plays along with human performers. The machine musician LOLbot analyses the patterns created by human performers and the composite music they create as they are layered in performance. Based on user specified settings, LOLbot chooses appropriate patterns to play with the ensemble, either to add contrast to the existing performance or to be coherent with the rhythmic structure of the performance. The paper describes the background and motivations of the project, outlines the design of the original LOLC environment and describes the architecture and implementation of LOLbot.
@inproceedings{Subramanian2012, author = {Subramanian, Sidharth and Freeman, Jason and McCoid, Scott}, title = {LOLbot: Machine Musicianship in Laptop Ensembles}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178425}, url = {http://www.nime.org/proceedings/2012/nime2012_119.pdf}, keywords = {Machine Musicianship, Live Coding, Laptop Orchestra} }
Diemo Schwarz. 2012. The Sound Space as Musical Instrument: Playing Corpus-Based Concatenative Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180593
Abstract
Download PDF DOI
Corpus-based concatenative synthesis is a fairly recent sound synthesis method, based on descriptor analysis of any number of existing or live-recorded sounds, and synthesis by selection of sound segments from the database matching given sound characteristics. It is well described in the literature, but has been rarely examined for its capacity as a new interface for musical expression. The interesting outcome of such an examination is that the actual instrument is the space of sound characteristics, through which the performer navigates with gestures captured by various input devices. We will take a look at different types of interaction modes and controllers (positional, inertial, audio analysis) and the gestures they afford, and provide a critical assessment of their musical and expressive capabilities, based on several years of musical experience, performing with the CataRT system for real-time CBCS.
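The selection step described in this abstract lends itself to a compact illustration. The Python sketch below is not CataRT itself, only a minimal stand-in under stated assumptions: a hypothetical corpus of three segments, each described by three descriptors, from which the segment closest (by weighted Euclidean distance) to a gesture-derived target point is selected.

```python
import numpy as np

# Hypothetical corpus: each segment described by (spectral centroid Hz, loudness dB, periodicity)
corpus = {
    "seg_001": np.array([1200.0, -18.0, 0.80]),
    "seg_002": np.array([450.0, -12.0, 0.95]),
    "seg_003": np.array([3100.0, -24.0, 0.30]),
}

def select_segment(target, weights=(1.0, 1.0, 1.0)):
    """Return the corpus segment whose descriptors best match the target point."""
    w = np.asarray(weights)
    best, best_dist = None, float("inf")
    for name, desc in corpus.items():
        dist = np.linalg.norm(w * (desc - target))  # weighted Euclidean distance
        if dist < best_dist:
            best, best_dist = name, dist
    return best

# A performer's gesture would be mapped to a target point in this descriptor space:
print(select_segment(np.array([1000.0, -15.0, 0.9])))
```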
@inproceedings{Schwarz2012, author = {Schwarz, Diemo}, title = {The Sound Space as Musical Instrument: Playing Corpus-Based Concatenative Synthesis}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180593}, url = {http://www.nime.org/proceedings/2012/nime2012_120.pdf}, keywords = {CataRT, corpus-based concatenative synthesis, gesture} }
Anne-Marie Skriver Hansen, Hans Jørgen Andersen, and Pirkko Raudaskoski. 2012. Two Shared Rapid Turn Taking Sound Interfaces for Novices. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178275
Abstract
Download PDF DOI
This paper presents the results of user interaction with two explorative music environments (sound systems A and B) that were inspired by the Banda Linda music tradition in two different ways. The sound systems adapted to how a team of two players improvised and made a melody together in an interleaved fashion: systems A and B used a fuzzy logic algorithm and pattern recognition to respond with modifications of a background rhythm. In an experiment with a pen tablet interface as the music instrument, users aged 10-13 were to tap tones and continue each other's melody. The sound systems rewarded users sonically if they managed to add tones to their mutual melody in a rapid turn-taking manner with rhythmical patterns. Videos of experiment sessions show that user teams contributed to a melody in ways that resemble conversation. Interaction data show that each sound system made player teams play in different ways, but players in general had a hard time adjusting to a non-Western music tradition. The paper concludes with a comparison and evaluation of the two sound systems. Finally, it proposes a new approach to the design of collaborative and shared music environments that is based on "listening applications".
@inproceedings{Hansen2012, author = {Hansen, Anne-Marie Skriver and Andersen, Hans J{\o}rgen and Raudaskoski, Pirkko}, title = {Two Shared Rapid Turn Taking Sound Interfaces for Novices}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178275}, url = {http://www.nime.org/proceedings/2012/nime2012_123.pdf}, keywords = {Music improvisation, novices, social learning, interaction studies, interaction design.} }
Eyal Shahar. 2012. SoundStrand: a Tangible Interface for Composing Music with Limited Degrees of Freedom. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180595
Abstract
Download PDF DOI
SoundStrand is a tangible music composition tool. It demonstrates a paradigm developed to enable music composition through the use of tangible interfaces. This paradigm attempts to overcome the contrast between the relatively small number of degrees of freedom usually demonstrated by tangible interfaces and the vast number of possibilities that musical composition presents. SoundStrand comprises a set of physical objects called cells, each representing a musical phrase. Cells can be sequentially connected to each other to create a musical theme. Cells can also be physically manipulated to access a wide range of melodic, rhythmic and harmonic variations. The SoundStrand software assures that as the cells are manipulated, the melodic flow, harmonic transitions and rhythmic patterns of the theme remain musically plausible while preserving the user's intentions.
@inproceedings{Shahar2012, author = {Shahar, Eyal}, title = {SoundStrand: a Tangible Interface for Composing Music with Limited Degrees of Freedom}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180595}, url = {http://www.nime.org/proceedings/2012/nime2012_125.pdf}, keywords = {Tangible, algorithmic, composition, computer assisted} }
Nathan Weitzner, Jason Freeman, Stephen Garrett, and Yan-Ling Chen. 2012. massMobile – an Audience Participation Framework. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178449
Abstract
Download PDF DOI
massMobile is a client-server system for mass audience participation in live performances using smartphones. It was designed to flexibly adapt to a variety of participatory performance needs and to a variety of performance venues. It allows for real time bi-directional communication between performers and audiences utilizing existing wireless 3G, 4G, or WiFi networks. In this paper, we discuss the goals, design, and implementation of the framework, and we describe several projects realized with massMobile.
@inproceedings{Weitzner2012, author = {Weitzner, Nathan and Freeman, Jason and Garrett, Stephen and Chen, Yan-Ling}, title = {massMobile -an Audience Participation Framework}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178449}, url = {http://www.nime.org/proceedings/2012/nime2012_128.pdf}, keywords = {audience participation, network music, smartphone, performance, mobile} }
Jamie Henson, Benjamin Collins, Alexander Giles, Kathryn Webb, Matthew Livingston, and Thomas Mortensson. 2012. Kugelschwung – a Pendulum-based Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178285
Abstract
Download PDF DOI
This paper introduces the concept of Kugelschwung, a digital musical instrument centrally based around the use of pendulums and lasers to create unique and highly interactive electronic ambient soundscapes. Here, we explore the underlying design and physical construction of the instrument, as well as its implementation and feasibility as an instrument in the real world. To conclude, we outline potential expansions to the instrument, describing how its range of applications can be extended to accommodate a variety of musical styles.
@inproceedings{Henson2012, author = {Henson, Jamie and Collins, Benjamin and Giles, Alexander and Webb, Kathryn and Livingston, Matthew and Mortensson, Thomas}, title = {Kugelschwung -a Pendulum-based Musical Instrument}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178285}, url = {http://www.nime.org/proceedings/2012/nime2012_131.pdf}, keywords = {laser, pendulums, instrument design, electronic, sampler, soundscape, expressive performance} }
Patrick McGlynn, Victor Lazzarini, Gordon Delap, and Xiaoyu Chen. 2012. Recontextualizing the Multi-touch Surface. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178349
Abstract
Download PDF DOI
This paper contends that the development of expressive performance interfaces using multi-touch technology has been hindered by an over-reliance upon GUI paradigms. Despite offering rich and robust data output and multiple ways to interpret it, approaches towards using multi-touch technology in digital musical instrument design have been markedly conservative, showing a strong tendency towards modeling existing hardware. This not only negates many of the benefits of multi-touch technology but also creates specific difficulties in the context of live music performance. A case study of two other interface types that have seen considerable musical use – the XY pad and button grid – illustrates the manner in which the implicit characteristics of a device determine the conditions under which it will favorably perform. Accordingly, this paper proposes an alternative approach to multi-touch which emphasizes the implicit strengths of the technology and establishes a philosophy of design around them. Finally, we introduce two toolkits currently being used to assess the validity of this approach.
@inproceedings{McGlynn2012, author = {McGlynn, Patrick and Lazzarini, Victor and Delap, Gordon and Chen, Xiaoyu}, title = {Recontextualizing the Multi-touch Surface}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178349}, url = {http://www.nime.org/proceedings/2012/nime2012_132.pdf}, keywords = {Multi-touch, controllers, mapping, gesture, GUIs, physical interfaces, perceptual & cognitive issues} }
Marco Donnarumma. 2012. Music for Flesh II: informing interactive music performance with the viscerality of the body system. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178245
Abstract
Download PDF DOI
Performing music with a computer and loudspeakers always represents a challenge. The lack of a traditional instrument requires the performer to study idiomatic strategies by which musicianship becomes apparent. On the other hand, the audience needs to decode those strategies so as to achieve an understanding and appreciation of the music being played. The issue is particularly relevant to the performance of music that results from the mediation between biological signals of the human body and physical performance. The present article tackles this concern by demonstrating a new model of musical performance: what I define as biophysical music. This is music generated and played in real time by amplifying and processing the acoustic sound of a performer's muscle contractions. The model relies on an original and open source technology made of custom biosensors and a related software framework. The successful application of these tools is discussed in the practical context of a solo piece for sensors, laptop and loudspeakers. Finally, the compositional strategies that characterize the piece are discussed along with a systematic description of the relevant mapping techniques and their sonic outcome.
@inproceedings{Donnarumma2012, author = {Donnarumma, Marco}, title = {Music for Flesh II: informing interactive music performance with the viscerality of the body system}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178245}, url = {http://www.nime.org/proceedings/2012/nime2012_133.pdf}, keywords = {Muscle sounds, biophysical music, augmented body, realtime performance, human-computer interaction, embodiment.} }
Graham Booth and Michael Gurevich. 2012. Collaborative composition and socially constituted instruments: Ensemble laptop performance through the lens of ethnography. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178215
Abstract
Download PDF DOI
In this paper, we argue that the design of New Interfaces for Musical Expression has much to gain from the study of interaction in ensemble laptop performance contexts using ethnographic techniques. Inspired by recent third-stream research in the field of human computer interaction, we describe a recent ethnomethodologically-informed study of the Birmingham Laptop Ensemble (BiLE), and detail our approach to thick description of the group's working practices. Initial formal analysis of this material sheds light on the fluidity of composer, performer and designer roles within the ensemble and shows how confluences of these roles constitute members' differing viewpoints. We go on to draw out a number of strands of interaction that highlight the essentially complex, socially constructed and value-driven nature of the group's practice and conclude by reviewing the implications of these factors on the design of software tools for laptop ensembles.
@inproceedings{Booth2012, author = {Booth, Graham and Gurevich, Michael}, title = {Collaborative composition and socially constituted instruments: Ensemble laptop performance through the lens of ethnography}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178215}, url = {http://www.nime.org/proceedings/2012/nime2012_136.pdf}, keywords = {Laptop Performance, Ethnography, Ethnomethodology, Human Computer Interaction.} }
Stelios Manousakis. 2012. Network spaces as collaborative instruments: WLAN trilateration for musical echolocation in sound art. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178341
Abstract
Download PDF DOI
This paper presents the system and technology developed for the distributed, micro-telematic, interactive sound art installation, The Network Is A Blind Space. The piece uses sound to explore the physical yet invisible electromagnetic spaces created by Wireless Local Area Networks (WLANs). To this end, the author created a framework for indoor WiFi localization, providing a variety of control data for various types of ‘musical echolocation’. This data, generated mostly by visitors exploring the installation while holding WiFi-enabled devices, is used to convey the hidden properties of wireless networks as dynamic spaces through an artistic experience.
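The installation's own localization framework is not reproduced in the proceedings entry, but the trilateration idea named in the title can be pictured with a short sketch: given estimated distances to three access points at known positions, a linearized least-squares solve recovers an approximate 2D position. All coordinates and distances below are hypothetical, and distance estimation from signal strength is assumed to have happened elsewhere.

```python
import numpy as np

# Known 2D positions of three WiFi access points (hypothetical, in metres)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])
# Estimated distances from the visitor's device to each access point
dists = np.array([5.0, 8.1, 5.0])

def trilaterate(anchors, dists):
    """Estimate (x, y) by linearizing the circle equations against the first anchor."""
    x0, y0 = anchors[0]
    d0 = dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

print(trilaterate(anchors, dists))  # approximately (3, 4) for the distances above
```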
@inproceedings{Manousakis2012, author = {Manousakis, Stelios}, title = {Network spaces as collaborative instruments: {WLAN} trilateration for musical echolocation in sound art}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178341}, url = {http://www.nime.org/proceedings/2012/nime2012_142.pdf}, keywords = {Network music, mobile music, distributed music, interactivity, sound art installation, collaborative instrument, site-specific, electromagnetic signals, WiFi, trilateration, traceroute, echolocation, SuperCollider, Pure Data, RjDj, mapping} }
Ryan McGee, Daniel Ashbrook, and Sean White. 2012. SenSynth: a Mobile Application for Dynamic Sensor to Sound Mapping. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178347
Abstract
Download PDF DOI
SenSynth is an open-source mobile application that allows for arbitrary, dynamic mapping between several sensors and sound synthesis parameters. In addition to synthesis techniques commonly found on mobile devices, SenSynth includes a scanned synthesis source for the audification of sensor data. Using SenSynth, we present a novel instrument based on the audification of accelerometer data and introduce a new means of mobile synthesis control via a wearable magnetic ring. SenSynth also employs a global pitch quantizer so one may adjust the level of virtuosity required to play any instruments created via mapping.
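The global pitch quantizer mentioned in the abstract is the kind of component that is easy to picture in code. The sketch below is not SenSynth's implementation, only a minimal assumption of how a continuous sensor-derived frequency could be snapped toward the nearest note of a chosen scale, with an amount parameter controlling how strictly the mapping is quantized and hence how much pitch accuracy the player must supply.

```python
import math

A4 = 440.0
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees in semitones above the octave root

def quantize_freq(freq_hz, scale=MAJOR, amount=1.0):
    """Snap a continuous frequency toward the nearest scale note.
    amount=0 leaves the input untouched, amount=1 quantizes fully."""
    midi = 69 + 12 * math.log2(freq_hz / A4)            # continuous MIDI pitch
    octave, degree = divmod(midi, 12)
    nearest = min(scale + [scale[0] + 12], key=lambda d: abs(d - degree))
    quantized_midi = 12 * octave + nearest
    out_midi = (1 - amount) * midi + amount * quantized_midi
    return A4 * 2 ** ((out_midi - 69) / 12)

# e.g. an accelerometer axis scaled to a frequency range, then snapped to the scale
print(round(quantize_freq(431.0), 2))   # -> 440.0 with full quantization
```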
@inproceedings{McGee2012, author = {McGee, Ryan and Ashbrook, Daniel and White, Sean}, title = {SenSynth: a Mobile Application for Dynamic Sensor to Sound Mapping}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178347}, url = {http://www.nime.org/proceedings/2012/nime2012_149.pdf}, keywords = {mobile music, sonification, audification, mobile sensors} }
Ian Hattwick and Marcelo Wanderley. 2012. A Dimension Space for Evaluating Collaborative Musical Performance Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178281
Abstract
Download PDF DOI
The configurability and networking abilities of digital musical instruments increase the possibilities for collaboration in musical performances. Computer music ensembles such as laptop orchestras are becoming increasingly common and provide laboratories for the exploration of these possibilities. However, much of the literature regarding the creation of DMIs has been focused on individual expressivity, and their potential for collaborative performance has been under-utilized. This paper makes the case for the benefits of an approach to digital musical instrument design that begins with their collaborative potential, examines several frameworks and sets of principles for the creation of digital musical instruments, and proposes a dimension space representation of collaborative approaches which can be used to evaluate and guide future DMI creation. Several examples of DMIs and compositions are then evaluated and discussed in the context of this dimension space.
@inproceedings{Hattwick2012, author = {Hattwick, Ian and Wanderley, Marcelo}, title = {A Dimension Space for Evaluating Collaborative Musical Performance Systems}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178281}, url = {http://www.nime.org/proceedings/2012/nime2012_150.pdf}, keywords = {dimension space, collaborative, digital musical instrument, dmi, digital music ensemble, dme} }
Chris Carlson and Ge Wang. 2012. Borderlands – An Audiovisual Interface for Granular Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178229
Abstract
Download PDF DOI
Borderlands is a new interface for composing and performing with granular synthesis. The software enables flexible, realtime improvisation and is designed to allow users to engage with sonic material on a fundamental level, breaking free of traditional paradigms for interaction with this technique. The user is envisioned as an organizer of sound, simultaneously assuming the roles of curator, performer, and listener. This paper places the software within the context of painterly interfaces and describes the user interaction design and synthesis methodology.
@inproceedings{Carlson2012, author = {Carlson, Chris and Wang, Ge}, title = {Borderlands -An Audiovisual Interface for Granular Synthesis}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178229}, url = {http://www.nime.org/proceedings/2012/nime2012_152.pdf}, keywords = {Granular synthesis, painterly interfaces, improvisation, organized sound, NIME, CCRMA} }
Ian Hattwick and Kojiro Umezaki. 2012. Approaches to Interaction in a Digital Music Ensemble. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178279
Abstract
Download PDF DOI
The Physical Computing Ensemble was created in order to determine the viability of an approach to musical performance which focuses on the relationships and interactions of the performers. Three performance systems utilizing gestural controllers were designed and implemented, each with a different strategy for performer interaction. These strategies took advantage of the opportunities for collaborative performance inherent in digital musical instruments due to their networking abilities and reconfigurable software. These characteristics allow for the easy implementation of varying approaches to collaborative performance. Ensembles who utilize digital musical instruments provide a fertile environment for the design, testing, and utilization of collaborative performance systems. The three strategies discussed in this paper are the parameterization of musical elements, turn-based collaborative control of sound, and the interaction of musical systems created by multiple performers. Design principles, implementation, and a performance using these strategies are discussed, and the conclusion is drawn that performer interaction and collaboration as a primary focus for system design, composition, and performance is viable.
@inproceedings{Hattwick2012a, author = {Hattwick, Ian and Umezaki, Kojiro}, title = {Approaches to Interaction in a Digital Music Ensemble}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178279}, url = {http://www.nime.org/proceedings/2012/nime2012_153.pdf}, keywords = {Collaborative performance, interaction, digital musical instruments, gestural controller, digital music ensemble, Wii} }
Gabriel Vigliensoni and Marcelo M. Wanderley. 2012. A Quantitative Comparison of Position Trackers for the Development of a Touch-less Musical Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178445
Abstract
Download PDF DOI
This paper presents a comparison of three-dimensional (3D) position tracking systems in terms of some of their performance parameters such as static accuracy and precision, update rate, and shape of the space they sense. The underlying concepts and characteristics of position tracking technologies are reviewed, and four position tracking systems (Vicon, Polhemus, Kinect, and Gametrak), based on different technologies, are empirically compared according to their performance parameters and technical specifications. Our results show that, overall, the Vicon was the position tracker with the best performance.
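The performance parameters compared in the paper can be approximated from raw samples in a simple way. The sketch below is not the authors' analysis code, only one common convention, assuming static accuracy is taken as the mean distance of reported positions from a known reference point and precision as the spread of the readings around their own centroid.

```python
import numpy as np

def static_accuracy_precision(samples, reference):
    """samples: (N, 3) reported positions of a stationary marker; reference: true (x, y, z)."""
    samples = np.asarray(samples, dtype=float)
    errors = np.linalg.norm(samples - np.asarray(reference), axis=1)
    accuracy = errors.mean()                                     # mean distance from the known point
    centroid = samples.mean(axis=0)
    precision = np.linalg.norm(samples - centroid, axis=1).mean()  # mean spread around the centroid
    return accuracy, precision

# Hypothetical readings (mm) from a tracker observing a fixed point at the origin
readings = [[0.4, -0.2, 0.1], [0.5, -0.1, 0.0], [0.3, -0.3, 0.2]]
print(static_accuracy_precision(readings, (0.0, 0.0, 0.0)))
```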
@inproceedings{Vigliensoni2012, author = {Vigliensoni, Gabriel and Wanderley, Marcelo M.}, title = {A Quantitative Comparison of Position Trackers for the Development of a Touch-less Musical Interface}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178445}, url = {http://www.nime.org/proceedings/2012/nime2012_155.pdf}, keywords = {Position tracker, comparison, touch-less, gestural control} }
Martin Marier. 2012. Designing Mappings for Musical Interfaces Using Preset Interpolation. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178343
Abstract
Download PDF DOI
A new method for interpolating between presets is described. The interpolation algorithm called Intersecting N-Spheres Interpolation is simple to compute and its generalization to higher dimensions is straightforward. The current implementation in the SuperCollider environment is presented as a tool that eases the design of many-to-many mappings for musical interfaces. Examples of its uses, including such mappings in conjunction with a musical interface called the sponge, are given and discussed.
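The Intersecting N-Spheres algorithm itself is specified in the paper; as a generic stand-in for the underlying idea of blending stored presets from a low-dimensional control position, the sketch below uses plain inverse-distance weighting, a deliberately simpler scheme than the one the paper proposes. Preset locations and parameter vectors are hypothetical.

```python
import numpy as np

# Hypothetical presets: each stores a control-space location and a vector of synth parameters
presets = [
    {"pos": np.array([0.1, 0.2]), "params": np.array([0.2, 440.0, 0.5])},
    {"pos": np.array([0.8, 0.3]), "params": np.array([0.9, 220.0, 0.1])},
    {"pos": np.array([0.5, 0.9]), "params": np.array([0.5, 880.0, 0.8])},
]

def interpolate(cursor, presets, eps=1e-6):
    """Blend preset parameter vectors with weights inversely proportional to distance."""
    cursor = np.asarray(cursor, dtype=float)
    weights, values = [], []
    for p in presets:
        d = np.linalg.norm(cursor - p["pos"])
        if d < eps:                      # cursor sits exactly on a preset
            return p["params"].copy()
        weights.append(1.0 / d)
        values.append(p["params"])
    weights = np.array(weights) / sum(weights)
    return (weights[:, None] * np.array(values)).sum(axis=0)

print(interpolate([0.4, 0.4], presets))
```

Moving a low-dimensional cursor then continuously blends all stored synthesis parameters at once, which is the many-to-many mapping the abstract refers to.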
@inproceedings{Marier2012, author = {Marier, Martin}, title = {Designing Mappings for Musical Interfaces Using Preset Interpolation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178343}, url = {http://www.nime.org/proceedings/2012/nime2012_159.pdf}, keywords = {Mapping, Preset, Interpolation, Sponge, SuperCollider} }
Alexander Refsum Jensenius and Arve Voldsund. 2012. The Music Ball Project: Concept, Design, Development, Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180579
Abstract
Download PDF DOI
We report on the Music Ball Project, a longterm, exploratory project focused on creating novel instruments/controllers with a spherical shape as the common denominator. Besides a simple and attractive geometrical shape, balls afford many different types of use, including play. This has made our music balls popular among widely different groups of people, from toddlers to seniors, including those that would not otherwise engage with a musical instrument. The paper summarises our experience of designing, constructing and using a number of music balls of various sizes and with different types of sound-producing elements.
@inproceedings{Jensenius2012, author = {Jensenius, Alexander Refsum and Voldsund, Arve}, title = {The Music Ball Project: Concept, Design, Development, Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180579}, url = {http://www.nime.org/proceedings/2012/nime2012_161.pdf}, keywords = {music balls, instruments, controllers, inexpensive} }
James Nesfield. 2012. Strategies for Engagement in Computer-Mediated Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180549
Abstract
Download PDF DOI
A general strategy for encouraging embodied engagement within musical interface design is introduced. A pair of example implementations of this strategy are described, one tangible and one graphical. As part of a potentially larger set within our general approach, two separate relationships are described, termed ‘decay and contribution’ and ‘instability and adjustment’, which are heavily dependent on the action requirements and timeliness of the interaction. By suggesting this process occurs on a timescale of less than one second, it is hoped attentiveness and engagement can be encouraged to the possible benefit of future developments in digital musical instrument design.
@inproceedings{Nesfield2012, author = {Nesfield, James}, title = {Strategies for Engagement in Computer-Mediated Musical Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180549}, url = {http://www.nime.org/proceedings/2012/nime2012_162.pdf}, keywords = {engagement, embodiment, flow, decay, instability, design, NIME} }
Maria Astrinaki, Nicolas d’Alessandro, and Thierry Dutoit. 2012. MAGE – A Platform for Tangible Speech Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178207
Abstract
Download PDF DOI
In this paper, we describe our pioneering work in developing speech synthesis beyond the Text-To-Speech paradigm. We introduce tangible speech synthesis as an alternate way of envisioning how artificial speech content can be produced. Tangible speech synthesis refers to the ability, for a given system, to provide some physicality and interactivity to important speech production parameters. We present MAGE, our new software platform for high-quality reactive speech synthesis, based on statistical parametric modeling and more particularly hidden Markov models. We also introduce a new HandSketch-based musical instrument. This instrument brings pen and posture based interaction on the top of MAGE, and demonstrates a first proof of concept.
@inproceedings{Astrinaki2012, author = {Astrinaki, Maria and d'Alessandro, Nicolas and Dutoit, Thierry}, title = {MAGE --A Platform for Tangible Speech Synthesis}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178207}, url = {http://www.nime.org/proceedings/2012/nime2012_164.pdf}, keywords = {speech synthesis, Hidden Markov Models, tangible interaction, software library, MAGE, HTS, performative} }
Adinda Rosa van ’t Klooster. 2012. The body as mediator of music in the Emotion Light. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178307
Abstract
Download PDF DOI
This paper describes the development of the Emotion Light, an interactive biofeedback artwork where the user listens to a piece of electronic music whilst holding a semi-transparent sculpture that tracks his/her bodily responses and translates these into changing light patterns that emerge from the sculpture. The context of this work is briefly described and the questions it poses are derived from interviews held with audience members.
@inproceedings{tKlooster2012, author = {van 't Klooster, Adinda Rosa}, title = {The body as mediator of music in the Emotion Light}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178307}, url = {http://www.nime.org/proceedings/2012/nime2012_167.pdf}, keywords = {Interactive biofeedback artwork, music and emotion, novel interfaces, practice based research, bodily response, heart rate, biosignals, affective computing, aesthetic interaction, mediating body, biology inspired system} }
Andreas Bergsland and Tone Åse. 2012. Using a seeing/blindfolded paradigm to study audience experiences of live-electronic performances with voice. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178213
Abstract
Download PDF DOI
As a part of the research project Voice Meetings, a solo live-electronic vocal performance was presented for 63 students. Through a mixed method approach applying both written and oral response, feedback from one blindfolded and one seeing audience group was collected and analyzed. There were marked differences between the groups regarding focus, in that the participants in the blindfolded group tended to focus on fewer aspects, have a heightened focus and be less distracted than the seeing group. The seeing group, for its part, focused more on the technological instruments applied in the performance, the performer herself and her actions. This study also shows that there were only minor differences between the groups regarding the experience of skill and control, and argues that this observation can be explained by earlier research on skill in NIMEs.
@inproceedings{Bergsland2012, author = {Bergsland, Andreas and {\AA}se, Tone}, title = {Using a seeing/blindfolded paradigm to study audience experiences of live-electronic performances with voice}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178213}, url = {http://www.nime.org/proceedings/2012/nime2012_168.pdf}, keywords = {Performance, audience reception, acousmatic listening, live-electronics, voice, qualitative research} }
Vangelis Lympouridis. 2012. EnActor: A Blueprint for a Whole Body Interaction Design Software Platform. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178333
Abstract
Download PDF DOI
Through a series of collaborative research projects using Orient, a wireless, inertial sensor-based motion capture system, I have studied the requirements of musicians, dancers, performers and choreographers and identified various design strategies for the realization of Whole Body Interactive (WBI) performance systems. The acquired experience and knowledge led to the design and development of EnActor, a prototype Whole Body Interaction Design software. The software has been realized as a collection of modules that have proved valuable for the design of interactive performance systems that are directly controlled by the body. This paper presents EnActor's layout as a blueprint for the design and development of more sophisticated descendants. A complete video archive of my research projects in WBI performance systems is available at: http://www.inter-axions.com
@inproceedings{Lympouridis2012, author = {Lympouridis, Vangelis}, title = {EnActor: A Blueprint for a Whole Body Interaction Design Software Platform}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178333}, url = {http://www.nime.org/proceedings/2012/nime2012_169.pdf}, keywords = {Whole Body Interaction, Motion Capture, Interactive Performance Systems, Interaction Design, Software Prototype} }
Bongjun Kim and Woon Seung Yeo. 2012. Interactive Mobile Music Performance with Digital Compass. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178303
Abstract
Download PDF DOI
In this paper we introduce an interactive mobile music performance system using the digital compass of mobile phones. A compass-based interface can detect the aiming orientation of performers on stage, allowing us to obtain information on interactions between performers and use it for both musical mappings and on-screen visualizations for the audience. We document and discuss the result of a compass-based mobile music performance, Where Are You Standing, and present an algorithm for a new app to track the performers' positions in real time.
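The aiming detection can be pictured with a short sketch. This is not the authors' app: it simply assumes known stage positions for each performer and compares a phone's compass heading against the bearings to the other players to decide whom the holder is pointing at.

```python
import math

# Hypothetical stage layout: performer positions in metres on a flat stage plan
performers = {"A": (0.0, 0.0), "B": (3.0, 4.0), "C": (-2.0, 5.0)}

def bearing(src, dst):
    """Compass-style bearing (degrees clockwise from north) from src to dst."""
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    return math.degrees(math.atan2(dx, dy)) % 360

def aimed_at(player, heading_deg, tolerance=15.0):
    """Return the performer whose bearing best matches the player's compass heading."""
    src = performers[player]
    best, best_err = None, tolerance
    for name, pos in performers.items():
        if name == player:
            continue
        err = abs((bearing(src, pos) - heading_deg + 180) % 360 - 180)  # wrap-around difference
        if err < best_err:
            best, best_err = name, err
    return best  # None if nobody lies within the tolerance

print(aimed_at("A", 37.0))  # "B" sits at a bearing of roughly 37 degrees from A
```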
@inproceedings{Kim2012, author = {Kim, Bongjun and Yeo, Woon Seung}, title = {Interactive Mobile Music Performance with Digital Compass}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178303}, url = {http://www.nime.org/proceedings/2012/nime2012_170.pdf}, keywords = {Mobile music, mobile phone, smartphone, compass, magnetometer, aiming gesture, musical mapping, musical sonification} }
Michael Rotondo, Nick Kruge, and Ge Wang. 2012. Many-Person Instruments for Computer Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180583
Abstract
Download PDF DOI
In this paper we explore the concept of instruments which are played by more than one person, and present two case studies. We designed, built and performed with Feedbørk, a two-player instrument comprising two iPads which form a video feedback loop, and Barrel, a nine-player instrument made up of eight Gametrak controllers fastened to a steel industrial barrel. By splitting the control of these instruments into distinct but interdependent roles, we allow each individual to easily play a part while retaining a rich complexity of output for the whole system. We found that the relationships between those roles had a significant effect on how the players communicated with each other, and on how the performance was perceived by the audience.
@inproceedings{Rotondo2012, author = {Rotondo, Michael and Kruge, Nick and Wang, Ge}, title = {Many-Person Instruments for Computer Music Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180583}, url = {http://www.nime.org/proceedings/2012/nime2012_171.pdf}, keywords = {Many person musical instruments, cooperative music, asymmetric interfaces, transmodal feedback} }
Jerônimo Barbosa, Filipe Calegario, Verônica Teichrieb, Geber Ramalho, and Patrick McGlynn. 2012. Considering Audience’s View Towards an Evaluation Methodology for Digital Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178209
Abstract
Download PDF DOI
The authors propose the development of a more complete Digital Music Instrument (DMI) evaluation methodology, which provides structured tools for the incremental development of prototypes based on user feedback. This paper emphasizes an important but often ignored stakeholder present in the context of musical performance: the audience. We demonstrate the practical application of an audience focused methodology through a case study (‘Illusio’), discuss the obtained results and possible improvements for future works.
@inproceedings{Barbosa2012, author = {Barbosa, Jer{\^o}nimo and Calegario, Filipe and Teichrieb, Ver{\^o}nica and Ramalho, Geber and McGlynn, Patrick}, title = {Considering Audience's View Towards an Evaluation Methodology for Digital Musical Instruments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178209}, url = {http://www.nime.org/proceedings/2012/nime2012_174.pdf}, keywords = {Empirical methods, quantitative, usability testing and evaluation, digital musical instruments, evaluation methodology, Illusio} }
Kirsty Beilharz and Aengus Martin. 2012. The ‘Interface’ in Site-Specific Sound Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178211
Abstract
Download PDF DOI
In site-specific installation or situated media, a significant part of the "I" in NIME is the environment, the site and the implicit features of site such as humans, weather, materials, natural acoustics, etc. These could be viewed as design constraints, or features, even agency determining the outcome of responsive sound installation works. This paper discusses the notion of interface in public (especially outdoor) installation, starting with the authors’ Sculpture by the Sea Windtraces work using this recent experience as the launch-pad, with reference to ways in which others have approached it (focusing on sensor, weather-activated outdoor installations in a brief traverse of related cases, e.g. works by Garth Paine, James Bulley and Daniel Jones, and David Bowen). This is a dialogical paper on the topic of interface and ‘site’ as the aetiology of interaction/interface/instrument and its type of response (e.g. to environment and audience). While the focus here is on outdoor factors (particularly the climatic environment), indoor site-specific installation also experiences the effects of ambient noise, acoustic context, and audience as integral agents in the interface and perception of the work, its musical expression. The way in which features of the situation are integrated has relevance for others in the NIME community in the design of responsive spaces, art installation, and large-scale or installed instruments in which users, participants, acoustics play a significant role.
@inproceedings{Beilharz2012, author = {Beilharz, Kirsty and Martin, Aengus}, title = {The `Interface' in Site-Specific Sound Installation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178211}, url = {http://www.nime.org/proceedings/2012/nime2012_175.pdf}, keywords = {NIME, site-specific installation, outdoor sound installation} }
Alexander Müller-Rakow and Jochen Fuchs. 2012. The Human Skin as an Interface for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178335
Abstract
Download PDF DOI
This paper discusses the utilization of human skin as a tangible interface for musical expression and collaborative performance. We present an overview of existing different instrument designs that include the skin as the main input. As a further development of a previous exploration [16] we outline the setup and interaction methods of ‘Skintimacy’, an instrument that appropriates the skin for low voltage power transmission in multi-player interaction. Observations deriving from proof-of-concept exploration and performances using the instrument are brought into the reflection and discussion concerning the capabilities and limitations of skin as an input surface.
@inproceedings{Muller2012, author = {M{\"u}ller-Rakow, Alexander and Fuchs, Jochen}, title = {The Human Skin as an Interface for Musical Expression}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178335}, url = {http://www.nime.org/proceedings/2012/nime2012_177.pdf}, keywords = {Skin-based instruments, skin conductivity, collaborative interfaces, embodiment, intimacy, multi-player performance} }
Myunghee Lee, Youngsun Kim, and Gerard Kim. 2012. Empathetic Interactive Music Video Experience. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178313
Abstract
Download PDF DOI
Empatheater is a video playing system that is controlled by multimodal interaction. As the video is played, the user must interact and emulate predefined “events” for the video to continue on. The user is given the illusion of playing an active role in the unraveling video content and can empathize with the performer. In this paper, we report about user experiences with Empatheater when applied to musical video contents.
@inproceedings{Lee2012, author = {Lee, Myunghee and Kim, Youngsun and Kim, Gerard}, title = {Empathetic Interactive Music Video Experience}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178313}, url = {http://www.nime.org/proceedings/2012/nime2012_179.pdf}, keywords = {Music video, Empathy, Interactive video, Musical event, Multimodal interaction.} }
Alexis Clay, Nadine Couture, Myriam Desainte-Catherine, Pierre-Henri Vulliard, Joseph Larralde, and Elodie Decarsin. 2012. Movement to emotions to music: using whole body emotional expression as an interaction for electronic music generation. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178237
Abstract
Download PDF DOI
The augmented ballet project aims at gathering research from several fields and directing them towards a same application case: adding virtual elements (visual and acoustic) to a dance live performance, and allowing the dancer to interact with them. In this paper, we describe a novel interaction that we used in the frame of this project: using the dancer’s movements to recognize the emotions he expresses, and use these emotions to generate musical audio flows evolving in real-time. The originality of this interaction is threefold. First, it covers the whole interaction cycle from the input (the dancer’s movements) to the output (the generated music). Second, this interaction isn’t direct but goes through a high level of abstraction: dancer’s emotional expression is recognized and is the source of music generation. Third, this interaction has been designed and validated through constant collaboration with a choreographer, culminating in an augmented ballet performance in front of a live audience.
@inproceedings{Clay2012, author = {Clay, Alexis and Couture, Nadine and Desainte-Catherine, Myriam and Vulliard, Pierre-Henri and Larralde, Joseph and Decarsin, Elodie}, title = {Movement to emotions to music: using whole body emotional expression as an interaction for electronic music generation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178237}, url = {http://www.nime.org/proceedings/2012/nime2012_180.pdf}, keywords = {Interactive sonification, motion, gesture and music, interaction, live performance, musical human-computer interaction} }
Christoph Trappe. 2012. Making Sound Synthesis Accessible for Children. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178439
Abstract
Download PDF DOI
In this paper we present our project to make sound synthesis and music controller construction accessible to children in a technology design workshop. We present the work we have carried out to develop a graphical user interface, and give account of the workshop we conducted in collaboration with a local primary school. Our results indicate that the production of audio events by means of digital synthesis and algorithmic composition provides a rich and interesting field to be discovered for pedagogical workshops taking a Constructionist approach.
@inproceedings{Trappe2012, author = {Trappe, Christoph}, title = {Making Sound Synthesis Accessible for Children}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178439}, url = {http://www.nime.org/proceedings/2012/nime2012_181.pdf}, keywords = {Child Computer Interaction, Constructionism, Sound and Music Computing, Human-Computer Interface Design, Music Composition and Generation, Interactive Audio Systems, Technology Design Activities.} }
Ståle A. Skogstad, Kristian Nymoen, Yago de Quay, and Alexander Refsum Jensenius. 2012. Developing the Dance Jockey System for Musical Interaction with the Xsens MVN Suit. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180601
Abstract
Download PDF DOI
In this paper we present the Dance Jockey System, a system developed for using a full body inertial motion capture suit (Xsens MVN) in music/dance performances. We present different strategies for extracting relevant postures and actions from the continuous data, and how these postures and actions can be used to control sonic and musical features. The system has been used in several public performances, and we believe it has great potential for further exploration. However, to overcome the current practical and technical challenges when working with the system, it is important to further refine tools and software in order to facilitate making of new performance pieces.
@inproceedings{Skogstad2012, author = {Skogstad, St{\aa}le A. and Nymoen, Kristian and de Quay, Yago and Jensenius, Alexander Refsum}, title = {Developing the Dance Jockey System for Musical Interaction with the Xsens {MVN} Suit}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180601}, url = {http://www.nime.org/proceedings/2012/nime2012_182.pdf} }
Sertan Şentürk, Sang Won Lee, Avinash Sastry, Anosh Daruwalla, and Gil Weinberg. 2012. Crossole: A Gestural Interface for Composition, Improvisation and Performance using Kinect. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178201
Abstract
Download PDF DOI
Its name meaning 'crossword of sound', Crossole is a musical meta-instrument where the music is visualized as a set of virtual blocks that resemble a crossword puzzle. In Crossole, the chord progressions are visually presented as a set of virtual blocks. With the aid of the Kinect sensing technology, a performer controls music by manipulating the crossword blocks using hand movements. The performer can build chords at the high level, traverse over the blocks, step into the low level to control the chord arpeggiations note by note, loop a chord progression, or map gestures to various processing algorithms to enhance the timbral scenery.
@inproceedings{Senturk2012, author = {{\c S}ent{\"u}rk, Sertan and Lee, Sang Won and Sastry, Avinash and Daruwalla, Anosh and Weinberg, Gil}, title = {Crossole: A Gestural Interface for Composition, Improvisation and Performance using Kinect}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178201}, url = {http://www.nime.org/proceedings/2012/nime2012_185.pdf}, keywords = {Kinect, meta-instrument, chord progression, body gesture} }
Jeff Snyder and Andrew McPherson. 2012. The JD-1: an Implementation of a Hybrid Keyboard/Sequencer Controller for Analog Synthesizers. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178421
Abstract
Download PDF DOI
This paper presents the JD-1, a digital controller for analog modular synthesizers. The JD-1 features a capacitive touch-sensing keyboard that responds to continuous variations in finger contact, high-accuracy polyphonic control-voltage outputs, a built-in sequencer, and digital interfaces for connection to MIDI and OSC devices. Design goals include interoperability with a wide range of synthesizers, very high-resolution pitch control, and intuitive control of the sequencer from the keyboard.
@inproceedings{Snyder2012, author = {Snyder, Jeff and McPherson, Andrew}, title = {The JD-1: an Implementation of a Hybrid Keyboard/Sequencer Controller for Analog Synthesizers}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178421}, url = {http://www.nime.org/proceedings/2012/nime2012_187.pdf}, keywords = {keyboard, sequencer, analog synthesizer, capacitive touch sensing} }
Liam O’Sullivan, Dermot Furlong, and Frank Boland. 2012. Introducing CrossMapper: Another Tool for Mapping Musical Control Parameters. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180555
Abstract
Download PDF DOI
Development of new musical interfaces often requires experimentation with the mapping of available controller inputs to output parameters. Useful mappings for a particular application may be complex in nature, with one or more inputs being linked to one or more outputs. Existing development environments are commonly used to program such mappings, while code libraries provide powerful data-stream manipulation. However, room exists for a standalone application with a simpler graphical user interface for dynamically patching between inputs and outputs. This paper presents an early prototype version of a software tool that allows the user to route control signals in real time, using various messaging formats. It is cross-platform and runs as a standalone application in desktop and Android OS versions. The latter allows the users of mobile devices to experiment with mapping signals to and from physical computing components using the inbuilt multi-touch screen. Potential uses therefore include real-time mapping during performance in a more expressive manner than facilitated by existing tools.
@inproceedings{OSullivan2012, author = {O'Sullivan, Liam and Furlong, Dermot and Boland, Frank}, title = {Introducing CrossMapper: Another Tool for Mapping Musical Control Parameters}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180555}, url = {http://www.nime.org/proceedings/2012/nime2012_189.pdf}, keywords = {Mapping, Software Tools, Android.} }
Sébastien Schiesser and Jan C. Schacher. 2012. SABRe: The Augmented Bass Clarinet. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180587
Abstract
Download PDF DOI
An augmented bass clarinet is developed in order to extend the performance and composition potential of the instrument. Four groups of sensors are added: key positions, inertial movement, mouth pressure and trigger switches. The instrument communicates wirelessly with a receiver setup which produces an OSC data stream, usable by any application on a host computer. The SABRe project's intention is to be neither tied to its inventors nor to one single player, but to offer a reference design for a larger community of bass clarinet players and composers. For this purpose, several instruments are made available and a number of composer residencies, workshops, presentations and concerts are organized. These serve evaluation and improvement purposes in order to build a robust and user-friendly extended musical instrument that opens new playing modalities.
@inproceedings{Schiesser2012, author = {Schiesser, S{\'e}bastien and Schacher, Jan C.}, title = {SABRe: The Augmented Bass Clarinet}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180587}, url = {http://www.nime.org/proceedings/2012/nime2012_193.pdf}, keywords = {augmented instrument, bass clarinet, sensors, air pressure, gesture, OSC} }
Dan Overholt. 2012. Musical Interaction Design with the CUI32Stem: Wireless Options and the GROVE system for prototyping new interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180561
Abstract
Download PDF DOI
The Create USB Interface is an open source microcontroller board that can be programmed in C, BASIC, or Arduino languages. The latest version is called the CUI32Stem, and it is designed to work ‘hand-in-hand’ with the GROVE prototyping system that includes a wide range of sensors and actuators. It utilizes a high-performance Microchip® PIC32 microcontroller unit to allow programmable user interfaces. Its development and typical uses are described, focusing on musical interaction design scenarios. Several options for wireless connectivity are described as well, enabling the CUI32Stem to pair with a smartphone and/or a normal computer. Finally, SeeedStudio’s GROVE system is explained, which provides a prototyping system comprised of various elements that incorporate simple plugs, allowing the CUI32Stem to easily connect to the growing collection of open source GROVE transducers.
@inproceedings{Overholt2012, author = {Overholt, Dan}, title = {Musical Interaction Design with the CUI32{S}tem: Wireless Options and the GROVE system for prototyping new interfaces}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180561}, url = {http://www.nime.org/proceedings/2012/nime2012_194.pdf}, keywords = {Musical Interaction Design, NIME education, Microcontroller, Arduino language, StickOS BASIC, Open Sound Control, Microchip PIC32, Wireless, Zigflea, Wifi, 802.11g, Bluetooth, CUI32, CUI32Stem} }
Andrew McPherson. 2012. TouchKeys: Capacitive Multi-Touch Sensing on a Physical Keyboard. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180531
Abstract
Download PDF DOI
Capacitive touch sensing is increasingly used in musical controllers, particularly those based on multi-touch screen interfaces. However, in contrast to the venerable piano-style keyboard, touch screen controllers lack the tactile feedback many performers find crucial. This paper presents an augmentation system for acoustic and electronic keyboards in which multi-touch capacitive sensors are added to the surface of each key. Each key records the position of fingers on the surface, and by combining this data with MIDI note onsets and aftertouch from the host keyboard, the system functions as a multidimensional polyphonic controller for a wide variety of synthesis software. The paper will discuss general capacitive touch sensor design, keyboard-specific implementation strategies, and the development of a flexible mapping engine using OSC and MIDI.
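A toy sketch of the data fusion the abstract describes, combining a key's MIDI note-on with that key's touch position into one polyphonic control message; the field names and message shape are assumptions, not the paper's mapping engine.

```python
# Toy illustration of fusing per-key touch position with MIDI note events
# into one polyphonic control message; field names are assumptions.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class KeyState:
    note: int
    velocity: int
    touch_y: Optional[float] = None   # 0.0 = key front, 1.0 = key back

active: Dict[int, KeyState] = {}

def midi_note_on(note: int, velocity: int) -> None:
    active[note] = KeyState(note, velocity)

def touch_update(note: int, y: float) -> dict:
    """Attach the latest touch position to the sounding note and emit
    a combined message a synth-side mapper could consume."""
    state = active.get(note)
    if state is None:                 # touch on a silent key: ignored here
        return {}
    state.touch_y = y
    return {"note": state.note, "velocity": state.velocity, "touch_y": y}

midi_note_on(60, 100)
print(touch_update(60, 0.25))
```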
@inproceedings{McPherson2012, author = {McPherson, Andrew}, title = {TouchKeys: Capacitive Multi-Touch Sensing on a Physical Keyboard}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180531}, url = {http://www.nime.org/proceedings/2012/nime2012_195.pdf}, keywords = {augmented instruments, keyboard, capacitive sensing, multitouch} }
Chi-Hsia Lai and Koray Tahiroglu. 2012. A Design Approach to Engage with Audience with Wearable Musical Instruments: Sound Gloves. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178309
Abstract
Download PDF DOI
This paper addresses the issue of engaging the audience with new musical instruments in a live performance context. We introduce design concerns that we consider influential for enhancing the communication flow between the audience and the performer. We also propose and put in practice a design approach that considers the use of performance space as a way to engage with the audience. A collaborative project, Sound Gloves, presented here exemplifies such a concept by dissolving the space between performers and audience. Our approach resulted in a continuous interaction between audience and performers, in which the social dynamics were changed in a positive way in a live NIME performance context. Such an approach, we argue, may be considered as one way to further engage and interact with the audience.
@inproceedings{Lai2012, author = {Lai, Chi-Hsia and Tahiroglu, Koray}, title = {A Design Approach to Engage with Audience with Wearable Musical Instruments: Sound Gloves}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178309}, url = {http://www.nime.org/proceedings/2012/nime2012_197.pdf}, keywords = {NIME, wearable electronics, performance, design approach} }
Kristian Nymoen, Arve Voldsund, Ståle A. Skogstad, Alexander Refsum Jensenius, and Jim Torresen. 2012. Comparing Motion Data from an iPod Touch to a High-End Optical Infrared Marker-Based Motion Capture System. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180553
Abstract
Download PDF DOI
The paper presents an analysis of the quality of motion data from an iPod Touch (4th gen.). Acceleration and orientation data derived from internal sensors of an iPod is compared to data from a high-end optical infrared marker-based motion capture system (Qualisys) in terms of latency, jitter, accuracy and precision. We identify some rotational drift in the iPod, and some time lag between the two systems. Still, the iPod motion data is quite reliable, especially for describing relative motion over a short period of time.
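One simple way to estimate the kind of time lag reported here is to cross-correlate the two motion streams; the sketch below does this in plain Python on synthetic signals, and is a hedged illustration of the idea rather than the analysis used in the paper.

```python
# Rough sketch of estimating the time lag between two motion streams
# (e.g., device accelerometer vs. motion capture) by cross-correlation.
# Signals and sample rate below are made up for illustration.
import math

def lag_by_xcorr(a, b, max_lag):
    """Return the lag (in samples) of b relative to a that maximizes
    their cross-correlation; positive lag means b is delayed."""
    best_lag, best_score = 0, -math.inf
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i in range(len(a)):
            j = i + lag
            if 0 <= j < len(b):
                score += a[i] * b[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

fs = 100.0                                   # assumed common sample rate (Hz)
a = [math.sin(2 * math.pi * 2 * t / fs) for t in range(200)]
b = [0.0] * 5 + a[:-5]                       # b lags a by 5 samples (50 ms)
print(lag_by_xcorr(a, b, max_lag=20) / fs)   # ~0.05 s estimated lag
```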
@inproceedings{Nymoen2012, author = {Nymoen, Kristian and Voldsund, Arve and Skogstad, St{\aa}le A. and Jensenius, Alexander Refsum and Torresen, Jim}, title = {Comparing Motion Data from an iPod Touch to a High-End Optical Infrared Marker-Based Motion Capture System}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180553}, url = {http://www.nime.org/proceedings/2012/nime2012_198.pdf} }
Yongki Park, Hoon Heo, and Kyogu Lee. 2012. Voicon: An Interactive Gestural Microphone For Vocal Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180565
Abstract
Download PDF DOI
This paper describes an interactive gestural microphone for vocal performance named Voicon. Voicon is a non-invasive and gesture-sensitive microphone which allows vocal performers to use natural gestures to create vocal augmentations and modifications via sensors embedded in the microphone. Through vocal augmentation and modulation, performers can easily generate the desired amount of vibrato and achieve a wider vocal range. These vocal enhancements deliberately enrich the vocal performance both in its expressiveness and its dynamics. Using Voicon, singers can generate additional vibrato, control the pitch and activate customizable vocal effects with simple and intuitive gestures in live and recording contexts.
@inproceedings{Park2012, author = {Park, Yongki and Heo, Hoon and Lee, Kyogu}, title = {Voicon: An Interactive Gestural Microphone For Vocal Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180565}, url = {http://www.nime.org/proceedings/2012/nime2012_199.pdf}, keywords = {Gesture, Microphone, Vocal Performance, Performance Interface} }
Tomas Henriques. 2012. SONIK SPRING. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178283
Abstract
Download PDF DOI
The Sonik Spring is a portable and wireless digital instrument, created for real-time synthesis and control of sound. It brings together different types of sensory input, linking gestural motion and kinesthetic feedback to the production of sound. The interface consists of a 15-inch spring with unique flexibility, which allows multiple degrees of variation in its shape and length. The design of the instrument is described and its features discussed. Three performance modes are detailed highlighting the instrument’s expressive potential and wide range of functionality.
@inproceedings{Henriques2012, author = {Henriques, Tomas}, title = {SONIK SPRING}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178283}, url = {http://www.nime.org/proceedings/2012/nime2012_20.pdf}, keywords = {Interface for sound and music, Gestural control of sound, Kinesthetic and visual feedback} }
Duncan Menzies and Andrew McPherson. 2012. An Electronic Bagpipe Chanter for Automatic Recognition of Highland Piping Ornamentation. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180537
Abstract
Download PDF DOI
The Highland piping tradition requires the performer to learn and accurately reproduce a diverse array of ornaments, which can be a daunting prospect to the novice piper. This paper presents a system which analyses a player’s technique using sensor data obtained from an electronic bagpipe chanter interface. Automatic recognition of a broad range of piping embellishments allows real-time visual feedback to be generated, enabling the learner to ensure that they are practicing each movement correctly. The electronic chanter employs a robust and responsive infrared (IR) sensing strategy, and uses audio samples from acoustic recordings to produce a high quality bagpipe sound. Moreover, the continuous nature of the IR sensors offers the controller a considerable degree of flexibility, indicating significant potential for the inclusion of extended and novel techniques for musical expression in the future.
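As a much-simplified stand-in for the ornament recognition described above (not the authors' classifier), one could flag notes that are too short to be melody notes as embellishments; the duration threshold below is an illustrative assumption.

```python
# Simplified illustration of flagging very short notes as embellishments
# in a stream of (timestamp, note) events; the threshold is an assumption,
# not the recognition method described in the paper.
GRACE_MAX_DURATION = 0.06   # seconds; shorter notes count as grace notes

def find_grace_notes(events):
    """events: list of (time_sec, note_number), ordered by time.
    Returns (note, duration) pairs short enough to count as embellishments."""
    graces = []
    for (t0, note), (t1, _next_note) in zip(events, events[1:]):
        duration = t1 - t0
        if duration < GRACE_MAX_DURATION:
            graces.append((note, duration))
    return graces

events = [(0.00, 67), (0.50, 71), (0.53, 69), (1.10, 67)]  # 71 is a quick grace note
print(find_grace_notes(events))   # -> [(71, 0.03)]
```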
@inproceedings{Menzies2012, author = {Menzies, Duncan and McPherson, Andrew}, title = {An Electronic Bagpipe Chanter for Automatic Recognition of Highland Piping Ornamentation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180537}, url = {http://www.nime.org/proceedings/2012/nime2012_200.pdf}, keywords = {Great Highland Bagpipe, continuous infrared sensors, ornament recognition, practice tool, SuperCollider, OSC.} }
Nan-Wei Gong, Nan Zhao, and Joseph Paradiso. 2012. A Customizable Sensate Surface for Music Control. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178267
Abstract
Download PDF DOI
This paper describes a novel music control sensate surface, which enables the integration of any musical instrument with a versatile, customizable, and essentially cost-effective user interface. This sensate surface is based on conductive inkjet printing technology, which allows capacitive sensor electrodes and connections between electronics components to be printed onto a large roll of flexible substrate that is unrestricted in length. The high dynamic range capacitive sensing electrodes can infer not only touch, but also near-range, non-contact gestural nuance in a music performance. With this sensate surface, users can “cut” out their desired shapes, “paste” the number of inputs, and customize their controller interface, which can then send signals wirelessly to effects or software synthesizers. We seek to find a solution for integrating the form factor of traditional music controllers seamlessly on top of one’s music instrument, while adding expressiveness to the music performance by sensing and incorporating movements and gestures to manipulate the musical output. We present an example implementation on an electric ukulele and provide several design examples to demonstrate the versatile capabilities of this system.
@inproceedings{Gong2012, author = {Gong, Nan-Wei and Zhao, Nan and Paradiso, Joseph}, title = {A Customizable Sensate Surface for Music Control}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178267}, url = {http://www.nime.org/proceedings/2012/nime2012_201.pdf}, keywords = {Sensate surface, music controller skin, customizable controller surface, flexible electronics} }
Dan Moses Schlessinger. 2012. Concept Tahoe: Microphone Midi Control. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180591
Abstract
Download PDF DOI
We have developed a prototype wireless microphone that provides vocalists with control over their vocal effects directly from the body of the microphone. A wireless microphone has been augmented with six momentary switches, one fader, and three axes of motion and position sensors, all of which provide MIDI output from the wireless receiver. The MIDI data is used to control external vocal effects units such as live loopers, reverbs, distortion pedals, etc. The goal was to provide dramatically increased expressive control to vocal performances, and to address some of the shortcomings of pedal-controlled effects. The addition of gestural controls from the motion sensors opens up new performance possibilities, such as panning the voice simply by pointing the microphone in one direction or another. The result is a hybrid microphone-musical instrument which has received extremely positive responses from vocalists in numerous informal workshops.
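A hedged sketch of the gestural panning idea mentioned above: mapping the microphone's pointing angle to a MIDI pan controller value. The angle range and the use of CC 10 are illustrative assumptions, not details from the paper.

```python
# Toy mapping from a microphone's yaw angle to a MIDI pan controller value,
# as a stand-in for the gestural panning the abstract mentions.
PAN_CC = 10    # General MIDI pan controller number (assumed choice)

def angle_to_pan_cc(angle_deg: float) -> int:
    """Map -45..+45 degrees of yaw to pan CC values 0..127 (centre = 64)."""
    a = max(-45.0, min(45.0, angle_deg))
    return int(round((a + 45.0) / 90.0 * 127))

print(angle_to_pan_cc(0.0))    # centred -> 64
print(angle_to_pan_cc(45.0))   # hard right -> 127
```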
@inproceedings{Schlessinger2012, author = {Schlessinger, Dan Moses}, title = {Concept Tahoe: Microphone Midi Control}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180591}, url = {http://www.nime.org/proceedings/2012/nime2012_202.pdf}, keywords = {NIME, Sennheiser, Concept Tahoe, MIDI, control, microphone} }
Qi Yang and Georg Essl. 2012. Augmented Piano Performance using a Depth Camera. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178455
Abstract
Download PDF DOI
We augment the piano keyboard with a 3D gesture space using Microsoft Kinect for sensing and top-down projection for visual feedback. This interface provides multi-axial gesture controls to enable continuous adjustments to multiple acoustic parameters such as those on the typical digital synthesizers. We believe that using gesture control is more visceral and aesthetically pleasing, especially during concert performance where the visibility of the performer’s action is important. Our system can also be used for other types of gesture interaction as well as for pedagogical applications.
@inproceedings{Yang2012, author = {Yang, Qi and Essl, Georg}, title = {Augmented Piano Performance using a Depth Camera}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178455}, url = {http://www.nime.org/proceedings/2012/nime2012_203.pdf}, keywords = {NIME, piano, depth camera, musical instrument, gesture, tabletop projection} }
Jim Torresen, Øyvind N. Hauback, Dan Overholt, and Alexander Refsum Jensenius. 2012. Development and Evaluation of a ZigFlea-based Wireless Transceiver Board for CUI32. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178433
Abstract
Download PDF DOI
We present a new wireless transceiver board for the CUI32 sensor interface, aimed at creating a solution that is flexible, reliable, and with little power consumption. Communication with the board is based on the ZigFlea protocol and it has been evaluated on a CUI32 using the StickOS operating system. Experiments show that the total sensor data collection time is linearly increasing with the number of sensor samples used. A data rate of 0.8 kbit/s is achieved for wirelessly transmitting three axes of a 3D accelerometer. Although this data rate is low compared to other systems, our solution benefits from ease-of-use and stability, and is useful for applications that are not time-critical.
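To make the reported throughput figure concrete, the small helper below relates a link budget to an achievable sensor frame rate; the 16-bit sample size and zero protocol overhead are illustrative assumptions, not the paper's packet format.

```python
# Back-of-the-envelope helper relating a wireless link's throughput to the
# achievable sensor frame rate; packet sizes are illustrative assumptions.
def frames_per_second(link_bps: float, axes: int, bits_per_sample: int,
                      overhead_bits: int = 0) -> float:
    """How many complete sensor frames per second fit in the link budget."""
    bits_per_frame = axes * bits_per_sample + overhead_bits
    return link_bps / bits_per_frame

# With the reported ~0.8 kbit/s and three 16-bit accelerometer axes:
print(frames_per_second(800, axes=3, bits_per_sample=16))  # ~16.7 frames/s
```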
@inproceedings{Torresen2012, author = {Torresen, Jim and Hauback, Øyvind N. and Overholt, Dan and Jensenius, Alexander Refsum}, title = {Development and Evaluation of a ZigFlea-based Wireless Transceiver Board for CUI32}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178433}, url = {http://www.nime.org/proceedings/2012/nime2012_205.pdf}, keywords = {wireless sensing, CUI32, StickOS, ZigBee, ZigFlea} }
Nicolas Makelberge, Álvaro Barbosa, André Perrotta, and Luís Sarmento Ferreira. 2012. Perfect Take: Experience design and new interfaces for musical expression. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178339
Abstract
Download PDF DOI
“Perfect Take” is a public installation of networked acoustic instruments that lets composers from all over the world exhibit their MIDI works by means of the Internet. The primary aim of this system is to offer composers a way to have works exhibited and recorded in venues, and with technologies, not otherwise accessible to them under normal circumstances. The secondary aim of this research is to highlight experience design as a complement to interaction design, and a shift of focus from the functionality of a specific gestural controller towards the environments, events and processes that it is part of.
@inproceedings{Makelberge2012, author = {Makelberge, Nicolas and {\'A}lvaro Barbosa and Perrotta, Andr{\'e} and Ferreira, Lu{\'\i}s Sarmento}, title = {Perfect Take: Experience design and new interfaces for musical expression}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178339}, url = {http://www.nime.org/proceedings/2012/nime2012_208.pdf}, keywords = {NIME, Networked Music, MIDI, Disklavier, music collaboration, creativity} }
Yoonchang Han, Jinsoo Na, and Kyogu Lee. 2012. FutureGrab: A wearable subtractive synthesizer using hand gesture. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178271
Abstract
Download PDF DOI
FutureGrab is a new wearable musical instrument for live performance that is highly intuitive while still generating an interesting sound through subtractive synthesis. Its sound effects resemble human vowel pronunciation and are mapped to hand gestures that resemble the mouth shapes used to pronounce the corresponding vowels. FutureGrab also provides all the features necessary for a lead musical instrument, such as pitch control, trigger, glissando and key adjustment. In addition, a pitch indicator was added to give visual feedback to the performer, which can reduce mistakes during live performances. This paper describes the motivation, system design, mapping strategy and implementation of FutureGrab, and evaluates the overall experience.
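A minimal sketch of the general idea of mapping a hand gesture onto vowel-like formants (not the paper's mapping): a normalized hand-openness value interpolates between two textbook-style formant pairs.

```python
# Illustrative sketch of mapping a normalized hand-openness value onto a pair
# of vowel formant frequencies by interpolation; the vowel targets are rough
# textbook-style approximations, not the mapping used in the paper.
VOWELS = {          # (F1, F2) in Hz, rough adult averages
    "i": (300, 2300),
    "a": (750, 1200),
}

def hand_to_formants(openness: float) -> tuple:
    """openness 0.0 (closed hand -> /i/) .. 1.0 (open hand -> /a/)."""
    x = max(0.0, min(1.0, openness))
    (f1_a, f2_a), (f1_b, f2_b) = VOWELS["i"], VOWELS["a"]
    return (f1_a + x * (f1_b - f1_a), f2_a + x * (f2_b - f2_a))

print(hand_to_formants(0.5))   # halfway between /i/ and /a/
```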
@inproceedings{Han2012a, author = {Han, Yoonchang and Na, Jinsoo and Lee, Kyogu}, title = {FutureGrab: A wearable subtractive synthesizer using hand gesture}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178271}, url = {http://www.nime.org/proceedings/2012/nime2012_209.pdf}, keywords = {Wearable musical instrument, Pure Data, gestural synthesis, formant synthesis, data-glove, visual feedback, subtractive synthesis} }
Red Wierenga. 2012. A New Keyboard-Based, Sensor-Augmented Instrument For Live Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178451
Abstract
Download PDF DOI
In an attempt to utilize the expert pianist’s technique and spare bandwidth, a new keyboard-based instrument augmented by sensors suggested by the examination of existing acoustic instruments is introduced. The complete instrument includes a keyboard, various pedals and knee levers, several bowing controllers, and breath and embouchure sensors connected to an Arduino microcontroller that sends sensor data to a laptop running Max/MSP, where custom software maps the data to synthesis algorithms. The audio is output to a digital amplifier powering a transducer mounted on a resonator box to which several of the sensors are attached. Careful sensor selection and mapping help to facilitate performance.
@inproceedings{Wierenga2012, author = {Wierenga, Red}, title = {A New Keyboard-Based, Sensor-Augmented Instrument For Live Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178451}, url = {http://www.nime.org/proceedings/2012/nime2012_211.pdf}, keywords = {Gesture, controllers, Digital Musical Instrument, keyboard} }
Matthieu Savary, Diemo Schwarz, and Denis Pellerin. 2012. DIRTI —Dirty Tangible Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180585
Abstract
Download PDF DOI
Dirty Tangible Interfaces (DIRTI) are a new concept in interface design that forgoes the dogma of repeatability in favor of a richer and more complex experience, constantly evolving, never reversible, and infinitely modifiable. We built a prototype based on granular or liquid interaction material placed in a glass dish, that is analyzed by video tracking for its 3D relief. This relief, and the dynamic changes applied to it by the user, are interpreted as activation profiles to drive corpus-based concatenative sound synthesis, allowing one or more players to mold sonic landscapes and to plow through them in an inherently collaborative, expressive, and dynamic experience.
@inproceedings{Savary2012, author = {Savary, Matthieu and Schwarz, Diemo and Pellerin, Denis}, title = {DIRTI ---Dirty Tangible Interfaces}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180585}, url = {http://www.nime.org/proceedings/2012/nime2012_212.pdf}, keywords = {Tangible interface, Corpus-based concatenative synthesis, Non-standard interaction} }
Loïc Reboursière, Otso Lähdeoja, Thomas Drugman, Stéphane Dupont, Cécile Picard-Limpens, and Nicolas Riche. 2012. Left and right-hand guitar playing techniques detection. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180575
Abstract
Download PDF DOI
In this paper we present a series of algorithms developed to detect the following guitar playing techniques: bend, hammer-on, pull-off, slide, palm muting and harmonic. Detection of playing techniques can be used to control external content (i.e., audio loops and effects, videos, light events, etc.), as well as to write a real-time score or to assist guitar novices in their learning process. The guitar used is a Godin Multiac with an under-saddle RMC hexaphonic piezo pickup (one pickup per string, i.e., six mono signals).
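As an illustration of the simplest of these detections (and not the authors' algorithms), a bend can be spotted as a smooth upward pitch drift within a single sustained note on one string; the threshold below is an assumption.

```python
# Illustrative bend detection from a monophonic per-string pitch track:
# a bend is flagged when the pitch drifts upward within one sustained note.
# The threshold is an assumption, not the authors' algorithm.
BEND_MIN = 0.3    # semitones above the initial pitch before we call it a bend

def detect_bend(pitch_track):
    """pitch_track: per-frame pitch of one sustained note, in MIDI semitones.
    Returns the frame index where a bend starts, or None."""
    start = pitch_track[0]
    for i, p in enumerate(pitch_track):
        if p - start > BEND_MIN:
            return i
    return None

note = [64.0, 64.02, 64.1, 64.3, 64.6, 64.9, 65.0]   # string bent up ~1 semitone
print(detect_bend(note))   # frame where the bend exceeds the threshold
```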
@inproceedings{Reboursiere2012, author = {Reboursi{\`e}re, Lo{\"\i}c and L{\"a}hdeoja, Otso and Drugman, Thomas and Dupont, St{\'e}phane and Picard-Limpens, C{\'e}cile and Riche, Nicolas}, title = {Left and right-hand guitar playing techniques detection}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180575}, url = {http://www.nime.org/proceedings/2012/nime2012_213.pdf}, keywords = {Guitar audio analysis, playing techniques, hexaphonic pickup, controller, augmented guitar} }
Hongchan Choi, John Granzow, and Joel Sadler. 2012. The Deckle Project : A Sketch of Three Sensors. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178235
Abstract
Download PDF DOI
The Deckle Group is an ensemble that designs, builds and performs on electroacoustic drawing boards. These drawing surfaces are augmented with Satellite CCRMA BeagleBoards and Arduinos [1]. Piezo microphones are used in conjunction with other sensors to produce sounds that are coupled tightly to mark-making gestures. Position tracking is achieved with infra-red object tracking, conductive fabric and a magnetometer.
@inproceedings{Choi2012, author = {Choi, Hongchan and Granzow, John and Sadler, Joel}, title = {The Deckle Project : A Sketch of Three Sensors}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178235}, url = {http://www.nime.org/proceedings/2012/nime2012_214.pdf}, keywords = {Deckle, BeagleBoard, Drawing, Sonification, Performance, Audiovisual, Gestural Interface} }
Zacharias Vamvakousis and Rafael Ramirez. 2012. Temporal Control In the EyeHarp Gaze-Controlled Musical Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178443
Abstract
Download PDF DOI
In this paper we describe the EyeHarp, a new gaze-controlled musical instrument, and the new features we recently added to its design. In particular, we report on the EyeHarp’s new controls, the arpeggiator, the new remote eye-tracking device, and the EyeHarp’s capacity to act as a MIDI controller for any VST plugin virtual instrument. We conducted an evaluation of the EyeHarp’s temporal accuracy by monitoring 10 users while they performed a melody task, comparing their gaze-control accuracy with their accuracy using a computer keyboard. We report on the results of the evaluation.
@inproceedings{Vamvakousis2012, author = {Vamvakousis, Zacharias and Ramirez, Rafael}, title = {Temporal Control In the EyeHarp Gaze-Controlled Musical Interface}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178443}, url = {http://www.nime.org/proceedings/2012/nime2012_215.pdf}, keywords = {Eye-tracking systems, music interfaces, gaze interaction} }
Yoon Chung Han and Byeong-jun Han. 2012. Virtual Pottery: An Interactive Audio-Visual Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178273
Abstract
Download PDF DOI
Virtual Pottery is an interactive audiovisual piece that uses hand gesture to create 3D pottery objects and sound shapes. Using the OptiTrack motion capture (Rigid Body) system at TransLab in UCSB, performers can put on a glove with attached trackers, move their hand along the x, y, and z axes, and create their own sound pieces. Performers can also manipulate their pottery pieces in real time and change their arrangement on the musical score interface in order to create a continuous musical composition. In this paper we address the relationship between body, sound and 3D shapes. We also describe the origin of Virtual Pottery and its design process, discuss its aesthetic value and musical sound synthesis system, and evaluate the overall experience.
@inproceedings{Han2012, author = {Han, Yoon Chung and Han, Byeong-jun}, title = {Virtual Pottery: An Interactive Audio-Visual Installation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178273}, url = {http://www.nime.org/proceedings/2012/nime2012_216.pdf}, keywords = {Virtual Pottery, virtual musical instrument, sound synthesis, motion and gesture, pottery, motion perception, interactive sound installation.} }
Chris Nash and Alan Blackwell. 2012. Liveness and Flow in Notation Use. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180547
Abstract
Download PDF DOI
This paper presents concepts, models, and empirical findings relating to liveness and flow in the user experience of systems mediated by notation. Results from an extensive two-year field study of over 1,000 sequencer and tracker users, combining interaction logging, user surveys, and a video study, are used to illustrate the properties of notations and interfaces that facilitate greater immersion in musical activities and domains, borrowing concepts from programming to illustrate the role of visual and musical feedback, from the notation and domain respectively. The Cognitive Dimensions of Notations framework and Csikszentmihalyi’s flow theory are combined to demonstrate how non-realtime, notation-mediated interaction can support focused, immersive, energetic, and intrinsically-rewarding musical experiences, and to what extent they are supported in the interfaces of music production software. Users are shown to maintain liveness through a rapid, iterative edit-audition cycle that integrates audio and visual feedback.
@inproceedings{Nash2012, author = {Nash, Chris and Blackwell, Alan}, title = {Liveness and Flow in Notation Use}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180547}, url = {http://www.nime.org/proceedings/2012/nime2012_217.pdf}, keywords = {notation, composition, liveness, flow, feedback, sequencers, DAWs, soundtracking, performance, user studies, programming} }
Shawn Trail, Tiago Fernandes Tavares, Dan Godlovitch, and George Tzanetakis. 2012. Direct and surrogate sensing for the Gyil african xylophone. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178437
Abstract
Download PDF DOI
The Gyil is a pentatonic African wooden xylophone with 14-15 keys. The work described in this paper has been motivated by three applications: computer analysis of Gyil performance, live improvised electro-acoustic music incorporating the Gyil, and hybrid sampling and physical modeling. In all three of these cases, detailed information about what is played on the Gyil needs to be digitally captured in real-time. We describe a direct sensing apparatus that can be used to achieve this. It is based on contact microphones and is informed by the specific characteristics of the Gyil. An alternative approach based on indirect acquisition is to apply polyphonic transcription on the signal acquired by a microphone without requiring the instrument to be modified. The direct sensing apparatus we have developed can be used to acquire ground truth for evaluating different approaches to polyphonic transcription and help create a “surrogate” sensor. Some initial results comparing different strategies to polyphonic transcription are presented.
@inproceedings{Trail2012, author = {Trail, Shawn and Tavares, Tiago Fernandes and Godlovitch, Dan and Tzanetakis, George}, title = {Direct and surrogate sensing for the Gyil african xylophone}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178437}, url = {http://www.nime.org/proceedings/2012/nime2012_222.pdf}, keywords = {hyperinstruments, indirect acquisition, surrogate sensors, computational ethnomusicology, physical modeling, performance analysis} }
David Gerhard and Brett Park. 2012. Instant Instrument Anywhere: A Self-Contained Capacitive Synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178261
Abstract
Download PDF DOI
The Instant Instrument Anywhere (IIA) is a small device which can be attached to any metal object to create an electronic instrument. The device uses capacitive sensing to detect the proximity of the player’s body to the metal object, and sound is generated through a surface transducer which can be attached to any flat surface. Because the capacitive sensor can be any shape or size, absolute capacitive thresholding is not possible, since the baseline capacitance will change. Instead, we use a differential-based moving sum threshold which can rapidly adjust to changes in the environment or be re-calibrated to a new metal object. We show that this dynamic threshold is effective in rejecting environmental noise and rapidly adapting to new objects. We also present details for constructing Instant Instruments Anywhere, including using a smartphone as the synthesis engine and power supply.
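A minimal sketch of a differential-based moving-sum threshold of the kind the abstract names: recent sample-to-sample changes are summed so that slow baseline drift is ignored while a sustained rise triggers a touch. Window length, threshold and the sample values are illustrative assumptions, not the authors' calibration.

```python
# Sketch of a differential-based moving-sum touch detector: instead of
# comparing raw capacitance to a fixed value, sum recent sample-to-sample
# differences so the baseline can drift freely.
from collections import deque

class TouchDetector:
    def __init__(self, window: int = 8, threshold: float = 12.0):
        self.diffs = deque(maxlen=window)   # recent sample-to-sample changes
        self.prev = None
        self.threshold = threshold

    def update(self, raw: float) -> bool:
        """Feed one raw capacitance reading; return True while a touch
        (a sustained rise over the moving window) is detected."""
        if self.prev is not None:
            self.diffs.append(raw - self.prev)
        self.prev = raw
        return sum(self.diffs) > self.threshold

det = TouchDetector()
baseline = [100.0, 100.2, 99.9, 100.1]   # slow environmental drift: no trigger
touch = [104.0, 109.0, 115.0, 118.0]     # hand approaching the object: trigger
for sample in baseline + touch:
    print(sample, det.update(sample))
```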
@inproceedings{Gerhard2012, author = {Gerhard, David and Park, Brett}, title = {Instant Instrument Anywhere: A Self-Contained Capacitive Synthesizer}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178261}, url = {http://www.nime.org/proceedings/2012/nime2012_223.pdf}, keywords = {Capacitive Sensing, Arduino} }
Matti Luhtala, Ilkka Niemeläinen, Johan Plomp, Markku Turunen, and Julius Tuomisto. 2012. Studying Aesthetics in a Musical Interface Design Process Through ‘Aesthetic Experience Prism.’ Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178331
Abstract
Download PDF DOI
This paper introduces ‘The Aesthetic Experience Prism’, a framework for studying how components of aesthetic experience materialize in the models of interaction of novel musical interfaces, as well as how the role of aesthetics could be made more explicit in the processes of designing interaction for musical technologies. The Aesthetic Experience Prism makes use of Arthur Danto’s framework of aesthetic experience, which consists of three conceptual entities: (1) metaphor; (2) expression; and (3) style. In this paper we present the key questions driving the research, the theoretical background, the artistic research approach and the user research activities. In the DIYSE project a proof-of-concept music creation system prototype was developed in a collaborative design setting. The prototype provides the performer with the means to create music with minimum effort while allowing for versatile interaction. We argue that by using an artistic research approach specifically targeting design for aesthetic experience, we were able to carry the knowledge from early design ideas through to the resulting technology products, in which the metaphors, expression and style of the models of interaction play an apparent role.
@inproceedings{Luhtala2012, author = {Luhtala, Matti and Niemel{\"a}inen, Ilkka and Plomp, Johan and Turunen, Markku and Tuomisto, Julius}, title = {Studying Aesthetics in a Musical Interface Design Process Through `Aesthetic Experience Prism'}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178331}, url = {http://www.nime.org/proceedings/2012/nime2012_226.pdf}, keywords = {Aesthetics, Interaction Design, Artistic Research, Exploration} }
Avrum Hollinger and Marcelo M. Wanderley. 2012. Optoelectronic Acquisition and Control Board for Musical Applications. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178289
Abstract
Download PDF DOI
A modular and reconfigurable hardware platform for analog optoelectronic signal acquisition is presented. Its intended application is for fiber optic sensing in electronic musical interfaces, however the flexible design enables its use with a wide range of analog and digital sensors. Multiple gain and multiplexing stages as well as programmable analog and digital hardware blocks allow for the acquisition, processing, and communication of single-ended and differential signals. Along with a hub board, multiple acquisition boards can be connected to modularly extend the system’s capabilities to suit the needs of the application. Fiber optic sensors and their application in DMIs are briefly discussed, as well as the use of the hardware platform with specific musical interfaces.
@inproceedings{Hollinger2012, author = {Hollinger, Avrum and Wanderley, Marcelo M.}, title = {Optoelectronic Acquisition and Control Board for Musical Applications}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178289}, url = {http://www.nime.org/proceedings/2012/nime2012_228.pdf}, keywords = {fiber optic sensing, analog signal acquisition, musical interface, MRI-compatible} }
Gascia Ouzounian, R. Benjamin Knapp, Eric Lyon, and Luke DuBois. 2012. Music for Sleeping & Waking Minds. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180559
Abstract
Download PDF DOI
Music for Sleeping & Waking Minds (2011-2012) is a new, overnight work in which four performers fall asleep while wearing custom-designed EEG sensors which monitor their brainwave activity. The data gathered from the EEG sensors is applied in real time to different audio and image signal processing functions, resulting in a continuously evolving multi-channel sound environment and visual projection. This material serves as an audiovisual description of the individual and collective neurophysiological state of the ensemble. Audiences are invited to experience the work in different states of attention: while alert and asleep, resting and awakening.
@inproceedings{Ouzounian2012, author = {Ouzounian, Gascia and Knapp, R. Benjamin and Lyon, Eric and DuBois, Luke}, title = {Music for Sleeping \& Waking Minds}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180559}, url = {http://www.nime.org/proceedings/2012/nime2012_229.pdf}, keywords = {EEG, sleep, dream, biosignals, bio art, consciousness, BCI} }
Kevin Schlei. 2012. TC-11: A Programmable Multi-Touch Synthesizer for the iPad. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180589
Abstract
Download PDF DOI
This paper describes the design and realization of TC-11, a software instrument based on programmable multi-point controllers. TC-11 is a modular synthesizer for the iPad that uses multi-touch and device motion sensors for control. It has a robust patch programming interface that centers around multi-point controllers, providing powerful flexibility. This paper details the origin, design principles, programming implementation, and performance result of TC-11.
@inproceedings{Schlei2012, author = {Schlei, Kevin}, title = {TC-11: A Programmable Multi-Touch Synthesizer for the iPad}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180589}, url = {http://www.nime.org/proceedings/2012/nime2012_230.pdf}, keywords = {TC-11, iPad, multi-touch, multi-point, controller mapping, synthesis programming} }
Yuya Kikukawa, Takaharu Kanai, Tatsuhiko Suzuki, Toshiki Yoshiike, Tetsuaki Baba, and Kumiko Kushiyama. 2012. PocoPoco: A Kinetic Musical Interface With Electro-Magnetic Levitation Units. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178301
Abstract
Download PDF DOI
We developed original solenoid actuator units with several built-in sensors, and produced a box-shaped musical interface, “PocoPoco”, using 16 of these units as a universal input/output device. We applied the up-and-down movement of the solenoid units and the user’s intuitive input to the musical interface. Through the transformation of the physical interface, we can apply the movement of the units to new interaction designs. At the same time, we intend to suggest a new interface whose movement itself can attract the user.
@inproceedings{Kikukawa2012, author = {Kikukawa, Yuya and Kanai, Takaharu and Suzuki, Tatsuhiko and Yoshiike, Toshiki and Baba, Tetsuaki and Kushiyama, Kumiko}, title = {PocoPoco: A Kinetic Musical Interface With Electro-Magnetic Levitation Units}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178301}, url = {http://www.nime.org/proceedings/2012/nime2012_232.pdf}, keywords = {musical interface, interaction design, tactile, moving, kinetic} }
Doug Van Nort, Jonas Braasch, and Pauline Oliveros. 2012. Mapping to musical actions in the FILTER system. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180551
Abstract
Download PDF DOI
In this paper we discuss aspects of our work in developing performance systems that are geared towards human-machine co-performance with a particular emphasis on improvisation. We present one particular system, FILTER, which was created in the context of a larger project related to artificial intelligence and performance, and has been tested in the context of our electro-acoustic performance trio. We discuss how this timbrally rich and highly non-idiomatic musical context has challenged the design of the system, with particular emphasis on the mapping of machine listening parameters to higher-level behaviors of the system in such a way that spontaneity and creativity are encouraged while maintaining a sense of novel dialogue.
@inproceedings{Nort2012, author = {Nort, Doug Van and Braasch, Jonas and Oliveros, Pauline}, title = {Mapping to musical actions in the FILTER system}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180551}, url = {http://www.nime.org/proceedings/2012/nime2012_235.pdf}, keywords = {Electroacoustic Improvisation, Machine Learning, Mapping, Sonic Gestures, Spatialization} }
Nathan Magnus and David Gerhard. 2012. Musician Assistance and Score Distribution (MASD). Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178337
Abstract
Download PDF DOI
The purpose of the Musician Assistance and Score Distribution (MASD) system is to assist novice musicians with playing in an orchestra, concert band, choir or other musical ensemble. MASD helps novice musicians in three ways: it removes the confusion that results from page turns, aids a musician’s return to the proper location in the score after looking at the conductor, and notifies musicians of conductor instructions. MASD is currently verified by evaluating the time between sending beat or conductor information and this information being rendered for the musician. Future work includes user testing of this system. There are three major components to the MASD system: Score Distribution, Score Rendering and Information Distribution. Score Distribution passes score information to clients and is facilitated by the Internet Communication Engine (ICE). Score Rendering uses the GUIDO Library to display the musical score. Information Distribution uses ICE and the IceStorm service to pass beat and instruction information to musicians.
@inproceedings{Magnus2012, author = {Magnus, Nathan and Gerhard, David}, title = {Musician Assistance and Score Distribution (MASD)}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178337}, url = {http://www.nime.org/proceedings/2012/nime2012_237.pdf}, keywords = {score distribution, score-following, score rendering, musician assistance} }
Atau Tanaka, Adam Parkinson, Zack Settel, and Koray Tahiroglu. 2012. A Survey and Thematic Analysis Approach as Input to the Design of Mobile Music GUIs. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178431
Abstract
Download PDF DOI
Mobile devices represent a growing research field within NIME, and a growing area for commercial music software. They present unique design challenges and opportunities, which are yet to be fully explored and exploited. In this paper, we propose using a survey method combined with qualitative analysis to investigate the way in which people use mobiles musically. We subsequently present, as an area of future research, our own PDplayer, which provides a completely self-contained end application on the mobile device, potentially making the mobile a more viable and expressive tool for musicians.
@inproceedings{Tanaka2012, author = {Tanaka, Atau and Parkinson, Adam and Settel, Zack and Tahiroglu, Koray}, title = {A Survey and Thematic Analysis Approach as Input to the Design of Mobile Music GUIs}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178431}, url = {http://www.nime.org/proceedings/2012/nime2012_240.pdf}, keywords = {NIME, Mobile Music, Pure Data} }
Nate Derbinsky and Georg Essl. 2012. Exploring Reinforcement Learning for Mobile Percussive Collaboration. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178243
Abstract
Download PDF DOI
This paper presents a system for mobile percussive collaboration. We show that reinforcement learning can incrementally learn percussive beat patterns played by humans and supports realtime collaborative performance in the absence of one or more performers. This work leverages an existing integration between urMus and Soar and addresses multiple challenges involved in the deployment of machine-learning algorithms for mobile music expression, including tradeoffs between learning speed & quality; interface design for human collaborators; and real-time performance and improvisation.
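As a deliberately simpler stand-in for the idea of incrementally learning a repeating beat pattern and filling in for an absent player, the sketch below uses a count-based heuristic; it is not the Soar-based reinforcement learner the paper describes.

```python
# Count-based stand-in for incrementally learning a repeating beat pattern
# and regenerating it when the human player drops out.
STEPS = 16   # one bar of sixteenth-note steps

class PatternLearner:
    def __init__(self):
        self.hits = [0] * STEPS    # how often a hit was heard at each step
        self.bars = 0

    def observe_bar(self, played):
        """played: list of STEPS booleans, True where the human hit this bar."""
        self.bars += 1
        for i, hit in enumerate(played):
            self.hits[i] += int(hit)

    def generate_bar(self, confidence=0.5):
        """Play a step if the human hit it in more than `confidence` of bars."""
        if self.bars == 0:
            return [False] * STEPS
        return [self.hits[i] / self.bars > confidence for i in range(STEPS)]

learner = PatternLearner()
human_bar = [True, False, False, False, True, False, True, False] * 2
for _ in range(4):
    learner.observe_bar(human_bar)
print(learner.generate_bar())   # reproduces the learned pattern
```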
@inproceedings{Derbinsky2012, author = {Derbinsky, Nate and Essl, Georg}, title = {Exploring Reinforcement Learning for Mobile Percussive Collaboration}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178243}, url = {http://www.nime.org/proceedings/2012/nime2012_241.pdf}, keywords = {Mobile music, machine learning, cognitive architecture} }
Olivier Tache, Stephen Sinclair, Jean-Loup Florens, and Marcelo Wanderley. 2012. Exploring audio and tactile qualities of instrumentality with bowed string simulations. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178429
Abstract
Download PDF DOI
Force-feedback and physical modeling technologies now make it possible to achieve the same kind of relationship with virtual instruments as with acoustic instruments, but the design of such elaborate models needs guidelines based on the study of the human sensory-motor system and behaviour. This article presents a qualitative study of a simulated instrumental interaction in the case of the virtual bowed string, using both waveguide and mass-interaction models. Subjects were invited to explore the possibilities of the simulations and to express themselves verbally at the same time, allowing us to identify key qualities of the proposed systems that determine the construction of an intimate and rich relationship with the users.
@inproceedings{Tache2012, author = {Tache, Olivier and Sinclair, Stephen and Florens, Jean-Loup and Wanderley, Marcelo}, title = {Exploring audio and tactile qualities of instrumentality with bowed string simulations}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178429}, url = {http://www.nime.org/proceedings/2012/nime2012_243.pdf}, keywords = {Instrumental interaction, presence, force-feedback, physical modeling, simulation, haptics, bowed string.} }
Hans Leeuw and Jorrit Tamminga. 2012. NIME Education at the HKU, Emphasizing performance. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178321
Abstract
Download PDF DOI
This position paper stresses the role and importance of performance-based education in NIME-like subjects. It describes the ‘klankontwerp’ learning line in the Music Technology department of the School of the Arts Utrecht. Our educational system also reflects the way that we could treat performance in the NIME community as a whole. The importance of performing with our instruments, other than in the form of a mere demonstration, should receive more emphasis.
@inproceedings{Leeuw2012a, author = {Leeuw, Hans and Tamminga, Jorrit}, title = {{NIME} Education at the {HKU}, Emphasizing performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178321}, url = {http://www.nime.org/proceedings/2012/nime2012_247.pdf}, keywords = {NIME, education, position paper, live electronics, performance} }
Nicholas Gillian and Joseph A. Paradiso. 2012. Digito: A Fine-Grain Gesturally Controlled Virtual Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178263
Abstract
Download PDF DOI
This paper presents Digito, a gesturally controlled virtual musical instrument. Digito is controlled through a number of intricate hand gestures, providing both discrete and continuous control of Digito’s sound engine; with the fine-grain hand gestures captured by a 3D depth sensor and recognized using computer vision and machine learning algorithms. We describe the design and initial iterative development of Digito, the hand and finger tracking algorithms and gesture recognition algorithms that drive the system, and report the insights gained during the initial development cycles and user testing of this gesturally controlled virtual musical instrument.
@inproceedings{Gillian2012, author = {Gillian, Nicholas and Paradiso, Joseph A.}, title = {Digito: A Fine-Grain Gesturally Controlled Virtual Musical Instrument}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178263}, url = {http://www.nime.org/proceedings/2012/nime2012_248.pdf}, keywords = {Gesture Recognition, Virtual Musical Instrument} }
Paul Lehrman. 2012. Multiple Pianolas in Antheil’s Ballet mécanique. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178323
Abstract
Download PDF DOI
George Antheil’s notorious Ballet mécanique (1924-1925) was originally scored for percussion ensemble, sound effects, and 16 pianolas. He was never able to perform the piece with those forces, however, due to his inability to synchronize multiple pianolas. Thus all performances of the piece in his lifetime, and for decades after, were done with a single pianola or player piano. The author traces the origin of the concept of synchronizing multiple pianolas, and explains the attendant technological issues. He examines attempts to synchronize mechanical pianos and other time-based devices at the time of Ballet mécanique’s composition, and suggests that Antheil’s vision for his piece was not as farfetched as has long been thought.
@inproceedings{Lehrman2012, author = {Lehrman, Paul}, title = {Multiple Pianolas in Antheil's Ballet m{\'e}canique}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178323}, url = {http://www.nime.org/proceedings/2012/nime2012_25.pdf}, keywords = {Antheil, Stravinsky, player piano, pianola, mechanical instruments, synchronization} }
A. Cavan Fyans, Adnan Marquez-Borbon, Paul Stapleton, and Michael Gurevich. 2012. Ecological considerations for participatory design of DMIs. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178257
Abstract
Download PDF DOI
A study is presented examining the participatory design of digital musical interactions. The study takes into consideration the entire ecology of digital musical interactions including the designer, performer and spectator. A new instrument is developed through iterative participatory design involving a group of performers. Across the study the evolution of creative practice and skill development in an emerging community of practice is examined and a spectator study addresses the cognition of performance and the perception of skill with the instrument. Observations are presented regarding the cognition of a novel interaction and evolving notions of skill. The design process of digital musical interactions is reflected on focusing on involvement of the spectator in design contexts.
@inproceedings{Fyans2012, author = {Fyans, A. Cavan and Marquez-Borbon, Adnan and Stapleton, Paul and Gurevich, Michael}, title = {Ecological considerations for participatory design of DMIs}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178257}, url = {http://www.nime.org/proceedings/2012/nime2012_253.pdf}, keywords = {participatory design, DMIs, skill, cognition, spectator} }
Javier Jaimovich, Miguel Ortiz, Niall Coghlan, and R. Benjamin Knapp. 2012. The Emotion in Motion Experiment: Using an Interactive Installation as a Means for Understanding Emotional Response to Music. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178295
Abstract
Download PDF DOI
In order to further understand our emotional reaction to music, a museum-based installation was designed to collect physiological and self-report data from people listening to music. This demo will describe the technical implementation of this installation as a tool for collecting large samples of data in public spaces. The Emotion in Motion terminal is built upon a standard desktop computer running Max/MSP, using sensors that measure physiological indicators of emotion and are connected to an Arduino. The terminal has been installed in museums and galleries in Europe and the USA, helping to create the largest database of physiological and self-report data collected while listening to music.
@inproceedings{Jaimovich2012, author = {Jaimovich, Javier and Ortiz, Miguel and Coghlan, Niall and Knapp, R. Benjamin}, title = {The Emotion in Motion Experiment: Using an Interactive Installation as a Means for Understanding Emotional Response to Music}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178295}, url = {http://www.nime.org/proceedings/2012/nime2012_254.pdf}, keywords = {Biosignals, EDA, SC, GSR, HR, POX, Self-Report, Database, Physiological Signals, Max/MSP, FTM, SAM, GEMS} }
Tobias Grosshauser, Victor Candia, Horst Hildebrand, and Gerhard Tröster. 2012. Sensor Based Measurements of Musicians’ Synchronization Issues. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178269
Abstract
Download PDF DOI
From a technical point of view, instrumental music making involves audible, visible and hidden playing parameters. Hidden parameters like force, pressure and fast movements, happening within milliseconds are particularly difficult to capture. Here, we present data focusing on movement coordination parameters of the left hand fingers with the bow hand in violinists and between two violinists in group playing. Data was recorded with different position sensors, a micro camcorder fixed on a violin and an acceleration sensor placed on the bow. Sensor measurements were obtained at a high sampling rate, gathering the data with a small microcontroller unit, connected with a laptop computer. To capture bow’s position, rotation and angle directly on the bow to string contact point, the micro camcorder was fixed near the bridge. Main focuses of interest were the changes of the left hand finger, the temporal synchronization between left hand fingers with the right hand, the close up view to the bow to string contact point and the contact of the left hand finger and/or string to the fingerboard. Seven violinists, from beginners to master class students played scales in different rhythms, speeds and bowings and music excerpts of free choice while being recorded. One measurement with 2 violinists was made to see the time differences between two musicians while playing together. For simple integration of a conventional violin into electronic music environments, left hand sensor data were exemplary converted to MIDI and OSC.
@inproceedings{Grosshauser2012, author = {Grosshauser, Tobias and Candia, Victor and Hildebrand, Horst and Tr{\"o}ster, Gerhard}, title = {Sensor Based Measurements of Musicians' Synchronization Issues}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178269}, url = {http://www.nime.org/proceedings/2012/nime2012_256.pdf}, keywords = {Strings, violin, coordination, left, finger, right, hand} }
Mathieu Bosi and Sergi Jordà. 2012. Towards fast multi-point force and hit detection in tabletops using mechanically intercoupled force sensing resistors. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178217
Abstract
Download PDF DOI
Tangible tabletop musical interfaces allowing for a collaborative real-time interaction in live music performances are one of the promising fields in NIMEs. At present, these kinds of interfaces present at least some of the following characteristics that limit their musical use: latency in the interaction, and partial or complete lack of responsiveness to gestures such as tapping, scrubbing or pressing force. Our current research is exploring ways of improving the quality of interaction with this kind of interfaces, and in particular with the tangible tabletop instrument Reactable. In this paper we present a system based on a circular array of mechanically intercoupled force sensing resistors used to obtain a low-latency, affordable, and easily embeddable hardware system able to detect surface impacts and pressures on the tabletop perimeter. We also consider the option of completing this detected gestural information with the sound information coming from a contact microphone attached to the mechanical coupling layer, to control physical modelling synthesis of percussion instruments.
@inproceedings{Bosi2012, author = {Bosi, Mathieu and Jord{\`a}, Sergi}, title = {Towards fast multi-point force and hit detection in tabletops using mechanically intercoupled force sensing resistors}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178217}, url = {http://www.nime.org/proceedings/2012/nime2012_257.pdf}, keywords = {tangible tabletop interfaces, force sensing resistor, mechanical coupling, fast low-noise analog to digital conversion, low-latency sensing, micro controller, multimodal systems, complementary sensing.} }
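The abstract above centers on low-latency detection of surface impacts from force sensing resistor readings. Purely as an illustration of that kind of processing, and not the authors' actual implementation, a minimal Python sketch of threshold-based hit detection could look like this (the threshold, release level and refractory time are invented values):

    # Minimal sketch of hit detection on a stream of normalized FSR readings.
    # ONSET, RELEASE and REFRACTORY are illustrative values, not from the paper.
    ONSET = 0.15       # force level that counts as a hit
    RELEASE = 0.05     # level below which the sensor is considered idle again
    REFRACTORY = 0.02  # seconds during which re-triggers are ignored

    def detect_hits(samples, sample_rate):
        """Yield (time_in_seconds, onset_force) for each detected impact."""
        armed = True
        last_hit = -REFRACTORY
        for i, force in enumerate(samples):
            t = i / sample_rate
            if armed and force >= ONSET and (t - last_hit) >= REFRACTORY:
                yield (t, force)
                last_hit = t
                armed = False
            elif not armed and force <= RELEASE:
                armed = True

    readings = [0.0, 0.02, 0.3, 0.6, 0.2, 0.04, 0.0, 0.0, 0.5, 0.1, 0.02]
    for t, f in detect_hits(readings, 1000):      # e.g. sampled at 1 kHz
        print(f"hit at {t * 1000:.1f} ms, onset force {f:.2f}")

A real system would combine several intercoupled sensors and, as the abstract notes, possibly a contact microphone, but the arming and refractory logic above conveys the basic idea.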
Francisco Zamorano. 2012. Simpletones: A System of Collaborative Physical Controllers for Novices. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178459
Abstract
Download PDF DOI
This paper introduces Simpletones, an interactive sound system that enables a sense of musical collaboration for non-musicians. Participants can easily create simple sound compositions in real time by collaboratively operating physical artifacts as sound controllers. The physical configuration of the artifacts requires coordinated actions between participants to control sound (thus requiring, and emphasizing collaboration). Simpletones encourages playful human-to-human interaction by introducing a simple interface and a set of basic rules [1]. This enables novices to focus on the collaborative aspects of making music as a group (such as synchronization and taking collective decisions through non-verbal communication) to ultimately engage a state of group flow[2]. This project is relevant to a contemporary discourse on musical expression because it allows novices to experience the social aspects of group music making, something that is usually reserved only for trained performers [3].
@inproceedings{Zamorano2012, author = {Zamorano, Francisco}, title = {Simpletones: A System of Collaborative Physical Controllers for Novices}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178459}, url = {http://www.nime.org/proceedings/2012/nime2012_258.pdf}, keywords = {Collaboration, Artifacts, Computer Vision, Color Tracking, State of Flow.} }
Luke Dahl. 2012. Wicked Problems and Design Considerations in Composing for Laptop Orchestra. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178239
Abstract
Download PDF DOI
Composing music for ensembles of computer-based instruments, such as laptop orchestra or mobile phone orchestra, is a multi-faceted and challenging endeavor whose parameters and criteria for success are ill-defined. In the design community, tasks with these qualities are known as wicked problems. This paper frames composing for computer-based ensemble as a design task, shows how Buchanan’s four domains of design are present in the task, and discusses its wicked properties. The themes of visibility, risk, and embodiment, as formulated by Klemmer, are shown to be implicitly present in this design task. Composers are encouraged to address them explicitly and to take advantage of the practices of prototyping and iteration.
@inproceedings{Dahl2012, author = {Dahl, Luke}, title = {Wicked Problems and Design Considerations in Composing for Laptop Orchestra}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178239}, url = {http://www.nime.org/proceedings/2012/nime2012_259.pdf}, keywords = {Design, laptop orchestra, mobile phone orchestra, instrument design, interaction design, composition} }
Christian Frisson, Stéphane Dupont, Julien Leroy, et al. 2012. LoopJam: turning the dance floor into a collaborative instrumental map. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178255
Abstract
Download PDF DOI
This paper presents the LoopJam installation which allows participants to interact with a sound map using a 3D computer vision tracking system. The sound map results from similarity-based clustering of sounds. The playback of these sounds is controlled by the positions or gestures of participants tracked with a Kinect depth-sensing camera. The beat-inclined bodily movements of participants in the installation are mapped to the tempo of played sounds, while the playback speed is synchronized by default among all sounds. We presented and tested an early version of the installation at three exhibitions in Belgium, Italy and France. The reactions among participants ranged between curiosity and amusement.
@inproceedings{Frisson2012, author = {Frisson, Christian and Dupont, St{\'e}phane and Leroy, Julien and Moinet, Alexis and Ravet, Thierry and Siebert, Xavier and Dutoit, Thierry}, title = {LoopJam: turning the dance floor into a collaborative instrumental map}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178255}, url = {http://www.nime.org/proceedings/2012/nime2012_260.pdf}, keywords = {Interactive music systems and retrieval, user interaction and interfaces, audio similarity, depth sensors} }
Jonh Melo, Daniel Gómez, and Miguel Vargas. 2012. Gest-O: Performer gestures used to expand the sounds of the saxophone. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180535
Abstract
Download PDF DOI
This paper describes the conceptualization and development of an open source tool for controlling the sound of a saxophone via the gestures of its performer. The motivation behind this work is the need for easy access tools to explore, compose and perform electroacoustic music in Colombian music schools and conservatories. This work led to the adaptation of common hardware to be used as a sensor attached to an acoustic instrument and the development of software applications to record, visualize and map performers' gesture data into signal processing parameters. The scope of this work suggested that focus was to be made on a specific instrument, so the saxophone was chosen. Gestures were selected in an iterative process with the performer, although a more ambitious strategy to figure out the main gestures of an instrument's performance was first defined. Detailed gesture-to-sound processing mappings are exposed in the text. An electroacoustic musical piece was successfully rehearsed and recorded using the Gest-O system.
@inproceedings{Melo2012, author = {Melo, Jonh and G{\'o}mez, Daniel and Vargas, Miguel}, title = {Gest-O: Performer gestures used to expand the sounds of the saxophone}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180535}, url = {http://www.nime.org/proceedings/2012/nime2012_262.pdf}, keywords = {Electroacoustic music, saxophone, expanded instrument, gesture.} }
Adrian Freed. 2012. The Fingerphone: a Case Study of Sustainable Instrument Redesign. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178253
Abstract
Download PDF DOI
The Fingerphone, a reworking of the Stylophone in conductive paper, is presented as an example of new design approaches for sustainability and playability of electronic musical instruments.
@inproceedings{Freed2012, author = {Freed, Adrian}, title = {The Fingerphone: a Case Study of Sustainable Instrument Redesign}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178253}, url = {http://www.nime.org/proceedings/2012/nime2012_264.pdf}, keywords = {Stylophone, Conductive Paper, Pressure Sensing, Touch Sensing, Capacitive Sensing, Plurifunctionality, Fingerphone, Sustainable Design} }
Hans Leeuw. 2012. The electrumpet, additions and revisions. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178319
Abstract
Download PDF DOI
This short paper follows an earlier NIME paper [1] describing the invention and construction of the Electrumpet. Revisions and playing experience are both part of the current paper. The Electrumpet can be heard in the performance given by Hans Leeuw and Diemo Schwarz at this NIME conference.
@inproceedings{Leeuw2012, author = {Leeuw, Hans}, title = {The electrumpet, additions and revisions}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178319}, url = {http://www.nime.org/proceedings/2012/nime2012_271.pdf}, keywords = {NIME, Electrumpet, live-electronics, hybrid instruments.} }
Thomas Mitchell, Sebastian Madgwick, and Imogen Heap. 2012. Musical Interaction with Hand Posture and Orientation: A Toolbox of Gestural Control Mechanisms. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180543
Abstract
Download PDF DOI
This paper presents a toolbox of gestural control mechanisms which are available when the input sensing apparatus is a pair of data gloves fitted with orientation sensors. The toolbox was developed in advance of a live music performance in which the mapping from gestural input to audio output was to be developed rapidly in collaboration with the performer. The paper begins with an introduction to the associated literature before introducing a range of continuous, discrete and combined control mechanisms, enabling a flexible range of mappings to be explored and modified easily. An application of the toolbox within a live music performance is then described with an evaluation of the system with ideas for future developments.
@inproceedings{Mitchell2012, author = {Mitchell, Thomas and Madgwick, Sebastian and Heap, Imogen}, title = {Musical Interaction with Hand Posture and Orientation: A Toolbox of Gestural Control Mechanisms}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180543}, url = {http://www.nime.org/proceedings/2012/nime2012_272.pdf}, keywords = {Computer Music, Gestural Control, Data Gloves} }
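The abstract above distinguishes continuous, discrete and combined control mechanisms derived from hand posture and orientation. As a loose, hypothetical illustration of turning a continuous orientation value into a discrete trigger, the following Python sketch uses hysteresis thresholds (the angle values and the "gesture on/off" semantics are assumptions, not details from the toolbox):

    # Hysteresis: derive a stable on/off gesture from a continuous sensor value
    # (e.g. wrist roll in degrees) without chattering around a single threshold.
    class HysteresisTrigger:
        def __init__(self, on_threshold=60.0, off_threshold=40.0):
            self.on_threshold = on_threshold
            self.off_threshold = off_threshold
            self.active = False

        def update(self, value):
            """Return 'on' or 'off' when the state changes, otherwise None."""
            if not self.active and value >= self.on_threshold:
                self.active = True
                return "on"
            if self.active and value <= self.off_threshold:
                self.active = False
                return "off"
            return None

    trigger = HysteresisTrigger()
    for roll in [10, 35, 55, 65, 70, 50, 45, 38, 20]:
        event = trigger.update(roll)
        if event:
            print(f"roll={roll}: gesture {event}")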
Palle Dahlstedt. 2012. Pencil Fields: An Expressive Low-Tech Performance Interface for Analog Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178241
Abstract
Download PDF DOI
I present a novel low-tech multidimensional gestural controller, based on the resistive properties of a 2D field of pencil markings on paper. A set of movable electrodes (+, -, ground) made from soldered stacks of coins create a dynamic voltage potential field in the carbon layer, and another set of movable electrodes tap voltages from this field. These voltages are used to control complex sound engines in an analogue modular synthesizer. Both the voltage field and the tap electrodes can be moved freely. The design was inspired by previous research in complex mappings for advanced digital instruments, and provides a similarly dynamic playing environment for analogue synthesis. The interface is cheap to build, and provides flexible control over a large set of parameters. It is musically satisfying to play, and allows for a wide range of playing techniques, from wild exploration to subtle expressions. I also present an inventory of the available playing techniques, motivated by the interface design, musically, conceptually and theatrically. The performance aspects of the interface are also discussed. The interface has been used in a number of performances in Sweden and Japan in 2011, and is also used by other musicians.
@inproceedings{Dahlstedt2012, author = {Dahlstedt, Palle}, title = {Pencil Fields: An Expressive Low-Tech Performance Interface for Analog Synthesis}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178241}, url = {http://www.nime.org/proceedings/2012/nime2012_275.pdf}, keywords = {gestural interface, 2d, analog synthesis, performance, improvisation} }
Mari Kimura, Nicolas Rasamimanana, Frédéric Bevilacqua, Norbert Schnell, Bruno Zamborlin, and Emmanuel Fléty. 2012. Extracting Human Expression For Interactive Composition with the Augmented Violin. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178305
Abstract
Download PDF DOI
As a 2010 Artist in Residence in Musical Research at IRCAM, Mari Kimura used the Augmented Violin to develop new compositional approaches, and new ways of creating interactive performances [1]. She contributed her empirical and historical knowledge of violin bowing technique, working with the Real Time Musical Interactions Team at IRCAM. Thanks to this residency, her ongoing long-distance collaboration with the team since 2007 dramatically accelerated, and led to solving several compositional and calibration issues of the Gesture Follower (GF) [2]. Kimura was also the first artist to develop projects between the two teams at IRCAM, using OMAX (Musical Representation Team) with GF. In the past year, the performance with Augmented Violin has been expanded in larger scale interactive audio/visual projects as well. In this paper, we report on the various techniques developed for the Augmented Violin and compositions by Kimura using them, offering specific examples and scores.
@inproceedings{Kimura2012, author = {Kimura, Mari and Rasamimanana, Nicolas and Bevilacqua, Fr{\'e}d{\'e}ric and Schnell, Norbert and Zamborlin, Bruno and Fl{\'e}ty, Emmanuel}, title = {Extracting Human Expression For Interactive Composition with the Augmented Violin}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178305}, url = {http://www.nime.org/proceedings/2012/nime2012_279.pdf}, keywords = {Augmented Violin, Gesture Follower, Interactive Performance} }
Greg Shear and Matthew Wright. 2012. Further Developments in the Electromagnetically Sustained Rhodes Piano. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180599
Abstract
Download PDF DOI
The Electromagnetically Sustained Rhodes Piano is an original Rhodes Piano modified to provide control over the amplitude envelope of individual notes through aftertouch pressure. Although there are many opportunities to shape the amplitude envelope before loudspeaker amplification, they are all governed by the ever-decaying physical vibrations of the tone generating mechanism. A single-note proof of concept for electromagnetic control over this vibrating mechanism was presented at NIME 2011. In the past year, virtually every aspect of the system has been improved. We use a different vibration sensor that is immune to electromagnetic interference, thus eliminating troublesome feedback. For control, we both reduce cost and gain continuous position sensing throughout the entire range of key motion in addition to aftertouch pressure. Finally, the entire system now fits within the space constraints presented by the original piano, allowing it to be installed on adjacent notes.
@inproceedings{Shear2012, author = {Shear, Greg and Wright, Matthew}, title = {Further Developments in the Electromagnetically Sustained Rhodes Piano}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180599}, url = {http://www.nime.org/proceedings/2012/nime2012_284.pdf}, keywords = {Rhodes, piano, mechanical synthesizer, electromagnetic, sustain, feedback} }
Johnty Wang, Nicolas d’Alessandro, Sidney Fels, and Robert Pritchard. 2012. Investigation of Gesture Controlled Articulatory Vocal Synthesizer using a Bio-Mechanical Mapping Layer. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178447
Abstract
Download PDF DOI
We have added a dynamic bio-mechanical mapping layer that contains a model of the human vocal tract with tongue muscle activations as input and tract geometry as output to a real time gesture controlled voice synthesizer system used for musical performance and speech research. Using this mapping layer, we conducted user studies comparing controlling the model muscle activations using a 2D set of force sensors with a position controlled kinematic input space that maps directly to the sound. Preliminary user evaluation suggests that it was more difficult to use force input, but the resultant output sound was more intelligible and natural compared to the kinematic controller. This result shows that force input is potentially feasible for browsing through a vowel space for an articulatory voice synthesis system, although further evaluation is required.
@inproceedings{Wang2012, author = {Wang, Johnty and d'Alessandro, Nicolas and Fels, Sidney and Pritchard, Robert}, title = {Investigation of Gesture Controlled Articulatory Vocal Synthesizer using a Bio-Mechanical Mapping Layer}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178447}, url = {http://www.nime.org/proceedings/2012/nime2012_291.pdf}, keywords = {Gesture, Mapping, Articulatory, Speech, Singing, Synthesis} }
Benjamin Carey. 2012. Designing for Cumulative Interactivity: The _derivations System. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178227
Abstract
Download PDF DOI
This paper presents the author’s derivations system, an interactive performance system for solo improvising instrumentalist. The system makes use of a combination of real-time audio analysis, live sampling and spectral re-synthesis to build a vocabulary of possible performative responses to live instrumental input throughout an improvisatory performance. A form of timbral matching is employed to form a link between the live performer and an expanding database of musical materials. In addition, the system takes into account the unique nature of the rehearsal/practice space in musical performance through the implementation of performer-configurable cumulative rehearsal databases into the final design. This paper discusses the system in detail with reference to related work in the field, making specific reference to the system’s interactive potential both inside and outside of a real-time performance context.
@inproceedings{Carey2012, author = {Carey, Benjamin}, title = {Designing for Cumulative Interactivity: The {\_}derivations System}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178227}, url = {http://www.nime.org/proceedings/2012/nime2012_292.pdf}, keywords = {Interactivity, performance systems, improvisation} }
Brian Mayton, Gershon Dublon, Nicholas Joliat, and Joseph A. Paradiso. 2012. Patchwork: Multi-User Network Control of a Massive Modular Synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178345
Abstract
Download PDF DOI
We present Patchwerk, a networked synthesizer module with tightly coupled web browser and tangible interfaces. Patchwerk connects to a pre-existing modular synthesizer using the emerging cross-platform HTML5 WebSocket standard to enable low-latency, high-bandwidth, concurrent control of analog signals by multiple users. Online users control physical outputs on a custom-designed cabinet that reflects their activity through a combination of motorized knobs and LEDs, and streams the resultant audio. In a typical installation, a composer creates a complex physical patch on the modular synth that exposes a set of analog and digital parameters (knobs, buttons, toggles, and triggers) to the web-enabled cabinet. Both physically present and online audiences can control those parameters, simultaneously seeing and hearing the results of each other’s actions. By enabling collaborative interaction with a massive analog synthesizer, Patchwerk brings a broad audience closer to a rare and historically important instrument. Patchwerk is available online at http://synth.media.mit.edu.
@inproceedings{Mayton2012, author = {Mayton, Brian and Dublon, Gershon and Joliat, Nicholas and Paradiso, Joseph A.}, title = {Patchwork: Multi-User Network Control of a Massive Modular Synthesizer}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178345}, url = {http://www.nime.org/proceedings/2012/nime2012_293.pdf}, keywords = {Modular synthesizer, HTML5, tangible interface, collaborative musical instrument} }
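The abstract above relies on HTML5 WebSockets for low-latency, concurrent control of the synthesizer by online users. The sketch below shows the general pattern of such a client in Python using the websockets package; the URI and JSON message format are invented placeholders, not Patchwerk's actual protocol:

    # Hypothetical sketch: push one knob value to a WebSocket control server.
    import asyncio
    import json
    import websockets  # pip install websockets

    async def send_knob(value):
        uri = "ws://example.org:8080/control"   # placeholder endpoint
        async with websockets.connect(uri) as ws:
            await ws.send(json.dumps({"knob": "filter_cutoff", "value": value}))
            reply = await ws.recv()             # e.g. the server broadcasting new state
            print("server replied:", reply)

    asyncio.run(send_knob(0.42))

In the installation described above, many such clients act at once, and the cabinet's motorized knobs and LEDs reflect the merged activity.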
Shawn Trail, Michael Dean, Gabrielle Odowichuk, et al. 2012. Non-invasive sensing and gesture control for pitched percussion hyper-instruments using the Kinect. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178435
Abstract
Download PDF DOI
Hyper-instruments extend traditional acoustic instruments with sensing technologies that capture digitally subtle and sophisticated aspects of human performance. They leverage the long training and skills of performers while simultaneously providing rich possibilities for digital control. Many existing hyper-instruments suffer from being one of a kind instruments that require invasive modifications to the underlying acoustic instrument. In this paper we focus on the pitched percussion family and describe a non-invasive sensing approach for extending them to hyper-instruments. Our primary concern is to retain the technical integrity of the acoustic instrument and sound production methods while being able to intuitively interface the computer. This is accomplished by utilizing the Kinect sensor to track the position of the mallets without any modification to the instrument which enables easy and cheap replication of the proposed hyper-instrument extensions. In addition we describe two approaches to higher-level gesture control that remove the need for additional control devices such as foot pedals and fader boxes that are frequently used in electro-acoustic performance. This gesture control integrates more organically with the natural flow of playing the instrument providing user selectable control over filter parameters, synthesis, sampling, sequencing, and improvisation using a commercially available low-cost sensing apparatus.
@inproceedings{Trail2012a, author = {Trail, Shawn and Dean, Michael and Odowichuk, Gabrielle and Tavares, Tiago Fernandes and Driessen, Peter and Schloss, W. Andrew and Tzanetakis, George}, title = {Non-invasive sensing and gesture control for pitched percussion hyper-instruments using the Kinect}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178435}, url = {http://www.nime.org/proceedings/2012/nime2012_297.pdf} }
Lawrence Fyfe, Adam Tindale, and Sheelagh Carpendale. 2012. Node and Message Management with the JunctionBox Interaction Toolkit. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178259
Abstract
Download PDF DOI
Message mapping between control interfaces and sound engines is an important task that could benefit from tools that streamline development. A new Open Sound Control (OSC) namespace called Nexus Data Exchange Format (NDEF) streamlines message mapping by offering developers the ability to manage sound engines as network nodes and to query those nodes for the messages in their OSC address spaces. By using NDEF, developers will have an easier time managing nodes and their messages, especially for scenarios in which a single application or interface controls multiple sound engines. NDEF is currently implemented in the JunctionBox interaction toolkit but could easily be implemented in other toolkits.
@inproceedings{Fyfe2012, author = {Fyfe, Lawrence and Tindale, Adam and Carpendale, Sheelagh}, title = {Node and Message Management with the JunctionBox Interaction Toolkit}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178259}, url = {http://www.nime.org/proceedings/2012/nime2012_299.pdf}, keywords = {OSC, namespace, interaction, node} }
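To make the node-and-message idea in the abstract above concrete, here is a hedged Python sketch using the python-osc package. The /ndef/... addresses are placeholders chosen for illustration; the paper's actual NDEF address space is not reproduced here:

    # Hypothetical sketch: announce a controller node and query a sound engine
    # for the messages in its OSC address space, then control one parameter.
    from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

    client = SimpleUDPClient("127.0.0.1", 57120)       # engine's address and port

    client.send_message("/ndef/register", ["controller-1"])   # placeholder address
    client.send_message("/ndef/query", ["controller-1"])      # placeholder address

    # Once the engine's address space is known, ordinary OSC messages drive it.
    client.send_message("/synth/filter/cutoff", 880.0)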
Julien Castet. 2012. Performing experimental music by physical simulation. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178231
Abstract
Download PDF DOI
This paper presents ongoing work on methods dedicated to relations between composers and performers in the context of experimental music. The computer music community has over the last decade paid a strong interest on various kinds of gestural interfaces to control sound synthesis processes. The mapping between gesture and sound parameters has specially been investigated in order to design the most relevant schemes of sonic interaction. In fact, this relevance results in an aesthetic choice that encroaches on the process of composition. This work proposes to examine the relations between composers and performers in the context of the new interfaces for musical expression. It aims to define a theoretical and methodological framework clarifying these relations. In this project, this paper is the first experimental study about the use of physical models as gestural maps for the production of textural sounds.
@inproceedings{Castet2012, author = {Castet, Julien}, title = {Performing experimental music by physical simulation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178231}, url = {http://www.nime.org/proceedings/2012/nime2012_30.pdf}, keywords = {Simulation, Interaction, Sonic textures} }
Jesse Allison and Christian Dell. 2012. AuRal: A Mobile Interactive System for Geo-Locative Audio Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178203
Abstract
Download PDF DOI
Aural: of or relating to the ear or hearing. Aura: an invisible breath, emanation, or radiation. AR: Augmented Reality. AuRal is an environmental audio system in which individual participants form ad hoc ensembles based on geolocation and affect the overall sound of the music associated with the location that they are in. The AuRal environment binds physical location and the choices of multiple, simultaneous performers to act as the generative force of music tied to the region. Through a mobile device interface, musical participants, or agents, have a degree of input into the generated music, essentially defining the sound of a given region. The audio landscape is superimposed onto the physical one. The resultant musical experience is not tied simply to the passage of time, but through the incorporation of participants over time and spatial proximity, it becomes an aural location as much as a piece of music. As a result, walking through the same location at different times results in unique collaborative listening experiences.
@inproceedings{Allison2012, author = {Allison, Jesse and Dell, Christian}, title = {AuRal: A Mobile Interactive System for Geo-Locative Audio Synthesis}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178203}, url = {http://www.nime.org/proceedings/2012/nime2012_301.pdf}, keywords = {AuRal, sonic environment, distributed performance system, mobile music, android, ruby on rails, supercollider} }
Charles Roberts, Graham Wakefield, and Matt Wright. 2012. Mobile Controls On-The-Fly: An Abstraction for Distributed NIMEs. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180581
Abstract
Download PDF DOI
Designing mobile interfaces for computer-based musical performance is generally a time-consuming task that can be exasperating for performers. Instead of being able to experiment freely with physical interfaces’ affordances, performers must spend time and attention on non-musical tasks including network configuration, development environments for the mobile devices, defining OSC address spaces, and handling the receipt of OSC in the environment that will control and produce sound. Our research seeks to overcome such obstacles by minimizing the code needed to both generate and read the output of interfaces on mobile devices. For iOS and Android devices, our implementation extends the application Control to use a simple set of OSC messages to define interfaces and automatically route output. On the desktop, our implementations in Max/MSP/Jitter, LuaAV, and SuperCollider allow users to create mobile widgets mapped to sonic parameters with a single line of code. We believe the fluidity of our approach will encourage users to incorporate mobile devices into their everyday performance practice.
@inproceedings{Roberts2012, author = {Roberts, Charles and Wakefield, Graham and Wright, Matt}, title = {Mobile Controls On-The-Fly: An Abstraction for Distributed {NIME}s}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180581}, url = {http://www.nime.org/proceedings/2012/nime2012_303.pdf}, keywords = {NIME, OSC, Zeroconf, iOS, Android, Max/MSP/Jitter, LuaAV, SuperCollider, Mobile} }
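The abstract above emphasizes reading the output of dynamically defined mobile widgets with minimal code on the desktop side. As a rough sketch of the receiving half in Python (the /slider1 address and the cutoff mapping are assumptions; the actual messages exchanged with Control are not reproduced here):

    # Hypothetical sketch: receive a mobile slider's output over OSC and map it
    # to a synthesis parameter.
    from pythonosc.dispatcher import Dispatcher        # pip install python-osc
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_slider(address, value):
        cutoff = 200.0 + value * 4000.0                # map 0..1 to 200..4200 Hz
        print(f"{address} -> filter cutoff {cutoff:.0f} Hz")

    dispatcher = Dispatcher()
    dispatcher.map("/slider1", on_slider)

    server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
    server.serve_forever()                             # blocks; stop with Ctrl+C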
Jiffer Harriman. 2012. Sinkapater -An Untethered Beat Sequencer. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178277
Abstract
Download PDF DOI
This paper provides an overview of a new method for approaching beat sequencing. As we have come to know them, drum machines provide means to loop rhythmic patterns over a certain interval, usually with the option to specify different beat divisions. What I developed and propose for consideration is a rethinking of the traditional drum machine confines. The Sinkapater is an untethered beat sequencer in that the beat division and the loop length can be arbitrarily modified for each track. The result is the capability to create complex syncopated patterns which evolve over time as different tracks follow their own loop rate. To keep cohesion, all channels can be locked to a master channel, forcing a loop to be an integer number of "Master Beats". Further, a visualization mode enables exploring the patterns in another new way. Using synchronized OpenGL, a 3-dimensional environment visualizes the beats as droplets falling from faucets of varying heights determined by the loop length. Waves form in the bottom as beats splash into the virtual "sink". By combining compelling visuals and a new approach to sequencing, a new way of exploring beats and experiencing music has been created.
@inproceedings{Harriman2012, author = {Harriman, Jiffer}, title = {Sinkapater -An Untethered Beat Sequencer}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178277}, url = {http://www.nime.org/proceedings/2012/nime2012_308.pdf}, keywords = {NIME, proceedings, drum machine, sequencer, visualization} }
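The key idea in the abstract above is that every track carries its own beat division and loop length, so patterns drift against one another unless locked to a master. A small Python sketch of that timing scheme, with made-up tempo and track settings:

    # Sketch: tracks loop independently, each with its own division and loop
    # length expressed in master beats. Values below are arbitrary examples.
    MASTER_BPM = 120
    MASTER_BEAT = 60.0 / MASTER_BPM            # seconds per master beat

    tracks = {
        "kick":  {"loop_beats": 4, "division": 2, "pattern": [0, 4]},
        "snare": {"loop_beats": 3, "division": 2, "pattern": [2, 5]},
    }

    def events_in_window(track, window_seconds):
        """Return sounding times (seconds) within the window for one track."""
        step = MASTER_BEAT / track["division"]
        loop_len = track["loop_beats"] * MASTER_BEAT
        times, loop_start = [], 0.0
        while loop_start < window_seconds:
            for step_index in track["pattern"]:
                t = loop_start + step_index * step
                if t < window_seconds:
                    times.append(round(t, 3))
            loop_start += loop_len
        return times

    for name, track in tracks.items():
        print(name, events_in_window(track, 4.0))

Because the snare's 3-beat loop repeats against the kick's 4-beat loop, the combined pattern only realigns every 12 beats, which is the evolving syncopation the abstract describes.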
Jeong-seob Lee and Woon Seung Yeo. 2012. Real-time Modification of Music with Dancer’s Respiration Pattern. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178311
Abstract
Download PDF DOI
This research aims to improve the correspondence between music and dance, and explores the use of human respiration pattern for musical applications with focus on the motional aspect of breathing. While respiration is frequently considered as an indicator of the metabolic state of human body that contains meaningful information for medicine or psychology, motional aspect of respiration has been relatively unnoticed in spite of its strong correlation with muscles and the brain. This paper introduces an interactive system to control music playback for dance performances based on the respiration pattern of the dancer. A wireless wearable sensor device detects the dancer’s respiration, which is then utilized to modify the dynamic of music. Two different respiration-dynamic mappings were designed and evaluated through public performances and private tests by professional choreographers. Results from this research suggest a new conceptual approach to musical applications of respiration based on the technical characteristics of music and dance.
@inproceedings{Lee2012c, author = {Lee, Jeong-seob and Yeo, Woon Seung}, title = {Real-time Modification of Music with Dancer's Respiration Pattern}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178311}, url = {http://www.nime.org/proceedings/2012/nime2012_309.pdf}, keywords = {Music, dance, respiration, correspondence, wireless interface, interactive performance} }
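As a toy illustration of mapping a respiration signal to musical dynamics in the spirit of the abstract above (the smoothing factor and gain range are assumptions, not the mappings evaluated in the paper):

    # Sketch: smooth a normalized breathing signal (0 = exhaled, 1 = inhaled)
    # and map it to a playback gain. All constants are illustrative.
    def respiration_to_gain(samples, smoothing=0.2, min_gain=0.3, max_gain=1.0):
        """Return one gain value per respiration sample, exponentially smoothed."""
        gains, level = [], samples[0] if samples else 0.0
        for x in samples:
            level += smoothing * (x - level)       # one-pole smoothing
            gains.append(min_gain + level * (max_gain - min_gain))
        return gains

    breath = [0.1, 0.3, 0.6, 0.9, 0.8, 0.5, 0.2, 0.1]
    print([round(g, 2) for g in respiration_to_gain(breath)])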
Nicolas d’Alessandro, Aura Pon, Johnty Wang, David Eagle, Ehud Sharlin, and Sidney Fels. 2012. A Digital Mobile Choir: Joining Two Interfaces towards Composing and Performing Collaborative Mobile Music. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178199
Abstract
Download PDF DOI
We present the integration of two musical interfaces into a new music-making system that seeks to capture the experience of a choir and bring it into the mobile space. This system relies on three pervasive technologies that each support a different part of the musical experience. First, the mobile device application for performing with an artificial voice, called ChoirMob. Then, a central composing and conducting application running on a local interactive display, called Vuzik. Finally, a network protocol to synchronize the two. ChoirMob musicians can perform music together at any location where they can connect to a Vuzik central conducting device displaying a composed piece of music. We explored this system by creating a chamber choir of ChoirMob performers, consisting of both experienced musicians and novices, that performed in rehearsals and live concert scenarios with music composed using the Vuzik interface.
@inproceedings{dAlessandro2012, author = {d'Alessandro, Nicolas and Pon, Aura and Wang, Johnty and Eagle, David and Sharlin, Ehud and Fels, Sidney}, title = {A Digital Mobile Choir: Joining Two Interfaces towards Composing and Performing Collaborative Mobile Music}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178199}, url = {http://www.nime.org/proceedings/2012/nime2012_310.pdf}, keywords = {singing synthesis, mobile music, interactive display, interface design, OSC, ChoirMob, Vuzik, social music, choir} }
Ivica Bukvic, Liesl Baum, Bennett Layman, and Kendall Woodard. 2012. Granular Learning Objects for Instrument Design and Collaborative Performance in K-12 Education. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178223
Abstract
Download PDF DOI
In the following paper we propose a new tiered granularity approach to developing modules or abstractions in the Pd-L2Ork visual multimedia programming environment with the specific goal of devising creative environments that scale their educational scope and difficulty to encompass several stages within the context of primary and secondary (K-12) education. As part of a preliminary study, the team designed modules targeting 4th and 5th grade students, the primary focus being exploration of creativity and collaborative learning. The resulting environment infrastructure, coupled with the Boys & Girls Club of Southwest Virginia Satellite Linux Laptop Orchestra, offers opportunities for students to design and build original instruments, master them through a series of rehearsals, and ultimately utilize them as part of an ensemble in a performance of a predetermined piece whose parameters are coordinated by the instructor through an embedded networked module. The ensuing model will serve for the assessment and development of a stronger connection with content-area standards and the development of creative thinking and collaboration skills.
@inproceedings{Bukvic2012, author = {Bukvic, Ivica and Baum, Liesl and Layman, Bennett and Woodard, Kendall}, title = {Granular Learning Objects for Instrument Design and Collaborative Performance in K-12 Education}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178223}, url = {http://www.nime.org/proceedings/2012/nime2012_315.pdf}, keywords = {Granular, Learning Objects, K-12, Education, L2Ork, PdL2Ork} }
John Buschert. 2012. Musician Maker: Play expressive music without practice. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178225
Abstract
Download PDF DOI
Musician Maker is a system to allow novice players the opportunity to create expressive improvisational music. While the system plays an accompaniment background chord progression, each participant plays some kind of controller to make music through the system. The program takes the signals from the controllers and adjusts the pitches somewhat so that the players are limited to notes which fit the chord progression. The various controllers are designed to be very easy and intuitive so anyone can pick one up and quickly be able to play it. Since the computer is making sure that wrong notes are avoided, even inexperienced players can immediately make music and enjoy focusing on some of the more expressive elements and thus become musicians.
@inproceedings{Buschert2012, author = {Buschert, John}, title = {Musician Maker: Play expressive music without practice}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178225}, url = {http://www.nime.org/proceedings/2012/nime2012_36.pdf}, keywords = {Musical Instrument, Electronic, Computer Music, Novice, Controller} }
Marcello Giordano, Stephen Sinclair, and Marcelo M. Wanderley. 2012. Bowing a vibration-enhanced force feedback device. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178265
Abstract
Download PDF DOI
Force-feedback devices can provide haptic feedback during interaction with physical models for sound synthesis. However, low-end devices may not always provide high-fidelity display of the acoustic characteristics of the model. This article describes an enhanced handle for the Phantom Omni containing a vibration actuator intended to display the high frequency portion of the synthesized forces. Measurements are provided to show that this approach achieves a more faithful representation of the acoustic signal, overcoming limitations in the device control and dynamics.
@inproceedings{Giordano2012, author = {Giordano, Marcello and Sinclair, Stephen and Wanderley, Marcelo M.}, title = {Bowing a vibration-enhanced force feedback device}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178265}, url = {http://www.nime.org/proceedings/2012/nime2012_37.pdf}, keywords = {Haptics, force feedback, bowing, audio, interaction} }
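The abstract above describes sending the high-frequency portion of the synthesized force to a vibration actuator while the force-feedback device renders the rest. A bare-bones Python sketch of that band split with a one-pole low-pass filter (the cutoff and sample rate are placeholders, not the authors' values):

    import math

    # Split a force signal into a low band (for the device's motors) and the
    # high-frequency remainder (for the vibration actuator in the handle).
    def split_bands(signal, sample_rate=1000.0, cutoff_hz=20.0):
        """Return (low_band, high_band) computed with a one-pole low-pass."""
        alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
        low, high, state = [], [], 0.0
        for x in signal:
            state += alpha * (x - state)    # one-pole low-pass
            low.append(state)
            high.append(x - state)          # residual carries the high frequencies
        return low, high

    force = [0.0, 0.5, 1.0, 0.2, -0.3, 0.7, 0.1, 0.0]
    low, high = split_bands(force)
    print("low :", [round(v, 3) for v in low])
    print("high:", [round(v, 3) for v in high])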
Dale Parson and Phillip Reed. 2012. The Planetarium as a Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180567
Abstract
Download PDF DOI
With the advent of high resolution digital video projection and high quality spatial sound systems in modern planetariums, the planetarium can become the basis for a unique set of virtual musical instrument capabilities that go well beyond packaged multimedia shows. The dome, circular speaker and circular seating arrangements provide means for skilled composers and performers to create a virtual reality in which attendees are immersed in the composite instrument. This initial foray into designing an audio-visual computer-based instrument for improvisational performance in a planetarium builds on prior, successful work in mapping the rules and state of two-dimensional computer board games to improvised computer music. The unique visual and audio geometries of the planetarium present challenges and opportunities. The game tessellates the dome in mobile, colored hexagons that emulate both atoms and musical scale intervals in an expanding universe. Spatial activity in the game maps to spatial locale and instrument voices in the speakers, in essence creating a virtual orchestra with a string section, percussion section, etc. on the dome. Future work includes distribution of game play via mobile devices to permit attendees to participate in a performance. This environment is open-ended, with great educational and aesthetic potential.
@inproceedings{Parson2012, author = {Parson, Dale and Reed, Phillip}, title = {The Planetarium as a Musical Instrument}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180567}, url = {http://www.nime.org/proceedings/2012/nime2012_47.pdf}, keywords = {aleatory music, algorithmic improvisation, computer game, planetarium} }
Aisen Caro Chacin. 2012. Play-A-Grill: Music To Your Teeth. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178233
Abstract
Download PDF DOI
This paper is an in depth exploration of the fashion object and device, the Play-A-Grill. It details inspirations, socio-cultural implications, technical function and operation, and potential applications for the Play-A-Grill system.
@inproceedings{Chacin2012, author = {Chacin, Aisen Caro}, title = {Play-A-Grill: Music To Your Teeth}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178233}, url = {http://www.nime.org/proceedings/2012/nime2012_48.pdf}, keywords = {Digital Music Players, Hip Hop, Rap, Music Fashion, Grills, Mouth Jewelry, Mouth Controllers, and Bone Conduction Hearing.} }
Stefano Fasciani and Lonce Wyse. 2012. A Voice Interface for Sound Generators: adaptive and automatic mapping of gestures to sound. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178251
Abstract
Download PDF DOI
Sound generators and synthesis engines expose a large set of parameters, allowing run-time timbre morphing and exploration of sonic space. However, control over these high-dimensional interfaces is constrained by the physical limitations of performers. In this paper we propose the exploitation of vocal gesture as an extension or alternative to traditional physical controllers. The approach uses dynamic aspects of vocal sound to control variations in the timbre of the synthesized sound. The mapping from vocal to synthesis parameters is automatically adapted to information extracted from vocal examples as well as to the relationship between parameters and timbre within the synthesizer. The mapping strategy aims to maximize the breadth of the explorable perceptual sonic space over a set of the synthesizer’s real-valued parameters, indirectly driven by the voice-controlled interface.
@inproceedings{FASCIANI2012, author = {Fasciani, Stefano and Wyse, Lonce}, title = {A Voice Interface for Sound Generators: adaptive and automatic mapping of gestures to sound}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178251}, url = {http://www.nime.org/proceedings/2012/nime2012_57.pdf}, keywords = {Voice Control, Adaptive Interface, Automatic Mapping, Timbre Morphing, Sonic Space Exploration} }
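As a heavily reduced sketch of adapting a mapping to vocal examples, in the spirit of the abstract above: the feature names, ranges and plain min-max scaling below are assumptions for illustration, whereas the paper's mapping strategy also accounts for the synthesizer's parameter-to-timbre relationship:

    # Sketch: learn per-feature ranges from vocal example frames, then rescale
    # new frames into synthesizer parameter ranges. Names are illustrative.
    def learn_ranges(example_frames):
        """example_frames: list of dicts of vocal features -> {name: (min, max)}."""
        ranges = {}
        for frame in example_frames:
            for name, value in frame.items():
                lo, hi = ranges.get(name, (value, value))
                ranges[name] = (min(lo, value), max(hi, value))
        return ranges

    def map_frame(frame, ranges, param_ranges):
        """Rescale one vocal feature frame into synth parameter values."""
        params = {}
        for name, (p_lo, p_hi) in param_ranges.items():
            lo, hi = ranges[name]
            norm = 0.0 if hi == lo else (frame[name] - lo) / (hi - lo)
            params[name] = p_lo + norm * (p_hi - p_lo)
        return params

    examples = [{"loudness": 0.1, "centroid": 500.0},
                {"loudness": 0.8, "centroid": 3000.0}]
    ranges = learn_ranges(examples)
    print(map_frame({"loudness": 0.5, "centroid": 1200.0}, ranges,
                    {"loudness": (0.0, 1.0), "centroid": (200.0, 8000.0)}))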
Kamer Ali Yuksel, Sinan Buyukbas, and Elif Ayiter. 2012. An Interface for Emotional Expression in Audio-Visuals. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178457
Abstract
Download PDF DOI
In this work, a comprehensive study is performed on the relationship between audio, visual and emotion by applying the principles of cognitive emotion theory into digital creation. The study is driven by an audiovisual emotion library project that is named AVIEM, which provides an interactive interface for experimentation and evaluation of the perception and creation processes of audiovisuals. AVIEM primarily consists of separate audio and visual libraries and grows with user contribution as users explore different combinations between them. The library provides a wide range of experimentation possibilities by allowing users to create audiovisual relations and logging their emotional responses through its interface. Besides being a resourceful tool of experimentation, AVIEM aims to become a source of inspiration, where digitally created abstract virtual environments and soundscapes can elicit target emotions at a preconscious level, by building genuine audiovisual relations that would engage the viewer on a strong emotional stage. Lastly, various schemes are proposed to visualize information extracted through AVIEM, to improve the navigation and designate the trends and dependencies among audiovisual relations.
@inproceedings{Yuksel2012, author = {Yuksel, Kamer Ali and Buyukbas, Sinan and Ayiter, Elif}, title = {An Interface for Emotional Expression in Audio-Visuals}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178457}, url = {http://www.nime.org/proceedings/2012/nime2012_60.pdf}, keywords = {Designing emotive audiovisuals, cognitive emotion theory, audiovisual perception and interaction, synaesthesia} }
Sang Won Lee, Ajay Srinivasamurthy, Gregoire Tronel, Weibin Shen, and Jason Freeman. 2012. Tok! : A Collaborative Acoustic Instrument using Mobile Phones. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178317
Abstract
Download PDF DOI
Tok! is a collaborative acoustic instrument application for iOS devices aimed at real time percussive music making in a colocated setup. It utilizes the mobility of hand-held devices and transforms them into drumsticks to tap on flat surfaces and produce acoustic music. Tok! is also networked and consists of a shared interactive music score to which the players tap their phones, creating a percussion ensemble. Through their social interaction and real-time modifications to the music score, and through their creative selection of tapping surfaces, the players can collaborate and dynamically create interesting rhythmic music with a variety of timbres.
@inproceedings{Lee2012b, author = {Lee, Sang Won and Srinivasamurthy, Ajay and Tronel, Gregoire and Shen, Weibin and Freeman, Jason}, title = {Tok! : A Collaborative Acoustic Instrument using Mobile Phones}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178317}, url = {http://www.nime.org/proceedings/2012/nime2012_61.pdf}, keywords = {Mobile Phones, Collaboration, Social Interaction, Acoustic Musical Instrument} }
Sang Won Lee, Jason Freeman, and Andrew Collela. 2012. Real-Time Music Notation, Collaborative Improvisation, and Laptop Ensembles. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178315
Abstract
Download PDF DOI
This paper describes recent extensions to LOLC, a text-based environment for collaborative improvisation for laptop ensembles, which integrate acoustic instrumental musicians into the environment. Laptop musicians author short commands to create, transform, and share pre-composed musical fragments, and the resulting notation is digitally displayed, in real time, to instrumental musicians to sight-read in performance. The paper describes the background and motivations of the project, outlines the design of the original LOLC environment and describes its new real-time notation components in detail, and explains the use of these new components in a musical composition, SGLC, by one of the authors.
@inproceedings{Lee2012a, author = {Lee, Sang Won and Freeman, Jason and Collela, Andrew}, title = {Real-Time Music Notation, Collaborative Improvisation, and Laptop Ensembles}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178315}, url = {http://www.nime.org/proceedings/2012/nime2012_62.pdf}, keywords = {Real-time Music Notation, Live Coding, Laptop Orchestra} }
Raymond Migneco and Youngmoo Kim. 2012. A Component-Based Approach for Modeling Plucked-Guitar Excitation Signals. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180541
Abstract
Download PDF DOI
Platforms for mobile computing and gesture recognition provide enticing interfaces for creative expression on virtual musical instruments. However, sound synthesis on these systems is often limited to sample-based synthesizers, which limits their expressive capabilities. Source-filter models are adept for such interfaces since they provide flexible, algorithmic sound synthesis, especially in the case of the guitar. In this paper, we present a data-driven approach for modeling guitar excitation signals using principal components derived from a corpus of excitation signals. Using these components as features, we apply nonlinear principal components analysis to derive a feature space that describes the expressive attributes characteristic to our corpus. Finally, we propose using the reduced dimensionality space as a control interface for an expressive guitar synthesizer.
@inproceedings{Migneco2012, author = {Migneco, Raymond and Kim, Youngmoo}, title = {A Component-Based Approach for Modeling Plucked-Guitar Excitation Signals}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180541}, url = {http://www.nime.org/proceedings/2012/nime2012_63.pdf}, keywords = {Source-filter models, musical instrument synthesis, PCA, touch musical interfaces} }
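The Migneco and Kim entry above outlines a concrete pipeline: derive principal components from a corpus of plucked-guitar excitation signals, then use the reduced space as a synthesis control. As a rough, hypothetical sketch of that idea only (random placeholder data stands in for a real excitation corpus, linear PCA stands in for the nonlinear variant the paper also applies, and nothing here is the authors' implementation):

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder corpus: 200 excitation signals, 1024 samples each.
# In the paper's setting these would be excitations estimated from recorded plucks.
rng = np.random.default_rng(0)
corpus = rng.standard_normal((200, 1024))

# Fit a low-dimensional basis over the corpus.
pca = PCA(n_components=8)
weights = pca.fit_transform(corpus)      # per-signal coordinates in component space

# A point in the reduced space can be mapped back to an excitation signal and
# fed to a string model (e.g. a Karplus-Strong delay line) for synthesis.
control_point = weights.mean(axis=0)     # hypothetical slider position
excitation = pca.inverse_transform(control_point[None, :])[0]
print(excitation.shape)                  # (1024,)
```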
Pedro Patrício. 2012. MuDI - Multimedia Digital Instrument for Composing and Performing Digital Music for Films in Real-time. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180569
Abstract
Download PDF DOI
This article proposes a wireless handheld multimedia digital instrument, which allows one to compose and perform digital music for films in real-time. Not only does it allow the performer and the audience to follow the film images in question, but also the relationship between the gestures performed and the sound generated. Furthermore, it allows one to have an effective control over the sound, and consequently achieve great musical expression. In addition, a method for calibrating the multimedia digital instrument, devised to overcome the lack of a reliable reference point of the accelerometer and a process to obtain a video score are presented. This instrument has been used in a number of concerts (Portugal and Brazil) so as to test its robustness.
@inproceedings{Patricio2012, author = {Patr{\'\i}cio, Pedro}, title = {MuDI - Multimedia Digital Instrument for Composing and Performing Digital Music for Films in Real-time}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180569}, url = {http://www.nime.org/proceedings/2012/nime2012_64.pdf}, keywords = {Digital musical instrument, mobile music performance, real-time musical composition, digital sound synthesis.} }
Ayaka Endo, Takuma Moriyama, and Yasuo Kuhara. 2012. Tweet Harp: Laser Harp Generating Voice and Text of Real-time Tweets in Twitter. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178249
Abstract
Download PDF DOI
Tweet Harp is a musical instrument using Twitter and a laser harp. This instrument features the use of the human voice speaking tweets in Twitter as sounds for music. It is played by touching the six harp strings of laser beams. Tweet Harp gets the latest tweets from Twitter in real-time, and it creates music like a song with unexpected words. It also creates animation displaying the texts at the same time. The audience can visually enjoy this performance by sounds synchronized with animation. If the audience has a Twitter account, they can participate in the performance by tweeting.
@inproceedings{Endo2012, author = {Endo, Ayaka and Moriyama, Takuma and Kuhara, Yasuo}, title = {Tweet Harp: Laser Harp Generating Voice and Text of Real-time Tweets in Twitter}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178249}, url = {http://www.nime.org/proceedings/2012/nime2012_66.pdf}, keywords = {Twitter, laser harp, text, speech, voice, AppleScript, Quartz Composer, Max/MSP, TTS, Arduino} }
Benjamin D. Smith and Guy E. Garnett. 2012. Unsupervised Play: Machine Learning Toolkit for Max. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178419
Abstract
Download PDF DOI
Machine learning models are useful and attractive tools for the interactive computer musician, enabling a breadth of interfaces and instruments. With current consumer hardware it becomes possible to run advanced machine learning algorithms in demanding performance situations, yet expertise remains a prominent entry barrier for most would-be users. Currently available implementations predominantly employ supervised machine learning techniques, while the adaptive, self-organizing capabilities of unsupervised models are not generally available. We present a free, new toolbox of unsupervised machine learning algorithms implemented in Max 5 to support real-time interactive music and video, aimed at the non-expert computer artist.
@inproceedings{Smith2012, author = {Smith, Benjamin D. and Garnett, Guy E.}, title = {Unsupervised Play: Machine Learning Toolkit for Max}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178419}, url = {http://www.nime.org/proceedings/2012/nime2012_68.pdf}, keywords = {NIME, unsupervised machine learning, adaptive resonance theory, self-organizing maps, Max 5} }
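The toolkit above ships as Max 5 externals, so the following is only a language-neutral illustration of the kind of unsupervised model it exposes: a minimal self-organizing-map update, written as a generic textbook sketch rather than the toolkit's own code.

```python
import numpy as np

rng = np.random.default_rng(1)
som = rng.random((8, 8, 3))          # 8x8 map of 3-dimensional prototype vectors

def som_step(som, x, lr=0.1, radius=2.0):
    """One online update: move the best-matching unit and its neighbours toward x."""
    dists = np.linalg.norm(som - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
    ii, jj = np.meshgrid(np.arange(som.shape[0]), np.arange(som.shape[1]), indexing="ij")
    grid_dist2 = (ii - bi) ** 2 + (jj - bj) ** 2
    influence = np.exp(-grid_dist2 / (2 * radius ** 2))
    som += lr * influence[..., None] * (x - som)
    return bi, bj

# Stream of hypothetical feature vectors, e.g. normalized audio descriptors.
for frame in rng.random((500, 3)):
    winner = som_step(som, frame)
```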
Akito van Troyer. 2012. DrumTop: Playing with Everyday Objects. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178441
Abstract
Download PDF DOI
We introduce a prototype of a new tangible step sequencer that transforms everyday objects into percussive musical instruments. DrumTop adapts our everyday task-oriented hand gestures with everyday objects as the basis of musical interaction, resulting in an easily graspable musical interface for musical novices. The sound, tactile, and visual feedback comes directly from everyday objects as the players program drum patterns and rearrange the objects on the tabletop interface. DrumTop encourages the players to explore the musical potentiality of their surroundings and be musically creative through rhythmic interactions with everyday objects. The interface consists of transducers that trigger a hit, causing the objects themselves to produce sound when they are in close contact with the transducers. We discuss how we designed and implemented our current DrumTop prototype and describe how players interact with the interface. We then highlight the players' experience with DrumTop and our plans for future work in the fields of music education and performance.
@inproceedings{Troyer2012, author = {van Troyer, Akito}, title = {DrumTop: Playing with Everyday Objects}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178441}, url = {http://www.nime.org/proceedings/2012/nime2012_70.pdf}, keywords = {Tangible User Interfaces, Playful Experience, Percussion, Step Sequencer, Transducers, Everyday Objects} }
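As a purely hypothetical sketch of the step-sequencer logic the DrumTop abstract describes (the pattern, tempo, and the `pulse` stand-in for driving a transducer are all invented here, not taken from the paper):

```python
import time

# Hypothetical 8-step pattern for 4 transducer channels (1 = pulse the object).
pattern = [
    [1, 0, 0, 0, 1, 0, 0, 0],   # channel 0
    [0, 0, 1, 0, 0, 0, 1, 0],   # channel 1
    [1, 1, 0, 1, 0, 1, 0, 0],   # channel 2
    [0, 0, 0, 0, 1, 0, 0, 1],   # channel 3
]
bpm = 120
step_dur = 60.0 / bpm / 2       # eighth-note steps

def pulse(channel):
    """Stand-in for driving a transducer; here it just prints."""
    print(f"hit channel {channel}")

step = 0
for _ in range(16):             # run two bars, then stop
    for ch, row in enumerate(pattern):
        if row[step % len(row)]:
            pulse(ch)
    step += 1
    time.sleep(step_dur)
```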
Christopher Ariza. 2012. The Dual-Analog Gamepad as a Practical Platform for Live Electronics Instrument and Interface Design. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178205
Abstract
Download PDF DOI
This paper demonstrates the practical benefits and performance opportunities of using the dual-analog gamepad as a controller for real-time live electronics. Numerous diverse instruments and interfaces, as well as detailed control mappings, are described. Approaches to instrument and preset switching are also presented. While all of the instrument implementations presented are made available through the Martingale Pd library, resources for other synthesis languages are also described.
@inproceedings{Ariza2012, author = {Ariza, Christopher}, title = {The Dual-Analog Gamepad as a Practical Platform for Live Electronics Instrument and Interface Design}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178205}, url = {http://www.nime.org/proceedings/2012/nime2012_73.pdf}, keywords = {Controllers, live electronics, dual-analog, gamepad, joystick, computer music, instrument, interface} }
Bryan Pardo, David Little, and Darren Gergle. 2012. Towards Speeding Audio EQ Interface Building with Transfer Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1180563
Abstract
Download PDF DOI
Potential users of audio production software, such as parametric audio equalizers, may be discouraged by the complexity of the interface. A new approach creates a personalized on-screen slider that lets the user manipulate the audio in terms of a descriptive term (e.g. "warm"), without the user needing to learn or use the interface of an equalizer. This system learns mappings by presenting a sequence of sounds to the user and correlating the gain in each frequency band with the user’s preference rating. The system speeds learning through transfer learning. Results on a study of 35 participants show how an effective, personalized audio manipulation tool can be automatically built after only three ratings from the user.
@inproceedings{Pardo2012, author = {Pardo, Bryan and Little, David and Gergle, Darren}, title = {Towards Speeding Audio EQ Interface Building with Transfer Learning}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1180563}, url = {http://www.nime.org/proceedings/2012/nime2012_74.pdf}, keywords = {Human computer interaction, music, multimedia production, transfer learning} }
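The Pardo et al. abstract describes correlating the gain in each frequency band with the listener's ratings to build a one-slider controller for a descriptive term such as "warm". A minimal sketch of that correlation step, with a synthetic listener standing in for real ratings (this is an illustration of the idea, not the authors' system):

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_probes = 40, 25

# Hypothetical probes: each probe is a random EQ curve (gain per band, in dB)
# that the user rates for how "warm" the processed sound is.
probe_curves = rng.uniform(-12, 12, size=(n_probes, n_bands))
ratings = probe_curves[:, :10].mean(axis=1) + rng.normal(0, 1, n_probes)  # fake listener

# Correlate each band's gain with the ratings; the correlation vector becomes
# the direction a single "warm" slider moves through in EQ space.
centered_g = probe_curves - probe_curves.mean(axis=0)
centered_r = ratings - ratings.mean()
corr = centered_g.T @ centered_r / (
    np.linalg.norm(centered_g, axis=0) * np.linalg.norm(centered_r) + 1e-12)

def slider_to_eq(position, depth_db=12.0):
    """Map a slider position in [-1, 1] to a per-band gain curve."""
    return position * depth_db * corr

print(slider_to_eq(0.5)[:5])
```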
Alistair G. Stead, Alan F. Blackwell, and Samual Aaron. 2012. Graphic Score Grammars for End-Users. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178423
Abstract
Download PDF DOI
We describe a system that allows non-programmers to specify the grammar for a novel graphic score notation of their own design, defining performance notations suitable for drawing in live situations on a surface such as a whiteboard. The score can be interpreted via the camera of a smartphone, interactively scanned over the whiteboard to control the parameters of synthesisers implemented in Overtone. The visual grammar of the score, and its correspondence to the sound parameters, can be defined by the user with a simple visual condition-action language. This language can be edited on the touchscreen of an Android phone, allowing the grammar to be modified live in performance situations. Interactive scanning of the score is visible to the audience as a performance interface, with a colour classifier and visual feature recogniser causing the grammar-specified events to be sent using OSC messages via Wi-Fi from the hand-held smartphone to an audio workstation.
@inproceedings{Stead2012, author = {Stead, Alistair G. and Blackwell, Alan F. and Aaron, Samual}, title = {Graphic Score Grammars for End-Users}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178423}, url = {http://www.nime.org/proceedings/2012/nime2012_77.pdf}, keywords = {Graphic Notation, Disposable Notation, Live Coding, Computer Vision, Mobile Music} }
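As a hypothetical sketch of a condition-action rule of the kind the entry above lets end-users define (the rule contents, symbol attributes, and OSC addresses are invented for illustration):

```python
# Hypothetical condition-action rules: if a drawn symbol with a given
# colour/shape is scanned, emit a control event (OSC address, value).
rules = [
    {"if": {"colour": "red", "shape": "circle"},    "then": ("/synth/freq", 440.0)},
    {"if": {"colour": "blue", "shape": "triangle"}, "then": ("/synth/cutoff", 0.3)},
]

def interpret(symbol):
    """Return the (address, value) events triggered by one scanned symbol."""
    events = []
    for rule in rules:
        if all(symbol.get(k) == v for k, v in rule["if"].items()):
            events.append(rule["then"])
    return events

# One symbol detected by the phone camera while scanning the whiteboard score.
print(interpret({"colour": "red", "shape": "circle", "x": 0.2}))
```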
Jay Alan Jackson. 2012. Bubble Drum-agog-ing: Polyrhythm Games & Other Inter Activities. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178293
Abstract
Download PDF DOI
This paper describes the bubble drum set, along with several polyrhythm games and interactive music activities that have been developed to show its potential for use as an input controller. The bubble drum set combines various sizes of colorful exercise balls, held in place or suspended with conventional drum hardware, creating a trap kit configuration in which the spherical surfaces can be struck and stroked from varying angles using sticks, brushes, or even by hands alone. The acoustic properties of these fitness balls are surprisingly rich, capable of producing subtle differences in timbre while being responsive over a wide dynamic range. The entire set has been purposefully designed to provide a player with the means to achieve a rigorous and healthy physical workout, in addition to the beneficial cognitive and sensory stimulation that comes from playing music with a sensitive and expressive instrument.
@inproceedings{Jackson2012, author = {Jackson, Jay Alan}, title = {Bubble Drum-agog-ing: Polyrhythm Games \& Other Inter Activities}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178293}, url = {http://www.nime.org/proceedings/2012/nime2012_8.pdf}, keywords = {Bubble Drums, WaveMachine Lab’s Drumagog, Polyrhythms.} }
Jordan Hochenbaum and Ajay Kapur. 2012. Drum Stroke Computing: Multimodal Signal Processing for Drum Stroke Identification and Performance Metrics. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178287
Abstract
Download PDF DOI
In this paper we present a multimodal system for analyzing drum performance. In the first example we perform automatic drum hand recognition utilizing a technique for automatic labeling of training data using direct sensors, and only indirect sensors (e.g. a microphone) for testing. Left/Right drum hand recognition is achieved with an average accuracy of 84.95% for two performers. Secondly we provide a study investigating multimodality dependent performance metrics analysis.
@inproceedings{Hochenbaum2012, author = {Hochenbaum, Jordan and Kapur, Ajay}, title = {Drum Stroke Computing: Multimodal Signal Processing for Drum Stroke Identification and Performance Metrics}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178287}, url = {http://www.nime.org/proceedings/2012/nime2012_82.pdf}, keywords = {Multimodality, Drum stroke identification, surrogate sensors, surrogate data training, machine learning, music information retrieval, performance metrics} }
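As a loose sketch of the "surrogate sensor" training idea in the entry above, where direct sensors provide the left/right labels automatically and only indirect (microphone-derived) features are used at test time; all data and feature dimensions below are synthetic placeholders, not the authors' pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Hypothetical stand-in data: one feature vector per detected drum stroke,
# e.g. spectral/temporal descriptors computed from the microphone signal.
features = rng.standard_normal((400, 12))
# Labels (left=0 / right=1) that, in the paper's setup, come "for free" from
# direct sensors on the performer rather than from manual annotation.
labels = (features[:, 0] + 0.5 * rng.standard_normal(400) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("hand-recognition accuracy:", clf.score(X_test, y_test))
```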
Benjamin Levy, Georges Bloch, and Gerard Assayag. 2012. OMaxist Dialectics: Capturing, Visualizing and Expanding Improvisations. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178327
Abstract
Download PDF DOI
OMax is an improvisation software based on a graph representation encoding the pattern repetitions and structures of a sequence, built incrementally and in real-time from a live MIDI or audio source. We present in this paper a totally rewritten version of the software. The new design led us to refine the spectral listening of OMax and to consider different methods to build the symbolic alphabet labeling our symbolic units. The very modular and versatile architecture makes possible new musical configurations, and we tried the software with different styles and musical situations. A novel visualization is proposed, which displays the current state of the learnt knowledge and allows one to notice, both on the fly and a posteriori, points of musical interest and higher level structures.
@inproceedings{Levy2012, author = {Levy, Benjamin and Bloch, Georges and Assayag, Gerard}, title = {OMaxist Dialectics: Capturing, Visualizing and Expanding Improvisations}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178327}, url = {http://www.nime.org/proceedings/2012/nime2012_87.pdf}, keywords = {OMax, Improvisation, Machine Learning, Machine Listening, Visualization, Sequence Model, Software Architecture} }
Dalia El-Shimy, Thomas Hermann, and Jeremy Cooperstock. 2012. A Reactive Environment for Dynamic Volume Control. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178247
Abstract
Download PDF DOI
In this paper, we discuss the design and testing of a reactive environment for musical performance. Driven by the interpersonal interactions amongst musicians, our system gives users, i.e., several musicians playing together in a band, real-time control over certain aspects of their performance, enabling them to change volume levels dynamically simply by moving around. It differs most notably from the majority of ventures into the design of novel musical interfaces and installations in its multidisciplinary approach, drawing on techniques from Human-Computer Interaction, social sciences and ludology. Our User-Centered Design methodology was central to producing an interactive environment that enhances traditional performance with novel functionalities. During a formal experiment, musicians reported finding our system exciting and enjoyable. We also introduce some additional interactions that can further enhance the interactivity of our reactive environment. In describing the particular challenges of working with such a unique and creative user as the musician, we hope that our approach can be of guidance to interface developers working on applications of a creative nature.
@inproceedings{ElShimy2012, author = {El-Shimy, Dalia and Hermann, Thomas and Cooperstock, Jeremy}, title = {A Reactive Environment for Dynamic Volume Control}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178247}, url = {http://www.nime.org/proceedings/2012/nime2012_88.pdf} }
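The El-Shimy et al. abstract does not spell out its mapping, so the following is purely a hypothetical distance-to-gain curve illustrating how moving around a shared space could rebalance the mix between two players (parameter names and values are invented):

```python
import math

def gain_for_pair(pos_a, pos_b, ref_dist=1.5, rolloff=0.5, floor=0.1):
    """Map the distance between two musicians to a gain in [floor, 1].

    Closer than ref_dist -> full level; further away -> the other player's
    level rolls off, so walking around the room changes the balance.
    """
    d = math.dist(pos_a, pos_b)
    if d <= ref_dist:
        return 1.0
    return max(floor, 1.0 / (1.0 + rolloff * (d - ref_dist)))

# Example: a guitarist gradually walks away from the vocalist.
for x in (1.0, 2.0, 4.0, 8.0):
    print(x, round(gain_for_pair((0.0, 0.0), (x, 0.0)), 2))
```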
Greg Surges. 2012. DIY Hybrid Analog/Digital Modular Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178427
Abstract
Download PDF DOI
This paper describes three hardware devices for integrating modular synthesizers with computers, each with a different approach to the relationship between hardware and software. The devices discussed are the USB-Octomod, an 8-channel OSC-compatible computer-controlled control-voltage generator, the tabulaRasa, a hardware table-lookup oscillator synthesis module with corresponding waveform design software, and the pucktronix.snake.corral, a dual 8x8 computer-controlled analog signal routing matrix. The devices make use of open-source hardware and software, and are designed around affordable micro-controllers and integrated circuits.
@inproceedings{Surges2012, author = {Surges, Greg}, title = {DIY Hybrid Analog/Digital Modular Synthesis}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178427}, url = {http://www.nime.org/proceedings/2012/nime2012_9.pdf}, keywords = {modular synthesis, interface, diy, open-source} }
Blake Johnston, Owen Vallis, and Ajay Kapur. 2012. A Comparative User Study of Two Methods of Control on a Multi-Touch Surface for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178297
Abstract
Download PDF DOI
Mapping between musical interfaces, and sound engines, is integral to the nature of an interface [3]. Traditionally, musical applications for touch surfaces have directly mapped touch coordinates to control parameters. However, recent work [9] is looking at new methods of control that use relational multi-point analysis. Instead of directly using touch coordinates, which are related to a global screen space, an initial touch is used as an ‘anchor’ to create a local coordinate space in which subsequent touches can be located and compared. This local coordinate space frees touches from being locked to one single relationship, and allows for more complex interaction between touch events. So far, this method has only been implemented on Apple computer’s small capacitive touch pads. Additionally, there has yet to be a user study that directly compares [9] against mappings of touch events within global coordinate spaces. With this in mind, we have developed and evaluated two interfaces with the aim of determining and quantifying some of these differences within the context of our custom large multi-touch surfaces [1].
@inproceedings{Johnston2012, author = {Johnston, Blake and Vallis, Owen and Kapur, Ajay}, title = {A Comparative User Study of Two Methods of Control on a Multi-Touch Surface for Musical Expression}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178297}, url = {http://www.nime.org/proceedings/2012/nime2012_94.pdf}, keywords = {Multi-Touch, User Study, Relational-point interface} }
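The relational multi-point idea compared in the study above can be illustrated with a small coordinate transform: subsequent touches are described relative to an anchor touch rather than in global screen coordinates. A hypothetical sketch (coordinates are invented):

```python
import math

def to_local(anchor, touch):
    """Express a touch relative to an 'anchor' touch: distance and angle
    instead of absolute screen coordinates."""
    dx, dy = touch[0] - anchor[0], touch[1] - anchor[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

# Global-coordinate mapping: the touch position itself drives parameters.
touch = (412, 880)
# Relational mapping: the same touch, described relative to an anchor touch,
# is independent of where on the surface the gesture happens.
anchor = (400, 860)
print(to_local(anchor, touch))   # (distance, angle in radians)
```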
Cory Levinson. 2012. TedStick: A Tangible Electrophonic Drumstick. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178325
Abstract
Download PDF DOI
TedStick is a new wireless musical instrument that processes acoustic sounds resonating within its wooden body and manipulates them via gestural movements. The sounds are transduced by a piezoelectric sensor inside the wooden body, so any tactile contact with TedStick is transmitted as audio and further processed by a computer. The main method for performing with TedStick focuses on extracting diverse sounds from within the resonant properties of TedStick itself. This is done by holding TedStick in one hand and a standard drumstick in the opposite hand while tapping, rubbing, or scraping the two against each other. Gestural movements of TedStick are then mapped to parameters for several sound effects including pitch shift, delay, reverb and low/high pass filters. Using this technique the hand holding the drumstick can control the acoustic sounds/interaction between the sticks while the hand holding TedStick can focus purely on controlling the sound manipulation and effects parameters.
@inproceedings{Levinson2012, author = {Levinson, Cory}, title = {TedStick: A Tangible Electrophonic Drumstick}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178325}, url = {http://www.nime.org/proceedings/2012/nime2012_96.pdf}, keywords = {tangible user interface, piezoelectric sensors, gestural performance, digital sound manipulation} }
Jia-Liang Lu, Da-Lei Fang, Yi Qin, and Jiu-Qiang Tang. 2012. Wireless Interactive Sensor Platform for Real-Time Audio-Visual Experience. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178329
Abstract
Download PDF DOI
WIS platform is a wireless interactive sensor platform designed to support dynamic and interactive applications. The platform consists of a capture system which includes multiple on-body Zigbee compatible motion sensors, a processing unit and an audio-visual display control unit. It has a complete open architecture and provides interfaces to interact with other user-designed applications. Therefore, WIS platform is highly extensible. Through gesture recognition by on-body sensor nodes and data processing, WIS platform can offer real-time audio and visual experiences to the users. Based on this platform, we set up a multimedia installation that presents a new interaction model between the participants and the audio-visual environment. Furthermore, we are also trying to apply WIS platform to other installations and performances.
@inproceedings{Lu2012, author = {Lu, Jia-Liang and Fang, Da-Lei and Qin, Yi and Tang, Jiu-Qiang}, title = {Wireless Interactive Sensor Platform for Real-Time Audio-Visual Experience}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178329}, url = {http://www.nime.org/proceedings/2012/nime2012_98.pdf}, keywords = {Interactive, Audio-visual experience} }
Ajay Kapur, Jim Murphy, and Dale Carnegie. 2012. Kritaanjali: A Robotic Harmonium for Performance, Pedogogy and Research. Proceedings of the International Conference on New Interfaces for Musical Expression, University of Michigan. http://doi.org/10.5281/zenodo.1178299
Abstract
Download PDF DOI
In this paper, we introduce Kritaanjli, a robotic harmonium. Details concerning the design, construction, and use of Kritaanjli are discussed. After an examination of related work, quantitative research concerning the hardware chosen in the construction of the instrument is shown, as is a thorough exposition of the design process and use of CAD/CAM techniques in the design lifecycle of the instrument. Additionally, avenues for future work and compositional practices are focused upon, with particular emphasis placed on human/robot interaction, pedagogical techniques afforded by the robotic instrument, and compositional avenues made accessible through the use of Kritaanjli.
@inproceedings{Kapur2012, author = {Kapur, Ajay and Murphy, Jim and Carnegie, Dale}, title = {Kritaanjali: A Robotic Harmonium for Performance, Pedogogy and Research}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2012}, publisher = {University of Michigan}, address = {Ann Arbor, Michigan}, issn = {2220-4806}, doi = {10.5281/zenodo.1178299}, url = {http://www.nime.org/proceedings/2012/nime2012_99.pdf}, keywords = {Musical Robotics, pedagogy, North Indian Classical Music, augmented instruments} }
2011
Dan Overholt. 2011. The Overtone Fiddle: an Actuated Acoustic Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 30–33. http://doi.org/10.5281/zenodo.1178127
Abstract
Download PDF DOI
The Overtone Fiddle is a new violin-family instrument that incorporates electronic sensors, integrated DSP, and physical actuation of the acoustic body. An embedded tactile sound transducer creates extra vibrations in the body of the Overtone Fiddle, allowing performer control and sensation via both traditional violin techniques, as well as extended playing techniques that incorporate shared man/machine control of the resulting sound. A magnetic pickup system is mounted to the end of the fiddle’s fingerboard in order to detect the signals from the vibrating strings, deliberately not capturing vibrations from the full body of the instrument. This focused sensing approach allows less restrained use of DSP-generated feedback signals, as there is very little direct leakage from the actuator embedded in the body of the instrument back to the pickup.
@inproceedings{Overholt2011, author = {Overholt, Dan}, title = {The Overtone Fiddle: an Actuated Acoustic Instrument}, pages = {30--33}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178127}, url = {http://www.nime.org/proceedings/2011/nime2011_004.pdf}, presentation-video = {https://vimeo.com/26795157/}, keywords = {Actuated Musical Instruments, Hybrid Instruments, Active Acoustics, Electronic Violin } }
Matthew Montag, Stefan Sullivan, Scott Dickey, and Colby Leider. 2011. A Low-Cost, Low-Latency Multi-Touch Table with Haptic Feedback for Musical Applications. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 8–13. http://doi.org/10.5281/zenodo.1178115
Abstract
Download PDF DOI
During the past decade, multi-touch surfaces have emerged as valuable tools for collaboration, display, interaction, and musical expression. Unfortunately, they tend to be costly and often suffer from two drawbacks for music performance: (1) relatively high latency owing to their sensing mechanism, and (2) lack of haptic feedback. We analyze the latency present in several current multi-touch platforms, and we describe a new custom system that reduces latency to an average of 30 ms while providing programmable haptic feedback to the user. The paper concludes with a description of ongoing and future work.
@inproceedings{Montag2011, author = {Montag, Matthew and Sullivan, Stefan and Dickey, Scott and Leider, Colby}, title = {A Low-Cost, Low-Latency Multi-Touch Table with Haptic Feedback for Musical Applications}, pages = {8--13}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178115}, url = {http://www.nime.org/proceedings/2011/nime2011_008.pdf}, presentation-video = {https://vimeo.com/26799018/}, keywords = {multi-touch, haptics, frustrated total internal reflection, music performance, music composition, latency, DIY } }
Greg Shear and Matthew Wright. 2011. The Electromagnetically Sustained Rhodes Piano. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 14–17. http://doi.org/10.5281/zenodo.1178161
Abstract
Download PDF DOI
The Electromagnetically Sustained Rhodes Piano is an augmentation of the original instrument with additional control over the amplitude envelope of individual notes. This includes slow attacks and infinite sustain while preserving the familiar spectral qualities of this classic electromechanical piano. These additional parameters are controlled with aftertouch on the existing keyboard, extending standard piano technique. Two sustain methods were investigated, driving the actuator first with a pure sine wave, and second with the output signal of the sensor. A special isolation method effectively decouples the sensors from the actuators and tames unruly feedback in the high-gain signal path.
@inproceedings{Shear2011, author = {Shear, Greg and Wright, Matthew}, title = {The Electromagnetically Sustained Rhodes Piano}, pages = {14--17}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178161}, url = {http://www.nime.org/proceedings/2011/nime2011_014.pdf}, presentation-video = {https://vimeo.com/26802504/}, keywords = {Rhodes, keyboard, electromagnetic, sustain, augmented instrument, feedback, aftertouch } }
Laurel S. Pardue, Andrew Boch, Matt Boch, Christine Southworth, and Alex Rigopulos. 2011. Gamelan Elektrika: An Electronic Balinese Gamelan. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 18–23. http://doi.org/10.5281/zenodo.1178131
Abstract
Download PDF DOI
This paper describes the motivation and construction of Gamelan Elektrika, a new electronic gamelan modeled after a Balinese Gong Kebyar. The first of its kind, Elektrika consists of seven instruments acting as MIDI controllers accompanied by traditional percussion and played by 11 or more performers following Balinese performance practice. Three main percussive instrument designs were executed using a combination of force sensitive resistors, piezos, and capacitive sensing. While the instrument interfaces are designed to play interchangeably with the original, the sound and travel possibilities they enable are tremendous. MIDI enables a massive new sound palette with new scales beyond the quirky traditional tuning and non-traditional sounds. It also allows simplified transcription for an aurally taught tradition. Significantly, it reduces the transportation challenges of a previously large and heavy ensemble, creating opportunities for wider audiences to experience Gong Kebyar's enchanting sound. True to the spirit of oneness in Balinese music, as one of the first large all-MIDI ensembles, Elektrika challenges performers to trust silent instruments and develop an understanding of highly intricate and interlocking music not through the sound of the individual, but through the sound of the whole.
@inproceedings{Pardue2011, author = {Pardue, Laurel S. and Boch, Andrew and Boch, Matt and Southworth, Christine and Rigopulos, Alex}, title = {Gamelan Elektrika: An Electronic Balinese Gamelan}, pages = {18--23}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178131}, url = {http://www.nime.org/proceedings/2011/nime2011_018.pdf}, presentation-video = {https://vimeo.com/26803278/}, keywords = {bali, gamelan, musical instrument design, MIDI ensemble } }
Jeong-seob Lee and Woon Seung Yeo. 2011. Sonicstrument : A Musical Interface with Stereotypical Acoustic Transducers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 24–27. http://doi.org/10.5281/zenodo.1180259
Abstract
Download PDF DOI
This paper introduces Sonicstrument, a sound-based interface that traces the user’s hand motions. Sonicstrument utilizes stereotypical acoustic transducers (i.e., a pair of earphones and a microphone) for transmission and reception of acoustic signals whose frequencies are within the highest area of human hearing range that can rarely be perceived by most people. Being simpler in structure and easier to implement than typical ultrasonic motion detectors with special transducers, this system is robust and offers precise results without introducing any undesired sonic disturbance to users. We describe the design and implementation of Sonicstrument, evaluate its performance, and present two practical applications of the system in music and interactive performance.
@inproceedings{Lee2011, author = {Lee, Jeong-seob and Yeo, Woon Seung}, title = {Sonicstrument : A Musical Interface with Stereotypical Acoustic Transducers}, pages = {24--27}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1180259}, url = {http://www.nime.org/proceedings/2011/nime2011_024.pdf}, presentation-video = {https://vimeo.com/26804455/}, keywords = {Stereotypical transducers, audible sound, Doppler effect, handfree interface, musical instrument, interactive performance } }
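As a rough illustration of the Doppler-based sensing the Sonicstrument entry describes, the sketch below peak-picks the microphone spectrum near a high-frequency probe tone; the carrier frequency, frame length, and the synthetic "microphone" frame are assumptions, not values from the paper:

```python
import numpy as np

fs, carrier = 44100, 18000.0     # sample rate and near-inaudible probe tone (Hz)
n = 4096
t = np.arange(n) / fs

# Hypothetical microphone frame: the probe tone shifted by a moving hand (+40 Hz)
# plus background noise. In the real system this frame comes from the mic input.
frame = (np.sin(2 * np.pi * (carrier + 40.0) * t)
         + 0.1 * np.random.default_rng(4).standard_normal(n))

# Look for the spectral peak in a narrow window around the carrier; the offset
# from the carrier (the Doppler shift) indicates hand speed and direction.
spectrum = np.abs(np.fft.rfft(frame * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)
band = (freqs > carrier - 200) & (freqs < carrier + 200)
peak_freq = freqs[band][np.argmax(spectrum[band])]
print("estimated Doppler shift (Hz):", round(peak_freq - carrier, 1))
```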
Scott Smallwood. 2011. Solar Sound Arts: Creating Instruments and Devices Powered by Photovoltaic Technologies. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 28–31. http://doi.org/10.5281/zenodo.1178167
Abstract
Download PDF DOI
This paper describes recent developments in the creation of sound-making instruments and devices powered by photovoltaic (PV) technologies. With the rise of more efficient PV products in diverse packages, the possibilities for creating solar-powered musical instruments, sound installations, and loudspeakers are becoming increasingly realizable. This paper surveys past and recent developments in this area, including several projects by the author, and demonstrates how the use of PV technologies can influence the creative process in unique ways. In addition, this paper discusses how solar sound arts can enhance the aesthetic direction taken by recent work in soundscape studies and acoustic ecology. Finally, this paper will point towards future directions and possibilities as PV technologies continue to evolve and improve in terms of performance, and become more affordable.
@inproceedings{Smallwood2011, author = {Smallwood, Scott}, title = {Solar Sound Arts: Creating Instruments and Devices Powered by Photovoltaic Technologies}, pages = {28--31}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178167}, url = {http://www.nime.org/proceedings/2011/nime2011_028.pdf}, keywords = {Solar Sound Arts, Circuit Bending, Hardware Hacking, Human-Computer Interface Design, Acoustic Ecology, Sound Art, Electroacoustics, Laptop Orchestra, PV Technology } }
Niklas Klügel, Marc R. Frieß, Georg Groh, and Florian Echtler. 2011. An Approach to Collaborative Music Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 32–35. http://doi.org/10.5281/zenodo.1178071
Abstract
Download PDF DOI
This paper provides a discussion of how the electronic, solely IT-based composition and performance of electronic music can be supported in realtime with a collaborative application on a tabletop interface, mediating between single-user style music composition tools and co-located collaborative music improvisation. After having elaborated on the theoretical backgrounds of prerequisites of co-located collaborative tabletop applications as well as the common paradigms in music composition/notation, we will review related work on novel IT approaches to music composition and improvisation. Subsequently, we will present our prototypical implementation and the results.
@inproceedings{Klugel2011, author = {Kl\"{u}gel, Niklas and Frie\ss, Marc R. and Groh, Georg and Echtler, Florian}, title = {An Approach to Collaborative Music Composition}, pages = {32--35}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178071}, url = {http://www.nime.org/proceedings/2011/nime2011_032.pdf}, keywords = {Tabletop Interface, Collaborative Music Composition, Creativity Support } }
Nicolas E. Gold and Roger B. Dannenberg. 2011. A Reference Architecture and Score Representation for Popular Music Human-Computer Music Performance Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 36–39. http://doi.org/10.5281/zenodo.1178033
Abstract
Download PDF DOI
Popular music (characterized by improvised instrumental parts, beat and measure-level organization, and steady tempo) poses challenges for human-computer music performance (HCMP). Pieces of music are typically rearrangeable on-the-fly and involve a high degree of variation from ensemble to ensemble, and even between rehearsal and performance. Computer systems aiming to participate in such ensembles must therefore cope with a dynamic high-level structure in addition to the more traditional problems of beat-tracking, score-following, and machine improvisation. There are many approaches to integrating the components required to implement dynamic human-computer music performance systems. This paper presents a reference architecture designed to allow the typical sub-components (e.g. beat-tracking, tempo prediction, improvisation) to be integrated in a consistent way, allowing them to be combined and/or compared systematically. In addition, the paper presents a dynamic score representation particularly suited to the demands of popular music performance by computer.
@inproceedings{Gold2011, author = {Gold, Nicolas E. and Dannenberg, Roger B.}, title = {A Reference Architecture and Score Representation for Popular Music Human-Computer Music Performance Systems}, pages = {36--39}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178033}, url = {http://www.nime.org/proceedings/2011/nime2011_036.pdf}, keywords = {live performance,popular music,software design} }
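Gold and Dannenberg's dynamic score representation is not detailed in the abstract; the following is only a hypothetical data-structure sketch of a measure-level, re-arrangeable section list of the kind such a system needs (class names and the example arrangement are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Section:
    name: str
    measures: int

@dataclass
class DynamicScore:
    """Measure-level score whose section order can be changed mid-performance."""
    sections: dict
    order: list = field(default_factory=list)

    def cue(self, name):
        """Append a section to the running arrangement (e.g. 'take the chorus again')."""
        self.order.append(name)

    def measure_at(self, k):
        """Map an absolute measure count k to (section, measure-within-section)."""
        for name in self.order:
            m = self.sections[name].measures
            if k < m:
                return name, k
            k -= m
        return None

score = DynamicScore(sections={
    "verse": Section("verse", 8),
    "chorus": Section("chorus", 8),
    "solo": Section("solo", 16),
})
for s in ("verse", "chorus", "verse", "chorus", "solo", "chorus"):
    score.cue(s)
print(score.measure_at(21))   # which section/measure the band is in at measure 21
```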
Mark A. Bokowiec. 2011. V’OCT (Ritual): An Interactive Vocal Work for Bodycoder System and 8 Channel Spatialization. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 40–43. http://doi.org/10.5281/zenodo.1177967
Abstract
Download PDF DOI
V’OCT(Ritual) is a work for solo vocalist/performer and Bodycoder System, composed in residency at Dartington College of Arts (UK) Easter 2010. This paper looks at the technical and compositional methodologies used in the realization of the work, in particular, the choices made with regard to the mapping of sensor elements to various spatialization functions. Kinaesonics will be discussed in relation to the coding of real-time one-to-one mapping of sound to gesture and its expression in terms of hardware and software design. Four forms of expressivity arising out of interactive work with the Bodycoder system will be identified. How sonic (electro-acoustic), programmed, gestural (kinaesonic) and in terms of the V’Oct(Ritual) vocal expressivities are constructed as pragmatic and tangible elements within the compositional practice will be discussed and the subsequent importance of collaboration with a performer will be exposed.
@inproceedings{Bokowiec2011, author = {Bokowiec, Mark A.}, title = {V'OCT (Ritual): An Interactive Vocal Work for Bodycoder System and 8~{C}hannel Spatialization}, pages = {40--43}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177967}, url = {http://www.nime.org/proceedings/2011/nime2011_040.pdf}, keywords = {Bodycoder, Kinaesonics, Expressivity, Gestural Control, Interactive Performance Mechanisms, Collaboration. } }
Florent Berthaut, Haruhiro Katayose, Hironori Wakama, Naoyuki Totani, and Yuichi Sato. 2011. First Person Shooters as Collaborative Multiprocess Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 44–47. http://doi.org/10.5281/zenodo.1177961
Abstract
Download PDF DOI
First Person Shooters are among the most played computer videogames. They combine navigation, interaction and collaboration in 3D virtual environments using simple input devices, i.e. mouse and keyboard. In this paper, we study the possibilities brought by these games for musical interaction. We present the Couacs, a collaborative multiprocess instrument which relies on interaction techniques used in FPS together with new techniques adding the expressiveness required for musical interaction. In particular, the Faders For All game mode allows musicians to perform pattern-based electronic compositions.
@inproceedings{Berthaut2011, author = {Berthaut, Florent and Katayose, Haruhiro and Wakama, Hironori and Totani, Naoyuki and Sato, Yuichi}, title = {First Person Shooters as Collaborative Multiprocess Instruments}, pages = {44--47}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177961}, url = {http://www.nime.org/proceedings/2011/nime2011_044.pdf}, keywords = {the couacs, fps, first person shooters, collaborative, 3D interaction, multiprocess instrument } }
Tilo Hähnel and Axel Berndt. 2011. Studying Interdependencies in Music Performance : An Interactive Tool. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 48–51. http://doi.org/10.5281/zenodo.1178037
BibTeX
Download PDF DOI
@inproceedings{Hahnel2011, author = {H\"{a}hnel, Tilo and Berndt, Axel}, title = {Studying Interdependencies in Music Performance : An Interactive Tool}, pages = {48--51}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178037}, url = {http://www.nime.org/proceedings/2011/nime2011_048.pdf}, keywords = {articulation, duration, dynamics, loudness, notes inegales, synthetic performance, timing} }
Sinan Bökesoy and Patrick Adler. 2011. 1city1001vibrations : Development of a Interactive Sound Installation with Robotic Instrument Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 52–55. http://doi.org/10.5281/zenodo.1177945
BibTeX
Download PDF DOI
@inproceedings{Bokesoy2011, author = {B\"{o}kesoy, Sinan and Adler, Patrick}, title = {1city1001vibrations : Development of a Interactive Sound Installation with Robotic Instrument Performance}, pages = {52--55}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177945}, url = {http://www.nime.org/proceedings/2011/nime2011_052.pdf}, keywords = {Sound installation, robotic music, interactive systems } }
Tim Murray-Browne, Di Mainstone, Nick Bryan-Kinns, and Mark D. Plumbley. 2011. The Medium is the Message: Composing Instruments and Performing Mappings. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 56–59. http://doi.org/10.5281/zenodo.1178119
Abstract
Download PDF DOI
Many performers of novel musical instruments find it difficult to engage audiences beyond those in the field. Previous research points to a failure to balance complexity with usability, and a loss of transparency due to the detachment of the controller and sound generator. The issue is often exacerbated by an audience's lack of prior exposure to the instrument and its workings. However, we argue that there is a conflict underlying many novel musical instruments in that they are intended to be both a tool for creative expression and a creative work of art in themselves, resulting in incompatible requirements. By considering the instrument, the composition and the performance together as a whole with careful consideration of the rate of learning demanded of the audience, we propose that a lack of transparency can become an asset rather than a hindrance. Our approach calls for not only controller and sound generator to be designed in sympathy with each other, but composition, performance and physical form too. Identifying three design principles, we illustrate this approach with the Serendiptichord, a wearable instrument for dancers created by the authors.
@inproceedings{MurrayBrowne2011, author = {Murray-Browne, Tim and Mainstone, Di and Bryan-Kinns, Nick and Plumbley, Mark D.}, title = {The Medium is the Message: Composing Instruments and Performing Mappings}, pages = {56--59}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178119}, url = {http://www.nime.org/proceedings/2011/nime2011_056.pdf}, keywords = {Performance, composed instrument, transparency, constraint. } }
Seunghun Kim, Luke K. Kim, Songhee Jeong, and Woon Seung Yeo. 2011. Clothesline as a Metaphor for a Musical Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 60–63. http://doi.org/10.5281/zenodo.1178065
Abstract
Download PDF DOI
In this paper, we discuss the use of the clothesline as a metaphor for designing a musical interface called Airer Choir. This interactive installation is based on the function of an ordinary object that is not a traditional instrument, and hanging articles of clothing is literally the gesture to use the interface. Based on this metaphor, a musical interface with high transparency was designed. Using the metaphor, we explored the possibilities for recognizing input gestures and creating sonic events by mapping data to sound. Thus, four different types of Airer Choir were developed. By classifying the interfaces, we concluded that various musical expressions are possible by using the same metaphor.
@inproceedings{Kim2011, author = {Kim, Seunghun and Kim, Luke K. and Jeong, Songhee and Yeo, Woon Seung}, title = {Clothesline as a Metaphor for a Musical Interface}, pages = {60--63}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178065}, url = {http://www.nime.org/proceedings/2011/nime2011_060.pdf}, keywords = {musical interface, metaphor, clothesline installation } }
Pietro Polotti and Maurizio Goina. 2011. EGGS in Action. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 64–67. http://doi.org/10.5281/zenodo.1178137
Abstract
Download PDF DOI
In this paper, we discuss the results obtained by means of the EGGS (Elementary Gestalts for Gesture Sonification) system in terms of artistic realizations. EGGS was introduced in a previous edition of this conference. The works presented include interactive installations in the form of public art and interactive onstage performances. In all of the works, the EGGS principles of simplicity based on the correspondence between elementary sonic and movement units, and of organicity between sound and gesture are applied. Indeed, we study both sound as a means for gesture representation and gesture as embodiment of sound. These principles constitute our guidelines for the investigation of the bidirectional relationship between sound and body expression with various strategies involving both educated and non-educated executors.
@inproceedings{Polotti2011, author = {Polotti, Pietro and Goina, Maurizio}, title = {EGGS in Action}, pages = {64--67}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178137}, url = {http://www.nime.org/proceedings/2011/nime2011_064.pdf}, keywords = {Gesture sonification, Interactive performance, Public art. } }
Berit Janssen. 2011. A Reverberation Instrument Based on Perceptual Mapping. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 68–71. http://doi.org/10.5281/zenodo.1178049
Abstract
Download PDF DOI
The present article describes a reverberation instrument which is based on cognitive categorization of reverberating spaces. Different techniques for artificial reverberation will be covered. A multidimensional scaling experiment was conducted on impulse responses in order to determine how humans acoustically perceive spatiality. This research seems to indicate that the perceptual dimensions are related to early energy decay and timbral qualities. These results are applied to a reverberation instrument based on delay lines. It can be contended that such an instrument can be controlled more intuitively than other delay line reverberation tools which often provide a confusing range of parameters which have a physical rather than perceptual meaning.
@inproceedings{Janssen2011, author = {Janssen, Berit}, title = {A Reverberation Instrument Based on Perceptual Mapping}, pages = {68--71}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178049}, url = {http://www.nime.org/proceedings/2011/nime2011_068.pdf}, keywords = {Reverberation, perception, multidimensional scaling, mapping } }
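Janssen's instrument exposes perceptual dimensions rather than raw delay-line parameters. Purely as a hypothetical illustration of that mapping idea (a single feedback comb filter, far simpler than a usable reverberator and not the paper's design), two perceptual controls can be converted into a feedback gain and a damping coefficient:

```python
import numpy as np

def comb_reverb(x, fs, decay_time=1.2, brightness=0.5, delay_ms=40.0):
    """Single feedback delay line with a one-pole lowpass in the loop.

    decay_time (seconds to decay ~60 dB) and brightness (0..1) stand in for
    perceptual controls; they are converted to feedback gain and damping.
    """
    d = int(fs * delay_ms / 1000.0)
    g = 10 ** (-3.0 * (delay_ms / 1000.0) / decay_time)   # RT60 -> per-pass gain
    a = 1.0 - brightness                                   # more damping = darker tail
    buf = np.zeros(d)
    lp = 0.0
    y = np.zeros(len(x))
    for i, s in enumerate(x):
        out = buf[i % d]
        lp = (1 - a) * out + a * lp          # lowpass the recirculating signal
        buf[i % d] = s + g * lp
        y[i] = out
    return y

fs = 44100
impulse = np.zeros(fs)
impulse[0] = 1.0
tail = comb_reverb(impulse, fs)
print("tail energy:", float(np.sum(tail ** 2)))
```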
Lauren Hayes. 2011. Vibrotactile Feedback-Assisted Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 72–75. http://doi.org/10.5281/zenodo.1178043
BibTeX
Download PDF DOI
@inproceedings{Hayes2011, author = {Hayes, Lauren}, title = {Vibrotactile Feedback-Assisted Performance}, pages = {72--75}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178043}, url = {http://www.nime.org/proceedings/2011/nime2011_072.pdf}, keywords = {Vibrotactile feedback, human-computer interfaces, digital composition, real-time performance, augmented instruments. } }
Daichi Ando. 2011. Improving User-Interface of Interactive EC for Composition-Aid by means of Shopping Basket Procedure. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 76–79. http://doi.org/10.5281/zenodo.1177941
Abstract
Download PDF DOI
The use of Interactive Evolutionary Computation (IEC) is suitable for the development of art-creation aid systems for beginners. This is because of important features of IEC, such as the ability to optimize with ambiguous evaluation measures and the fact that no special knowledge about art-creation is required. With the popularity of Consumer Generated Media, many beginners in art-creation are interested in creating their own original art works, so developing a useful IEC system for musical creation is an urgent task. However, user-assist functions for IEC proposed in past works decrease the possibility of getting good unexpected results, which is an important feature of art-creation with IEC. In this paper, the author proposes a new IEC evaluation process named the "Shopping Basket" procedure. In the procedure, a user-assist function called Similarity-Based Reasoning allows for natural evaluation by the user. The function reduces the user's burden without reducing the possibility of unexpected results. The author performs an experiment in which subjects use the new interface to validate it. As a result of the experiment, the author concludes that the new interface is better at motivating users to compose with the IEC system than the old interface.
@inproceedings{Ando2011, author = {Ando, Daichi}, title = {Improving User-Interface of Interactive EC for Composition-Aid by means of Shopping Basket Procedure}, pages = {76--79}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177941}, url = {http://www.nime.org/proceedings/2011/nime2011_076.pdf}, keywords = {Interactive Evolutionary Computation, User-Interface, Composition Aid } }
Ryan Mcgee, Yuan-Yi Fan, and Reza Ali. 2011. BioRhythm: a Biologically-inspired Audio-Visual Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 80–83. http://doi.org/10.5281/zenodo.1178105
Abstract
Download PDF DOI
BioRhythm is an interactive bio-feedback installation controlled by the cardiovascular system. Data from a photoplethysmograph (PPG) sensor controls sonification and visualization parameters in real-time. Biological signals are obtained using the techniques of Resonance Theory in Hemodynamics and mapped to audiovisual cues via the Five Element Philosophy. The result is a new media interface utilizing sound synthesis and spatialization with advanced graphics rendering. BioRhythm serves as an artistic exploration of the harmonic spectra of pulse waves.
@inproceedings{Mcgee2011, author = {Mcgee, Ryan and Fan, Yuan-Yi and Ali, Reza}, title = {BioRhythm: a Biologically-inspired Audio-Visual Installation}, pages = {80--83}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178105}, url = {http://www.nime.org/proceedings/2011/nime2011_080.pdf}, keywords = {bio-feedback, bio-sensing, fm synthesis, open sound control, parallel computing, sonification, spatialization, spatial audio, visualization} }
Jon Pigott. 2011. Vibration, Volts and Sonic Art: A Practice and Theory of Electromechanical Sound. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 84–87. http://doi.org/10.5281/zenodo.1178133
BibTeX
Download PDF DOI
@inproceedings{Pigott2011, author = {Pigott, Jon}, title = {Vibration , Volts and Sonic Art: A Practice and Theory of Electromechanical Sound}, pages = {84--87}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178133}, url = {http://www.nime.org/proceedings/2011/nime2011_084.pdf}, keywords = {Electromechanical sonic art, kinetic sound art, prepared speakers, Infinite Spring. } }
George Sioros and Carlos Guedes. 2011. Automatic Rhythmic Performance in Max/MSP: the kin.rhythmicator. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 88–91. http://doi.org/10.5281/zenodo.1178163
Abstract
Download PDF DOI
We introduce a novel algorithm for automatically generating rhythms in real time in a certain meter. The generated rhythms are "generic" in the sense that they are characteristic of each time signature without belonging to a specific musical style. The algorithm is based on a stochastic model in which various aspects and qualities of the generated rhythm can be controlled intuitively and in real time. Such qualities are the density of the generated events per bar, the amount of variation in generation, the amount of syncopation, the metrical strength, and of course the meter itself. The kin.rhythmicator software application was developed to implement this algorithm. During a performance with the kin.rhythmicator the user can control all aspects of the performance through descriptive and intuitive graphic controls.
@inproceedings{Sioros2011, author = {Sioros, George and Guedes, Carlos}, title = {Automatic Rhythmic Performance in Max/MSP: the kin.rhythmicator}, pages = {88--91}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178163}, url = {http://www.nime.org/proceedings/2011/nime2011_088.pdf}, keywords = {automatic music generation, generative, stochastic, metric indispensability, syncopation, Max/MSP, Max4Live } }
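As a rough illustration of the kind of meter-aware stochastic generation described above (not the published kin.rhythmicator algorithm), the following Python sketch assigns each sixteenth-note position in a 4/4 bar a metrical weight and draws onsets with probabilities shaped by density and syncopation controls; the weight table is invented for the example.

import random

# Metrical weights for the sixteen 16th-note positions of a 4/4 bar
# (higher = metrically stronger); a crude stand-in for an
# indispensability-style ranking.
WEIGHTS = [4, 1, 2, 1, 3, 1, 2, 1, 3, 1, 2, 1, 3, 1, 2, 1]

def generate_bar(density=0.5, syncopation=0.0, rng=random):
    """Return a list of 16 booleans (onset / no onset).

    density     -- 0..1, overall probability scale
    syncopation -- 0..1, shifts probability from strong to weak positions
    """
    maxw = max(WEIGHTS)
    onsets = []
    for w in WEIGHTS:
        strong = w / maxw                          # 0..1 metrical strength
        # Blend between favouring strong positions and favouring weak ones.
        bias = (1 - syncopation) * strong + syncopation * (1 - strong)
        onsets.append(rng.random() < density * bias)
    return onsets

if __name__ == "__main__":
    print(generate_bar(density=0.7, syncopation=0.2))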
André Goncalves. 2011. Towards a Voltage-Controlled Computer Control and Interaction Beyond an Embedded System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 92–95. http://doi.org/10.5281/zenodo.1178035
Abstract
Download PDF DOI
The importance of embedded devices as new additions to the field of Voltage-Controlled Synthesizers is recognized, and emphasis is directed towards understanding the role of such devices in Voltage-Controlled Synthesizers. The Voltage-Controlled Computer is introduced as a new paradigm. Specifications for hardware interfacing and programming techniques are described based on real prototypes, and implementations and successful results are reported.
@inproceedings{Goncalves2011, author = {Goncalves, Andr{\'{e}}}, title = {Towards a Voltage-Controlled Computer Control and Interaction Beyond an Embedded System}, pages = {92--95}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178035}, url = {http://www.nime.org/proceedings/2011/nime2011_092.pdf}, keywords = {Voltage-controlled synthesizer, embedded systems, voltage-controlled computer, computer driven control voltage generation } }
Tae Hun Kim, Satoru Fukayama, Takuya Nishimoto, and Shigeki Sagayama. 2011. Polyhymnia: An Automatic Piano Performance System with Statistical Modeling of Polyphonic Expression and Musical Symbol Interpretation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 96–99. http://doi.org/10.5281/zenodo.1178069
Abstract
Download PDF DOI
We developed an automatic piano performance system called Polyhymnia that is able to generate expressive polyphonic piano performances from music scores, so that it can be used as a computer-based tool for expressive performance. The system automatically renders expressive piano music by means of automatic musical symbol interpretation and statistical models of structure-expression relations regarding polyphonic features of piano performance. Experimental results indicate that the generated performances of various piano pieces with diverse trained models had polyphonic expression and sounded expressive. In addition, the models trained with different performance styles reflected the styles observed in the training performances, and they were well distinguishable by human listeners. Polyhymnia won the first prize in the autonomous section of the Performance Rendering Contest for Computer Systems (Rencon) 2010.
@inproceedings{Kim2011b, author = {Kim, Tae Hun and Fukayama, Satoru and Nishimoto, Takuya and Sagayama, Shigeki}, title = {Polyhymnia : An Automatic Piano Performance System with Statistical Modeling of Polyphonic Expression and Musical Symbol Interpretation}, pages = {96--99}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178069}, url = {http://www.nime.org/proceedings/2011/nime2011_096.pdf}, keywords = {performance rendering, polyphonic expression, statistical modeling, conditional random fields } }
Juan P. Carrascal and Sergi Jordà. 2011. Multitouch Interface for Audio Mixing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 100–103. http://doi.org/10.5281/zenodo.1177983
Abstract
Download PDF DOI
Audio mixing is the adjustment of relative volumes, panning and other parameters corresponding to different sound sources, in order to create a technically and aesthetically adequate sound sum. To do this, audio engineers employ "panpots" and faders, the standard controls in audio mixers. The design of such devices has remained practically unchanged for decades since their introduction. At the time, no usability studies seem to have been conducted on such devices, so one could question if they are really optimized for the task they are meant for. This paper proposes a new set of controls that might be used to simplify and/or improve the performance of audio mixing tasks, taking into account the spatial characteristics of modern mixing technologies such as surround and 3D audio and making use of multitouch interface technologies. A preliminary usability test has shown promising results.
@inproceedings{Carrascal2011, author = {Carrascal, Juan P. and Jord\`{a}, Sergi}, title = {Multitouch Interface for Audio Mixing}, pages = {100--103}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177983}, url = {http://www.nime.org/proceedings/2011/nime2011_100.pdf}, keywords = {audio mixing,control surface,multitouch,touchscreen} }
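One way to picture the proposal above is to treat each channel as a draggable object whose on-screen position directly encodes its mix parameters. The Python sketch below is a hypothetical illustration, not the authors' software: it maps a normalised touch position to a fader level in dB and a constant-power pan.

import math

def position_to_mix(x, y, min_db=-60.0, max_db=6.0):
    """Map a normalised touch position to (gain_left, gain_right).

    x -- 0..1, left-to-right pan position
    y -- 0..1, bottom-to-top fader position
    """
    # Fader: interpolate level in dB, then convert to linear gain.
    level_db = min_db + y * (max_db - min_db)
    gain = 10 ** (level_db / 20.0)
    # Constant-power pan law keeps perceived loudness steady across the field.
    theta = x * math.pi / 2.0
    return gain * math.cos(theta), gain * math.sin(theta)

if __name__ == "__main__":
    print(position_to_mix(0.5, 0.8))   # centred pan, fairly high fader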
Nate Derbinsky and Georg Essl. 2011. Cognitive Architecture in Mobile Music Interactions. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 104–107. http://doi.org/10.5281/zenodo.1177993
Abstract
Download PDF DOI
This paper explores how a general cognitive architecture can pragmatically facilitate the development and exploration of interactive music interfaces on a mobile platform. To this end we integrated the Soar cognitive architecture into the mobile music meta-environment urMus. We develop and demonstrate four artificial agents which use diverse learning mechanisms within two mobile music interfaces. We also include details of the computational performance of these agents, evincing that the architecture can support real-time interactivity on modern commodity hardware.
@inproceedings{Derbinsky2011, author = {Derbinsky, Nate and Essl, Georg}, title = {Cognitive Architecture in Mobile Music Interactions}, pages = {104--107}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177993}, url = {http://www.nime.org/proceedings/2011/nime2011_104.pdf}, keywords = {cognitive architecture,machine learning,mobile music} }
Benjamin D. Smith and Guy E. Garnett. 2011. The Self-Supervising Machine. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 108–111. http://doi.org/10.5281/zenodo.1178169
Abstract
Download PDF DOI
Supervised machine learning enables complex many-to-many mappings and control schemes needed in interactive performance systems. One of the persistent problems in these applications is generating, identifying and choosing input-output pairings for training. This poses problems of scope (limiting the realm of potential control inputs), effort (requiring significant pre-performance training time), and cognitive load (forcing the performer to learn and remember the control areas). We discuss the creation and implementation of an automatic "supervisor", using unsupervised machine learning algorithms to train a supervised neural network on the fly. This hierarchical arrangement enables network training in real time based on the musical or gestural control inputs employed in a performance, aiming at freeing the performer to operate in a creative, intuitive realm, making the machine control transparent and automatic. Three implementations of this self-supervised model driven by iPod, iPad, and acoustic violin are described.
@inproceedings{Smith2011, author = {Smith, Benjamin D. and Garnett, Guy E.}, title = {The Self-Supervising Machine}, pages = {108--111}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178169}, url = {http://www.nime.org/proceedings/2011/nime2011_108.pdf}, keywords = {NIME, machine learning, interactive computer music, machine listening, improvisation, adaptive resonance theory } }
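The idea of an unsupervised "supervisor" that invents training targets for a supervised mapping while the performer plays can be pictured as follows. This NumPy sketch is a hypothetical illustration, not the authors' implementation: incoming control vectors are clustered on the fly (a new cluster is created when nothing is close enough, loosely in the spirit of adaptive resonance theory), and a linear input-to-output mapping is fitted by stochastic gradient descent towards each cluster's associated synthesis preset.

import numpy as np

class SelfSupervisor:
    def __init__(self, in_dim, out_dim, vigilance=0.5, lr=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.vigilance = vigilance              # max distance to join a cluster
        self.lr = lr
        self.centroids = []                     # cluster centres in input space
        self.presets = []                       # one target output per cluster
        self.W = np.zeros((out_dim, in_dim))    # supervised linear mapping
        self.out_dim = out_dim

    def _assign(self, x):
        """Return the index of x's cluster, creating a new one if none is close."""
        if self.centroids:
            d = [np.linalg.norm(x - c) for c in self.centroids]
            i = int(np.argmin(d))
            if d[i] < self.vigilance:
                self.centroids[i] = 0.9 * self.centroids[i] + 0.1 * x
                return i
        self.centroids.append(x.copy())
        self.presets.append(self.rng.uniform(0, 1, self.out_dim))  # invented target
        return len(self.centroids) - 1

    def update(self, x):
        """One on-line step: cluster the input, then nudge the mapping toward its preset."""
        target = self.presets[self._assign(x)]
        y = self.W @ x
        self.W += self.lr * np.outer(target - y, x)   # SGD on squared error
        return y

if __name__ == "__main__":
    sup = SelfSupervisor(in_dim=3, out_dim=2)
    for _ in range(200):
        sup.update(np.random.rand(3))                 # e.g. a frame of control data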
Aaron Albin, Sertan Sentürk, Akito Van Troyer, Brian Blosser, Oliver Jan, and Gil Weinberg. 2011. Beatscape , a Mixed Virtual-Physical Environment for Musical Ensembles. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 112–115. http://doi.org/10.5281/zenodo.1177939
Abstract
Download PDF DOI
A mixed media tool was created that promotes ensemble virtuosity through tight coordination and interdependence in musical performance. Two different types of performers interact with a virtual space using Wii remotes and tangible interfaces based on the reacTIVision toolkit [11]. One group of performers uses a tangible tabletop interface to place and move sound objects in a virtual environment. The sound objects are represented by visual avatars and have audio samples associated with them. A second set of performers makes use of Wii remotes to create triggering waves that can collide with those sound objects. Sound is only produced upon collision of the waves with the sound objects. What results is a performance in which users must negotiate through a physical and virtual space and are positioned to work together to create musical pieces.
@inproceedings{Albin2011, author = {Albin, Aaron and Sent\''{u}rk, Sertan and Van Troyer, Akito and Blosser, Brian and Jan, Oliver and Weinberg, Gil}, title = {Beatscape , a Mixed Virtual-Physical Environment for Musical Ensembles}, pages = {112--115}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177939}, url = {http://www.nime.org/proceedings/2011/nime2011_112.pdf}, keywords = {reacTIVision, processing, ensemble, mixed media, virtualization, tangible, sample } }
Marco Fabiani, Gaël Dubus, and Roberto Bresin. 2011. MoodifierLive: Interactive and Collaborative Expressive Music Performance on Mobile Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 116–119. http://doi.org/10.5281/zenodo.1178005
Abstract
Download PDF DOI
This paper presents MoodifierLive, a mobile phone application for interactive control of rule-based automatic music performance. Five different interaction modes are available, of which one allows for collaborative performances with up to four participants, and two let the user control the expressive performance using expressive hand gestures. Evaluations indicate that the application is interesting, fun to use, and that the gesture modes, especially the one based on data from free expressive gestures, allow for performances whose emotional content matches that of the gesture that produced them.
@inproceedings{Fabiani2011, author = {Fabiani, Marco and Dubus, Ga\''{e}l and Bresin, Roberto}, title = {MoodifierLive : Interactive and Collaborative Expressive Music Performance on Mobile Devices}, pages = {116--119}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178005}, url = {http://www.nime.org/proceedings/2011/nime2011_116.pdf}, keywords = {Expressive performance, gesture, collaborative performance, mobile phone } }
Benjamin Schroeder, Marc Ainger, and Richard Parent. 2011. A Physically Based Sound Space for Procedural Agents. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 120–123. http://doi.org/10.5281/zenodo.1178157
BibTeX
Download PDF DOI
@inproceedings{Schroeder2011, author = {Schroeder, Benjamin and Ainger, Marc and Parent, Richard}, title = {A Physically Based Sound Space for Procedural Agents}, pages = {120--123}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178157}, url = {http://www.nime.org/proceedings/2011/nime2011_120.pdf}, keywords = {agents, behavioral animation, physically based sound} }
Francisco García, Leny Vinceslas, Josep Tubau, and Esteban Maestre. 2011. Acquisition and Study of Blowing Pressure Profiles in Recorder Playing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 124–127. http://doi.org/10.5281/zenodo.1178025
Abstract
Download PDF DOI
This paper presents a study of blowing pressure profiles acquired from recorder playing. Blowing pressure signals are captured from real performance by means of a low-intrusiveness acquisition system constructed around commercial pressure sensors based on piezoelectric transducers. An alto recorder was mechanically modified by a luthier to allow the measurement and connection of sensors while preserving playability and keeping intrusiveness low. A multi-modal database including aligned blowing pressure and sound signals is constructed from real practice, covering the performance space by considering different fundamental frequencies, dynamics, articulations and note durations. Once signals were pre-processed and segmented, a set of temporal envelope features were defined as a basis for studying and constructing a simplified model of blowing pressure profiles in different performance contexts.
@inproceedings{Garcia2011, author = {Garc\'{\i}a, Francisco and Vinceslas, Leny and Tubau, Josep and Maestre, Esteban}, title = {Acquisition and Study of Blowing Pressure Profiles in Recorder Playing}, pages = {124--127}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178025}, url = {http://www.nime.org/proceedings/2011/nime2011_124.pdf}, keywords = {blowing,instrumental gesture,multi-modal data,pressure,recorder,wind instrument} }
Anders Friberg and Anna Källblad. 2011. Experiences from Video-Controlled Sound Installations. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 128–131. http://doi.org/10.5281/zenodo.1178017
Abstract
Download PDF DOI
This is an overview of the three installations Hoppsa Universum, CLOSE and Flying Carpet, all designed as choreographed sound and music installations controlled by the visitors' movements. The perspective combines the artistic goals and intentions with the technical challenges and possibilities. All three installations were realized with video cameras in the ceiling registering the users' position or movement, and the video analysis then controlled different types of interactive software audio players. Different aspects such as narrativity, user control, and technical limitations are discussed.
@inproceedings{Friberg2011, author = {Friberg, Anders and K\''{a}llblad, Anna}, title = {Experiences from Video-Controlled Sound Installations}, pages = {128--131}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178017}, url = {http://www.nime.org/proceedings/2011/nime2011_128.pdf}, keywords = {Gestures, dance, choreography, music installation, interactive music. } }
Nicolas d’Alessandro, Roberto Calderon, and Stefanie Müller. 2011. ROOM #81—Agent-Based Instrument for Experiencing Architectural and Vocal Cues. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 132–135. http://doi.org/10.5281/zenodo.1177933
BibTeX
Download PDF DOI
@inproceedings{dAlessandro2011, author = {d'Alessandro, Nicolas and Calderon, Roberto and M\"{u}ller, Stefanie}, title = {ROOM #81---Agent-Based Instrument for Experiencing Architectural and Vocal Cues}, pages = {132--135}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177933}, url = {http://www.nime.org/proceedings/2011/nime2011_132.pdf}, keywords = {agent, architecture, collaboration, installation, instrument, interactive fabric, light, voice synthesis} }
Yasuo Kuhara and Daiki Kobayashi. 2011. Kinetic Particles Synthesizer Using Multi-Touch Screen Interface of Mobile Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 136–137. http://doi.org/10.5281/zenodo.1178079
Abstract
Download PDF DOI
We developed a kinetic particles synthesizer for mobile devices with a multi-touch screen, such as tablet PCs and smart phones. This synthesizer generates music based on the kinetics of particles under a two-dimensional physics engine. The particles move on the screen to synthesize sounds according to their own physical properties, such as shape, size, mass, linear and angular velocity, friction, and restitution. If a particle collides with others, a percussive sound is generated. A player can play music by the simple operation of touching or dragging on the screen of the device. Using a three-axis acceleration sensor, a player can also perform music by shuffling or tilting the device. Each particle produces just a simple tone, but a large number of varied particles play attractive music by aggregating their sounds. This concept was inspired by natural sounds made from an assembly of simple components, for example rustling leaves or falling rain. For a novice who has no experience of playing a musical instrument, it is easy to learn how to play instantly and enjoy performing music with intuitive operation. Our system can be used as a musical instrument for interactive music entertainment.
@inproceedings{Kuhara2011, author = {Kuhara, Yasuo and Kobayashi, Daiki}, title = {Kinetic Particles Synthesizer Using Multi-Touch Screen Interface of Mobile Devices}, pages = {136--137}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178079}, url = {http://www.nime.org/proceedings/2011/nime2011_136.pdf}, keywords = {Particle, Tablet PC, iPhone, iPod touch, iPad, Smart phone, Kinetics, Touch screen, Physics engine. } }
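The behaviour described above, where collisions of simulated particles trigger percussive events whose character depends on the particles' physical properties, can be sketched without a full physics engine. The Python fragment below is a toy illustration, not the authors' implementation: circular particles bounce in a unit box, and each wall bounce emits a (pitch, velocity) event derived from the particle's size and speed.

import random

class Particle:
    def __init__(self, x, y, vx, vy, radius):
        self.x, self.y, self.vx, self.vy, self.radius = x, y, vx, vy, radius

def step(particles, dt=1 / 60, size=1.0):
    """Advance the toy simulation one frame; return (pitch_hz, velocity) events."""
    events = []
    for p in particles:
        p.x += p.vx * dt
        p.y += p.vy * dt
        bounced = False
        if p.x - p.radius < 0 or p.x + p.radius > size:
            p.vx = -p.vx
            p.x = min(max(p.x, p.radius), size - p.radius)
            bounced = True
        if p.y - p.radius < 0 or p.y + p.radius > size:
            p.vy = -p.vy
            p.y = min(max(p.y, p.radius), size - p.radius)
            bounced = True
        if bounced:
            # Smaller particles sound higher; faster particles sound louder.
            pitch = 110.0 / p.radius
            velocity = min(1.0, (p.vx ** 2 + p.vy ** 2) ** 0.5)
            events.append((pitch, velocity))
    return events

if __name__ == "__main__":
    ps = [Particle(random.random(), random.random(),
                   random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5),
                   random.uniform(0.02, 0.1)) for _ in range(5)]
    for _ in range(300):                          # ~5 seconds at 60 fps
        for pitch, vel in step(ps):
            print(f"hit: {pitch:.0f} Hz, velocity {vel:.2f}")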
Chris Carlson, Eli Marschner, and Hunter Mccurry. 2011. The Sound Flinger: A Haptic Spatializer. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 138–139. http://doi.org/10.5281/zenodo.1177981
BibTeX
Download PDF DOI
@inproceedings{Carlson2011, author = {Carlson, Chris and Marschner, Eli and Mccurry, Hunter}, title = {The Sound Flinger : A Haptic Spatializer}, pages = {138--139}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177981}, url = {http://www.nime.org/proceedings/2011/nime2011_138.pdf}, keywords = {arduino,beagleboard,ccrma,force feedback,haptics,jack,linux audio,multi-channel audio,nime,pd,pure data,satellite ccrma,sound spatialization} }
Ravi Kondapalli and Ben-Zhen Sung. 2011. Daft Datum – An Interface for Producing Music Through Foot-based Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 140–141. http://doi.org/10.5281/zenodo.1178075
Abstract
Download PDF DOI
Daft Datum is an autonomous new media artefact that takes input from movement of the feet (i.e. tapping/stomping/stamping) on a wooden surface, underneath which is a sensor sheet. The sensors in the sheet are mapped to various sound samples and synthesized sounds. Attributes of the synthesized sound, such as pitch and octave, can be controlled using the Nintendo Wii Remote. It also facilitates switching between modes of sound and recording/playing back a segment of audio. The result is music generated by dancing on the device that is further modulated by a hand-held controller.
@inproceedings{Kondapalli2011, author = {Kondapalli, Ravi and Sung, Ben-Zhen}, title = {Daft Datum -- An Interface for Producing Music Through Foot-based Interaction}, pages = {140--141}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178075}, url = {http://www.nime.org/proceedings/2011/nime2011_140.pdf}, keywords = {Daft Datum, Wii, Dance Pad, Feet, Controller, Bluetooth, Musical Interface, Dance, Sensor Sheet } }
Charles Martin and Chi-Hsia Lai. 2011. Strike on Stage: a Percussion and Media Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 142–143. http://doi.org/10.5281/zenodo.1178103
Abstract
Download PDF DOI
This paper describes Strike on Stage, an interface and corresponding audio-visual performance work developed and performed in 2010 by percussionists and media artists Chi-Hsia Lai and Charles Martin. The concept of Strike on Stage is to integrate computer visuals and sound into an improvised percussion performance. A large projection surface is positioned directly behind the performers, while a computer vision system tracks their movements. The setup allows computer visualisation and sonification to be directly responsive and unified with the performers' gestures.
@inproceedings{Martin2011, author = {Martin, Charles and Lai, Chi-Hsia}, title = {Strike on Stage: a Percussion and Media Performance}, pages = {142--143}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178103}, url = {http://www.nime.org/proceedings/2011/nime2011_142.pdf}, keywords = {computer vision, media performance, percussion} }
Baptiste Caramiaux, Patrick Susini, Tommaso Bianco, Frédéric Bevilacqua, Olivier Houix, Norbert Schnell, and Nicolas Misdariis. 2011. Gestural Embodiment of Environmental Sounds: an Experimental Study. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 144–148. http://doi.org/10.5281/zenodo.1177979
Abstract
Download PDF DOI
In this paper we present an experimental study concerning gestural embodiment of environmental sounds in a listening context. The presented work is part of a project aiming at modeling movement-sound relationships, with the end goal of proposing novel approaches for designing musical instruments and sounding objects. The experiment is based on sound stimuli corresponding to "causal" and "non-causal" sounds, and is divided into a performance phase and an interview. The experiment is designed to investigate possible correlations between the perception of the "causality" of environmental sounds and different gesture strategies for the sound embodiment. In analogy with the perception of the sounds' causality, we propose to distinguish gestures that "mimic" a sound's cause from gestures that "trace" a sound's morphology following temporal sound characteristics. Results from the interviews show that, first, our causal sound database leads to consistent descriptions of the action at the origin of the sound, and participants mimic this action. Second, non-causal sounds lead to inconsistent metaphoric descriptions of the sound, and participants make gestures following sound "contours". Quantitatively, the results show that gesture variability is higher for causal sounds than for non-causal sounds.
@inproceedings{Caramiaux2011a, author = {Caramiaux, Baptiste and Susini, Patrick and Bianco, Tommaso and Bevilacqua, Fr\'{e}d\'{e}ric and Houix, Olivier and Schnell, Norbert and Misdariis, Nicolas}, title = {Gestural Embodiment of Environmental Sounds: an Experimental Study}, pages = {144--148}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177979}, url = {http://www.nime.org/proceedings/2011/nime2011_144.pdf}, presentation-video = {https://vimeo.com/26805553/}, keywords = {Embodiment, Environmental Sound Perception, Listening, Gesture Sound Interaction } }
Sebastián Mealla, Aleksander Väljamäe, Mathieu Bosi, and Sergi Jordà. 2011. Listening to Your Brain: Implicit Interaction in Collaborative Music Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 149–154. http://doi.org/10.5281/zenodo.1178107
Abstract
Download PDF DOI
The use of physiological signals in Human Computer Interaction (HCI) is becoming popular and widespread, mostly due to sensor miniaturization and advances in real-time processing. However, most of the studies that use physiology-based interaction focus on single-user paradigms, and its usage in collaborative scenarios is still in its infancy. In this paper we explore how interactive sonification of brain and heart signals, and its representation through physical objects (physiopucks) in a tabletop interface, may enhance motivational and controlling aspects of music collaboration. A multimodal system is presented, based on an electrophysiology sensor system and the Reactable, a musical tabletop interface. Performance and motivation variables were assessed in an experiment involving a test "Physio" group (N=22) and a control "Placebo" group (N=10). Pairs of participants used two methods for sound creation: implicit interaction through physiological signals, and explicit interaction by means of gestural manipulation. The results showed that pairs in the Physio group declared less difficulty, higher confidence and more symmetric control than the Placebo group, where no real-time sonification was provided and subjects were unknowingly using a pre-recorded physiological signal. These results support the feasibility of introducing physiology-based interaction in multimodal interfaces for collaborative music generation.
@inproceedings{Mealla2011, author = {Mealla, Sebasti\'{a}n and V\''{a}aljam\''{a}ae, Aleksander and Bosi, Mathieu and Jord\`{a}, Sergi}, title = {Listening to Your Brain: Implicit Interaction in Collaborative Music Performances}, pages = {149--154}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178107}, url = {http://www.nime.org/proceedings/2011/nime2011_149.pdf}, presentation-video = {https://vimeo.com/26806576/}, keywords = {bci, collaboration, cscw, hci, multimodal interfaces, music, physiological computing, physiopucks, tabletops, universitat pompeu fabra} }
Dan Newton and Mark T. Marshall. 2011. Examining How Musicians Create Augmented Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 155–160. http://doi.org/10.5281/zenodo.1178121
Abstract
Download PDF DOI
This paper examines the creation of augmented musical instruments by a number of musicians. Equipped with a system called the Augmentalist, 10 musicians created new augmented instruments based on their traditional acoustic or electric instruments. This paper discusses the ways in which the musicians augmented their instruments, examines the similarities and differences between the resulting instruments and presents a number of interesting findings resulting from this process.
@inproceedings{Newton2011, author = {Newton, Dan and Marshall, Mark T.}, title = {Examining How Musicians Create Augmented Musical Instruments}, pages = {155--160}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178121}, url = {http://www.nime.org/proceedings/2011/nime2011_155.pdf}, presentation-video = {https://vimeo.com/26807158/}, keywords = {Augmented Instruments, Instrument Design, Digital Musical Instruments, Performance } }
Zachary Seldess and Toshiro Yamada. 2011. Tahakum: A Multi-Purpose Audio Control Framework. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 161–166. http://doi.org/10.5281/zenodo.1178159
Abstract
Download PDF DOI
We present Tahakum, an open source, extensible collection of software tools designed to enhance workflow on multichannel audio systems within complex multi-functional research and development environments. Tahakum aims to provide critical functionality required across a broad spectrum of audio systems usage scenarios, while at the same time remaining sufficiently open as to easily support modifications and extensions via 3rd party hardware and software. Features provided in the framework include software for custom mixing/routing and audio system preset automation, software for network message routing/redirection and protocol conversion, and software for dynamic audio asset management and control.
@inproceedings{Seldess2011, author = {Seldess, Zachary and Yamada, Toshiro}, title = {Tahakum: A Multi-Purpose Audio Control Framework}, pages = {161--166}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178159}, url = {http://www.nime.org/proceedings/2011/nime2011_161.pdf}, presentation-video = {https://vimeo.com/26809966/}, keywords = {Audio Control Systems, Audio for VR, Max/MSP, Spatial Audio } }
Dawen Liang, Guangyu Xia, and Roger B. Dannenberg. 2011. A Framework for Coordination and Synchronization of Media. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 167–172. http://doi.org/10.5281/zenodo.1178091
Abstract
Download PDF DOI
Computer music systems that coordinate or interact with human musicians exist in many forms. Often, coordination is at the level of gestures and phrases without synchronization at the beat level (or perhaps the notion of "beat" does not even exist). In music with beats, fine-grain synchronization can be achieved by having humans adapt to the computer (e.g. following a click track), or by computer accompaniment in which the computer follows a predetermined score. We consider an alternative scenario in which improvisation prevents traditional score following, but where synchronization is achieved at the level of beats, measures, and cues. To explore this new type of human-computer interaction, we have created new software abstractions for synchronization and coordination of music and interfaces in different modalities. We describe these new software structures, present examples, and introduce the idea of music notation as an interactive musical interface rather than a static document.
@inproceedings{Liang2011, author = {Liang, Dawen and Xia, Guangyu and Dannenberg, Roger B.}, title = {A Framework for Coordination and Synchronization of Media}, pages = {167--172}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178091}, url = {http://www.nime.org/proceedings/2011/nime2011_167.pdf}, presentation-video = {https://vimeo.com/26832515/}, keywords = {automatic accompaniment,interactive,music display,popular music,real-time,synchronization} }
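The beat- and measure-level synchronization described above can be pictured with a small scheduling helper: given a tempo and a reference time, it reports the time of the next beat or measure boundary so that cues can be quantised to it. This Python sketch is a hypothetical illustration of the idea, not the authors' software abstractions.

import time

class BeatClock:
    """Quantises event times to beat and measure boundaries."""
    def __init__(self, bpm=120.0, beats_per_measure=4, origin=None):
        self.spb = 60.0 / bpm                       # seconds per beat
        self.beats_per_measure = beats_per_measure
        self.origin = time.time() if origin is None else origin

    def next_boundary(self, unit="beat", now=None):
        """Absolute time of the next beat or measure boundary."""
        now = time.time() if now is None else now
        span = self.spb if unit == "beat" else self.spb * self.beats_per_measure
        elapsed = now - self.origin
        k = int(elapsed // span) + 1                # index of the next boundary
        return self.origin + k * span

if __name__ == "__main__":
    clock = BeatClock(bpm=100)
    t_next = clock.next_boundary("measure")
    print(f"schedule cue in {t_next - time.time():.3f} s")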
Edgar Berdahl and Wendy Ju. 2011. Satellite CCRMA: A Musical Interaction and Sound Synthesis Platform. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 173–178. http://doi.org/10.5281/zenodo.1177957
Abstract
Download PDF DOI
This paper describes a new Beagle Board-based platform for teaching and practicing interaction design for musical applications. The migration from desktop and laptop computer-based sound synthesis to a compact and integrated control, computation and sound generation platform has enormous potential to widen the range of computer music instruments and installations that can be designed, and improves the portability, autonomy, extensibility and longevity of designed systems. We describe the technical features of the Satellite CCRMA platform and contrast it with personal computer-based systems used in the past as well as emerging smart phone-based platforms. The advantages and tradeoffs of the new platform are considered, and some project work is described.
@inproceedings{Berdahl2011a, author = {Berdahl, Edgar and Ju, Wendy}, title = {Satellite CCRMA: A Musical Interaction and Sound Synthesis Platform}, pages = {173--178}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177957}, url = {http://www.nime.org/proceedings/2011/nime2011_173.pdf}, presentation-video = {https://vimeo.com/26833829/}, keywords = {arduino, beagle board, linux, microcontrollers, music controllers, nime, pd, pedagogy, Texas Instruments OMAP} }
Nicholas J. Bryan and Ge Wang. 2011. Two Turntables and a Mobile Phone. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 179–184. http://doi.org/10.5281/zenodo.1177971
Abstract
Download PDF DOI
A novel method of digital scratching is presented as an alternative to currently available digital hardware interfaces and time-coded vinyl (TCV). Similar to TCV, the proposed method leverages existing analog turntables as a physical interface to manipulate the playback of digital audio. To do so, however, an accelerometer/gyroscope-equipped smartphone is firmly attached to a modified record, placed on a turntable, and used to sense a performer's movement, resulting in a wireless sensing-based scratching method. The accelerometer and gyroscope data is wirelessly transmitted to a computer to manipulate the digital audio playback in real-time. The method provides the benefit of digital audio and storage, requires minimal additional hardware, accommodates familiar proprioceptive feedback, and allows a single interface to control both digital and analog audio. In addition, the proposed method provides numerous additional benefits including real-time graphical display, multi-touch interaction, and untethered performance (e.g. "air-scratching"). Such a method turns a vinyl record into an interactive surface and enhances traditional scratching performance by affording new and creative musical interactions. Informal testing shows this approach to be viable, responsive, and robust.
@inproceedings{Bryan2011, author = {Bryan, Nicholas J. and Wang, Ge}, title = {Two Turntables and a Mobile Phone}, pages = {179--184}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177971}, url = {http://www.nime.org/proceedings/2011/nime2011_179.pdf}, presentation-video = {https://vimeo.com/26835277/}, keywords = {Digital scratching, mobile music, digital DJ, smartphone, turntable, turntablism, record player, accelerometer, gyroscope, vinyl emulation software } }
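The core mapping implied above, i.e. sensed rotation of the phone (and hence of the record) driving digital playback, can be written down compactly. The Python sketch below is a hypothetical illustration, not the authors' system: it converts gyroscope angular velocity about the platter axis into a playback-rate multiplier relative to 33 1/3 rpm and uses it to step a read pointer through a sample buffer.

import math

NOMINAL_RPM = 100.0 / 3.0                            # 33 1/3 rpm
NOMINAL_DEG_PER_SEC = NOMINAL_RPM * 360.0 / 60.0     # = 200 deg/s

def playback_rate(gyro_z_deg_per_sec):
    """Rate multiplier: 1.0 at nominal speed, negative when scratching backwards."""
    return gyro_z_deg_per_sec / NOMINAL_DEG_PER_SEC

class ScratchPlayer:
    """Steps a fractional read position through an audio buffer."""
    def __init__(self, samples, sr=44100):
        self.samples, self.sr, self.pos = samples, sr, 0.0

    def render(self, gyro_z_deg_per_sec, nframes):
        rate = playback_rate(gyro_z_deg_per_sec)
        out = []
        for _ in range(nframes):
            i = int(self.pos) % len(self.samples)
            out.append(self.samples[i])              # nearest-neighbour lookup for brevity
            self.pos += rate
        return out

if __name__ == "__main__":
    buf = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
    player = ScratchPlayer(buf)
    block = player.render(gyro_z_deg_per_sec=-150.0, nframes=64)   # backwards scratch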
Nick Kruge and Ge Wang. 2011. MadPad: A Crowdsourcing System for Audiovisual Sampling. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 185–190. http://doi.org/10.5281/zenodo.1178077
Abstract
Download PDF DOI
MadPad is a networked audiovisual sample station for mobile devices. Twelve short video clips are loaded onto the screen in a grid and playback is triggered by tapping anywhere on the clip. This is similar to tapping the pads of an audio sample station, but extends that interaction to add visual sampling. Clips can be shot on-the-fly with a camera-enabled mobile device and loaded into the player instantly, giving the performer an ability to quickly transform his or her surroundings into a sample-based, audiovisual instrument. Samples can also be sourced from an online community in which users can post or download content. The recent ubiquity of multitouch mobile devices and advances in pervasive computing have made this system possible, providing for a vast amount of content only limited by the imagination of the performer and the community. This paper presents the core features of MadPad and the design explorations that inspired them.
@inproceedings{Kruge2011, author = {Kruge, Nick and Wang, Ge}, title = {MadPad: A Crowdsourcing System for Audiovisual Sampling}, pages = {185--190}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178077}, url = {http://www.nime.org/proceedings/2011/nime2011_185.pdf}, presentation-video = {https://vimeo.com/26855684/}, keywords = {mobile music, networked music, social music, audiovisual, sampling, user-generated content, crowdsourcing, sample station, iPad, iPhone } }
Patrick O. Keefe and Georg Essl. 2011. The Visual in Mobile Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 191–196. http://doi.org/10.5281/zenodo.1178061
Abstract
Download PDF DOI
Visual information integration in mobile music performance is an area that has not been thoroughly explored and current applications are often individually designed. From camera input to flexible output rendering, we discuss visual performance support in the context of urMus, a meta-environment for mobile interaction and performance development. The use of cameras, a set of image primitives, interactive visual content, projectors, and camera flashes can lead to visually intriguing performance possibilities.
@inproceedings{Keefe2011, author = {Keefe, Patrick O. and Essl, Georg}, title = {The Visual in Mobile Music Performance}, pages = {191--196}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178061}, url = {http://www.nime.org/proceedings/2011/nime2011_191.pdf}, presentation-video = {https://vimeo.com/26836592/}, keywords = {Mobile performance, visual interaction, camera phone, mobile collaboration } }
Ge Wang, Jieun Oh, and Tom Lieber. 2011. Designing for the iPad: Magic Fiddle. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 197–202. http://doi.org/10.5281/zenodo.1178187
Abstract
Download PDF DOI
This paper describes the origin, design, and implementation of Smule’s Magic Fiddle, an expressive musical instrument for the iPad. Magic Fiddle takes advantage of the physical aspects of the device to integrate game-like and pedagogical elements. We describe the origin of Magic Fiddle, chronicle its design process, discuss its integrated music education system, and evaluate the overall experience.
@inproceedings{Wang2011, author = {Wang, Ge and Oh, Jieun and Lieber, Tom}, title = {Designing for the iPad: Magic Fiddle}, pages = {197--202}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178187}, url = {http://www.nime.org/proceedings/2011/nime2011_197.pdf}, presentation-video = {https://vimeo.com/26857032/}, keywords = {Magic Fiddle, iPad, physical interaction design, experiential design, music education. } }
Benjamin Knapp and Brennon Bortz. 2011. MobileMuse: Integral Music Control Goes Mobile. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 203–206. http://doi.org/10.5281/zenodo.1178073
Abstract
Download PDF DOI
This paper describes a new interface for mobile music creation, the MobileMuse, that introduces the capability of using physiological indicators of emotion as a new mode of interaction. Combining both kinematic and physiological measurement in a mobile environment creates the possibility of integral music control—the use of both gesture and emotion to control sound creation—where it has never been possible before. This paper will review the concept of integral music control and describe the motivation for creating the MobileMuse, its design and future possibilities.
@inproceedings{Knapp2011, author = {Knapp, Benjamin and Bortz, Brennon}, title = {MobileMuse: Integral Music Control Goes Mobile}, pages = {203--206}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178073}, url = {http://www.nime.org/proceedings/2011/nime2011_203.pdf}, presentation-video = {https://vimeo.com/26858339/}, keywords = {affective computing, mobile music performance, physiological signal measurement} }
Stephen D. Beck, Chris Branton, and Sharath Maddineni. 2011. Tangible Performance Management of Grid-based Laptop Orchestras. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 207–210. http://doi.org/10.5281/zenodo.1177951
Abstract
Download PDF DOI
Laptop Orchestras (LOs) have recently become a very popular mode of musical expression. They engage groups of performers to use ordinary laptop computers as instruments and sound sources in the performance of specially created music software. Perhaps the biggest challenge for LOs is the distribution, management and control of software across heterogeneous collections of networked computers. Software must be stored and distributed from a central repository, but launched on individual laptops immediately before performance. The GRENDL project leverages proven grid computing frameworks and approaches the Laptop Orchestra as a distributed computing platform for interactive computer music. This allows us to readily distribute software to each laptop in the orchestra depending on the laptop's internal configuration, its role in the composition, and the player assigned to that computer. Using the SAGA framework, GRENDL is able to distribute software and manage system and application environments for each composition. Our latest version includes tangible control of the GRENDL environment for a more natural and familiar user experience.
@inproceedings{Beck2011, author = {Beck, Stephen D. and Branton, Chris and Maddineni, Sharath}, title = {Tangible Performance Management of Grid-based Laptop Orchestras}, pages = {207--210}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177951}, url = {http://www.nime.org/proceedings/2011/nime2011_207.pdf}, presentation-video = {https://vimeo.com/26860960/}, keywords = {laptop orchestra, tangible interaction, grid computing } }
Smilen Dimitrov and Stefania Serafin. 2011. Audio Arduino – an ALSA (Advanced Linux Sound Architecture) Audio Driver for FTDI-based Arduinos. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 211–216. http://doi.org/10.5281/zenodo.1177997
Abstract
Download PDF DOI
A contemporary PC user typically expects a sound card to be a piece of hardware that can be manipulated by 'audio' software (most typically exemplified by 'media players') and that allows interfacing of the PC to audio reproduction and/or recording equipment. As such, a 'sound card' can be considered to be a system that encompasses design decisions on both hardware and software levels, which also demand a certain understanding of the architecture of the target PC operating system. This project outlines how an Arduino Duemilanove board (containing a USB interface chip manufactured by the Future Technology Devices International Ltd [FTDI] company) can be demonstrated to behave as a full-duplex, mono, 8-bit 44.1 kHz soundcard, through the implementation of a PC audio driver for ALSA (Advanced Linux Sound Architecture) and a matching program for the Arduino's ATmega microcontroller, and nothing more than headphones (and a couple of capacitors). The main contribution of this paper is to bring a holistic aspect to the discussion of soundcard implementation, also by referring to the open-source driver, microcontroller code and test methods, and to outline a complete implementation of an open, yet functional, soundcard system.
@inproceedings{Dimitrov2011, author = {Dimitrov, Smilen and Serafin, Stefania}, title = {Audio Arduino -- an ALSA (Advanced Linux Sound Architecture) Audio Driver for FTDI-based Arduinos}, pages = {211--216}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177997}, url = {http://www.nime.org/proceedings/2011/nime2011_211.pdf}, keywords = {alsa,arduino,audio,driver,linux,sound card} }
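The driver and firmware themselves are part of the paper's open-source release; purely as a PC-side illustration of the data rate involved (not the paper's ALSA driver), the following Python sketch generates one second of mono, unsigned 8-bit, 44.1 kHz audio and streams it to an FTDI serial port with pyserial. The port name and baud rate are assumptions for the example.

import math
import serial  # pyserial

SR = 44100                      # samples per second; 8-bit mono means 44100 bytes/s

def sine_u8(freq=440.0, seconds=1.0):
    """Unsigned 8-bit sine wave as a bytes object."""
    n = int(SR * seconds)
    return bytes(int(127.5 + 127.0 * math.sin(2 * math.pi * freq * t / SR))
                 for t in range(n))

if __name__ == "__main__":
    # Placeholder port name and baud rate: an FTDI chip can run well above the
    # roughly 0.5 Mbaud needed to carry 44100 bytes/s once framing overhead is added.
    port = serial.Serial("/dev/ttyUSB0", baudrate=2000000)
    port.write(sine_u8())
    port.close()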
Seunghun Kim and Woon Seung Yeo. 2011. Musical Control of a Pipe Based on Acoustic Resonance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 217–219. http://doi.org/10.5281/zenodo.1178067
Abstract
Download PDF DOI
In this paper, we introduce a pipe interface that recognizes touch on tone holes by the resonances in the pipe instead of a touch sensor. This work was based on the acoustic principles of woodwind instruments, without complex sensors and electronic circuits, to develop a simple and durable interface. The measured signals were analyzed to show that different fingerings generate various sounds. The audible resonance signal in the pipe interface can be used as a sonic event for musical expression by itself and also as an input parameter for mapping different sounds.
@inproceedings{Kim2011a, author = {Kim, Seunghun and Yeo, Woon Seung}, title = {Musical Control of a Pipe Based on Acoustic Resonance}, pages = {217--219}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178067}, url = {http://www.nime.org/proceedings/2011/nime2011_217.pdf}, keywords = {resonance, mapping, pipe } }
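The signal path implied above, i.e. record the pipe's response and locate the strongest resonance peak to infer which tone holes are covered, might be sketched as follows. This NumPy fragment is a hypothetical illustration, not the authors' system; the fingering-to-frequency table is invented for the example.

import numpy as np

# Hypothetical table of expected first-resonance frequencies (Hz) per fingering.
FINGERING_RESONANCES = {"all_closed": 392.0, "one_open": 440.0, "two_open": 494.0}

def dominant_frequency(signal, sr):
    """Frequency (Hz) of the largest spectral peak in a windowed frame."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return freqs[np.argmax(spectrum)]

def classify_fingering(signal, sr):
    """Pick the fingering whose expected resonance is nearest the measured peak."""
    peak = dominant_frequency(signal, sr)
    best = min(FINGERING_RESONANCES, key=lambda k: abs(FINGERING_RESONANCES[k] - peak))
    return best, peak

if __name__ == "__main__":
    sr = 44100
    t = np.arange(4096) / sr
    test = np.sin(2 * np.pi * 440.0 * t)          # simulated pipe response
    print(classify_fingering(test, sr))            # -> ('one_open', ~440 Hz)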
Anne-Marie S. Hansen, Hans J. Anderson, and Pirkko Raudaskoski. 2011. Play Fluency in Music Improvisation Games for Novices. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 220–223. http://doi.org/10.5281/zenodo.1178039
Abstract
Download PDF DOI
In this paper a collaborative music game for two pen tablets is studied in order to see how two people with no professional music background negotiated musical improvisation. In an initial study of what constitutes play fluency in improvisation, a music game has been designed and evaluated through video analysis: a qualitative view of mutual action describes the social context of music improvisation, i.e. how two people with speech, laughter, gestures, postures and pauses negotiate individual and joint action. The objective behind the design of the game application was to support players in some aspects of their mutual play. Results show that even though players activated additional sound feedback as a result of their mutual play, players also engaged in forms of mutual play that the game engine did not account for. These ways of mutual play are described further, along with some suggestions for how to direct future designs of collaborative music improvisation games towards such ways of mutual play.
@inproceedings{Hansen2011, author = {Hansen, Anne-Marie S. and Anderson, Hans J. and Raudaskoski, Pirkko}, title = {Play Fluency in Music Improvisation Games for Novices}, pages = {220--223}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178039}, url = {http://www.nime.org/proceedings/2011/nime2011_220.pdf}, keywords = {Collaborative interfaces, improvisation, interactive music games, social interaction, play, novice. } }
Izzi Ramkissoon. 2011. The Bass Sleeve: A Real-time Multimedia Gestural Controller for Augmented Electric Bass Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 224–227. http://doi.org/10.5281/zenodo.1178141
Abstract
Download PDF DOI
The Bass Sleeve uses an Arduino board with a combination of buttons, switches, flex sensors, force sensing resistors, and an accelerometer to map the ancillary movements of a performer to sampling, real-time audio and video processing including pitch shifting, delay, low pass filtering, and onscreen video movement. The device was created to augment the existing functions of the electric bass and explore the use of ancillary gestures to control the laptop in a live performance. In this research it was found that incorporating ancillary gestures into a live performance could be useful when controlling the parameters of audio processing, sound synthesis and video manipulation. These ancillary motions can be a practical solution to gestural multitasking allowing independent control of computer music parameters while performing with the electric bass. The process of performing with the Bass Sleeve resulted in a greater amount of laptop control, an increase in the amount of expressiveness using the electric bass in combination with the laptop, and an improvement in the interactivity on both the electric bass and laptop during a live performance. The design uses various gesture-to-sound mapping strategies to accomplish a compositional task during an electro acoustic multimedia musical performance piece.
@inproceedings{Ramkissoon2011, author = {Ramkissoon, Izzi}, title = {The Bass Sleeve: A Real-time Multimedia Gestural Controller for Augmented Electric Bass Performance}, pages = {224--227}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178141}, url = {http://www.nime.org/proceedings/2011/nime2011_224.pdf}, keywords = {Interactive Music, Interactive Performance Systems, Gesture Controllers, Augmented Instruments, Electric Bass, Video Tracking } }
Ajay Kapur, Michael Darling, Jim Murphy, Jordan Hochenbaum, Dimitri Diakopoulos, and Trimpin Trimpin. 2011. The KarmetiK NotomotoN: A New Breed of Musical Robot for Teaching and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 228–231. http://doi.org/10.5281/zenodo.1178059
Abstract
Download PDF DOI
This paper describes the KarmetiK NotomotoN, a new musical robotic system for performance and education. A long-time goal of the authors has been to provide users with a plug-and-play, highly expressive musical robot system with a high degree of portability. This paper describes the technical details of the NotomotoN and discusses its use in performance and educational scenarios. Detailed tests performed to optimize technical aspects of the NotomotoN are described to highlight usability and performance specifications for electronic musicians and educators.
@inproceedings{Kapur2011, author = {Kapur, Ajay and Darling, Michael and Murphy, Jim and Hochenbaum, Jordan and Diakopoulos, Dimitri and Trimpin, Trimpin}, title = {The KarmetiK NotomotoN : A New Breed of Musical Robot for Teaching and Performance}, pages = {228--231}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178059}, url = {http://www.nime.org/proceedings/2011/nime2011_228.pdf}, keywords = {music technology,musical robotics,robotic performance} }
Adrián Barenca and Giuseppe Torre. 2011. The Manipuller: Strings Manipulation and Multi-Dimensional Force Sensing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 232–235. http://doi.org/10.5281/zenodo.1177949
Abstract
Download PDF DOI
The Manipuller is a novel Gestural Controller based on strings manipulation and multi-dimensional force sensing technology. This paper describes its motivation, design and operational principles along with some of its musical applications. Finally the results of a preliminary usability test are presented and discussed.
@inproceedings{Barenca2011, author = {Barenca, Adri\'{a}n and Torre, Giuseppe}, title = {The Manipuller: Strings Manipulation and Multi-Dimensional Force Sensing}, pages = {232--235}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177949}, url = {http://www.nime.org/proceedings/2011/nime2011_232.pdf}, keywords = {force sensing, gestural controller, manipulation, strings} }
Alain Crevoisier and Cécile Picard-Limpens. 2011. Mapping Objects with the Surface Editor. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 236–239. http://doi.org/10.5281/zenodo.1177989
Abstract
Download PDF DOI
The Surface Editor is a software tool for creating control interfaces and mapping input actions to OSC or MIDI actions very easily and intuitively. Originally conceived to be used with a tactile interface, the Surface Editor has been extended to support the creation of graspable interfaces as well. This paper presents a new framework for the generic mapping of user actions with graspable objects on a surface. We also present a system for detecting touch on thin objects, allowing for extended interactive possibilities. The Surface Editor is not limited to a particular tracking system though, and the generic mapping approach for objects can have a broader use with various input interfaces supporting touch and/or objects.
@inproceedings{Crevoisier2011, author = {Crevoisier, Alain and Picard-Limpens, C\'{e}cile}, title = {Mapping Objects with the Surface Editor}, pages = {236--239}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177989}, url = {http://www.nime.org/proceedings/2011/nime2011_236.pdf}, keywords = {NIME, mapping, interaction, user-defined interfaces, tangibles, graspable interfaces. } }
Jordan Hochenbaum and Ajay Kapur. 2011. Adding Z-Depth and Pressure Expressivity to Tangible Tabletop Surfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 240–243. http://doi.org/10.5281/zenodo.1178045
Abstract
Download PDF DOI
This paper presents the SmartFiducial, a wireless tangible object that facilitates additional modes of expressivity for vision-based tabletop surfaces. Using infrared proximity sensing and resistive force sensors, the SmartFiducial affords users unique and highly gestural inputs. Furthermore, the SmartFiducial incorporates additional customizable pushbutton switches. Using XBee radio frequency (RF) wireless transmission, the SmartFiducial establishes bipolar communication with a host computer. This paper describes the design and implementation of the SmartFiducial, as well as an exploratory use in a musical context.
@inproceedings{Hochenbaum2011, author = {Hochenbaum, Jordan and Kapur, Ajay}, title = {Adding Z-Depth and Pressure Expressivity to Tangible Tabletop Surfaces}, pages = {240--243}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178045}, url = {http://www.nime.org/proceedings/2011/nime2011_240.pdf}, keywords = {Fiducial, Tangible Interface, Multi-touch, Sensors, Gesture, Haptics, Bricktable, Proximity Sensing } }
Andrew J. Milne, Anna Xambó, Robin Laney, David B. Sharp, Anthony Prechtl, and Simon Holland. 2011. Hex Player — A Virtual Musical Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 244–247. http://doi.org/10.5281/zenodo.1178109
BibTeX
Download PDF DOI
@inproceedings{Milne2011, author = {Milne, Andrew J. and Xamb\'{o}, Anna and Laney, Robin and Sharp, David B. and Prechtl, Anthony and Holland, Simon}, title = {Hex Player --- A Virtual Musical Controller}, pages = {244--247}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178109}, url = {http://www.nime.org/proceedings/2011/nime2011_244.pdf}, keywords = {generalized keyboard, isomorphic layout, multi-touch surface, tablet, musical interface design, iPad, microtonality } }
Carl H. Waadeland. 2011. Rhythm Performance from a Spectral Point of View. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 248–251. http://doi.org/10.5281/zenodo.1178185
BibTeX
Download PDF DOI
@inproceedings{Waadeland2011, author = {Waadeland, Carl H.}, title = {Rhythm Performance from a Spectral Point of View}, pages = {248--251}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178185}, url = {http://www.nime.org/proceedings/2011/nime2011_248.pdf}, keywords = {gesture,movement,rhythm performance,spectral analysis} }
Josep M. Comajuncosas, Alex Barrachina, John O’Connell, and Enric Guaus. 2011. Nuvolet: 3D Gesture-driven Collaborative Audio Mosaicing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 252–255. http://doi.org/10.5281/zenodo.1177987
Abstract
Download PDF DOI
This research presents a 3D gestural interface for collaborative concatenative sound synthesis and audio mosaicing. Our goal is to improve the communication between the audience and performers by means of an enhanced correlation between gestures and musical outcome. Nuvolet consists of a 3D motion controller coupled to a concatenative synthesis engine. The interface detects and tracks the performers' hands in four dimensions (x, y, z, t) and allows them to concurrently explore two- or three-dimensional sound cloud representations of the units from the sound corpus, as well as to perform collaborative target-based audio mosaicing. Nuvolet is included in the Esmuc Laptop Orchestra catalog for forthcoming performances.
@inproceedings{Comajuncosas2011, author = {Comajuncosas, Josep M. and Barrachina, Alex and O'Connell, John and Guaus, Enric}, title = {Nuvolet: {3D} Gesture-driven Collaborative Audio Mosaicing}, pages = {252--255}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177987}, url = {http://www.nime.org/proceedings/2011/nime2011_252.pdf}, keywords = {concatenative synthesis, audio mosaicing, open-air interface, gestural controller, musical instrument, 3D } }
Erwin Schoonderwaldt and Alexander Refsum Jensenius. 2011. Effective and Expressive Movements in a French-Canadian fiddler’s Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 256–259. http://doi.org/10.5281/zenodo.1178155
Abstract
Download PDF DOI
We report on a performance study of a French-Canadian fiddler. The fiddling tradition forms an interesting contrast to classical violin performance in several ways. Distinguishing features include special elements in the bowing technique and the presence of an accompanying foot clogging pattern. These two characteristics are described, visualized and analyzed using video and motion capture recordings as source material.
@inproceedings{Schoonderwaldt2011, author = {Schoonderwaldt, Erwin and Jensenius, Alexander Refsum}, title = {Effective and Expressive Movements in a French-Canadian fiddler's Performance}, pages = {256--259}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178155}, url = {http://www.nime.org/proceedings/2011/nime2011_256.pdf}, keywords = {fiddler, violin, French-Canadian, bowing, feet, clogging, motion capture, video, motiongram, kinematics, sonification } }
Daniel Bisig, Jan C. Schacher, and Martin Neukom. 2011. Flowspace – A Hybrid Ecosystem. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 260–263. http://doi.org/10.5281/zenodo.1177965
Abstract
Download PDF DOI
In this paper an audio-visual installation is discussed, which combines interactive, immersive and generative elements. After introducing some of the challenges in the field of Generative Art and placing the work within its research context, conceptual reflections are made about the spatial, behavioural, perceptual and social issues that are raised within the entire installation. A discussion about the artistic content follows, focussing on the scenography and on working with flocking algorithms in general, before addressing three specific pieces realised for the exhibition. Next the technical implementation for both hard- and software is detailed, before the idea of a hybrid ecosystem is discussed and further developments are outlined.
@inproceedings{Bisig2011, author = {Bisig, Daniel and Schacher, Jan C. and Neukom, Martin}, title = {Flowspace -- A Hybrid Ecosystem}, pages = {260--263}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177965}, url = {http://www.nime.org/proceedings/2011/nime2011_260.pdf}, keywords = {Generative Art, Interactive Environment, Immersive Installation, Swarm Simulation, Hybrid Ecosystem } }
Marc Sosnick and William Hsu. 2011. Implementing a Finite Difference-Based Real-time Sound Synthesizer using GPUs. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 264–267. http://doi.org/10.5281/zenodo.1178173
Abstract
Download PDF DOI
In this paper, we describe an implementation of a real-time sound synthesizer using Finite Difference-based simulation of a two-dimensional membrane. Finite Difference (FD) methods can be the basis for physics-based music instrument models that generate realistic audio output. However, such methods are compute-intensive; large simulations cannot run in real time on current CPUs. Many current systems now include powerful Graphics Processing Units (GPUs), which are a good fit for FD methods. We demonstrate that it is possible to use this method to create a usable real-time audio synthesizer.
@inproceedings{Sosnick2011, author = {Sosnick, Marc and Hsu, William}, title = {Implementing a Finite Difference-Based Real-time Sound Synthesizer using {GPU}s}, pages = {264--267}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178173}, url = {http://www.nime.org/proceedings/2011/nime2011_264.pdf}, keywords = {Finite Difference, GPU, CUDA, Synthesis } }
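The following is a minimal NumPy sketch of the kind of explicit finite-difference update for a 2D membrane that the abstract above refers to; the grid size, wave speed, excitation and listening point are illustrative assumptions rather than values from the paper, which runs such a scheme on the GPU for real-time use.

```python
import numpy as np

# Toy 2D wave-equation membrane, updated with an explicit leapfrog scheme.
# Constants chosen to satisfy the 2D CFL stability condition (c*dt/dx < 1/sqrt(2)).
N = 64                       # grid size (assumed, not from the paper)
c, dx, dt = 340.0, 0.02, 1.0 / 44100
coeff = (c * dt / dx) ** 2   # ~0.15 here, comfortably stable

u_prev = np.zeros((N, N))
u_curr = np.zeros((N, N))
u_curr[N // 2, N // 2] = 1.0  # simple impulse excitation at the centre

def step(u_curr, u_prev):
    """One explicit time step; edges are clamped to zero (fixed boundary)."""
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4 * u_curr)
    u_next = 2 * u_curr - u_prev + coeff * lap
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0
    return u_next, u_curr

samples = []
for _ in range(1024):
    u_curr, u_prev = step(u_curr, u_prev)
    samples.append(u_curr[N // 4, N // 4])   # "listen" at one grid point
```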
Axel Tidemann. 2011. An Artificial Intelligence Architecture for Musical Expressiveness that Learns by Imitation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 268–271. http://doi.org/10.5281/zenodo.1178175
Abstract
Download PDF DOI
Interacting with musical avatars has become increasingly popular over the years, with the introduction of games like Guitar Hero and Rock Band. These games provide MIDI-equipped controllers that look like their real-world counterparts (e.g. MIDI guitar, MIDI drumkit) that the users play to control their designated avatar in the game. The performance of the user is measured against a score that needs to be followed. However, the avatar does not move in response to how the user plays; it follows some predefined movement pattern. If the user plays badly, the game ends with the avatar ending the performance (i.e. throwing the guitar on the floor). The gaming experience would increase if the avatar would move in accordance with user input. This paper presents an architecture that couples musical input with body movement. Using imitation learning, a simulated human robot learns to play the drums like human drummers do, both visually and auditorily. Learning data is recorded using MIDI and motion tracking. The system uses an artificial intelligence approach to implement imitation learning, employing artificial neural networks.
@inproceedings{Tidemann2011, author = {Tidemann, Axel}, title = {An Artificial Intelligence Architecture for Musical Expressiveness that Learns by Imitation}, pages = {268--271}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178175}, url = {http://www.nime.org/proceedings/2011/nime2011_268.pdf}, keywords = {artificial intelli-,drumming,modeling human behaviour} }
Luke Dahl, Jorge Herrera, and Carr Wilkerson. 2011. TweetDreams : Making Music with the Audience and the World using Real-time Twitter Data. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 272–275. http://doi.org/10.5281/zenodo.1177991
Abstract
Download PDF DOI
TweetDreams is an instrument and musical composition which creates real-time sonification and visualization of tweets. Tweet data containing specified search terms is retrieved from Twitter and used to build networks of associated tweets. These networks govern the creation of melodies associated with each tweet and are displayed graphically. Audience members participate in the piece by tweeting, and their tweets are given special musical and visual prominence.
@inproceedings{Dahl2011, author = {Dahl, Luke and Herrera, Jorge and Wilkerson, Carr}, title = {TweetDreams : Making Music with the Audience and the World using Real-time Twitter Data}, pages = {272--275}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177991}, url = {http://www.nime.org/proceedings/2011/nime2011_272.pdf}, keywords = {Twitter, audience participation, sonification, data visualization, text processing, interaction, multi-user instrument. } }
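The abstract above does not specify how tweet text becomes a melody; purely as an illustration of that kind of mapping, the toy sketch below hashes each word of a tweet onto a pentatonic scale to produce a short note sequence. The scale, hashing scheme and note numbers are assumptions, not the authors' design.

```python
# Toy text-to-melody mapping (illustrative only, not TweetDreams' algorithm).
PENTATONIC = [60, 62, 64, 67, 69, 72]   # C major pentatonic, MIDI note numbers

def word_hash(word):
    """Deterministic, order-independent hash of a word's characters."""
    return sum(ord(ch) for ch in word)

def tweet_to_melody(text):
    """Map each word of the tweet to one note of the scale."""
    return [PENTATONIC[word_hash(w) % len(PENTATONIC)] for w in text.lower().split()]

print(tweet_to_melody("making music with the audience and the world"))
```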
Lawrence Fyfe, Adam Tindale, and Sheelagh Carpendale. 2011. JunctionBox : A Toolkit for Creating Multi-touch Sound Control Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 276–279. http://doi.org/10.5281/zenodo.1178021
Abstract
Download PDF DOI
JunctionBox is a new software toolkit for creating multi-touch interfaces for controlling sound and music. More specifically, the toolkit has special features which make it easy to create TUIO-based touch interfaces for controlling sound engines via Open Sound Control. Programmers using the toolkit have a great deal of freedom to create highly customized interfaces that work on a variety of hardware.
@inproceedings{Fyfe2011, author = {Fyfe, Lawrence and Tindale, Adam and Carpendale, Sheelagh}, title = {JunctionBox : A Toolkit for Creating Multi-touch Sound Control Interfaces}, pages = {276--279}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178021}, url = {http://www.nime.org/proceedings/2011/nime2011_276.pdf}, keywords = {Multi-touch, Open Sound Control, Toolkit, TUIO } }
Andrew Johnston. 2011. Beyond Evaluation : Linking Practice and Theory in New Musical Interface Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 280–283. http://doi.org/10.5281/zenodo.1178053
Abstract
Download PDF DOI
This paper presents an approach to practice-based research in new musical instrument design. At a high level, the process involves drawing on relevant theories and aesthetic approaches to design new instruments, attempting to identify relevant applied design criteria, and then examining the experiences of performers who use the instruments with particular reference to these criteria. Outcomes of this process include new instruments, theories relating to musician-instrument interaction and a set of design criteria informed by practice and research.
@inproceedings{Johnston2011, author = {Johnston, Andrew}, title = {Beyond Evaluation : Linking Practice and Theory in New Musical Interface Design}, pages = {280--283}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178053}, url = {http://www.nime.org/proceedings/2011/nime2011_280.pdf}, keywords = {practice-based research, evaluation, Human-Computer Interaction, research methods, user studies } }
Phillip Popp and Matthew Wright. 2011. Intuitive Real-Time Control of Spectral Model Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 284–287. http://doi.org/10.5281/zenodo.1178139
BibTeX
Download PDF DOI
@inproceedings{Popp2011, author = {Popp, Phillip and Wright, Matthew}, title = {Intuitive Real-Time Control of Spectral Model Synthesis}, pages = {284--287}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178139}, url = {http://www.nime.org/proceedings/2011/nime2011_284.pdf}, keywords = {Spectral Model Synthesis, Gesture Recognition, Synthesis Control, Wacom Tablet, Machine Learning } }
Pablo Molina, Martín Haro, and Sergi Jordà. 2011. BeatJockey : A New Tool for Enhancing DJ Skills. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 288–291. http://doi.org/10.5281/zenodo.1178113
Abstract
Download PDF DOI
We present BeatJockey, a prototype interface which makes use of Audio Mosaicing (AM), beat-tracking and machine learning techniques, for supporting Diskjockeys (DJs) by proposing them new ways of interaction with the songs on the DJ's playlist. This prototype introduces a new paradigm to DJing in which the user has the capability to mix songs interacting with beat-units that accompany the DJ's mix. For this type of interaction, the system suggests song slices taken from songs selected from a playlist, which could go well with the beats of whatever master song is being played. In addition the system allows the synchronization of multiple songs, thus permitting flexible, coherent and rapid progressions in the DJ's mix. BeatJockey uses the Reactable, a musical tangible user interface (TUI), and it has been designed to be used by all DJs regardless of their level of expertise, as the system helps the novice while bringing new creative opportunities to the expert.
@inproceedings{Molina2011, author = {Molina, Pablo and Haro, Mart\'{\i}n and Jord\`{a}, Sergi}, title = {BeatJockey : A New Tool for Enhancing DJ Skills}, pages = {288--291}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178113}, url = {http://www.nime.org/proceedings/2011/nime2011_288.pdf}, keywords = {DJ, music information retrieval, audio mosaicing, percussion, turntable, beat-mash, interactive music interfaces, realtime, tabletop interaction, reactable. } }
Jan C. Schacher and Angela Stoecklin. 2011. Traces – Body, Motion and Sound. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 292–295. http://doi.org/10.5281/zenodo.1178149
Abstract
Download PDF DOI
In this paper the relationship between body, motion and sound is addressed. The comparison with traditional instruments and dance is shown with regard to basic types of motion. The difference between gesture and movement is outlined and some of the models used in dance for structuring motion sequences are described. In order to identify expressive aspects of motion sequences a test scenario is devised. After the description of the methods and tools used in a series of measurements, two types of data display are shown and then applied in the interpretation. One salient feature is recognized and put into perspective with regard to movement and gestalt perception. Finally the merits of the technical means that were applied are compared and a model-based approach to motion-sound mapping is proposed.
@inproceedings{Schacher2011, author = {Schacher, Jan C. and Stoecklin, Angela}, title = {Traces -- Body, Motion and Sound}, pages = {292--295}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178149}, url = {http://www.nime.org/proceedings/2011/nime2011_292.pdf}, keywords = {Interactive Dance, Motion and Gesture, Sonification, Motion Perception, Mapping } }
Grace Leslie and Tim Mullen. 2011. MoodMixer : EEG-based Collaborative Sonification. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 296–299. http://doi.org/10.5281/zenodo.1178089
Abstract
Download PDF DOI
MoodMixer is an interactive installation in which participants collaboratively navigate a two-dimensional music space by manipulating their cognitive state and conveying this state via wearable Electroencephalography (EEG) technology. The participants can choose to actively manipulate or passively convey their cognitive state depending on their desired approach and experience level. A four-channel electronic music mixture continuously conveys the participants' expressed cognitive states while a colored visualization of their locations on a two-dimensional projection of cognitive state attributes aids their navigation through the space. MoodMixer is a collaborative experience that incorporates aspects of both passive and active EEG sonification and performance art. We discuss the technical design of the installation and place its collaborative sonification aesthetic design within the context of existing EEG-based music and art.
@inproceedings{Leslie2011, author = {Leslie, Grace and Mullen, Tim}, title = {MoodMixer : {EEG}-based Collaborative Sonification}, pages = {296--299}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178089}, url = {http://www.nime.org/proceedings/2011/nime2011_296.pdf}, keywords = {EEG, BCMI, collaboration, sonification, visualization } }
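To illustrate the kind of two-dimensional navigation of a four-channel mixture described in the abstract above, here is a minimal sketch that bilinearly weights four audio channels from a participant's (x, y) position in the state space; the bilinear mapping and the meaning of the axes are assumptions made for illustration, not the installation's actual design.

```python
import numpy as np

def mix_weights(x, y):
    """Bilinear weights for four channels placed at the corners of a unit
    square; (x, y) is a position in the 2D state space, both in [0, 1].
    Illustrative only -- not the authors' mapping."""
    return np.array([(1 - x) * (1 - y),   # channel at corner (0, 0)
                     x * (1 - y),         # channel at corner (1, 0)
                     (1 - x) * y,         # channel at corner (0, 1)
                     x * y])              # channel at corner (1, 1)

def mix(channels, x, y):
    """channels: array of shape (4, n_samples); returns the weighted sum."""
    w = mix_weights(x, y)
    return (w[:, None] * channels).sum(axis=0)

# e.g. a 'relaxation' estimate on x and a 'focus' estimate on y (hypothetical)
block = mix(np.random.randn(4, 512), x=0.3, y=0.8)
```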
Ståle A. Skogstad, Yago de Quay, and Alexander Refsum Jensenius. 2011. OSC Implementation and Evaluation of the Xsens MVN Suit. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 300–303. http://doi.org/10.5281/zenodo.1178165
Abstract
Download PDF DOI
The paper presents research about implementing a full-body inertial motion capture system, the Xsens MVN suit, for musical interaction. Three different approaches for streaming real-time and prerecorded motion capture data with Open Sound Control have been implemented. Furthermore, we present technical performance details and our experience with the motion capture system in realistic practice.
@inproceedings{Skogstad2011, author = {Skogstad, Ståle A. and de Quay, Yago and Jensenius, Alexander Refsum}, title = {OSC Implementation and Evaluation of the Xsens MVN Suit}, pages = {300--303}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178165}, url = {http://www.nime.org/proceedings/2011/nime2011_300.pdf} }
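A minimal sketch of streaming motion-capture frames as Open Sound Control messages, using the python-osc package; the OSC address patterns, port, frame rate and frame layout here are illustrative assumptions and do not reflect the paper's actual implementation.

```python
# Sketch of pushing motion-capture data to a sound engine over OSC.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)   # receiver address/port are assumed

def send_frame(frame_id, segments):
    """segments: dict mapping a body-segment name to its (x, y, z) position."""
    for name, (x, y, z) in segments.items():
        client.send_message(f"/mocap/{name}/position", [frame_id, x, y, z])

for i in range(240):                           # about one second at 240 fps
    send_frame(i, {"right_hand": (0.1, 1.2, 0.4), "pelvis": (0.0, 1.0, 0.0)})
    time.sleep(1 / 240)
```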
Lonce Wyse, Norikazu Mitani, and Suranga Nanayakkara. 2011. The Effect of Visualizing Audio Targets in a Musical Listening and Performance Task. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 304–307. http://doi.org/10.5281/zenodo.1178191
Abstract
Download PDF DOI
The goal of our research is to find ways of supporting and encouraging musical behavior by non-musicians in shared public performance environments. Previous studies indicated simultaneous music listening and performance is difficult for non-musicians, and that visual support for the task might be helpful. This paper presents results from a preliminary user study conducted to evaluate the effect of visual feedback on a musical tracking task. Participants generated a musical signal by manipulating a hand-held device with two dimensions of control over two parameters, pitch and density of note events, and were given the task of following a target pattern as closely as possible. The target pattern was a machine-generated musical signal comprising variation over the same two parameters. Visual feedback provided participants with information about the control parameters of the musical signal generated by the machine. We measured the task performance under different visual feedback strategies. Results show that single parameter visualizations tend to improve the tracking performance with respect to the visualized parameter, but not the non-visualized parameter. Visualizing two independent parameters simultaneously decreases performance in both dimensions.
@inproceedings{Wyse2011, author = {Wyse, Lonce and Mitani, Norikazu and Nanayakkara, Suranga}, title = {The Effect of Visualizing Audio Targets in a Musical Listening and Performance Task}, pages = {304--307}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178191}, url = {http://www.nime.org/proceedings/2011/nime2011_304.pdf}, keywords = {Mobile phone, Interactive music performance, Listening, Group music play, Visual support } }
Adrian Freed, John MacCallum, and Andrew Schmeder. 2011. Composability for Musical Gesture Signal Processing using new OSC-based Object and Functional Programming Extensions to Max/MSP. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 308–311. http://doi.org/10.5281/zenodo.1178015
Abstract
Download PDF DOI
An effective programming style for gesture signal processing is described using a new library that brings efficient run-time polymorphism, functional and instance-based object-oriented programming to Max/MSP. By introducing better support for generic programming and composability, Max/MSP becomes a more productive environment for managing the growing scale and complexity of gesture sensing systems for musical instruments and interactive installations.
@inproceedings{Freed2011, author = {Freed, Adrian and MacCallum, John and Schmeder, Andrew}, title = {Composability for Musical Gesture Signal Processing using new OSC-based Object and Functional Programming Extensions to Max/MSP}, pages = {308--311}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178015}, url = {http://www.nime.org/proceedings/2011/nime2011_308.pdf}, keywords = {composability,delegation,functional programming,gesture signal,max,msp,object,object-,open sound control,oriented programming,processing} }
Kristian Nymoen, Ståle A. Skogstad, and Alexander Refsum Jensenius. 2011. SoundSaber – A Motion Capture Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 312–315. http://doi.org/10.5281/zenodo.1178125
Abstract
Download PDF DOI
The paper presents the SoundSaber, a musical instrument based on motion capture technology. We present technical details of the instrument and discuss the design development process. The SoundSaber may be used as an example of how high-fidelity motion capture equipment can be used for prototyping musical instruments, and we illustrate this with an example of a low-cost implementation of our motion capture instrument.
@inproceedings{Nymoen2011, author = {Nymoen, Kristian and Skogstad, Ståle A. and Jensenius, Alexander Refsum}, title = {SoundSaber -- A Motion Capture Instrument}, pages = {312--315}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178125}, url = {http://www.nime.org/proceedings/2011/nime2011_312.pdf} }
Öyvind Brandtsegg, Sigurd Saue, and Thom Johansen. 2011. A Modulation Matrix for Complex Parameter Sets. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 316–319. http://doi.org/10.5281/zenodo.1177969
Abstract
Download PDF DOI
The article describes a flexible mapping technique realized as a many-to-many dynamic mapping matrix. Digital sound generation is typically controlled by a large number of parameters, and efficient and flexible mapping is necessary to provide expressive control over the instrument. The proposed modulation matrix technique may be seen as a generic and self-modifying mapping mechanism integrated in a dynamic interpolation scheme. It is implemented efficiently by taking advantage of its inherent sparse matrix structure. The modulation matrix is used within the Hadron Particle Synthesizer, a complex granular module with 200 synthesis parameters and a simplified performance control structure with 4 expression parameters.
@inproceedings{Brandtsegg2011, author = {Brandtsegg, \''{O}yvind and Saue, Sigurd and Johansen, Thom}, title = {A Modulation Matrix for Complex Parameter Sets}, pages = {316--319}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177969}, url = {http://www.nime.org/proceedings/2011/nime2011_316.pdf}, keywords = {Mapping, granular synthesis, modulation, live performance } }
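The many-to-many mapping with a sparse matrix structure mentioned in the abstract above can be sketched as a sparse matrix-vector product from a few expression controls to many synthesis parameters; the sizes, routings and modulation depths below are illustrative assumptions, not the Hadron synthesizer's actual data.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy modulation matrix: rows = synthesis parameters, columns = modulation
# sources. Only a few cells are non-zero, so a sparse representation keeps
# the per-block cost low. All values here are made up for illustration.
n_params, n_sources = 200, 4
rows = [0, 0, 3, 17, 42]           # which parameters are modulated...
cols = [0, 1, 1, 2, 3]             # ...by which sources
vals = [0.5, -0.2, 1.0, 0.8, 0.3]  # modulation depths
mod_matrix = csr_matrix((vals, (rows, cols)), shape=(n_params, n_sources))

base = np.zeros(n_params)                    # static parameter values
sources = np.array([0.7, 0.1, 0.9, 0.25])    # current expression controls

params = base + mod_matrix @ sources         # effective parameters for this block
```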
Yu-Chung Tseng, Che-Wei Liu, Tzu-Heng Chi, and Hui-Yu Wang. 2011. Sound Low Fun. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 320–321. http://doi.org/10.5281/zenodo.1178179
BibTeX
Download PDF DOI
@inproceedings{Tseng2011, author = {Tseng, Yu-Chung and Liu, Che-Wei and Chi, Tzu-Heng and Wang, Hui-Yu}, title = {Sound Low Fun}, pages = {320--321}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178179}, url = {http://www.nime.org/proceedings/2011/nime2011_320.pdf} }
Edgar Berdahl and Chris Chafe. 2011. Autonomous New Media Artefacts ( AutoNMA ). Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 322–323. http://doi.org/10.5281/zenodo.1177953
Abstract
Download PDF DOI
The purpose of this brief paper is to revisit the question of longevity in present experimental practice and coin the term autonomous new media artefacts (AutoNMA), which are complete and independent of external computer systems, so they can be operable for a longer period of time and can be demonstrated at a moment's notice. We argue that platforms for prototyping should promote the creation of AutoNMA to make extant the devices which will be a part of the future history of new media.
@inproceedings{Berdahl2011, author = {Berdahl, Edgar and Chafe, Chris}, title = {Autonomous New Media Artefacts ( AutoNMA )}, pages = {322--323}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177953}, url = {http://www.nime.org/proceedings/2011/nime2011_322.pdf}, keywords = {autonomous, standalone, Satellite CCRMA, Arduino } }
Min-Joon Yoo, Jin-Wook Beak, and In-Kwon Lee. 2011. Creating Musical Expression using Kinect. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 324–325. http://doi.org/10.5281/zenodo.1178193
Abstract
Download PDF DOI
Recently, Microsoft introduced a game interface called Kinect for the Xbox 360 video game platform. This interface enables users to control and interact with the game console without the need to touch a controller. It largely increases the users’ degree of freedom to express their emotion. In this paper, we first describe the system we developed to use this interface for sound generation and controlling musical expression. The skeleton data are extracted from users’ motions and the data are translated to pre-defined MIDI data. We then use the MIDI data to control several applications. To allow the translation between the data, we implemented a simple Kinect-to-MIDI data convertor, which is introduced in this paper. We describe two applications to make music with Kinect: we first generate sound with Max/MSP, and then control the adlib with our own adlib generating system by the body movements of the users.
@inproceedings{Yoo2011, author = {Yoo, Min-Joon and Beak, Jin-Wook and Lee, In-Kwon}, title = {Creating Musical Expression using Kinect}, pages = {324--325}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178193}, url = {http://www.nime.org/proceedings/2011/nime2011_324.pdf}, keywords = {Kinect, gaming interface, sound generation, adlib generation } }
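A minimal sketch of the skeleton-to-MIDI idea described in the abstract above, using the mido package; the joint-reading function is a stub, and the controller numbers and scaling ranges are assumptions made for illustration rather than the authors' Kinect-to-MIDI convertor.

```python
# Sketch of mapping a skeleton joint position to MIDI control changes.
# Requires an available MIDI output port and a mido backend (e.g. python-rtmidi).
import mido

out = mido.open_output()            # opens the default MIDI output port

def read_right_hand():
    """Stub standing in for a Kinect skeleton query; returns (x, y, z) in metres."""
    return 0.2, 1.1, 1.8

def to_cc(value, lo, hi):
    """Clamp and scale a coordinate to a 0-127 MIDI controller value."""
    value = min(max(value, lo), hi)
    return int(127 * (value - lo) / (hi - lo))

x, y, z = read_right_hand()
out.send(mido.Message('control_change', control=20, value=to_cc(x, -1.0, 1.0)))
out.send(mido.Message('control_change', control=21, value=to_cc(y, 0.0, 2.0)))
```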
Staas de Jong. 2011. Making Grains Tangible: Microtouch for Microsound. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 326–328. http://doi.org/10.5281/zenodo.1178055
Abstract
Download PDF DOI
This paper proposes a new research direction for the large family of instrumental musical interfaces where sound is generated using digital granular synthesis, and where interaction and control involve the (fine) operation of stiff, flat contact surfaces. First, within a historical context, a general absence of, and clear need for, tangible output that is dynamically instantiated by the grain-generating process itself is identified. Second, to fill this gap, a concrete general approach is proposed based on the careful construction of non-vibratory and vibratory force pulses, in a one-to-one relationship with sonic grains. An informal pilot psychophysics experiment initiating the approach was conducted, which took into account the two main cases for applying forces to the human skin: perpendicular, and lateral. Initial results indicate that the force pulse approach can enable perceivably multidimensional, tangible display of the ongoing grain-generating process. Moreover, it was found that this can be made to meaningfully happen (in real time) in the same timescale of basic sonic grain generation. This is not a trivial property, and provides an important and positive fundament for further developing this type of enhanced display. It also leads to the exciting prospect of making arbitrary sonic grains actual physical manipulanda.
@inproceedings{DeJong2011, author = {de Jong, Staas}, title = {Making Grains Tangible: Microtouch for Microsound}, pages = {326--328}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178055}, url = {http://www.nime.org/proceedings/2011/nime2011_326.pdf}, keywords = {and others,and today granular,barry truax,curtis roads,granular sound synthesis,instrumental control,tangible display,tangible manipulation} }
Baptiste Caramiaux, Frédéric Bevilacqua, and Norbert Schnell. 2011. Sound Selection by Gestures. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 329–330. http://doi.org/10.5281/zenodo.1177977
Abstract
Download PDF DOI
This paper presents a prototypical tool for sound selection driven by users' gestures. Sound selection by gestures is a particular case of "query by content" in multimedia databases. Gesture-to-Sound matching is based on computing the similarity between both gesture and sound parameters' temporal evolution. The tool presents three algorithms for matching gesture query to sound target. The system leads to several applications in sound design, virtual instrument design and interactive installation.
@inproceedings{Caramiaux2011, author = {Caramiaux, Baptiste and Bevilacqua, Fr\'{e}d\'{e}ric and Schnell, Norbert}, title = {Sound Selection by Gestures}, pages = {329--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177977}, url = {http://www.nime.org/proceedings/2011/nime2011_329.pdf}, keywords = {Query by Gesture, Time Series Analysis, Sonic Interaction } }
Hernán Kerlleñevich, Manuel C. Eguía, and Pablo E. Riera. 2011. An Open Source Interface based on Biological Neural Networks for Interactive Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 331–336. http://doi.org/10.5281/zenodo.1178063
Abstract
Download PDF DOI
We propose and discuss an open source real-time interface that focuses on the vast potential for interactive sound art creation emerging from biological neural networks, as paradigmatic complex systems for musical exploration. In particular, we focus on networks that are responsible for the generation of rhythmic patterns. The interface relies upon the idea of relating neural behaviors metaphorically to electronic and acoustic instrument notes, by means of flexible mapping strategies. The user can intuitively design network configurations by dynamically creating neurons and configuring their inter-connectivity. The core of the system is based on events emerging from this network design, which functions in a similar way to what happens in real small neural networks. Having multiple signal and data inputs and outputs, as well as standard communication protocols such as MIDI, OSC and TCP/IP, it becomes a unique tool for composers and performers, suitable for different performance scenarios, like live electronics, sound installations and telematic concerts.
@inproceedings{Kerllenevich2011, author = {Kerlle\~{n}evich, Hern\'{a}n and Egu\'{\i}a, Manuel C. and Riera, Pablo E.}, title = {An Open Source Interface based on Biological Neural Networks for Interactive Music Performance}, pages = {331--336}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178063}, url = {http://www.nime.org/proceedings/2011/nime2011_331.pdf}, presentation-video = {https://vimeo.com/26874396/}, keywords = {rhythm generation, biological neural networks, complex patterns, musical interface, network performance } }
Nicholas Gillian, Benjamin Knapp, and Sile O’Modhrain. 2011. Recognition Of Multivariate Temporal Musical Gestures Using N-Dimensional Dynamic Time Warping. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 337–342. http://doi.org/10.5281/zenodo.1178029
Abstract
Download PDF DOI
This paper presents a novel algorithm that has been specifically designed for the recognition of multivariate temporal musical gestures. The algorithm is based on Dynamic Time Warping and has been extended to classify any N-dimensional signal, automatically compute a classification threshold to reject any data that is not a valid gesture and be quickly trained with a low number of training examples. The algorithm is evaluated using a database of 10 temporal gestures performed by 10 participants achieving an average cross-validation result of 99%.
@inproceedings{Gillian2011, author = {Gillian, Nicholas and Knapp, Benjamin and O'Modhrain, Sile}, title = {Recognition Of Multivariate Temporal Musical Gestures Using N-Dimensional Dynamic Time Warping}, pages = {337--342}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178029}, url = {http://www.nime.org/proceedings/2011/nime2011_337.pdf}, presentation-video = {https://vimeo.com/26874428/}, keywords = {Dynamic Time Warping, Gesture Recognition, Musician-Computer Interaction, Multivariate Temporal Gestures } }
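For readers unfamiliar with Dynamic Time Warping, the sketch below computes a DTW distance between two multivariate sequences using a Euclidean frame cost; it shows only the core alignment step and omits the paper's automatic threshold computation and training procedure.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two multivariate sequences a (len_a x n_dims)
    and b (len_b x n_dims). Basic O(len_a * len_b) dynamic programming;
    illustrative only, not the paper's optimized implementation."""
    la, lb = len(a), len(b)
    D = np.full((la + 1, lb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[la, lb]

# Two toy 3-dimensional gesture recordings of different lengths
g1 = np.random.randn(40, 3)
g2 = np.random.randn(55, 3)
print(dtw_distance(g1, g2))
```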
Nicholas Gillian, Benjamin Knapp, and Sile O’Modhrain. 2011. A Machine Learning Toolbox For Musician Computer Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 343–348. http://doi.org/10.5281/zenodo.1178031
Abstract
Download PDF DOI
This paper presents the SARC EyesWeb Catalog (SEC), a machine learning toolbox that has been specifically developed for musician-computer interaction. The SEC features a large number of machine learning algorithms that can be used in real-time to recognise static postures, perform regression and classify multivariate temporal gestures. The algorithms within the toolbox have been designed to work with any N-dimensional signal and can be quickly trained with a small number of training examples. We also provide the motivation for the algorithms used for the recognition of musical gestures to achieve a low intra-personal generalisation error, as opposed to the inter-personal generalisation error that is more common in other areas of human-computer interaction.
@inproceedings{Gillian2011a, author = {Gillian, Nicholas and Knapp, Benjamin and O'Modhrain, Sile}, title = {A Machine Learning Toolbox For Musician Computer Interaction}, pages = {343--348}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178031}, url = {http://www.nime.org/proceedings/2011/nime2011_343.pdf}, presentation-video = {https://vimeo.com/26872843/}, keywords = {Machine learning, gesture recognition, musician-computer interaction, SEC } }
Elena Jessop, Peter A. Torpey, and Benjamin Bloomberg. 2011. Music and Technology in Death and the Powers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 349–354. http://doi.org/10.5281/zenodo.1178051
Abstract
Download PDF DOI
In composer Tod Machover’s new opera Death and the Powers, the main character uploads his consciousness into an elaborate computer system to preserve his essence and agency after his corporeal death. Consequently, for much of the opera, the stage and the environment itself come alive as the main character. This creative need brings with it a host of technical challenges and opportunities. In order to satisfy the needs of this storyline, Machover’s Opera of the Future group at the MIT Media Lab has developed a suite of new performance technologies, including robot characters, interactive performance capture systems, mapping systems for authoring interactive multimedia performances, new musical instruments, unique spatialized sound controls, and a unified control system for all these technological components. While developed for a particular theatrical production, many of the concepts and design procedures remain relevant to broader contexts including performance, robotics, and interaction design.
@inproceedings{Jessop2011, author = {Jessop, Elena and Torpey, Peter A. and Bloomberg, Benjamin}, title = {Music and Technology in Death and the Powers}, pages = {349--354}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178051}, url = {http://www.nime.org/proceedings/2011/nime2011_349.pdf}, presentation-video = {https://vimeo.com/26878423/}, keywords = {opera, Death and the Powers, Tod Machover, gestural interfaces, Disembodied Performance, ambisonics } }
Victor Zappi, Dario Mazzanti, Andrea Brogni, and Darwin Caldwell. 2011. Design and Evaluation of a Hybrid Reality Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 355–360. http://doi.org/10.5281/zenodo.1178197
Abstract
Download PDF DOI
In this paper we introduce a multimodal platform for Hybrid Reality live performances: by means of non-invasive Virtual Reality technology, we developed a system to present artists and interactive virtual objects in audio/visual choreographies on the same real stage. These choreographies could include spectators too, providing them with the possibility to directly modify the scene and its audio/visual features. We also introduce the first interactive performance staged with this technology, in which an electronic musician played five tracks live while manipulating the 3D projected visuals. Questionnaires were distributed after the show, and in the last part of this work we discuss the analysis of the collected data, underlining positive and negative aspects of the proposed experience. This paper belongs together with a performance proposal called Dissonance, in which two performers exploit the platform to create a progressive soundtrack along with the exploration of an interactive virtual environment.
@inproceedings{Zappi2011, author = {Zappi, Victor and Mazzanti, Dario and Brogni, Andrea and Caldwell, Darwin}, title = {Design and Evaluation of a Hybrid Reality Performance}, pages = {355--360}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178197}, url = {http://www.nime.org/proceedings/2011/nime2011_355.pdf}, presentation-video = {https://vimeo.com/26880256/}, keywords = {Interactive Performance, Hybrid Choreographies, Virtual Reality, Music Control } }
Jérémie Garcia, Theophanis Tsandilas, Carlos Agon, and Wendy E. Mackay. 2011. InkSplorer : Exploring Musical Ideas on Paper and Computer. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 361–366. http://doi.org/10.5281/zenodo.1178027
Abstract
Download PDF DOI
We conducted three studies with contemporary music composers at IRCAM. We found that even highly computer-literate composers use an iterative process that begins with expressing musical ideas on paper, followed by active parallel exploration on paper and in software, prior to final execution of their ideas as an original score. We conducted a participatory design study that focused on the creative exploration phase, to design tools that help composers better integrate their paper-based and electronic activities. We then developed InkSplorer as a technology probe that connects users’ hand-written gestures on paper to Max/MSP and OpenMusic. Composers appropriated InkSplorer according to their preferred composition styles, emphasizing its ability to help them quickly explore musical ideas on paper as they interact with the computer. We conclude with recommendations for designing interactive paper tools that support the creative process, letting users explore musical ideas both on paper and electronically.
@inproceedings{Garcia2011a, author = {Garcia, J\'{e}r\'{e}mie and Tsandilas, Theophanis and Agon, Carlos and Mackay, Wendy E.}, title = {InkSplorer : Exploring Musical Ideas on Paper and Computer}, pages = {361--366}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178027}, url = {http://www.nime.org/proceedings/2011/nime2011_361.pdf}, presentation-video = {https://vimeo.com/26881368/}, keywords = {Composer, Creativity, Design Exploration, InkSplorer, Interactive Paper, OpenMusic, Technology Probes. } }
Pedro Lopez, Alfredo Ferreira, and J. A. Madeiras Pereira. 2011. Battle of the DJs: an HCI Perspective of Traditional, Virtual, Hybrid and Multitouch DJing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 367–372. http://doi.org/10.5281/zenodo.1178093
Abstract
Download PDF DOI
The DJ culture uses a gesture lexicon strongly rooted in the traditional setup of turntables and a mixer. As novel tools are introduced in the DJ community, this lexicon is adapted to the features they provide. In particular, multitouch technologies can offer a new syntax while still supporting the old lexicon, which is desired by DJs. We present a classification of DJ tools, from an interaction point of view, that divides the previous work into Traditional, Virtual and Hybrid setups. Moreover, we present a multitouch tabletop application, developed with a group of DJ consultants to ensure an adequate implementation of the traditional gesture lexicon. To conclude, we conduct an expert evaluation with ten DJ users in which we compare the three DJ setups with our prototype. The study revealed that our proposal suits the expectations of Club/Radio-DJs, but fails against the mental model of Scratch-DJs, due to the lack of haptic feedback to represent the record's physical rotation. Furthermore, tests show that our multitouch DJ setup reduces task duration when compared with Virtual setups.
@inproceedings{Lopez2011, author = {Lopez, Pedro and Ferreira, Alfredo and Pereira, J. A. Madeiras}, title = {Battle of the DJs: an HCI Perspective of Traditional, Virtual, Hybrid and Multitouch DJing}, pages = {367--372}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178093}, url = {http://www.nime.org/proceedings/2011/nime2011_367.pdf}, presentation-video = {https://vimeo.com/26881380/}, keywords = {DJing, Multitouch Interaction, Expert User evaluation, HCI } }
Adnan Marquez-Borbon, Michael Gurevich, A. Cavan Fyans, and Paul Stapleton. 2011. Designing Digital Musical Interactions in Experimental Contexts. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 373–376. http://doi.org/10.5281/zenodo.1178099
Abstract
Download PDF DOI
As NIME’s focus has expanded beyond the design reports which were pervasive in the early days to include studies and experiments involving music control devices, we report on a particular area of activity that has been overlooked: designs of music devices in experimental contexts. We demonstrate this is distinct from designing for artistic performances, with a unique set of novel challenges. A survey of methodological approaches to experiments in NIME reveals a tendency to rely on existing instruments or evaluations of new devices designed for broader creative application. We present two examples from our own studies that reveal the merits of designing purpose-built devices for experimental contexts.
@inproceedings{MarquezBorbon2011, author = {Marquez-Borbon, Adnan and Gurevich, Michael and Fyans, A. Cavan and Stapleton, Paul}, title = {Designing Digital Musical Interactions in Experimental Contexts}, pages = {373--376}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178099}, url = {http://www.nime.org/proceedings/2011/nime2011_373.pdf}, presentation-video = {https://vimeo.com/26882375/}, keywords = {Experiment, Methodology, Instrument Design, DMIs } }
Jonathan Reus. 2011. Crackle: A Dynamic Mobile Multitouch Topology for Exploratory Sound Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 377–380. http://doi.org/10.5281/zenodo.1178143
Abstract
Download PDF DOI
This paper describes the design of Crackle, an interactive sound and touch experience inspired by the CrackleBox. We begin by describing a ruleset for Crackle's interaction derived from the salient interactive qualities of the CrackleBox. An implementation strategy is then described for realizing the ruleset as an application for the iPhone. The paper goes on to consider the potential of using Crackle as an encapsulated interaction paradigm for exploring arbitrary sound spaces, and concludes with lessons learned on designing for multitouch surfaces as expressive input sensors.
@inproceedings{Reus2011, author = {Reus, Jonathan}, title = {Crackle: A Dynamic Mobile Multitouch Topology for Exploratory Sound Interaction}, pages = {377--380}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178143}, url = {http://www.nime.org/proceedings/2011/nime2011_377.pdf}, presentation-video = {https://vimeo.com/26882621/}, keywords = {touchscreen, interface topology, mobile music, interaction paradigm, dynamic mapping, CrackleBox, iPhone } }
Samuel Aaron, Alan Blackwell, Richard Hoadley, and Tim Regan. 2011. A Principled Approach to Developing New Languages for Live Coding. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 381–386. http://doi.org/10.5281/zenodo.1177935
Abstract
Download PDF DOI
This paper introduces Improcess, a novel cross-disciplinary collaborative project focussed on the design and development of tools to structure the communication between performer and musical process. We describe a 3-tiered architecture centering around the notion of a Common Music Runtime, a shared platform on top of which inter-operating client interfaces may be combined to form new musical instruments. This approach allows hardware devices such as the monome to act as an extended hardware interface with the same power to initiate and control musical processes as a bespoke programming language. Finally, we reflect on the structure of the collaborative project itself, which offers an opportunity to discuss general research strategy for conducting highly sophisticated technical research within a performing arts environment such as the development of a personal regime of preparation for performance.
@inproceedings{Aaron2011, author = {Aaron, Samuel and Blackwell, Alan and Hoadley, Richard and Regan, Tim}, title = {A Principled Approach to Developing New Languages for Live Coding}, pages = {381--386}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177935}, url = {http://www.nime.org/proceedings/2011/nime2011_381.pdf}, presentation-video = {https://vimeo.com/26905683/}, keywords = {Improvisation, live coding, controllers, monome, collaboration, concurrency, abstractions } }
Jamie Bullock, Daniel Beattie, and Jerome Turner. 2011. Integra Live : a New Graphical User Interface for Live Electronic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 387–392. http://doi.org/10.5281/zenodo.1177973
BibTeX
Download PDF DOI
@inproceedings{Bullock2011, author = {Bullock, Jamie and Beattie, Daniel and Turner, Jerome}, title = {Integra Live : a New Graphical User Interface for Live Electronic Music}, pages = {387--392}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177973}, url = {http://www.nime.org/proceedings/2011/nime2011_387.pdf}, presentation-video = {https://vimeo.com/26906574/}, keywords = {live electronics,software,usability,user experience} }
Jung-Sim Roh, Yotam Mann, Adrian Freed, and David Wessel. 2011. Robust and Reliable Fabric, Piezoresistive Multitouch Sensing Surfaces for Musical Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 393–398. http://doi.org/10.5281/zenodo.1178145
Abstract
Download PDF DOI
The design space of fabric multitouch surface interaction is explored with emphasis on novel materials and construction techniques aimed towards reliable, repairable pressure sensing surfaces for musical applications.
@inproceedings{Roh2011, author = {Roh, Jung-Sim and Mann, Yotam and Freed, Adrian and Wessel, David}, title = {Robust and Reliable Fabric, Piezoresistive Multitouch Sensing Surfaces for Musical Controllers}, pages = {393--398}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178145}, url = {http://www.nime.org/proceedings/2011/nime2011_393.pdf}, presentation-video = {https://vimeo.com/26906580/}, keywords = {Multitouch, surface interaction, piezoresistive, fabric sensor, e-textiles, tangible computing, drum controller } }
Mark T. Marshall and Marcelo M. Wanderley. 2011. Examining the Effects of Embedded Vibrotactile Feedback on the Feel of a Digital Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 399–404. http://doi.org/10.5281/zenodo.1178101
Abstract
Download PDF DOI
This paper deals with the effects of integrated vibrotactile feedback on the "feel" of a digital musical instrument (DMI). Building on previous work developing a DMI with integrated vibrotactile feedback actuators, we discuss how to produce instrument-like vibrations, compare these simulated vibrations with those produced by an acoustic instrument, and examine how the integration of this feedback affects performer ratings of the instrument. We found that integrated vibrotactile feedback resulted in an increase in performer engagement with the instrument, but resulted in a reduction in the perceived control of the instrument. We discuss these results and their implications for the design of new digital musical instruments.
@inproceedings{Marshall2011, author = {Marshall, Mark T. and Wanderley, Marcelo M.}, title = {Examining the Effects of Embedded Vibrotactile Feedback on the Feel of a Digital Musical Instrument}, pages = {399--404}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178101}, url = {http://www.nime.org/proceedings/2011/nime2011_399.pdf}, keywords = {Vibrotactile Feedback, Digital Musical Instruments, Feel, Loudspeakers } }
Dimitri Diakopoulos and Ajay Kapur. 2011. HIDUINO : A firmware for building driverless USB-MIDI devices using the Arduino microcontroller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 405–408. http://doi.org/10.5281/zenodo.1177995
Abstract
Download PDF DOI
This paper presents a series of open-source firmwares for the latest iteration of the popular Arduino microcontroller platform. A portmanteau of Human Interface Device and Arduino, the HIDUINO project tackles a major problem in designing NIMEs: easily and reliably communicating with a host computer using standard MIDI over USB. HIDUINO was developed in conjunction with a class at the California Institute of the Arts intended to teach introductory-level human-computer and human-robot interaction within the context of musical controllers. We describe our frustration with existing microcontroller platforms and our experiences using the new firmware to facilitate the development and prototyping of new music controllers.
@inproceedings{Diakopoulos2011, author = {Diakopoulos, Dimitri and Kapur, Ajay}, title = {HIDUINO : A firmware for building driverless {USB}-MIDI devices using the Arduino microcontroller}, pages = {405--408}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177995}, url = {http://www.nime.org/proceedings/2011/nime2011_405.pdf}, presentation-video = {https://vimeo.com/26908264/}, keywords = {Arduino, USB, HID, MIDI, HCI, controllers, microcontrollers } }
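Because HIDUINO makes the Arduino enumerate as a class-compliant USB-MIDI device, any standard MIDI library on the host can read it without extra drivers. The following Python sketch is a rough illustration only (not code from the paper): it uses the mido library to print incoming notes, and the "HIDUINO" port-name substring is a hypothetical label chosen for the example.

```python
# Rough illustration (not from the paper): read notes from a class-compliant
# USB-MIDI device such as an Arduino flashed with HIDUINO.
# Assumes the mido and python-rtmidi packages; the "HIDUINO" port-name
# substring is a hypothetical label for the example.
import mido

name = next((n for n in mido.get_input_names() if "HIDUINO" in n), None)
if name is None:
    raise SystemExit("no matching MIDI input found")
with mido.open_input(name) as port:
    for msg in port:                      # blocks, yielding messages as they arrive
        if msg.type == "note_on":
            print(f"note {msg.note} velocity {msg.velocity}")
```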
Emmanuel Fléty and Côme Maestracci. 2011. Latency Improvement in Sensor Wireless Transmission Using IEEE 802.15.4. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 409–412. http://doi.org/10.5281/zenodo.1178009
Abstract
Download PDF DOI
We present a strategy for the improvement of wireless sensor data transmission latency, implemented in two current projects involving gesture/control sound interaction. Our platform was designed to be capable of accepting accessories using a digital bus. The receiver features an IEEE 802.15.4 microcontroller associated with a TCP/IP stack integrated circuit that transmits the received wireless data to a host computer using the Open Sound Control protocol. This paper details how we improved the latency and sample rate of the said technology while keeping the device small and scalable.
@inproceedings{Flety2011, author = {Fl\'{e}ty, Emmanuel and Maestracci, C\^{o}me}, title = {Latency Improvement in Sensor Wireless Transmission Using {IEEE} 802.15.4}, pages = {409--412}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178009}, url = {http://www.nime.org/proceedings/2011/nime2011_409.pdf}, presentation-video = {https://vimeo.com/26908266/}, keywords = {Embedded sensors, gesture recognition, wireless, sound and music computing, interaction, 802.15.4, Zigbee. } }
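The abstract describes a receiver that forwards the wireless sensor frames to the host computer as Open Sound Control messages. Purely as a hedged sketch of that host-side plumbing (the address "/sensor/accel" and port 8000 are assumptions, not values from the paper), receiving such a stream in Python with python-osc could look like this:

```python
# Illustrative sketch (not from the paper): receive sensor frames that a
# wireless receiver forwards to the host as OSC messages.
# The address "/sensor/accel" and port 8000 are example assumptions.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_accel(address, *values):
    # values would hold e.g. x, y, z acceleration samples from one node
    print(address, values)

dispatcher = Dispatcher()
dispatcher.map("/sensor/accel", on_accel)
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```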
Jeff Snyder. 2011. Snyderphonics Manta Controller, a Novel USB Touch-Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 413–416. http://doi.org/10.5281/zenodo.1178171
Abstract
Download PDF DOI
The Snyderphonics Manta controller is a USB touch controller for music and video. It features 48 capacitive touch sensors, arranged in a hexagonal grid, with bi-color LEDs that are programmable from the computer. The sensors send continuous data proportional to surface area touched, and a velocity detection algorithm has been implemented to estimate attack velocity based on this touch data. In addition to these hexagonal sensors, the Manta has two high-dimension touch sliders (giving 12-bit values), and four assignable function buttons. In this paper, I outline the features of the controller, the available methods for communicating between the device and a computer, and some current uses for the controller.
@inproceedings{Snyder2011, author = {Snyder, Jeff}, title = {Snyderphonics Manta Controller, a Novel {USB} Touch-Controller}, pages = {413--416}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178171}, url = {http://www.nime.org/proceedings/2011/nime2011_413.pdf}, presentation-video = {https://vimeo.com/26908273/}, keywords = {Snyderphonics, Manta, controller, USB, capacitive, touch, sensor, decoupled LED, hexagon, grid, touch slider, HID, portable, wood, live music, live video } }
William Hsu. 2011. On Movement , Structure and Abstraction in Generative Audiovisual Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 417–420. http://doi.org/10.5281/zenodo.1178047
BibTeX
Download PDF DOI
@inproceedings{Hsu2011, author = {Hsu, William}, title = {On Movement , Structure and Abstraction in Generative Audiovisual Improvisation}, pages = {417--420}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178047}, url = {http://www.nime.org/proceedings/2011/nime2011_417.pdf}, keywords = {animation,audio-visual,generative,improvisation,interactive} }
Claudia R. Angel. 2011. Creating Interactive Multimedia Works with Bio-data. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 421–424. http://doi.org/10.5281/zenodo.1177943
Abstract
Download PDF DOI
This paper deals with the usage of bio-data from performers to create interactive multimedia performances or installations. It presents this type of research in some art works produced in the last fifty years (such as Lucier’s Music for Solo Performer, from 1965), including two interactive performances of my own authorship, which use two different types of bio-interfaces: on the one hand, an EMG (electromyography) and, on the other hand, an EEG (electroencephalography). The paper explores the interaction between the human body and real-time media (audio and visual) through the use of bio-interfaces. This research is based on biofeedback investigations pursued by the psychologist Neal E. Miller in the 1960s, mainly aimed at finding new methods to reduce stress. However, this article explains and shows examples in which biofeedback research is used for artistic purposes only.
@inproceedings{Angel2011, author = {Angel, Claudia R.}, title = {Creating Interactive Multimedia Works with Bio-data}, pages = {421--424}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177943}, url = {http://www.nime.org/proceedings/2011/nime2011_421.pdf}, keywords = {Live electronics, Butoh, performance, biofeedback, interactive sound and video. } }
Paula Ustarroz. 2011. TresnaNet Musical Generation based on Network Protocols. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 425–428. http://doi.org/10.5281/zenodo.1178181
Abstract
Download PDF DOI
TresnaNet explores the potential of telematics as a generator of musical expression. I aim to sonify the silent flow of information from the network. This is realized through the fabrication of a prototype, following the intention of giving substance to the intangible parameters of our communication. The result may have educational, commercial and artistic applications because it is a physical and perceptible representation of the transfer of information over the network. This paper describes the design, implementation and conclusions about TresnaNet.
@inproceedings{Ustarroz2011, author = {Ustarroz, Paula}, title = {TresnaNet Musical Generation based on Network Protocols}, pages = {425--428}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178181}, url = {http://www.nime.org/proceedings/2011/nime2011_425.pdf}, keywords = {Interface, musical generation, telematics, network, musical instrument, network sniffer. } }
Matti Luhtala, Tiina Kymäläinen, and Johan Plomp. 2011. Designing a Music Performance Space for Persons with Intellectual Learning Disabilities. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 429–432. http://doi.org/10.5281/zenodo.1178095
BibTeX
Download PDF DOI
@inproceedings{Luhtala2011, author = {Luhtala, Matti and Kym\"{a}l\"{a}inen, Tiina and Plomp, Johan}, title = {Designing a Music Performance Space for Persons with Intellectual Learning Disabilities}, pages = {429--432}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178095}, url = {http://www.nime.org/proceedings/2011/nime2011_429.pdf}, keywords = {Music interfaces, music therapy, modifiable interfaces, design tools, Human-Technology Interaction (HTI), User-Centred Design (UCD), design for all (DfA), prototyping, performance. } }
Tom Ahola, Koray Tahiroglu, Teemu Ahmaniemi, Fabio Belloni, and Ville Ranki. 2011. Raja – A Multidisciplinary Artistic Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 433–436. http://doi.org/10.5281/zenodo.1177937
Abstract
Download PDF DOI
Motion-based interactive systems have long been utilized in contemporary dance performances. These performances bring new insight to sound-action experiences in multidisciplinary art forms. This paper discusses the related technology within the framework of the dance piece, Raja. The performance setup of Raja makes it possible to use two complementary tracking systems and two alternative choices of motion sensors in real-time audio-visual synthesis.
@inproceedings{Ahola2011, author = {Ahola, Tom and Tahiroglu, Koray and Ahmaniemi, Teemu and Belloni, Fabio and Ranki, Ville}, title = {Raja -- A Multidisciplinary Artistic Performance}, pages = {433--436}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177937}, url = {http://www.nime.org/proceedings/2011/nime2011_433.pdf}, keywords = {raja, performance, dance, motion sensor, accelerometer, gyro, positioning, sonification, pure data, visualization, Qt} }
Emmanuelle Gallin and Marc Sirguy. 2011. Eobody3: a Ready-to-use Pre-mapped & Multi-protocol Sensor Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 437–440. http://doi.org/10.5281/zenodo.1178023
BibTeX
Download PDF DOI
@inproceedings{Gallin2011, author = {Gallin, Emmanuelle and Sirguy, Marc}, title = {Eobody3: a Ready-to-use Pre-mapped \& Multi-protocol Sensor Interface}, pages = {437--440}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178023}, url = {http://www.nime.org/proceedings/2011/nime2011_437.pdf}, keywords = {Controller, Sensor, MIDI, USB, Computer Music, USB, OSC, CV, MIDI, DMX, A/D Converter, Interface. } }
Rasmus Bååth, Thomas Strandberg, and Christian Balkenius. 2011. Eye Tapping : How to Beat Out an Accurate Rhythm using Eye Movements. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 441–444. http://doi.org/10.5281/zenodo.1177947
Abstract
Download PDF DOI
The aim of this study was to investigate how well subjects beat out a rhythm using eye movements and to establish the most accurate method of doing this. Eighteen subjects participated in an experiment where five different methods were evaluated. A fixation based method was found to be the most accurate. All subjects were able to synchronize their eye movements with a given beat but the accuracy was much lower than usually found in finger tapping studies. Many parts of the body are used to make music but so far, with a few exceptions, the eyes have been silent. The research presented here provides guidelines for implementing eye controlled musical interfaces. Such interfaces would enable performers and artists to use eye movement for musical expression and would open up new, exciting possibilities.
@inproceedings{Baath2011, author = {B\aa\aath, Rasmus and Strandberg, Thomas and Balkenius, Christian}, title = {Eye Tapping : How to Beat Out an Accurate Rhythm using Eye Movements}, pages = {441--444}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177947}, url = {http://www.nime.org/proceedings/2011/nime2011_441.pdf}, keywords = {Rhythm, Eye tracking, Sensorimotor synchronization, Eye tapping } }
Eric Rosenbaum. 2011. MelodyMorph: A Reconfigurable Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 445–447. http://doi.org/10.5281/zenodo.1178147
Abstract
Download PDF DOI
I present MelodyMorph, a reconfigurable musical instrument designed with a focus on melodic improvisation. It is designed for a touch-screen interface, and allows the user to create "bells" which can be tapped to play a note, and dragged around on a pannable and zoomable canvas. Colors, textures and shapes of the bells represent pitch and timbre properties. "Recorder bells" can store and play back performances. Users can construct instruments that are modifiable as they play, and build up complex melodies hierarchically from simple parts.
@inproceedings{Rosenbaum2011, author = {Rosenbaum, Eric}, title = {MelodyMorph: A Reconfigurable Musical Instrument}, pages = {445--447}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178147}, url = {http://www.nime.org/proceedings/2011/nime2011_445.pdf}, keywords = {Melody, improvisation, representation, multi-touch, iPad } }
Karmen Franinovic. 2011. The Flo)(ps : Negotiating Between Habitual and Explorative Gestures. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 448–452. http://doi.org/10.5281/zenodo.1178013
BibTeX
Download PDF DOI
@inproceedings{Franinovic2011, author = {Franinovic, Karmen}, title = {The Flo)(ps : Negotiating Between Habitual and Explorative Gestures}, pages = {448--452}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178013}, url = {http://www.nime.org/proceedings/2011/nime2011_448.pdf}, keywords = {exploration,gesture,habit,sonic interaction design} }
Margaret Schedel, Phoenix Perry, and Rebecca Fiebrink. 2011. Wekinating 000000Swan : Using Machine Learning to Create and Control Complex Artistic Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 453–456. http://doi.org/10.5281/zenodo.1178151
Abstract
Download PDF DOI
In this paper we discuss how the band 000000Swan uses machine learning to parse complex sensor data and create intricate artistic systems for live performance. Using the Wekinator software for interactive machine learning, we have created discrete and continuous models for controlling audio and visual environments using human gestures sensed by a commercially-available sensor bow and the Microsoft Kinect. In particular, we have employed machine learning to quickly and easily prototype complex relationships between performer gesture and performative outcome.
@inproceedings{Schedel2011, author = {Schedel, Margaret and Perry, Phoenix and Fiebrink, Rebecca}, title = {Wekinating 000000{S}wan : Using Machine Learning to Create and Control Complex Artistic Systems}, pages = {453--456}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178151}, url = {http://www.nime.org/proceedings/2011/nime2011_453.pdf}, keywords = {Wekinator, K-Bow, Machine Learning, Interactive, Multimedia, Kinect, Motion-Tracking, Bow Articulation, Animation } }
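Wekinator exchanges feature vectors and model outputs over OSC, which is how a sensor bow or Kinect tracker can be wired into it. The sketch below is only an illustration of that plumbing in Python with python-osc; the addresses and ports shown (/wek/inputs on 6448, /wek/outputs on 12000) are Wekinator's usual defaults and are treated here as assumptions rather than details taken from the paper.

```python
# Illustrative sketch (not from the paper): stream a feature vector to
# Wekinator and listen for its continuous outputs over OSC.
# Addresses/ports are Wekinator's commonly documented defaults (assumed).
import threading
import time
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

client = SimpleUDPClient("127.0.0.1", 6448)    # Wekinator input port (assumed default)

def on_outputs(address, *values):
    print("model outputs:", values)            # e.g. parameters driving audio/visuals

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_outputs)
server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Send one frame of hypothetical gesture features (e.g. derived from a sensor bow).
client.send_message("/wek/inputs", [0.42, 0.13, 0.87])
time.sleep(1.0)                                # give the trained model time to reply
```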
Carles F. Julià, Daniel Gallardo, and Sergi Jordà. 2011. MTCF : A Framework for Designing and Coding Musical Tabletop Applications Directly in Pure Data. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 457–460. http://doi.org/10.5281/zenodo.1178057
Abstract
Download PDF DOI
In the past decade we have seen a growing presence of tabletop systems applied to music, lately with even some products becoming commercially available and being used by professional musicians in concerts. The development of this type of application requires several demanding technical skills such as input processing, graphical design, real-time sound generation or interaction design, and because of this complexity such systems are usually developed by a multidisciplinary group. In this paper we present the Musical Tabletop Coding Framework (MTCF), a framework for designing and coding musical tabletop applications using Pure Data (Pd), the graphical programming language for digital sound processing. With this framework we try to simplify the creation process of this type of interface, by removing the need for any programming skills other than those of Pd.
@inproceedings{Julia2011, author = {Juli\`{a}, Carles F. and Gallardo, Daniel and Jord\`{a}, Sergi}, title = {MTCF : A Framework for Designing and Coding Musical Tabletop Applications Directly in Pure Data}, pages = {457--460}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178057}, url = {http://www.nime.org/proceedings/2011/nime2011_457.pdf}, keywords = {Pure Data, tabletop, tangible, framework } }
David Pirrò and Gerhard Eckel. 2011. Physical Modelling Enabling Enaction: an Example. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 461–464. http://doi.org/10.5281/zenodo.1178135
BibTeX
Download PDF DOI
@inproceedings{Pirro2011, author = {Pirr\`{o}, David and Eckel, Gerhard}, title = {Physical Modelling Enabling Enaction: an Example}, pages = {461--464}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178135}, url = {http://www.nime.org/proceedings/2011/nime2011_461.pdf}, keywords = {embod-,enactive interfaces,has been ap-,iment,interaction,motion tracking,of sound and music,physical modelling,to movement and gesture} }
Thomas Mitchell and Imogen Heap. 2011. SoundGrasp : A Gestural Interface for the Performance of Live Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 465–468. http://doi.org/10.5281/zenodo.1178111
Abstract
Download PDF DOI
This paper documents the first developmental phase of an interface that enables the performance of live music using gestures and body movements. The work included focuses on the first step of this project: the composition and performance of live music using hand gestures captured using a single data glove. The paper provides a background to the field, the aim of the project and a technical description of the work completed so far. This includes the development of a robust posture vocabulary, an artificial neural network based posture identification process and a state-based system to map identified postures onto a set of performance processes. The paper is closed with qualitative usage observations and a projection of future plans.
@inproceedings{Mitchell2011, author = {Mitchell, Thomas and Heap, Imogen}, title = {SoundGrasp : A Gestural Interface for the Performance of Live Music}, pages = {465--468}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178111}, url = {http://www.nime.org/proceedings/2011/nime2011_465.pdf}, keywords = {Music Controller, Gestural Music, Data Glove, Neural Network, Live Music Composition, Looping, Imogen Heap } }
Tim Mullen, Richard Warp, and Adam Jansch. 2011. Minding the (Transatlantic) Gap: An Internet-Enabled Acoustic Brain-Computer Music Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 469–472. http://doi.org/10.5281/zenodo.1178117
Abstract
Download PDF DOI
The use of non-invasive electroencephalography (EEG) in the experimental arts is not a novel concept. Since 1965, EEG has been used in a large number of, sometimes highly sophisticated, systems for musical and artistic expression. However, since the advent of the synthesizer, most such systems have utilized digital and/or synthesized media in sonifying the EEG signals. There have been relatively few attempts to create interfaces for musical expression that allow one to mechanically manipulate acoustic instruments by modulating one’s mental state. Secondly, few such systems afford a distributed performance medium, with data transfer and audience participation occurring over the Internet. The use of acoustic instruments and Internet-enabled communication expands the realm of possibilities for musical expression in Brain-Computer Music Interfaces (BCMI), while also introducing additional challenges. In this paper we report and examine a first demonstration (Music for Online Performer) of a novel system for Internet-enabled manipulation of robotic acoustic instruments, with feedback, using a non-invasive EEG-based BCI and low-cost, commercially available robotics hardware.
@inproceedings{Mullen2011, author = {Mullen, Tim and Warp, Richard and Jansch, Adam}, title = {Minding the (Transatlantic) Gap: An Internet-Enabled Acoustic Brain-Computer Music Interface}, pages = {469--472}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178117}, url = {http://www.nime.org/proceedings/2011/nime2011_469.pdf}, keywords = {EEG, Brain-Computer Music Interface, Internet, Arduino. } }
Stefano Papetti, Marco Civolani, and Federico Fontana. 2011. Rhythm’n’Shoes: a Wearable Foot Tapping Interface with Audio-Tactile Feedback. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 473–476. http://doi.org/10.5281/zenodo.1178129
Abstract
Download PDF DOI
A shoe-based interface is presented, which enables users to play percussive virtual instruments by tapping their feet. The wearable interface consists of a pair of sandals equipped with four force sensors and four actuators affording audio-tactile feedback. The sensors provide data via wireless transmission to a host computer, where they are processed and mapped to a physics-based sound synthesis engine. Since the system provides OSC and MIDI compatibility, alternative electronic instruments can be used as well. The audio signals are then sent back wirelessly to audio-tactile exciters embedded in the sandals’ sole, and optionally to headphones and external loudspeakers. The round-trip wireless communication only introduces very small latency, thus guaranteeing coherence and unity in the multimodal percept and allowing tight timing while playing.
@inproceedings{Papetti2011, author = {Papetti, Stefano and Civolani, Marco and Fontana, Federico}, title = {Rhythm'n'Shoes: a Wearable Foot Tapping Interface with Audio-Tactile Feedback}, pages = {473--476}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178129}, url = {http://www.nime.org/proceedings/2011/nime2011_473.pdf}, keywords = {interface, audio, tactile, foot tapping, embodiment, footwear, wireless, wearable, mobile } }
Cumhur Erkut, Antti Jylhä, and Reha Discioglu. 2011. A Structured Design and Evaluation Model with Application to Rhythmic Interaction Displays. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 477–480. http://doi.org/10.5281/zenodo.1178003
Abstract
Download PDF DOI
We present a generic, structured model for design and evaluation of musical interfaces. This model is development oriented, and it is based on the fundamental function of musical interfaces, i.e., to coordinate human action and perception for musical expression, subject to human capabilities and skills. To illustrate the particulars of this model and present it in operation, we consider the previous design and evaluation phase of iPalmas, our testbed for exploring rhythmic interaction. Our findings inform the current design phase of the iPalmas visual and auditory displays, where we build on what has resonated with the test users, and explore further possibilities based on the evaluation results.
@inproceedings{Erkut2011, author = {Erkut, Cumhur and Jylh\"{a}, Antti and Discioglu, Reha}, title = {A Structured Design and Evaluation Model with Application to Rhythmic Interaction Displays}, pages = {477--480}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178003}, url = {http://www.nime.org/proceedings/2011/nime2011_477.pdf}, keywords = {multimodal displays,rhythmic interaction,sonification,uml} }
Marco Marchini, Panos Papiotis, Alfonso Pérez, and Esteban Maestre. 2011. A Hair Ribbon Deflection Model for Low-intrusiveness Measurement of Bow Force in Violin Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 481–486. http://doi.org/10.5281/zenodo.1178097
Abstract
Download PDF DOI
This paper introduces and evaluates a novel methodology for the estimation of bow pressing force in violin performance, aiming at a reduced intrusiveness while maintaining high accuracy. The technique is based on using a simplified physical model of the hair ribbon deflection, and feeding this model solely with position and orientation measurements of the bow and violin spatial coordinates. The physical model is both calibrated and evaluated using real force data acquired by means of a load cell.
@inproceedings{Marchini2011, author = {Marchini, Marco and Papiotis, Panos and P\'{e}rez, Alfonso and Maestre, Esteban}, title = {A Hair Ribbon Deflection Model for Low-intrusiveness Measurement of Bow Force in Violin Performance}, pages = {481--486}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178097}, url = {http://www.nime.org/proceedings/2011/nime2011_481.pdf}, keywords = {bow pressing force, bow force, pressing force, force, violin playing, bow simplified physical model, 6DOF, hair ribbon ends, string ends } }
Jon Forsyth, Aron Glennon, and Juan P. Bello. 2011. Random Access Remixing on the iPad. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 487–490. http://doi.org/10.5281/zenodo.1178011
Abstract
Download PDF DOI
Remixing audio samples is a common technique for the creation of electronic music, and there are a wide variety of tools available to edit, process, and recombine pre-recorded audio into new compositions. However, all of these tools conceive of the timeline of the pre-recorded audio and the playback timeline as identical. In this paper, we introduce a dual time axis representation in which these two timelines are described explicitly. We also discuss the random access remix application for the iPad, an audio sample editor based on this representation. We describe an initial user study with 15 high school students that indicates that the random access remix application has the potential to develop into a useful and interesting tool for composers and performers of electronic music.
@inproceedings{Forsyth2011, author = {Forsyth, Jon and Glennon, Aron and Bello, Juan P.}, title = {Random Access Remixing on the iPad}, pages = {487--490}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178011}, url = {http://www.nime.org/proceedings/2011/nime2011_487.pdf}, keywords = {interactive systems, sample editor, remix, iPad, multi-touch } }
Erika Donald, Ben Duinker, and Eliot Britton. 2011. Designing the EP Trio: Instrument Identities, Control and Performance Practice in an Electronic Chamber Music Ensemble. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 491–494. http://doi.org/10.5281/zenodo.1177999
Abstract
Download PDF DOI
This paper outlines the formation of the Expanded Performance (EP) trio, a chamber ensemble comprised of electric cello with sensor bow, augmented digital percussion, and digital turntable with mixer. Decisions relating to physical set-ups and control capabilities, sonic identities, and mappings of each instrument, as well as their roles within the ensemble, are explored. The contributions of these factors to the design of a coherent, expressive ensemble and its emerging performance practice are considered. The trio proposes solutions to creation, rehearsal and performance issues in ensemble live electronics.
@inproceedings{Donald2011, author = {Donald, Erika and Duinker, Ben and Britton, Eliot}, title = {Designing the EP Trio: Instrument Identities, Control and Performance Practice in an Electronic Chamber Music Ensemble}, pages = {491--494}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177999}, url = {http://www.nime.org/proceedings/2011/nime2011_491.pdf}, keywords = {Live electronics, digital performance, mapping, chamber music, ensemble, instrument identity } }
A. Cavan Fyans and Michael Gurevich. 2011. Perceptions of Skill in Performances with Acoustic and Electronic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 495–498. http://doi.org/10.5281/zenodo.1178019
Abstract
Download PDF DOI
We present observations from two separate studies of spectators’ perceptions of musical performances, one involving two acoustic instruments, the other two electronic instruments. Both studies followed the same qualitative method, using structured interviews to ascertain and compare spectators’ experiences. In this paper, we focus on outcomes pertaining to perceptions of the performers’ skill, relating to concepts of embodiment and communities of practice.
@inproceedings{Fyans2011, author = {Fyans, A. Cavan and Gurevich, Michael}, title = {Perceptions of Skill in Performances with Acoustic and Electronic Instruments}, pages = {495--498}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178019}, url = {http://www.nime.org/proceedings/2011/nime2011_495.pdf}, keywords = {skill, embodiment, perception, effort, control, spectator } }
Hiroki Nishino. 2011. Cognitive Issues in Computer Music Programming. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 499–502. http://doi.org/10.5281/zenodo.1178123
BibTeX
Download PDF DOI
@inproceedings{Nishino2011, author = {Nishino, Hiroki}, title = {Cognitive Issues in Computer Music Programming}, pages = {499--502}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178123}, url = {http://www.nime.org/proceedings/2011/nime2011_499.pdf}, keywords = {Computer music, programming language, the psychology of programming, usability } }
Roland Lamb and Andrew Robertson. 2011. Seaboard : a New Piano Keyboard-related Interface Combining Discrete and Continuous Control. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 503–506. http://doi.org/10.5281/zenodo.1178081
Abstract
Download PDF DOI
This paper introduces the Seaboard, a new tangible musical instrument which aims to provide musicians with significant capability to manipulate sound in real-time in a musically intuitive way. It introduces the core design features which make the Seaboard unique, and describes the motivation and rationale behind the design. The fundamental approach to dealing with problems associated with discrete and continuous inputs is summarized.
@inproceedings{Lamb2011, author = {Lamb, Roland and Robertson, Andrew}, title = {Seaboard : a New Piano Keyboard-related Interface Combining Discrete and Continuous Control}, pages = {503--506}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178081}, url = {http://www.nime.org/proceedings/2011/nime2011_503.pdf}, keywords = {Piano keyboard-related interface, continuous and discrete control, haptic feedback, Human-Computer Interaction (HCI) } }
Gilbert Beyer and Max Meier. 2011. Music Interfaces for Novice Users : Composing Music on a Public Display with Hand Gestures. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 507–510. http://doi.org/10.5281/zenodo.1177963
BibTeX
Download PDF DOI
@inproceedings{Beyer2011, author = {Beyer, Gilbert and Meier, Max}, title = {Music Interfaces for Novice Users : Composing Music on a Public Display with Hand Gestures}, pages = {507--510}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177963}, url = {http://www.nime.org/proceedings/2011/nime2011_507.pdf}, keywords = {Interactive music, public displays, user experience, out-of-home media, algorithmic composition, soft constraints } }
Birgitta Cappelen and Anders-Petter Anderson. 2011. Expanding the Role of the Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 511–514. http://doi.org/10.5281/zenodo.1177975
Abstract
Download PDF DOI
The traditional role of the musical instrument is to be the working tool of the professional musician. On the instrument the musician performs music for the audience to listen to. In this paper we present an interactive installation, where we expand the role of the instrument to motivate musicking and co-creation between diverse users. We have made an open installation, where users can perform a variety of actions in several situations. By using the abilities of the computer, we have made an installation which can be interpreted to have many roles. It can be an instrument, a co-musician, a communication partner, a toy, a meeting place and an ambient musical landscape. The users can dynamically shift between roles, based on their abilities, knowledge and motivation.
@inproceedings{Cappelen2011, author = {Cappelen, Birgitta and Anderson, Anders-Petter}, title = {Expanding the Role of the Instrument}, pages = {511--514}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177975}, url = {http://www.nime.org/proceedings/2011/nime2011_511.pdf}, keywords = {design,genre,interaction,interactive installation,music instrument,musicking,narrative,open,role,sound art} }
Todor Todoroff. 2011. Wireless Digital/Analog Sensors for Music and Dance Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 515–518. http://doi.org/10.5281/zenodo.1178177
Abstract
Download PDF DOI
We developed very small and light sensors, each equipped with 3-axis accelerometers, magnetometers and gyroscopes. Those MARG (Magnetic, Angular Rate, and Gravity) sensors allow for a drift-free attitude computation which in turn leads to the possibility of recovering the skeleton of body parts that are of interest for the performance, improving the results of gesture recognition and allowing the relative position between the extremities of the limbs and the torso of the performer to be obtained. This opens new possibilities in terms of mapping. We kept our previous approach developed at ARTeM [2]: wireless from the body to the host computer, but wired through a 4-wire digital bus on the body. By relieving the need for a transmitter on each sensing node, we could build very light and flat sensor nodes that can be made invisible under the clothes. Smaller sensors, coupled with flexible wires on the body, give more freedom of movement to dancers despite the need for cables on the body. And as the weight of each sensor node, box included, is only 5 grams (Figure 1), they can also be put on the upper and lower arm and hand of a violin or viola player, to retrieve the skeleton from the torso to the hand, without adding any weight that would disturb the performer. We used those sensors in several performances with a dancing viola player and in one where she was simultaneously controlling gas flames interactively. We are currently applying them to other types of musical performances.
@inproceedings{Todoroff2011, author = {Todoroff, Todor}, title = {Wireless Digital/Analog Sensors for Music and Dance Performances}, pages = {515--518}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178177}, url = {http://www.nime.org/proceedings/2011/nime2011_515.pdf}, keywords = {wireless MARG sensors } }
Trond Engum. 2011. Real-time Control and Creative Convolution. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 519–522. http://doi.org/10.5281/zenodo.1178001
Abstract
Download PDF DOI
This paper describes an ongoing research project focusing on new artistic possibilities created by exchanging music-technological methods and techniques between two distinct musical genres. Through my background as a guitarist and composer in an experimental metal band I have experienced a vast development in music technology during the last 20 years. This development has made a great impact in changing the procedures for composing and producing music within my genre, without necessarily changing the strategies of how the technology is used. The transition from analogue to digital sound technology not only opened up new ways of manipulating and manoeuvring sound, it also opened up challenges in how to integrate and control the digital sound technology as a seamless part of my musical genre. By using techniques and methods known from electro-acoustic/computer music, and adapting them for use within my tradition, this research aims to find new strategies for composing and producing music within my genre.
@inproceedings{Engum2011, author = {Engum, Trond}, title = {Real-time Control and Creative Convolution}, pages = {519--522}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178001}, url = {http://www.nime.org/proceedings/2011/nime2011_519.pdf}, keywords = {Artistic research, strategies for composition and production, convolution, environmental sounds, real time control } }
Andreas Bergsland. 2011. Phrases from Paul Lansky’s Six Fantasies. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 523–526. http://doi.org/10.5281/zenodo.1177959
BibTeX
Download PDF DOI
@inproceedings{Bergsland2011, author = {Bergsland, Andreas}, title = {Phrases from {P}aul {L}ansky's {S}ix {F}antasies}, pages = {523--526}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177959}, url = {http://www.nime.org/proceedings/2011/nime2011_523.pdf}, keywords = {LPC, software instrument, analysis, modeling, csound } }
Jan T. von Falkenstein. 2011. Gliss : An Intuitive Sequencer for the iPhone and iPad. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 527–528. http://doi.org/10.5281/zenodo.1178007
Abstract
Download PDF DOI
Gliss is an application for iOS that lets the user sequence five separate instruments and play them back in various ways. Sequences can be created by drawing onto the screen while the sequencer is running. The playhead of the sequencer can be set to randomly deviate from the drawings or can be controlled via the accelerometer of the device. This makes Gliss a hybrid of a sequencer, an instrument and a generative music system.
@inproceedings{VonFalkenstein2011, author = {von Falkenstein, Jan T.}, title = {Gliss : An Intuitive Sequencer for the iPhone and iPad}, pages = {527--528}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178007}, url = {http://www.nime.org/proceedings/2011/nime2011_527.pdf}, keywords = {Gliss, iOS, iPhone, iPad, interface, UPIC, music, sequencer, accelerometer, drawing } }
Jiffer Harriman, Locky Casey, and Linden Melvin. 2011. Quadrofeelia – A New Instrument for Sliding into Notes. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 529–530. http://doi.org/10.5281/zenodo.1178041
Abstract
Download PDF DOI
This paper describes a new musical instrument inspired by the pedal-steel guitar, along with its motivations and other considerations. Creating a multi-dimensional, expressive instrument was the primary driving force. For these criteria the pedal steel guitar proved an apt model as it allows control over several instrument parameters simultaneously and continuously. The parameters we wanted control over were volume, timbre, release time and pitch. The Quadrofeelia is played with two hands on a horizontal surface. Single notes and melodies are easily played as well as chordal accompaniment with a variety of timbres and release times, enabling a range of legato and staccato notes in an intuitive manner with a new yet familiar interface.
@inproceedings{Harriman2011, author = {Harriman, Jiffer and Casey, Locky and Melvin, Linden}, title = {Quadrofeelia -- A New Instrument for Sliding into Notes}, pages = {529--530}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178041}, url = {http://www.nime.org/proceedings/2011/nime2011_529.pdf}, keywords = {NIME, pedal-steel, electronic, slide, demonstration, membrane, continuous, ribbon, instrument, polyphony, lead } }
Johnty Wang, Nicolas d’Alessandro, Sidney S. Fels, and Bob Pritchard. 2011. SQUEEZY : Extending a Multi-touch Screen with Force Sensing Objects for Controlling Articulatory Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 531–532. http://doi.org/10.5281/zenodo.1178189
BibTeX
Download PDF DOI
@inproceedings{Wang2011a, author = {Wang, Johnty and d'Alessandro, Nicolas and Fels, Sidney S. and Pritchard, Bob}, title = {SQUEEZY : Extending a Multi-touch Screen with Force Sensing Objects for Controlling Articulatory Synthesis}, pages = {531--532}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178189}, url = {http://www.nime.org/proceedings/2011/nime2011_531.pdf} }
Souhwan Choe and Kyogu Lee. 2011. SWAF: Towards a Web Application Framework for Composition and Documentation of Soundscape. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 533–534. http://doi.org/10.5281/zenodo.1177985
Abstract
Download PDF DOI
In this paper, we suggest a conceptual model of a Web application framework for the composition and documentation of soundscape and introduce corresponding prototype projects, SeoulSoundMap and SoundScape Composer. We also survey the current Web-based sound projects in terms of soundscape documentation.
@inproceedings{Choe2011, author = {Choe, Souhwan and Lee, Kyogu}, title = {{SW}AF: Towards a Web Application Framework for Composition and Documentation of Soundscape}, pages = {533--534}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1177985}, url = {http://www.nime.org/proceedings/2011/nime2011_533.pdf}, keywords = {soundscape, web application framework, sound archive, sound map, soundscape composition, soundscape documentation. } }
Norbert Schnell, Frédéric Bevilacqua, Nicolas Rasamimanana, Julien Blois, Fabrice Guédy, and Emmanuel Fléty. 2011. Playing the "MO" – Gestural Control and Re-Embodiment of Recorded Sound and Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 535–536. http://doi.org/10.5281/zenodo.1178153
Abstract
Download PDF DOI
We are presenting a set of applications that have been realized with the MO modular wireless motion capture device and a set of software components integrated into Max/MSP. These applications, created in the context of artistic projects, music pedagogy, and research, allow for the gestural re-embodiment of recorded sound and music. They demonstrate a large variety of different "playing techniques" in musical performance using wireless motion sensor modules in conjunction with gesture analysis and real-time audio processing components.
@inproceedings{Schnell2011, author = {Schnell, Norbert and Bevilacqua, Fr\'{e}d\'{e}ric and Rasamimanana, Nicolas and Blois, Julien and Gu\'{e}dy, Fabrice and Fl\'{e}ty, Emmanuel}, title = {Playing the "MO" -- Gestural Control and Re-Embodiment of Recorded Sound and Music}, pages = {535--536}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178153}, url = {http://www.nime.org/proceedings/2011/nime2011_535.pdf}, keywords = {Music, Gesture, Interface, Wireless Sensors, Gesture Recognition, Audio Processing, Design, Interaction } }
Bruno Zamborlin, Giorgio Partesana, and Marco Liuni. 2011. (LAND)MOVES. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 537–538. http://doi.org/10.5281/zenodo.1178195
Abstract
Download PDF DOI
(land)moves is an interactive installation: the user’s gestures control the multimedia processing with a total synergy between audio and video synthesis and treatment.
@inproceedings{Zamborlin2011, author = {Zamborlin, Bruno and Partesana, Giorgio and Liuni, Marco}, title = {({LAN}D)MOVES}, pages = {537--538}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178195}, url = {http://www.nime.org/proceedings/2011/nime2011_537.pdf}, keywords = {mapping gesture-audio-video, gesture recognition, landscape, soundscape } }
Bill Verplank and Francesco Georg. 2011. Can Haptics Make New Music ? – Fader and Plank Demos. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 539–540. http://doi.org/10.5281/zenodo.1178183
Abstract
Download PDF DOI
Haptic interfaces using active force-feedback have mostly been used for emulating existing instruments and making conventional music. With the right speed, force, precision and software they can also be used to make new sounds and perhaps new music. The requirements are local microprocessors (for low-latency and high update rates), strategic sensors (for force as well as position), and non-linear dynamics (that make for rich overtones and chaotic music).
@inproceedings{Verplank2011, author = {Verplank, Bill and Georg, Francesco}, title = {Can Haptics Make New Music ? -- Fader and Plank Demos}, pages = {539--540}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2011}, address = {Oslo, Norway}, issn = {2220-4806}, doi = {10.5281/zenodo.1178183}, url = {http://www.nime.org/proceedings/2011/nime2011_539.pdf}, keywords = {NIME, Haptics, Music Controllers, Microprocessors. } }
2010
Owen Vallis, Jordan Hochenbaum, and Ajay Kapur. 2010. A Shift Towards Iterative and Open-Source Design for Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 1–6. http://doi.org/10.5281/zenodo.1177919
Abstract
Download PDF DOI
The aim of this paper is to define the process of iterative interface design as it pertains to musical performance. Embodying this design approach, the Monome OSC/MIDI USB controller represents a minimalist, open-source hardware device. The open-source nature of the device has allowed for a small group of Monome users to modify the hardware, firmware, and software associated with the interface. These user driven modifications have allowed the re-imagining of the interface for new and novel purposes, beyond even that of the device’s original intentions. With development being driven by a community of users, a device can become several related but unique generations of musical controllers, each one focused on a specific set of needs.
@inproceedings{Vallis2010, author = {Vallis, Owen and Hochenbaum, Jordan and Kapur, Ajay}, title = {A Shift Towards Iterative and Open-Source Design for Musical Interfaces}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177919}, url = {http://www.nime.org/proceedings/2010/nime2010_001.pdf}, keywords = {Iterative Design, Monome, Arduinome, Arduino.} }
Yutaro Maruyama, Yoshinari Takegawa, Tsutomu Terada, and Masahiko Tsukamoto. 2010. UnitInstrument : Easy Configurable Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 7–12. http://doi.org/10.5281/zenodo.1177845
Abstract
Download PDF DOI
Musical instruments have a long history, and many types of musical instruments have been created to attain ideal sound production. At the same time, various types of electronic musical instruments have been developed. Since the main purpose of conventional electronic instruments is to duplicate the shape of acoustic instruments with no change in their hardware configuration, the diapason and the performance style of each instrument are inflexible. Therefore, the goal of our study is to construct the UnitInstrument, which consists of various types of musical units. A unit is constructed by simulating functional elements of conventional musical instruments, such as output timing of sound and pitch decision. Each unit has connectors for connecting other units to create various types of musical instruments. Additionally, we propose a language for easily and flexibly describing the settings of units. We evaluated the effectiveness of our proposed system by using it in actual performances.
@inproceedings{Maruyama2010, author = {Maruyama, Yutaro and Takegawa, Yoshinari and Terada, Tsutomu and Tsukamoto, Masahiko}, title = {UnitInstrument : Easy Configurable Musical Instruments}, pages = {7--12}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177845}, url = {http://www.nime.org/proceedings/2010/nime2010_007.pdf}, keywords = {Musical instruments, Script language} }
Jos Mulder. 2010. The Loudspeaker as Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 13–18. http://doi.org/10.5281/zenodo.1177861
Abstract
Download PDF DOI
With the author’s own experiences in mind, this paper argues that, when used to amplify musical instruments or to play back other sonic material to an audience, loudspeakers and the technology that drives them can be considered as a musical instrument. Particularly in situations with acoustic instruments this perspective can provide insight into the often cumbersome relation between the technology-orientated sound engineer and the music-orientated performer. Playing a musical instrument (whether acoustic, electric or electronic) involves navigating often complicated but very precise interfaces. The interface for sound amplification technology in a certain environment is not limited to the control surface of a mixing desk but includes the interaction with other stakeholders, i.e. the performers, and the choice of loudspeakers and microphones and their positions. As such this interface can be as accurate and intimate but also as complicated as the interfaces of ’normal’ musical instruments. By zooming in on differences between acoustic and electronic sources a step is taken towards including in this discussion the perception of amplified music and the possible influence of that amplification on performance practice.
@inproceedings{Mulder2010, author = {Mulder, Jos}, title = {The Loudspeaker as Musical Instrument}, pages = {13--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177861}, url = {http://www.nime.org/proceedings/2010/nime2010_013.pdf}, keywords = {Sound technology (amplification), musical instruments, multi modal perception, performance practice.} }
Miha Ciglar. 2010. An Ultrasound Based Instrument Generating Audible and Tactile Sound. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 19–22. http://doi.org/10.5281/zenodo.1177745
Abstract
Download PDF DOI
This paper describes the second phase of an ongoing research project dealing with the implementation of an interactive interface. It is a "hands free" instrument, utilizing a non-contact tactile feedback method based on airborne ultrasound. The three main elements/components of the interface that will be discussed in this paper are: 1. Generation of audible sound by self-demodulation of an ultrasound signal during its propagation through air; 2. The condensation of the ultrasound energy in one spatial point, generating a precise tactile reproduction of the audible sound; and 3. The feed-forward method enabling a real-time intervention of the musician, by shaping the tactile (ultra)sound directly with his hands.
@inproceedings{Ciglar2010, author = {Ciglar, Miha}, title = {An Ultrasound Based Instrument Generating Audible and Tactile Sound}, pages = {19--22}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177745}, url = {http://www.nime.org/proceedings/2010/nime2010_019.pdf}, keywords = {haptics, vibro-tactility, feedback, ultrasound, hands-free interface, nonlinear acoustics, parametric array.} }
Ted Hayes. 2010. Neurohedron : A Nonlinear Sequencer Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 23–25. http://doi.org/10.5281/zenodo.1177799
Abstract
Download PDF DOI
The Neurohedron is a multi-modal interface for a nonlinear sequencer software model, embodied physically in a dodecahedron. The faces of the dodecahedron are both inputs and outputs, allowing the device to visualize the activity of the software model as well as convey input to it. The software model maps MIDI notes to the faces of the device, and defines and controls the behavior of the sequencer’s progression around its surface, resulting in a unique instrument for computer-based performance and composition.
@inproceedings{Hayes2010, author = {Hayes, Ted}, title = {Neurohedron : A Nonlinear Sequencer Interface}, pages = {23--25}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177799}, url = {http://www.nime.org/proceedings/2010/nime2010_023.pdf}, keywords = {controller, human computer interaction, interface, live performance, neural network, sequencer} }
Nobuyuki Umetani, Jun Mitani, and Takeo Igarashi. 2010. Designing Custom-made Metallophone with Concurrent Eigenanalysis. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 26–30. http://doi.org/10.5281/zenodo.1177917
Abstract
Download PDF DOI
We introduce an interactive interface for the custom design of metallophones. The shape of each plate must be determined in the design process so that the metallophone will produce the proper tone when struck with a mallet. Unfortunately, the relationship between plate shape and tone is complex, which makes it difficult to design plates with arbitrary shapes. Our system addresses this problem by running a concurrent numerical eigenanalysis during interactive geometry editing. It continuously presents a predicted tone to the user with both visual and audio feedback, thus making it possible to design a plate with any desired shape and tone. We developed this system to demonstrate the effectiveness of integrating real-time finite element method analysis into geometric editing to facilitate the design of custom-made musical instruments. An informal study demonstrated the ability of technically unsophisticated users to apply the system to complex metallophone design.
@inproceedings{Umetani2010, author = {Umetani, Nobuyuki and Mitani, Jun and Igarashi, Takeo}, title = {Designing Custom-made Metallophone with Concurrent Eigenanalysis}, pages = {26--30}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177917}, url = {http://www.nime.org/proceedings/2010/nime2010_026.pdf}, keywords = {Modeling Interfaces, Geometric Modeling, CAD, Education, Real-time FEM} }
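For readers unfamiliar with the "concurrent numerical eigenanalysis" mentioned in the abstract above, the standard finite-element modal formulation (given here as a generic textbook form, not necessarily the exact discretization used in the paper) relates plate geometry to audible partials through the generalized eigenproblem

\[ \mathbf{K}\,\boldsymbol{\phi}_i = \omega_i^{2}\,\mathbf{M}\,\boldsymbol{\phi}_i, \qquad f_i = \frac{\omega_i}{2\pi}, \]

where \(\mathbf{K}\) and \(\mathbf{M}\) are the stiffness and mass matrices assembled from the plate mesh, \(\boldsymbol{\phi}_i\) are the mode shapes, and \(f_i\) are the predicted partial frequencies. Editing the plate outline changes \(\mathbf{K}\) and \(\mathbf{M}\), which is why the eigenproblem must be re-solved concurrently as the geometry is edited.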
Sungkuk Chun, Andrew Hawryshkewich, Keechul Jung, and Philippe Pasquier. 2010. Freepad : A Custom Paper-based MIDI Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 31–36. http://doi.org/10.5281/zenodo.1177743
Abstract
Download PDF DOI
The field of mixed-reality interface design is relatively young and, with regard to music, has not been explored in great depth. Using computer vision and collision detection techniques, Freepad further explores the development of mixed-reality interfaces for music. The result is an accessible, user-definable MIDI interface for anyone with a webcam, pen and paper, which outputs MIDI notes with velocity values based on the speed of the strikes on drawn pads.
@inproceedings{Chun2010, author = {Chun, Sungkuk and Hawryshkewich, Andrew and Jung, Keechul and Pasquier, Philippe}, title = {Freepad : A Custom Paper-based MIDI Interface}, pages = {31--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177743}, url = {http://www.nime.org/proceedings/2010/nime2010_031.pdf}, keywords = {Computer vision, form recognition, collision detection, mixed-reality, custom interface, MIDI} }
John A. Mills, Damien Di Fede, and Nicolas Brix. 2010. Music Programming in Minim. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 37–42. http://doi.org/10.5281/zenodo.1177855
Abstract
Download PDF DOI
Our team realized that a need existed for a music programming interface in the Minim audio library of the Processing programming environment. The audience for this new interface would be the novice programmer interested in using music as part of the learning experience, though the interface should also be complex enough to benefit experienced artist-programmers. We collected many ideas from currently available music programming languages and libraries to design and create the new capabilities in Minim. The basic mechanisms include chained unit generators, instruments, and notes. In general, one "patches" unit generators (for example, oscillators, delays, and envelopes) together in order to create synthesis algorithms. These algorithms can then either create continuous sound, or be used in instruments to play notes with specific start time and duration. We have written a base set of unit generators to enable a wide variety of synthesis options, and the capabilities of the unit generators, instruments, and Processing allow for a wide range of composition techniques.
@inproceedings{Mills2010, author = {Mills, John A. and Di Fede, Damien and Brix, Nicolas}, title = {Music Programming in Minim}, pages = {37--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177855}, url = {http://www.nime.org/proceedings/2010/nime2010_037.pdf}, keywords = {Minim, music programming, audio library, Processing, music software} }
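To make the patching model summarized in the abstract above concrete, here is a minimal Processing-style sketch. The class names used (Minim, AudioOutput, Oscil, ADSR, Waves.SINE) are taken from later public releases of Minim’s UGen framework and are assumptions here; the API described in the paper may differ in detail.

import ddf.minim.*;       // core Minim classes
import ddf.minim.ugens.*; // unit generators: Oscil, ADSR, ...

Minim minim;
AudioOutput out;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  out = minim.getLineOut();

  // "Patch" unit generators together: a sine oscillator through an envelope to the output.
  Oscil osc = new Oscil(440.0f, 0.5f, Waves.SINE);
  ADSR  env = new ADSR(0.5f, 0.05f, 0.1f, 0.4f, 0.2f);
  osc.patch(env).patch(out);
  env.noteOn(); // open the envelope so the continuous tone is heard
}

void draw() {
  background(0);
}

Notes with explicit start times and durations would, in the releases assumed here, instead be scheduled through AudioOutput.playNote() with a user-defined Instrument, corresponding to the note mechanism the abstract describes.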
Thor Magnusson. 2010. An Epistemic Dimension Space for Musical Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 43–46. http://doi.org/10.5281/zenodo.1177837
Abstract
Download PDF DOI
The analysis of digital music systems has traditionally been characterized by an approach that can be defined as phenomenological. The focus has been on the body and its relationship to the machine, often neglecting the system’s conceptual design. This paper brings into focus the epistemic features of digital systems, which implies emphasizing the cognitive, conceptual and music theoretical side of our musical instruments. An epistemic dimension space for the analysis of musical devices is proposed.
@inproceedings{Magnusson2010, author = {Magnusson, Thor}, title = {An Epistemic Dimension Space for Musical Devices}, pages = {43--46}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177837}, url = {http://www.nime.org/proceedings/2010/nime2010_043.pdf}, keywords = {Epistemic tools, music theory, dimension space, analysis.} }
A. Baki Kocaballi, Petra Gemeinboeck, and Rob Saunders. 2010. Investigating the Potential for Shared Agency using Enactive Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 47–50. http://doi.org/10.5281/zenodo.1177829
Abstract
Download PDF DOI
Human agency, our capacity for action, has been at the hub of discussions centring upon philosophical enquiry for a long period of time. Sensory supplementation devices can provide us with unique opportunities to investigate the different aspects of our agency by enabling new modes of perception and facilitating the emergence of novel interactions, all of which would be impossible without the aforesaid devices. Our preliminary study investigates the non-verbal strategies employed for negotiating our capacity for action with other bodies and the surrounding space through body-to-body and body-to-space couplings enabled by sensory supplementation devices. We employed a low-fi rapid prototyping approach to build this device, enabling distal perception by sonic and haptic feedback. Further, we conducted a workshop in which participants equipped with this device engaged in game-like activities.
@inproceedings{Kocaballi2010, author = {Kocaballi, A. Baki and Gemeinboeck, Petra and Saunders, Rob}, title = {Investigating the Potential for Shared Agency using Enactive Interfaces}, pages = {47--50}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177829}, url = {http://www.nime.org/proceedings/2010/nime2010_047.pdf}, keywords = {Human agency, sensory supplementation, distal perception, sonic feedback, tactile feedback, enactive interfaces} }
Noah Liebman, Michael Nagara, Jacek Spiewla, and Erin Zolkosky. 2010. Cuebert : A New Mixing Board Concept for Musical Theatre. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 51–56. http://doi.org/10.5281/zenodo.1177833
Abstract
Download PDF DOI
We present Cuebert, a mixing board concept for musical theatre. Using a user-centered design process, our goal was to reconceptualize the mixer using modern technology and interaction techniques, questioning over fifty years of interface design in audio technology. Our research resulted in a design that retains the physical controls — faders and knobs — demanded by sound engineers while taking advantage of multitouch display technology to allow for flexible display of dynamic and context-sensitive content.
@inproceedings{Liebman2010, author = {Liebman, Noah and Nagara, Michael and Spiewla, Jacek and Zolkosky, Erin}, title = {Cuebert : A New Mixing Board Concept for Musical Theatre}, pages = {51--56}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177833}, url = {http://www.nime.org/proceedings/2010/nime2010_051.pdf}, keywords = {audio, control surfaces, mixing board, multitouch, sound, theatre, touch-screen, user-centered design} }
Charles Roberts, Matthew Wright, JoAnn Kuchera-Morin, and Lance Putnam. 2010. Dynamic Interactivity Inside the AlloSphere. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 57–62. http://doi.org/10.5281/zenodo.1177883
Abstract
Download PDF DOI
We present the Device Server, a framework and application driving interaction in the AlloSphere virtual reality environment. The motivation and development of the Device Server stems from the practical concerns of managing multi-user interactivity with a variety of physical devices for disparate performance and virtual reality environments housed in the same physical location. The interface of the Device Server allows users to see how devices are assigned to application functionalities, alter these assignments and save them into configuration files for later use. Configurations defining how applications use devices can be changed on the fly without recompiling or relaunching applications. Multiple applications can be connected to the Device Server concurrently. The Device Server provides several conveniences for performance environments. It can process control data efficiently using Just-In-Time compiled Lua expressions; in doing so it frees processing cycles on audio and video rendering computers. All control signals entering the Device Server can be recorded, saved, and played back allowing performances based on control data to be recreated in their entirety. The Device Server attempts to homogenize the appearance of different control signals to applications so that users can assign any interface element they choose to application functionalities and easily experiment with different control configurations.
@inproceedings{Roberts2010, author = {Roberts, Charles and Wright, Matthew and Kuchera-Morin, JoAnn and Putnam, Lance}, title = {Dynamic Interactivity Inside the AlloSphere}, pages = {57--62}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177883}, url = {http://www.nime.org/proceedings/2010/nime2010_057.pdf}, keywords = {AlloSphere, mapping, performance, HCI, interactivity, Virtual Reality, OSC, multi-user, network} }
Florian Alt, Alireza S. Shirazi, Stefan Legien, Albrecht Schmidt, and Julian Mennenöh. 2010. Creating Meaningful Melodies from Text Messages. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 63–68. http://doi.org/10.5281/zenodo.1177713
Abstract
Download PDF DOI
Writing text messages (e.g. email, SMS, instant messaging) is a popular form of synchronous and asynchronous communication. However, when it comes to notifying users about new messages, current audio-based approaches, such as notification tones, are very limited in conveying information. In this paper we show how entire text messages can be encoded into a meaningful and euphonic melody in such a way that users can guess a message’s intention without actually seeing the content. First, as a proof of concept, we report on the findings of an initial on-line survey among 37 musicians and 32 non-musicians evaluating the feasibility and validity of our approach. We show that our representation is understandable and that there are no significant differences between musicians and non-musicians. Second, we evaluated the approach in a real-world scenario based on a Skype plug-in. In a field study with 14 participants we showed that sonified text messages strongly impact the users’ message-checking behavior by significantly reducing the time between receiving and reading an incoming message.
@inproceedings{Alt2010, author = {Alt, Florian and Shirazi, Alireza S. and Legien, Stefan and Schmidt, Albrecht and Mennen\"{o}h, Julian}, title = {Creating Meaningful Melodies from Text Messages}, pages = {63--68}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177713}, url = {http://www.nime.org/proceedings/2010/nime2010_063.pdf}, keywords = {instant messaging, sms, sonority, text sonification} }
Tim Humphrey, Madeleine Flynn, and Jesse Stevens. 2010. Epi-thet : A Musical Performance Installation and a Choreography of Stillness. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 69–71. http://doi.org/10.5281/zenodo.1177811
Abstract
Download PDF DOI
This paper articulates an interest in a kind of interactive musical instrument and artwork that defines the mechanisms for instrumental interactivity from the iconic morphologies of ready-mades, casting historical utilitarian objects as the basis for performed musical experiences by spectators. The interactive repertoires are therefore partially pre-determined through enculturated behaviors that are associated with particular objects, but more importantly, inextricably linked to the thematic and meaningful assemblage of the work itself. Our new work epi-thet gathers data from individual interactions with common microscopes placed on platforms within a large space. This data is correlated with public domain genetic datasets obtained from micro-array analysis. A sonification algorithm generates unique compositions associated with the spectator "as measured" through their individual specification in performing an iconic measurement action. The apparatus is a receptacle for unique compositions in sound, and invites a participatory choreography of stillness that is available for reception as a live musical performance.
@inproceedings{Humphrey2010, author = {Humphrey, Tim and Flynn, Madeleine and Stevens, Jesse}, title = {Epi-thet : A Musical Performance Installation and a Choreography of Stillness}, pages = {69--71}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177811}, url = {http://www.nime.org/proceedings/2010/nime2010_069.pdf}, keywords = {Sonification installation spectator-choreography micro-array ready-mades morphology stillness} }
Tilo Hähnel. 2010. From Mozart to MIDI : A Rule System for Expressive Articulation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 72–75. http://doi.org/10.5281/zenodo.1177787
Abstract
Download PDF DOI
The propriety of articulation, especially of notes that lack annotations, is influenced by the origin of the particular music. This paper presents a rule system for articulation derived from late Baroque and early Classic treatises on performance. Expressive articulation, in this respect, is understood as a combination of alterable tone features like duration, loudness, and timbre. The model differentiates global characteristics and local particularities, provides a general framework for human-like music performances, and, therefore, serves as a basis for further and more complex rule systems.
@inproceedings{Hahnel2010, author = {H\"{a}hnel, Tilo}, title = {From Mozart to {MIDI} : A Rule System for Expressive Articulation}, pages = {72--75}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177787}, url = {http://www.nime.org/proceedings/2010/nime2010_072.pdf}, keywords = {Articulation, Historically Informed Performance, Expressive Performance, Synthetic Performance} }
Georg Essl and Alexander Müller. 2010. Designing Mobile Musical Instruments and Environments with urMus. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 76–81. http://doi.org/10.5281/zenodo.1177759
Abstract
Download PDF DOI
We discuss how the environment urMus was designed to allow the creation of mobile musical instruments on multi-touch smartphones. The design of a mobile musical instrument consists of connecting sensory capabilities to output modalities through various means of processing. We describe the design of the default mapping interface, which allows such a pipeline to be set up, and show how visual and interactive multi-touch UIs for musical instruments can be designed within the system.
@inproceedings{Essl2010, author = {Essl, Georg and M\"{u}ller, Alexander}, title = {Designing Mobile Musical Instruments and Environments with urMus}, pages = {76--81}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177759}, url = {http://www.nime.org/proceedings/2010/nime2010_076.pdf}, keywords = {Mobile music making, meta-environment, design, mapping, user interface} }
Jieun Oh, Jorge Herrera, Nicholas J. Bryan, Luke Dahl, and Ge Wang. 2010. Evolving The Mobile Phone Orchestra. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 82–87. http://doi.org/10.5281/zenodo.1177871
Abstract
Download PDF DOI
In this paper, we describe the development of the Stanford Mobile Phone Orchestra (MoPhO) since its inception in 2007. As a newly structured ensemble of musicians with iPhones and wearable speakers, MoPhO takes advantage of the ubiquity and mobility of smartphones as well as the unique interaction techniques offered by such devices. MoPhO offers a new platform for research, instrument design, composition, and performance that can be juxtaposed to that of a laptop orchestra. We trace the origins of MoPhO, describe the motivations behind the current hardware and software design in relation to the backdrop of current trends in mobile music making, detail key interaction concepts around new repertoire, and conclude with an analysis on the development of MoPhO thus far.
@inproceedings{Oh2010, author = {Oh, Jieun and Herrera, Jorge and Bryan, Nicholas J. and Dahl, Luke and Wang, Ge}, title = {Evolving The Mobile Phone Orchestra}, pages = {82--87}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177871}, url = {http://www.nime.org/proceedings/2010/nime2010_082.pdf}, keywords = {mobile phone orchestra, live performance, iPhone, mobile music} }
Atau Tanaka. 2010. Mapping Out Instruments, Affordances, and Mobiles. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 88–93. http://doi.org/10.5281/zenodo.1177903
Abstract
Download PDF DOI
This paper reviews and extends questions of the scope of an interactive musical instrument and mapping strategies for expressive performance. We apply notions of embodiment and affordance to characterize gestural instruments. We note that the democratization of sensor technology in consumer devices has extended the cultural contexts for interaction. We revisit questions of mapping, drawing upon the theory of affordances, to consider mapping and instrument together. This is applied to recent work by the author and his collaborators in the development of instruments based on mobile devices designed for specific performance situations.
@inproceedings{Tanaka2010, author = {Tanaka, Atau}, title = {Mapping Out Instruments, Affordances, and Mobiles}, pages = {88--93}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177903}, url = {http://www.nime.org/proceedings/2010/nime2010_088.pdf}, keywords = {Musical affordance, NIME, mapping, instrument definition, mobile, multimodal interaction.} }
Mark Havryliv. 2010. Composing For Improvisation with Chaotic Oscillators. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 94–99. http://doi.org/10.5281/zenodo.1177795
Abstract
Download PDF DOI
This paper describes a novel method for composition and improvisation with real-time chaotic oscillators. Recently discovered algebraically simple nonlinear third-order differential equations are solved, and acoustical descriptors relating to their frequency spectra are determined according to the MPEG-7 specification. A second nonlinearity is then added to these equations: a real-time audio signal. Descriptive properties of the complex behaviour of these equations are then determined as a function of difference tones derived from a Just Intonation scale and the amplitude of the audio signal. By using only the real-time audio signal from live performer/s as an input, the causal relationship between acoustic performance gestures and computer output, including any visual or performer-instruction output, is deterministic even if the chaotic behaviours are not.
@inproceedings{Havryliv2010, author = {Havryliv, Mark}, title = {Composing For Improvisation with Chaotic Oscillators}, pages = {94--99}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177795}, url = {http://www.nime.org/proceedings/2010/nime2010_094.pdf}, keywords = {chaos and music, chaotic dynamics and oscillators, differential equations and music, mathematica, audio descriptors and mpeg-7} }
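The abstract above does not reproduce the equations themselves. Purely as an illustration of the kind of system being described (an assumed, schematic form rather than the paper’s actual equations), an algebraically simple third-order "jerk" flow with the live audio signal \(s(t)\) introduced as a second nonlinearity could be written as

\[ \dddot{x}(t) + a\,\ddot{x}(t) + \dot{x}(t) = f\bigl(x(t)\bigr) + s(t), \]

where \(f\) is a simple algebraic nonlinearity (for example a quadratic or absolute-value term), \(a\) is a parameter governing the onset of chaos, and spectral descriptors of the resulting signal \(x(t)\) would then be computed following the MPEG-7 audio description scheme.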
Andrew Hawryshkewich, Philippe Pasquier, and Arne Eigenfeldt. 2010. Beatback : A Real-time Interactive Percussion System for Rhythmic Practise and Exploration. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 100–105. http://doi.org/10.5281/zenodo.1177797
Abstract
Download PDF DOI
Traditional drum machines and digital drum-kits offer users the ability to practice or perform with a supporting ensemble – such as a bass, guitar and piano – but rarely provide support in the form of an accompanying percussion part. Beatback addresses this missing interaction by offering a MIDI-enabled drum system which learns and plays in the user’s style. In the contexts of rhythmic practise and exploration, Beatback looks at call-response and accompaniment models of interaction to enable new possibilities for rhythmic creativity.
@inproceedings{Hawryshkewich2010, author = {Hawryshkewich, Andrew and Pasquier, Philippe and Eigenfeldt, Arne}, title = {Beatback : A Real-time Interactive Percussion System for Rhythmic Practise and Exploration}, pages = {100--105}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177797}, url = {http://www.nime.org/proceedings/2010/nime2010_100.pdf}, keywords = {Interactive music interface, real-time, percussion, machine learning, Markov models, MIDI.} }
Michael Gurevich, Paul Stapleton, and Adnan Marquez-Borbon. 2010. Style and Constraint in Electronic Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 106–111. http://doi.org/10.5281/zenodo.1177785
Abstract
Download PDF DOI
A qualitative study to investigate the development of style in performance with a highly constrained musical instrument is described. A new one-button instrument was designed, with which several musicians were each asked to practice and develop a solo performance. Observations of trends in attributes of these performances are detailed in relation to participants’ statements in structured interviews. Participants were observed to develop stylistic variations both within the domain of activities suggested by the constraint, and by discovering non-obvious techniques through a variety of strategies. Data suggest that stylistic variations occurred in spite of perceived constraint, but also because of perceived constraint. Furthermore, participants tended to draw on unique experiences, approaches and perspectives that shaped individual performances.
@inproceedings{Gurevich2010, author = {Gurevich, Michael and Stapleton, Paul and Marquez-Borbon, Adnan}, title = {Style and Constraint in Electronic Musical Instruments}, pages = {106--111}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177785}, url = {http://www.nime.org/proceedings/2010/nime2010_106.pdf}, keywords = {design, interaction, performance, persuasive technology} }
Hongchan Choi and Ge Wang. 2010. LUSH : An Organic Eco + Music System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 112–115. http://doi.org/10.5281/zenodo.1177741
Abstract
Download PDF DOI
We propose an environment that allows users to create music by leveraging playful visualization and organic interaction. We attempt to improve on ideas drawn from the traditional sequencer paradigm in terms of extemporizing music and associating it with visualization in real time. In order to offer a different user experience and new musical possibilities, this system incorporates many techniques, including flocking simulation, nondeterministic finite automata (NFA), score file analysis, vector calculation, OpenGL animation, and networking. We transform a sequencer into an audiovisual platform for composition and performance, furnished with artistry and ease of use. Thus we believe that it is suitable not only for artists such as algorithmic composers or audiovisual performers, but also for anyone who wants to play music and imagery in a different way.
@inproceedings{Choi2010, author = {Choi, Hongchan and Wang, Ge}, title = {LUSH : An Organic Eco + Music System}, pages = {112--115}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177741}, url = {http://www.nime.org/proceedings/2010/nime2010_112.pdf}, keywords = {algorithmic composition,audiovisual,automata,behavior simulation,music,music sequencer,musical interface,nime10,visualization} }
Tomoyuki Yamaguchi, Tsukasa Kobayashi, Anna Ariga, and Shuji Hashimoto. 2010. TwinkleBall : A Wireless Musical Interface for Embodied Sound Media. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 116–119. http://doi.org/10.5281/zenodo.1177927
Abstract
Download PDF DOI
In this paper, we introduce a wireless musical interface driven by grasping forces and human motion. The sounds generated by traditional digital musical instruments depend on the instruments’ physical shape, and the freedom of musical performance is restricted by this structure; as a result, sounds cannot be generated through body expression such as dance. We developed a ball-shaped interface, TwinkleBall, to achieve free-style performance. A photo sensor is embedded in the translucent rubber ball to detect the grasping force of the performer; the grasping force is translated into luminance intensity for processing. Moreover, an accelerometer is also embedded in the interface for motion sensing. By using these sensors, a performer can control note and volume by varying grasping force and motion, respectively. The proposed interface is ball-shaped, wireless, and handheld in size. As a result, it is able to generate sound from body expression such as dance.
@inproceedings{Yamaguchi2010, author = {Yamaguchi, Tomoyuki and Kobayashi, Tsukasa and Ariga, Anna and Hashimoto, Shuji}, title = {TwinkleBall : A Wireless Musical Interface for Embodied Sound Media}, pages = {116--119}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177927}, url = {http://www.nime.org/proceedings/2010/nime2010_116.pdf}, keywords = {Musical Interface, Embodied Sound Media, Dance Performance.} }
Joanne Cannon and Stuart Favilla. 2010. Expression and Spatial Motion : Playable Ambisonics. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 120–124. http://doi.org/10.5281/zenodo.1177735
Abstract
Download PDF DOI
This paper presents research undertaken by the Bent Leather Band investigating the application of live Ambisonics to large digital-instrument ensemble improvisation. Their playable approach to live ambisonic projection is inspired by the work of Trevor Wishart and presents a systematic investigation of the potential for live spatial motion improvisation.
@inproceedings{Cannon2010, author = {Cannon, Joanne and Favilla, Stuart}, title = {Expression and Spatial Motion : Playable Ambisonics}, pages = {120--124}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177735}, url = {http://www.nime.org/proceedings/2010/nime2010_120.pdf}, keywords = {ambisonics, augmented instruments, expressive spatial motion, playable instruments} }
Nick Collins. 2010. Contrary Motion : An Oppositional Interactive Music System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 125–129. http://doi.org/10.5281/zenodo.1177747
Abstract
Download PDF DOI
The hypothesis of this interaction research project is that it can be stimulating for experimental musicians to confront a system which ‘opposes’ their musical style. The ‘contrary motion’ of the title is the name of a MIDI-based realtime musical software agent which uses machine listening to establish the musical context, and thereby chooses its own responses to differentiate its position from that of its human interlocutor. To do this requires a deep consideration of the space of musical actions, so as to explicate what opposition should consist of, and machine listening technology (most prominently represented by new online beat and stream tracking algorithms) which gives an accurate measurement of player position so as to consistently avoid it. An initial pilot evaluation was undertaken, feeding back critical data to the developing design.
@inproceedings{Collins2010, author = {Collins, Nick}, title = {Contrary Motion : An Oppositional Interactive Music System}, pages = {125--129}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177747}, url = {http://www.nime.org/proceedings/2010/nime2010_125.pdf}, keywords = {contrary, beat tracking, stream analysis, musical agent} }
Etienne Deleflie and Greg Schiemer. 2010. Images as Spatial Sound Maps. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 130–135. http://doi.org/10.5281/zenodo.1177753
Abstract
Download PDF DOI
The tools for spatial composition typically model just a small subset of the spatial audio cues known to researchers. As composers explore this medium it has become evident that the nature of spatial sound perception is complex. Yet interfaces for spatial composition are often simplistic and the end results can be disappointing. This paper presents an interface that is designed to liberate the composer from thinking of spatialised sound as points in space. Instead, visual images are used to define sound in terms of shape, size and location. Images can be sequenced into video, thereby creating rich and complex temporal soundscapes. The interface offers both the ability to craft soundscapes and also compose their evolution in time.
@inproceedings{Deleflie2010, author = {Deleflie, Etienne and Schiemer, Greg}, title = {Images as Spatial Sound Maps}, pages = {130--135}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177753}, url = {http://www.nime.org/proceedings/2010/nime2010_130.pdf}, keywords = {Spatial audio, surround sound, ambisonics, granular synthesis, decorrelation, diffusion.} }
Kevin Schlei. 2010. Relationship-Based Instrument Mapping of Multi-Point Data Streams Using a Trackpad Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 136–139. http://doi.org/10.5281/zenodo.1177891
Abstract
Download PDF DOI
Multi-point devices are rapidly becoming a practical interface choice for electronic musicians. Interfaces that generate multiple simultaneous streams of point data present a unique mapping challenge. This paper describes an analysis system for point relationships that acts as a bridge between raw streams of multi-point data and the instruments they control, using a multipoint trackpad to test various configurations. The aim is to provide a practical approach for instrument programmers working with multi-point tools, while highlighting the difference between mapping systems based on point coordinate streams, grid evaluations, or object interaction and mapping systems based on multi-point data relationships.
@inproceedings{Schlei2010, author = {Schlei, Kevin}, title = {Relationship-Based Instrument Mapping of Multi-Point Data Streams Using a Trackpad Interface}, pages = {136--139}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177891}, url = {http://www.nime.org/proceedings/2010/nime2010_136.pdf}, keywords = {Multi-point, multi-touch interface, instrument mapping, multi-point data analysis, trackpad instrument} }
Lonce Wyse and Nguyen D. Duy. 2010. Instrumentalizing Synthesis Models. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 140–143. http://doi.org/10.5281/zenodo.1177925
Abstract
Download PDF DOI
An important part of building interactive sound models is designing the interface and control strategy. The multidimensional structure of the gestures natural for a musical or physical interface may have little obvious relationship to the parameters that a sound synthesis algorithm exposes for control. A common situation arises when there is a nonlinear synthesis technique for which a traditional instrumental interface with quasi-independent control of pitch and expression is desired. This paper presents a semi-automatic meta-modeling tool called the Instrumentalizer for embedding arbitrary synthesis algorithms in a control structure that exposes traditional instrument controls for pitch and expression.
@inproceedings{Wyse2010, author = {Wyse, Lonce and Duy, Nguyen D.}, title = {Instrumentalizing Synthesis Models}, pages = {140--143}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177925}, url = {http://www.nime.org/proceedings/2010/nime2010_140.pdf}, keywords = {Musical interface, parameter mapping, expressive control.} }
Alvaro Cassinelli, Yusaku Kuribara, Alexis Zerroug, Masatoshi Ishikawa, and Daito Manabe. 2010. scoreLight : Playing with a Human-Sized Laser Pick-Up. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 144–149. http://doi.org/10.5281/zenodo.1177739
Abstract
Download PDF DOI
scoreLight is a playful musical instrument capable of generating sound from the lines of drawings as well as from the edges of three-dimensional objects nearby (including everyday objects, sculptures and architectural details, but also the performer’s hands or even the moving silhouettes of dancers). There is no camera nor projector: a laser spot explores shapes as a pick-up head would search for sound over the surface of a vinyl record — with the significant difference that the groove is generated by the contours of the drawing itself.
@inproceedings{Cassinelli2010, author = {Cassinelli, Alvaro and Kuribara, Yusaku and Zerroug, Alexis and Ishikawa, Masatoshi and Manabe, Daito}, title = {scoreLight : Playing with a Human-Sized Laser Pick-Up}, pages = {144--149}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177739}, url = {http://www.nime.org/proceedings/2010/nime2010_144.pdf}, keywords = {H5.2 [User Interfaces] interaction styles / H.5.5 [Sound and Music Computing] Methodologies and techniques / J.5 [Arts and Humanities] performing arts} }
Karl Yerkes, Greg Shear, and Matthew Wright. 2010. Disky : a DIY Rotational Interface with Inherent Dynamics. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 150–155. http://doi.org/10.5281/zenodo.1177929
Abstract
Download PDF DOI
Disky is a computer hard drive re-purposed into a do-it-yourself USB turntable controller that offers high resolution and low latency for controlling parameters of multimedia performance software. Disky is a response to the challenge “re-purpose something that is often discarded and share it with the do-it-yourself community to promote reuse!”
@inproceedings{Yerkes2010, author = {Yerkes, Karl and Shear, Greg and Wright, Matthew}, title = {Disky : a DIY Rotational Interface with Inherent Dynamics}, pages = {150--155}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177929}, url = {http://www.nime.org/proceedings/2010/nime2010_150.pdf}, keywords = {turntable, dial, encoder, re-purposed, hard drive, scratching, inherent dynamics, DIY} }
Jorge Solis, Klaus Petersen, Tetsuro Yamamoto, et al. 2010. Development of the Waseda Saxophonist Robot and Implementation of an Auditory Feedback Control. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 156–161. http://doi.org/10.5281/zenodo.1177897
Abstract
Download PDF DOI
Since 2007, our research has focused on the development of an anthropomorphic saxophonist robot, which has been designed to imitate a saxophonist’s playing by mechanically reproducing the organs involved in playing a saxophone. Our research aims at understanding motor control from an engineering point of view and at enabling communication. In this paper, the Waseda Saxophone Robot No. 2 (WAS-2), which is composed of 22 DOFs, is detailed. The lip mechanism of WAS-2 has been designed with 3 DOFs to control the motion of the lower, upper and sideway lips. In addition, a human-like hand (16 DOFs) has been designed to enable playing all the keys of the instrument. Regarding the improvement of the control system, a feed-forward control system with dead-time compensation has been implemented to assure accurate control of the air pressure. In addition, an auditory feedback control system has been proposed and implemented in order to adjust the positioning of the physical parameters of the components of the robot by providing pitch feedback and defining a recovery position (off-line). A set of experiments was carried out to verify the mechanical design improvements and the dynamic response of the air pressure. As a result, the range of sound pressure has been increased and the proposed control system improved the dynamic response of the air pressure control.
@inproceedings{Solis2010, author = {Solis, Jorge and Petersen, Klaus and Yamamoto, Tetsuro and Takeuchi, Masaki and Ishikawa, Shimpei and Takanishi, Atsuo and Hashimoto, Kunimatsu}, title = {Development of the Waseda Saxophonist Robot and Implementation of an Auditory Feedback Control}, pages = {156--161}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177897}, url = {http://www.nime.org/proceedings/2010/nime2010_156.pdf}, keywords = {Humanoid Robot, Auditory Feedback, Music, Saxophone.} }
Ajay Kapur and Michael Darling. 2010. A Pedagogical Paradigm for Musical Robotics. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 162–165. http://doi.org/10.5281/zenodo.1177821
Abstract
Download PDF DOI
This paper describes the making of a class to teach the history and art of musical robotics. The details of the curriculum are described, as well as designs for our custom schematics for robotic solenoid-driven percussion. The paper also introduces four new robotic instruments that were built during the term of this course, and the Machine Orchestra, a laptop orchestra with ten human performers and our five robotic instruments.
@inproceedings{Kapur2010, author = {Kapur, Ajay and Darling, Michael}, title = {A Pedagogical Paradigm for Musical Robotics}, pages = {162--165}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177821}, url = {http://www.nime.org/proceedings/2010/nime2010_162.pdf}, keywords = {dartron,digital classroom,laptop orchestra,machine orchestra,musical robotics,nime pedagogy,nime10,solenoid} }
Ye Pan, Min-Gyu Kim, and Kenji Suzuki. 2010. A Robot Musician Interacting with a Human Partner through Initiative Exchange. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 166–169. http://doi.org/10.5281/zenodo.1177875
Abstract
Download PDF DOI
This paper proposes a novel method to realize initiative exchange for a robot. A humanoid robot plays the vibraphone, exchanging initiative with a human performer by perceiving multimodal cues in real time. It understands the initiative exchange cues through vision and audio information. In order to achieve natural initiative exchange between a human and a robot in musical performance, we built the system and software architecture and carried out experiments on the fundamental algorithms necessary for initiative exchange.
@inproceedings{Pan2010, author = {Pan, Ye and Kim, Min-Gyu and Suzuki, Kenji}, title = {A Robot Musician Interacting with a Human Partner through Initiative Exchange}, pages = {166--169}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177875}, url = {http://www.nime.org/proceedings/2010/nime2010_166.pdf}, keywords = {Human-robot interaction, initiative exchange, prediction} }
Ivika Bukvic, Thomas Martin, Eric Standley, and Michael Matthews. 2010. Introducing L2Ork : Linux Laptop Orchestra. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 170–173. http://doi.org/10.5281/zenodo.1177731
Abstract
Download PDF DOI
Virginia Tech Department of Music’s Digital Interactive Sound & Intermedia Studio, in collaboration with the College of Engineering and School of Visual Arts, presents the latest addition to the *Ork family, the Linux Laptop Orchestra. Apart from maintaining compatibility with its precursors and sources of inspiration, Princeton’s PLOrk and Stanford’s SLOrk, L2Ork’s particular focus is on delivering unprecedented affordability without sacrificing quality, as well as the flexibility necessary to encourage a more widespread adoption and standardization of the laptop orchestra ensemble. The newfound strengths of L2Ork’s design have resulted in opportunities in K-12 education with a particular focus on cross-pollinating STEM and Arts, as well as research on an innovative content delivery system that can seamlessly engage students regardless of their educational background. In this document we discuss key components of the L2Ork initiative and their benefits, and offer resources necessary for the creation of other Linux-based *Orks.
@inproceedings{Bukvic2010, author = {Bukvic, Ivika and Martin, Thomas and Standley, Eric and Matthews, Michael}, title = {Introducing L2Ork : Linux Laptop Orchestra}, pages = {170--173}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177731}, url = {http://www.nime.org/proceedings/2010/nime2010_170.pdf}, keywords = {l2ork,laptop orchestra,linux,nime10} }
Nicholas J. Bryan, Jorge Herrera, Jieun Oh, and Ge Wang. 2010. MoMu : A Mobile Music Toolkit. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 174–177. http://doi.org/10.5281/zenodo.1177725
Abstract
Download PDF DOI
The Mobile Music (MoMu) toolkit is a new open-source software development toolkit focusing on musical interaction design for mobile phones. The toolkit, currently implemented for iPhone OS, emphasizes usability and rapid prototyping with the end goal of aiding developers in creating real-time interactive audio applications. Simple and unified access to onboard sensors along with utilities for common tasks found in mobile music development are provided. The toolkit has been deployed and evaluated in the Stanford Mobile Phone Orchestra (MoPhO) and serves as the primary software platform in a new course exploring mobile music.
@inproceedings{Bryan2010, author = {Bryan, Nicholas J. and Herrera, Jorge and Oh, Jieun and Wang, Ge}, title = {MoMu : A Mobile Music Toolkit}, pages = {174--177}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177725}, url = {http://www.nime.org/proceedings/2010/nime2010_174.pdf}, keywords = {instrument design, iPhone, mobile music, software development, toolkit} }
Luke Dahl and Ge Wang. 2010. Sound Bounce : Physical Metaphors in Designing Mobile Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 178–181. http://doi.org/10.5281/zenodo.1177751
Abstract
Download PDF DOI
The use of metaphor has a prominent role in HCI, both as a device to help users understand unfamiliar technologies, and as a tool to guide the design process. Creators of new computer-based instruments face design challenges similar to those in HCI. In the course of creating a new piece for the Mobile Phone Orchestra we propose the metaphor of a sound as a ball and explore the interactions and sound mappings it suggests. These lead to the design of a gesture-controlled instrument that allows players to "bounce" sounds, "throw" them to other players, and compete in a game to "knock out" others’ sounds. We composed the piece SoundBounce based on these interactions, and note that audiences seem to find performances of the piece accessible and engaging, perhaps due to the visibility of the metaphor.
@inproceedings{Dahl2010, author = {Dahl, Luke and Wang, Ge}, title = {Sound Bounce : Physical Metaphors in Designing Mobile Music Performance}, pages = {178--181}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177751}, url = {http://www.nime.org/proceedings/2010/nime2010_178.pdf}, keywords = {Mobile music, design, metaphor, performance, gameplay.} }
Georg Essl, Michael Rohs, and Sven Kratz. 2010. Use the Force (or something) — Pressure and Pressure-Like Input for Mobile Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 182–185. http://doi.org/10.5281/zenodo.1177761
Abstract
Download PDF DOI
Impact force is an important dimension for percussive musical instruments such as the piano. We explore three possible mechanisms for obtaining impact forces on mobile multi-touch devices: using built-in accelerometers, the pressure sensing capability of Android phones, and external force sensing resistors. We find that accelerometers are difficult to control for this purpose. Android’s pressure sensing shows some promise, especially when combined with augmented playing technique. Force sensing resistors can offer good dynamic resolution, but this technology is not currently offered in commodity devices and proper coupling of the sensor with the applied impact is difficult.
@inproceedings{Essl2010a, author = {Essl, Georg and Rohs, Michael and Kratz, Sven}, title = {Use the Force (or something) --- Pressure and Pressure-Like Input for Mobile Music Performance}, pages = {182--185}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177761}, url = {http://www.nime.org/proceedings/2010/nime2010_182.pdf}, keywords = {Force, impact, pressure, multi-touch, mobile phone, mobile music making.} }
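As a rough sketch of the second mechanism discussed above (reading the per-touch pressure estimate that Android exposes through MotionEvent.getPressure()), the fragment below maps pressure to a MIDI-style velocity; the mapping and the triggerNote() call are purely illustrative placeholders, not taken from the paper.

import android.app.Activity;
import android.view.MotionEvent;

public class PressurePadActivity extends Activity {

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getActionMasked() == MotionEvent.ACTION_DOWN) {
            // Android reports an estimated contact pressure, nominally in [0, 1].
            float pressure = event.getPressure();
            // Illustrative mapping to a MIDI-style velocity in 0..127.
            int velocity = Math.min(127, Math.round(pressure * 127f));
            triggerNote(velocity);
        }
        return true;
    }

    // Placeholder: hand the velocity to whatever synthesis engine is in use.
    private void triggerNote(int velocity) {
    }
}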
Roger Mills. 2010. Dislocated Sound : A Survey of Improvisation in Networked Audio Platforms. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 186–191. http://doi.org/10.5281/zenodo.1177857
Abstract
Download PDF DOI
The evolution of networked audio technologies has created unprecedented opportunities for musicians to improvise with instrumentalists from a diverse range of cultures and disciplines. As network speeds increase and latency is consigned to history, tele-musical collaboration, and in particular improvisation will be shaped by new methodologies that respond to this potential. While networked technologies eliminate distance in physical space, for the remote improviser, this creates a liminality of experience through which their performance is mediated. As a first step in understanding the conditions arising from collaboration in networked audio platforms, this paper will examine selected case studies of improvisation in a variety of networked interfaces. The author will examine how platform characteristics and network conditions influence the process of collective improvisation and the methodologies musicians are employing to negotiate their networked experiences.
@inproceedings{Mills2010a, author = {Mills, Roger}, title = {Dislocated Sound : A Survey of Improvisation in Networked Audio Platforms}, pages = {186--191}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177857}, url = {http://www.nime.org/proceedings/2010/nime2010_186.pdf}, keywords = {improvisation, internet audio, networked collaboration, sound art} }
Florent Berthaut, Myriam Desainte-Catherine, and Martin Hachet. 2010. DRILE : An Immersive Environment for Hierarchical Live-Looping. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 192–197. http://doi.org/10.5281/zenodo.1177721
Abstract
Download PDF DOI
We present Drile, a multiprocess immersive instrument built upon the hierarchical live-looping technique and aimed at musical performance. This technique consists in creating musical trees whose nodes are composed of sound effects applied to a musical content. In the leaves, this content is a one-shot sound, whereas in higher-level nodes this content is composed of live-recorded sequences of parameters of the children nodes. Drile allows musicians to interact efficiently with these trees in an immersive environment. Nodes are represented as worms, which are 3D audiovisual objects. Worms can be manipulated using 3D interaction techniques, and several operations can be applied to the live-looping trees. The environment is composed of several virtual rooms, i.e. groups of trees, corresponding to specific sounds and effects. Learning Drile is progressive since the musical control complexity varies according to the levels in live-looping trees. Thus beginners may have limited control over only root worms while still obtaining musically interesting results. Advanced users may modify the trees and manipulate each of the worms.
@inproceedings{Berthaut2010, author = {Berthaut, Florent and Desainte-Catherine, Myriam and Hachet, Martin}, title = {DRILE : An Immersive Environment for Hierarchical Live-Looping}, pages = {192--197}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177721}, url = {http://www.nime.org/proceedings/2010/nime2010_192.pdf}, keywords = {Drile, immersive instrument, hierarchical live-looping, 3D interaction} }
Robin Fencott and Nick Bryan-Kinns. 2010. Hey Man, You’re Invading my Personal Space ! Privacy and Awareness in Collaborative Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 198–203. http://doi.org/10.5281/zenodo.1177763
Abstract
Download PDF DOI
This research is concerned with issues of privacy, awareness and the emergence of roles in the process of digitally mediated collaborative music making. Specifically, we are interested in how providing collaborators with varying degrees of privacy and awareness of one another influences the group interaction. A study is presented whereby nine groups of co-located musicians compose music together using three different interface designs. We use qualitative and quantitative data to study and characterise the musicians’ interaction with each other and the software. We show that, when made available to them, participants make extensive use of a private working area to develop musical contributions before they are introduced to the group. We also argue that our awareness mechanisms change the perceived quality of the musical interaction, but have no impact on the way musicians interact with the software. We then reflect on implications for the design of new collaborative music making tools which exploit the potential of digital technologies, while at the same time supporting creative musical interaction.
@inproceedings{Fencott2010, author = {Fencott, Robin and Bryan-Kinns, Nick}, title = {Hey Man, You're Invading my Personal Space ! Privacy and Awareness in Collaborative Music}, pages = {198--203}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177763}, url = {http://www.nime.org/proceedings/2010/nime2010_198.pdf}, keywords = {Awareness, Privacy, Collaboration, Music, Interaction, Engagement, Group Music Making, Design, Evaluation.} }
Charles Martin, Benjamin Forster, and Hanna Cormick. 2010. Cross-Artform Performance Using Networked Interfaces : Last Man to Die’s Vital LMTD. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 204–207. http://doi.org/10.5281/zenodo.1177843
Abstract
Download PDF DOI
In 2009 the cross-artform group, Last Man to Die, presented a series of performances using new interfaces and networked performance to integrate the three artforms of its members (actor, Hanna Cormick, visual artist, Benjamin Forster and percussionist, Charles Martin). This paper explains our artistic motivations and design for a computer vision surface and networked heartbeat sensor as well as the experience of mounting our first major work, Vital LMTD.
@inproceedings{Martin2010a, author = {Martin, Charles and Forster, Benjamin and Cormick, Hanna}, title = {Cross-Artform Performance Using Networked Interfaces : Last Man to Die's Vital LMTD}, pages = {204--207}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177843}, url = {http://www.nime.org/proceedings/2010/nime2010_204.pdf}, keywords = {cross-artform performance, networked performance, physical computing} }
Alexander R. Jensenius, Kjell Tore Innervik, and Ivar Frounberg. 2010. Evaluating the Subjective Effects of Microphone Placement on Glass Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 208–211. http://doi.org/10.5281/zenodo.1177817
Abstract
Download PDF DOI
We report on a study of perceptual and acoustic features related to the placement of microphones around a custom-made glass instrument. Different microphone setups were tested: above, inside and outside the instrument, and at different distances. The sounds were evaluated by an expert performer, and further qualitative and quantitative analyses have been carried out. Preference was given to the recordings from microphones placed close to the rim of the instrument, either from the inside or the outside.
@inproceedings{Jensenius2010, author = {Jensenius, Alexander R. and Innervik, Kjell Tore and Frounberg, Ivar}, title = {Evaluating the Subjective Effects of Microphone Placement on Glass Instruments}, pages = {208--211}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177817}, url = {http://www.nime.org/proceedings/2010/nime2010_208.pdf}, keywords = {glass instruments, microphone placement, sound analysis} }
Rudolfo Quintas. 2010. Glitch Delighter : Lighter’s Flame Base Hyper-Instrument for Glitch Music in Burning The Sound Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 212–216. http://doi.org/10.5281/zenodo.1177879
Abstract
Download PDF DOI
Glitch DeLighter is a HyperInstrument conceived for Glitch music, based on the idea of using fire expressiveness to digitally distort sound, pushing the body and primitive ritualism into a computer-mediated sound performance. Glitch DeLighter uses ordinary lighters as physical controllers that can be played by creating a flame and moving it in the air. Droned sounds are played by sustaining the flame, and beats by generating sparks and fast flames. The pitch of every sound can be changed by moving the flame vertically in the air. This is achieved by using a custom computer vision system as an interface which maps in real time the data extracted from the flame and transmits those parameters to the sound generator. As a result, the flame's visual dynamics are deeply connected to the aural perception of the sound - ‘the sound seems to be burning’. This process establishes a metaphor that is dramaturgically engaging for an audience. This paper contextualizes the glitch music aesthetics and prior research, describes the design and development of the instrument, and reports on Burning The Sound, the first music composition created and performed with the instrument (by the author).
@inproceedings{Quintas2010, author = {Quintas, Rudolfo}, title = {Glitch Delighter : Lighter's Flame Base Hyper-Instrument for Glitch Music in Burning The Sound Performance}, pages = {212--216}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177879}, url = {http://www.nime.org/proceedings/2010/nime2010_212.pdf}, keywords = {Hyper-Instruments, Glitch Music, Interactive Systems, Electronic Music Performance.} }
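The flame-to-pitch mapping described in the abstract above essentially amounts to locating the brightest region in each camera frame and scaling its vertical coordinate to a pitch value. The sketch below illustrates this idea with OpenCV on a synthetic frame; the threshold, pitch range, and frame size are assumptions for illustration, not the author's actual vision system.

# Illustrative sketch: find the centroid of the brightest blob in a frame
# and map its vertical position to a MIDI-style pitch number.
# The synthetic frame, threshold, and pitch range are assumptions.
import numpy as np
import cv2

frame = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(frame, (160, 60), 10, 255, -1)          # stand-in "flame" blob

_, mask = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)
m = cv2.moments(mask)
if m["m00"] > 0:
    cy = m["m01"] / m["m00"]                        # vertical centroid (pixels)
    pitch = 48 + (1.0 - cy / frame.shape[0]) * 36   # higher flame -> higher pitch
    print(round(pitch))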
Andrew McPherson and Youngmoo Kim. 2010. Augmenting the Acoustic Piano with Electromagnetic String Actuation and Continuous Key Position Sensing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 217–222. http://doi.org/10.5281/zenodo.1177849
Abstract
Download PDF DOI
This paper presents the magnetic resonator piano, an augmented instrument enhancing the capabilities of the acoustic grand piano. Electromagnetic actuators induce the strings to vibration, allowing each note to be continuously controlled in amplitude, frequency, and timbre without external loudspeakers. Feedback from a single pickup on the piano soundboard allows the actuator waveforms to remain locked in phase with the natural motion of each string. We also present an augmented piano keyboard which reports the continuous position of every key. Time and spatial resolution are sufficient to capture detailed data about key press, release, pretouch, aftertouch, and other extended gestures. The system, which is designed with cost and setup constraints in mind, seeks to give pianists continuous control over the musical sound of their instrument. The instrument has been used in concert performances, with the electronically-actuated sounds blending with acoustic instruments naturally and without amplification.
@inproceedings{McPherson2010, author = {McPherson, Andrew and Kim, Youngmoo}, title = {Augmenting the Acoustic Piano with Electromagnetic String Actuation and Continuous Key Position Sensing}, pages = {217--222}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177849}, url = {http://www.nime.org/proceedings/2010/nime2010_217.pdf}, keywords = {Augmented instruments, piano, interfaces, electromagnetic actuation, gesture measurement} }
Cesar M. Grossmann. 2010. Developing a Hybrid Contrabass Recorder Resistances, Expression, Gestures and Rhetoric. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 223–228. http://doi.org/10.5281/zenodo.1177781
Abstract
Download PDF DOI
In this paper I describe aspects that have been involved in my experience of developing a hybrid instrument. The process of transformation and extension of the instrument is informed by ideas concerning the intrinsic communication aspects of musical activities. Decisions taken for designing the instrument and performing with it take into account the hypothesis that there are ontological levels of human reception in music that are related to the intercorporeal. Arguing that it is necessary to encounter resistances for achieving expression, it is suggested that new instrumental development ought to reflect on the concern for keeping the natural connections of live performances.
@inproceedings{Grossmann2010, author = {Grossmann, Cesar M.}, title = {Developing a Hybrid Contrabass Recorder Resistances, Expression, Gestures and Rhetoric}, pages = {223--228}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177781}, url = {http://www.nime.org/proceedings/2010/nime2010_223.pdf}, keywords = {live processing,new instruments,nime10,recorder} }
Alfonso P. Carrillo and Jordi Bonada. 2010. The Bowed Tube : a Virtual Violin. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 229–232. http://doi.org/10.5281/zenodo.1177737
Abstract
Download PDF DOI
This paper presents a virtual violin for real-time performances consisting of two modules: a violin spectral model and a control interface. The interface is composed of a sensing bow and a tube with drawn strings in substitution of a real violin. The spectral model is driven by the bowing controls captured with the control interface, and it is able to predict spectral envelopes of the sound corresponding to those controls. The envelopes are filled with harmonic and noisy content and given to an additive synthesizer in order to produce violin sounds. The sensing system is based on two motion trackers with 6 degrees of freedom. One tracker is attached to the bow and the other to the tube. Bowing controls are computed after a calibration process where the position of the virtual strings and the hair-ribbon of the bow is obtained. A real-time implementation was developed as a MAX/MSP patch with external objects for each of the modules.
@inproceedings{Carrillo2010, author = {Carrillo, Alfonso P. and Bonada, Jordi}, title = {The Bowed Tube : a Virtual Violin}, pages = {229--232}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177737}, url = {http://www.nime.org/proceedings/2010/nime2010_229.pdf}, keywords = {violin, synthesis, control, spectral, virtual} }
Jordan Hochenbaum, Ajay Kapur, and Matthew Wright. 2010. Multimodal Musician Recognition. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 233–237. http://doi.org/10.5281/zenodo.1177805
Abstract
Download PDF DOI
This research is an initial effort in showing how a multimodal approach can improve systems for gaining insight into a musician’s practice and technique. Embedding a variety of sensors inside musical instruments and synchronously recording the sensors’ data along with audio, we gather a database of gestural information from multiple performers, then use machine-learning techniques to recognize which musician is performing. Our multimodal approach (using both audio and sensor data) yields promising performer classification results, which we see as a first step in a larger effort to gain insight into musicians’ practice and technique.
@inproceedings{Hochenbaum2010, author = {Hochenbaum, Jordan and Kapur, Ajay and Wright, Matthew}, title = {Multimodal Musician Recognition}, pages = {233--237}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177805}, url = {http://www.nime.org/proceedings/2010/nime2010_233.pdf}, keywords = {Performer Recognition, Multimodal, HCI, Machine Learning, Hyperinstrument, eSitar} }
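A minimal sketch of the multimodal idea in the abstract above: concatenate audio-derived and sensor-derived feature vectors and train a classifier to predict which performer is playing. The feature dimensions, the synthetic data, and the choice of a random forest are assumptions for illustration, not the authors' pipeline.

# Illustrative sketch: combine audio features and sensor features per
# performance frame and classify which musician is playing.
# Dimensions, data, and classifier choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
audio_feats = rng.normal(size=(300, 13))    # e.g. 13 MFCC-like values per frame
sensor_feats = rng.normal(size=(300, 6))    # e.g. instrument sensor statistics
labels = rng.integers(0, 5, size=300)       # 5 hypothetical performers

X = np.hstack([audio_feats, sensor_feats])  # the "multimodal" step
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=1)

clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))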
Enric Guaus, Tan Ozaslan, Eric Palacios, and Josep L. Arcos. 2010. A Left Hand Gesture Caption System for Guitar Based on Capacitive Sensors. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 238–243. http://doi.org/10.5281/zenodo.1177783
Abstract
Download PDF DOI
In this paper, we present our research on the acquisition of gesture information for the study of expressiveness in guitar performances. For that purpose, we design a sensor system which is able to gather the movements of the left-hand fingers. Our effort is focused on a design that is (1) non-intrusive to the performer and (2) able to detect everything from strong movements of the left hand to subtle movements of the fingers. The proposed system is based on capacitive sensors mounted on the fingerboard of the guitar. We present the setup of the sensor system and analyze its response to several finger movements.
@inproceedings{Guaus2010, author = {Guaus, Enric and Ozaslan, Tan and Palacios, Eric and Arcos, Josep L.}, title = {A Left Hand Gesture Caption System for Guitar Based on Capacitive Sensors}, pages = {238--243}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177783}, url = {http://www.nime.org/proceedings/2010/nime2010_238.pdf}, keywords = {Guitar; Gesture acquisition; Capacitive sensors} }
Andrew Schmeder and Adrian Freed. 2010. Support Vector Machine Learning for Gesture Signal Estimation with a Piezo-Resistive Fabric Touch Surface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 244–249. http://doi.org/10.5281/zenodo.1177893
Abstract
Download PDF DOI
The design of an unusually simple fabric-based touch location and pressure sensor is introduced. An analysis of the raw sensor data shows significant nonlinearities and non-uniform noise. Using support vector machine learning and a state-dependent adaptive filter, it is demonstrated that these problems can be overcome. The method is evaluated quantitatively using a statistical estimate of the instantaneous rate of information transfer. The SVM regression alone is shown to improve the gesture signal information rate by up to 20% with zero added latency, and in combination with filtering by 40%, subject to a constant latency bound of 10 milliseconds.
@inproceedings{Schmeder2010, author = {Schmeder, Andrew and Freed, Adrian}, title = {Support Vector Machine Learning for Gesture Signal Estimation with a Piezo-Resistive Fabric Touch Surface}, pages = {244--249}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177893}, url = {http://www.nime.org/proceedings/2010/nime2010_244.pdf}, keywords = {gesture signal processing, support vector machine, touch sensor} }
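The combination of support vector regression and state-dependent filtering described in the abstract above could be approximated, for illustration only, by fitting a generic SVR from raw sensor readings to touch position and then smoothing the estimates. The sensor count, training data, and the simple one-pole smoother below are invented placeholders, not the authors' setup.

# Illustrative sketch only: map raw fabric-sensor readings to touch position
# with support vector regression, then smooth with a simple one-pole filter.
# The array size, training data, and smoothing constant are assumptions.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
raw = rng.uniform(0.0, 1.0, size=(200, 8))         # 8 hypothetical sensor taps
true_pos = raw @ np.linspace(0.1, 0.8, 8)          # stand-in ground truth

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(raw, true_pos)

def smooth(estimates, alpha=0.2):
    """One-pole low-pass as a stand-in for the state-dependent filter."""
    out, y = [], estimates[0]
    for x in estimates:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return np.array(out)

positions = smooth(model.predict(raw[:50]))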
Jan C. Schacher. 2010. Motion To Gesture To Sound : Mapping For Interactive Dance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 250–254. http://doi.org/10.5281/zenodo.1177889
Abstract
Download PDF DOI
Mapping in interactive dance performance poses a number of questions related to the perception and expression of gestures, in contrast to pure motion detection and analysis. A specific interactive dance project is discussed, in which two complementary sensing modes are integrated to obtain higher-level expressive gestures. These are applied to a modular nonlinear composition, in which the exploratory dance performance assumes the role of instrumentalist and conductor. The development strategies and methods for each of the involved artists are discussed, and the software tools and wearable devices that were developed for this project are presented.
@inproceedings{Schacher2010, author = {Schacher, Jan C.}, title = {Motion To Gesture To Sound : Mapping For Interactive Dance}, pages = {250--254}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177889}, url = {http://www.nime.org/proceedings/2010/nime2010_250.pdf}, keywords = {Mapping, motion sensing, computer vision, artistic strategies, wearable sensors, mapping tools, splines, delaunay tessellation.} }
Ian Whalley. 2010. Generative Improv. & Interactive Music Project (GIIMP). Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 255–258. http://doi.org/10.5281/zenodo.1177923
Abstract
Download PDF DOI
GIIMP addresses the criticism that in many interactive music systems the machine simply reacts. Interaction is addressed by extending Winkler’s [18] model toward adapting Paine’s [10] conversational model of interaction. Realized using commercial tools, GIIMP implements a machine/human generative improvisation system using human gesture input, machine gesture capture, and a gesture mutation module in conjunction with a flocking patch, mapped through microtonal/spectral techniques to sound. The intention is to meld some established and current practices, and combine aspects of symbolic and sub-symbolic approaches, toward musical outcomes.
@inproceedings{Whalley2010, author = {Whalley, Ian}, title = {Generative Improv. \& Interactive Music Project (GIIMP)}, pages = {255--258}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177923}, url = {http://www.nime.org/proceedings/2010/nime2010_255.pdf}, keywords = {Interaction, gesture, genetic algorithm, flocking, improvisation.} }
Kristian Nymoen, Kyrre Glette, Ståle A. Skogstad, Jim Torresen, and Alexander Refsum Jensenius. 2010. Searching for Cross-Individual Relationships between Sound and Movement Features using an SVM Classifier. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 259–262. http://doi.org/10.5281/zenodo.1177869
Abstract
Download PDF DOI
In this paper we present a method for studying relationships between features of sound and features of movement. The method has been tested by carrying out an experiment with people moving an object in space along with short sounds. 3D position data of the object was recorded and several features were calculated from each of the recordings. These features were provided as input to a classifier which was able to classify the recorded actions satisfactorily, particularly when taking into account that the only link between the actions performed by the different subjects was the sound they heard while making the action.
@inproceedings{Nymoen2010, author = {Nymoen, Kristian and Glette, Kyrre and Skogstad, Ståle A. and Torresen, Jim and Jensenius, Alexander Refsum}, title = {Searching for Cross-Individual Relationships between Sound and Movement Features using an {SVM} Classifier}, pages = {259--262}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177869}, url = {http://www.nime.org/proceedings/2010/nime2010_259.pdf}, keywords = {nime10} }
Takashi Baba, Mitsuyo Hashida, and Haruhiro Katayose. 2010. “VirtualPhilharmony”: A Conducting System with Heuristics of Conducting an Orchestra. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 263–270. http://doi.org/10.5281/zenodo.1177715
Abstract
Download PDF DOI
“VirtualPhilharmony” (V.P.) is a conducting interface that enables users to perform expressive music with conducting action. Several previously developed conducting interfaces do not satisfy users who have conducting experience because the feedback from the conducting action does not always correspond with a natural performance. The tempo scheduler, which is the main engine of a conducting system, must be improved. V.P. solves this problem by introducing heuristics of conducting an orchestra in detecting beats, applying rules regarding the tempo expression in a bar, etc. We confirmed with users that the system realized a high "following" performance and had musical persuasiveness.
@inproceedings{Baba2010, author = {Baba, Takashi and Hashida, Mitsuyo and Katayose, Haruhiro}, title = {``VirtualPhilharmony'': A Conducting System with Heuristics of Conducting an Orchestra}, pages = {263--270}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177715}, url = {http://www.nime.org/proceedings/2010/nime2010_263.pdf}, keywords = {Conducting system, heuristics, sensor, template.} }
2010. New Sensors and Pattern Recognition Techniques for String Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 271–276. http://doi.org/10.5281/zenodo.1177779
Abstract
Download PDF DOI
Pressure, motion, and gesture are important parameters in musical instrument playing. Pressure sensing makes it possible to interpret the complex hidden forces which appear during playing a musical instrument. The combination of our new sensor setup with pattern recognition techniques like the recently developed ordered means models allows fast and precise recognition of highly skilled playing techniques. This includes left- and right-hand analysis as well as a combination of both. In this paper we show bow position recognition for string instruments by means of support vector regression machines on the right-hand finger pressure, as well as bowing recognition and inaccurate playing detection with ordered means models. We also introduce a new left-hand and chin pressure sensing method for coordination and position change analysis. Our methods, in combination with our audio, video, and gesture recording software, can be used for teaching and exercising. Especially studies of complex movements and finger force distribution changes can benefit from such an approach. Practical applications include the recognition of inaccuracy, cramping, or malposition, and, last but not least, the development of augmented instruments and new playing techniques.
@inproceedings{Grosshauser2010, author = {}, title = {New Sensors and Pattern Recognition Techniques for String Instruments}, pages = {271--276}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177779}, url = {http://www.nime.org/proceedings/2010/nime2010_271.pdf}, keywords = {left hand,nime10,ordered means models,pressure,sensor,strings} }
Tilo Hähnel and Axel Berndt. 2010. Expressive Articulation for Synthetic Music Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 277–282. http://doi.org/10.5281/zenodo.1177789
Abstract
Download PDF DOI
As one of the main expressive features in music, articulation affects a wide range of tone attributes. Based on experimental recordings, we analyzed human articulation in the late Baroque style. The results are useful both for the understanding of historically informed performance practices and for further progress in synthetic performance generation. This paper reports on our findings and their implementation in a performance system. Because of its flexibility and universality, the system allows more than Baroque articulation.
@inproceedings{Hahnel2010b, author = {H\"{a}hnel, Tilo and Berndt, Axel}, title = {Expressive Articulation for Synthetic Music Performances}, pages = {277--282}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177789}, url = {http://www.nime.org/proceedings/2010/nime2010_277.pdf}, keywords = {Expressive Performance, Articulation, Historically Informed Performance} }
Andrew R. Brown. 2010. Network Jamming : Distributed Performance using Generative Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 283–286. http://doi.org/10.5281/zenodo.1177723
Abstract
Download PDF DOI
Generative music systems can be played by musicians who manipulate the values of algorithmic parameters, and their data-centric nature provides an opportunity for coordinated interaction amongst a group of systems linked over IP networks; a practice we call Network Jamming. This paper outlines the characteristics of this networked performance practice and discusses the types of mediated musical relationships and ensemble configurations that can arise. We have developed and tested the jam2jam network jamming software over recent years. We describe this system, draw from our experiences with it, and use it to illustrate some characteristics of Network Jamming.
@inproceedings{Brown2010, author = {Brown, Andrew R.}, title = {Network Jamming : Distributed Performance using Generative Music}, pages = {283--286}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177723}, url = {http://www.nime.org/proceedings/2010/nime2010_283.pdf}, keywords = {collaborative,ensemble,generative,interaction,network,nime10} }
Ivar Frounberg, Kjell Tore Innervik, and Alexander R. Jensenius. 2010. Glass Instruments – From Pitch to Timbre. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 287–290. http://doi.org/10.5281/zenodo.1177773
Abstract
Download PDF DOI
The paper reports on the development of prototypes of glass instruments. The focus has been on developing acoustic instruments specifically designed for electronic treatment, where timbral qualities have had priority over pitch. The paper starts with a brief historical overview of glass instruments and their artistic use. Then follows an overview of the glass blowing process. Finally, the musical use of the instruments is discussed.
@inproceedings{Frounberg2010, author = {Frounberg, Ivar and Innervik, Kjell Tore and Jensenius, Alexander R.}, title = {Glass Instruments -- From Pitch to Timbre}, pages = {287--290}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177773}, url = {http://www.nime.org/proceedings/2010/nime2010_287.pdf}, keywords = {glass instruments,nime,nime10,performance practice} }
Chris Kiefer. 2010. A Malleable Interface for Sonic Exploration. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 291–296. http://doi.org/10.5281/zenodo.1177823
Abstract
Download PDF DOI
Input devices for controlling music software can benefit from exploiting the use of perceptual-motor skill in interaction. The project described here is a new musical controller, designed with the aim of enabling intuitive and nuanced interaction through direct physical manipulation of malleable material. The controller is made from conductive foam. This foam changes electrical resistance when deformed; the controller works by measuring resistance at multiple points in a single piece of foam in order to track its shape. These measurements are complex and interdependent, so an echo state network, a form of recurrent neural network, is employed to translate the sensor readings into usable control data. A cube-shaped controller was built and evaluated in the context of the haptic exploration of sound synthesis parameter spaces. Eight participants experimented with the controller and were interviewed about their experiences. The controller achieves its aim of enabling intuitive interaction, but in terms of nuanced interaction, accuracy and repeatability were issues for some participants. It is not clear from the short evaluation study whether these issues would improve with practice; a longitudinal study that gives musicians time to practice and find the creative limitations of the controller would help to evaluate this fully. The evaluation highlighted interesting issues concerning the high-level nature of malleable control and different approaches to sonic exploration.
@inproceedings{Kiefer2010, author = {Kiefer, Chris}, title = {A Malleable Interface for Sonic Exploration}, pages = {291--296}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177823}, url = {http://www.nime.org/proceedings/2010/nime2010_291.pdf}, keywords = {Musical Controller, Reservoir Computing, Human Computer Interaction, Tangible User Interface, Evaluation} }
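The echo state network mentioned in the abstract above, which turns interdependent foam-resistance readings into usable control data, could look roughly like the following minimal numpy sketch. The reservoir size, leak rate, spectral radius, and training data are invented for illustration and are not taken from the paper.

# Minimal echo state network sketch: map multi-point foam resistance
# readings to a control signal. All sizes and constants are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_res = 8, 200                       # 8 hypothetical resistance taps
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def run_reservoir(inputs, leak=0.3):
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train a linear readout (ridge regression) from reservoir states to targets.
readings = rng.uniform(size=(500, n_in))
targets = readings.mean(axis=1)            # stand-in control parameter
S = run_reservoir(readings)
W_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(n_res), S.T @ targets)
control = run_reservoir(readings[:10]) @ W_out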
Victor Zappi, Andrea Brogni, and Darwin Caldwell. 2010. OSC Virtual Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 297–302. http://doi.org/10.5281/zenodo.1177931
Abstract
Download PDF DOI
The number of artists who express themselves through music in an unconventional way is constantly growing. This trend strongly depends on the wide diffusion of laptops, which have proved to be powerful and flexible musical devices. However, laptops still lack flexible interfaces specifically designed for music creation in live and studio performances. To resolve this issue, many controllers have been developed, taking into account not only the performer's needs and habits during music creation, but also the audience's desire to visually understand how the performer's gestures are linked to the way music is made. In line with the common need for an adaptable visual interface to manipulate music, in this paper we present a custom tridimensional controller, based on the Open Sound Control protocol and completely designed to work inside Virtual Reality: simple geometrical shapes can be created to directly control loop triggering and parameter modification, just using free-hand interaction.
@inproceedings{Zappi2010, author = {Zappi, Victor and Brogni, Andrea and Caldwell, Darwin}, title = {OSC Virtual Controller}, pages = {297--302}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177931}, url = {http://www.nime.org/proceedings/2010/nime2010_297.pdf}, keywords = {Glove device, Music controller, Virtual Reality, OSC, control mapping} }
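Since the controller above communicates over Open Sound Control, a client that triggers a loop or adjusts a parameter might look like the sketch below, using the python-osc package. The host, port, and OSC address patterns are assumptions, not the paper's actual namespace.

# Illustrative OSC client: send loop-trigger and parameter messages
# of the kind a virtual-reality controller could emit.
# The host, port, and OSC address patterns are assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # hypothetical sound engine

client.send_message("/loop/1/trigger", 1)                   # start loop 1
client.send_message("/loop/1/volume", 0.8)                   # continuous parameter
client.send_message("/shape/3/position", [0.2, 0.5, 1.4])    # 3D hand position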
Smilen Dimitrov. 2010. Extending the Soundcard for Use with Generic DC Sensors Demonstrated by Revisiting a Vintage ISA Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 303–308. http://doi.org/10.5281/zenodo.1177755
Abstract
Download PDF DOI
The sound card, anno 2010, is a ubiquitous part of almost any personal computing system; what was once considered high-end, CD-quality audio fidelity is today found in most common sound cards. The increased presence of multichannel devices, along with the high sampling frequency, makes the sound card desirable as a generic interface for the acquisition of analog signals in prototyping of sensor-based music interfaces. However, due to the need for coupling capacitors at a sound card's inputs and outputs, the use of a sound card as a generic signal interface is limited to signals not carrying information in a constant DC component. Through a revisit of a card design for the (now defunct) ISA bus, this paper proposes the use of analog gates, controllable from software, for bypassing the DC-filtering input sections, thereby allowing the user an arbitrary choice of whether a sound card input channel is to be used as a generic analog-to-digital sensor interface. Issues regarding the use of obsolete technology, as well as educational aspects, are discussed.
@inproceedings{Dimitrov2010, author = {Dimitrov, Smilen}, title = {Extending the Soundcard for Use with Generic {DC} Sensors Demonstrated by Revisiting a Vintage ISA Design}, pages = {303--308}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177755}, url = {http://www.nime.org/proceedings/2010/nime2010_303.pdf}, keywords = {dc,isa,nime10,sensors,soundcard} }
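Once an input channel is switched to DC coupling as described above, reading a slowly varying sensor voltage reduces to averaging blocks of samples from that input. The sketch below uses the python-sounddevice package; the device, sampling rate, block size, and scaling are assumptions, and ordinary (AC-coupled) sound cards would not pass the DC level this code expects.

# Illustrative sketch: read a block of samples from one sound card input
# and average it to get a quasi-DC sensor value. Assumes the input has
# been modified for DC coupling; device index and scaling are assumptions.
import numpy as np
import sounddevice as sd

FS = 44100          # sampling rate in Hz
BLOCK = 4410        # 100 ms worth of samples

def read_dc_value(channel=0):
    block = sd.rec(BLOCK, samplerate=FS, channels=channel + 1, blocking=True)
    return float(np.mean(block[:, channel]))   # mean of the block ~ DC level

print(read_dc_value())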
Sylvain Le Groux, Jonatas Manzolli, Paul F. Verschure, et al. 2010. Disembodied and Collaborative Musical Interaction in the Multimodal Brain Orchestra. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 309–314. http://doi.org/10.5281/zenodo.1177831
Abstract
Download PDF DOI
Most new digital musical interfaces have evolved upon the intuitive idea that there is a causality between sonic output and physical actions. Nevertheless, the advent of brain-computer interfaces (BCI) now allows us to directly access subjective mental states and express these in the physical world without bodily actions. In the context of an interactive and collaborative live performance, we propose to exploit novel brain-computer technologies to achieve unmediated brain control over music generation and expression. We introduce a general framework for the generation, synchronization and modulation of musical material from brain signals and describe its use in the realization of Xmotion, a multimodal performance for a "brain quartet".
@inproceedings{LeGroux2010, author = {Le Groux, Sylvain and Manzolli, Jonatas and Verschure, Paul F. and Sanchez, Marti and Luvizotto, Andre and Mura, Anna and Valjamae, Aleksander and Guger, Christoph and Prueckl, Robert and Bernardet, Ulysses}, title = {Disembodied and Collaborative Musical Interaction in the Multimodal Brain Orchestra}, pages = {309--314}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177831}, url = {http://www.nime.org/proceedings/2010/nime2010_309.pdf}, keywords = {Brain-computer Interface, Biosignals, Interactive Music System, Collaborative Musical Performance} }
Jordan Hochenbaum, Owen Vallis, Dimitri Diakopoulos, Jim Murphy, and Ajay Kapur. 2010. Designing Expressive Musical Interfaces for Tabletop Surfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 315–318. http://doi.org/10.5281/zenodo.1177807
Abstract
Download PDF DOI
This paper explores the evolution of collaborative, multi-user, musical interfaces developed for the Bricktable interactive surface. Two key types of applications are addressed: user interfaces for artistic installation and interfaces for musical performance. In describing our software, we provide insight on the methodologies and practicalities of designing interactive musical systems for tabletop surfaces. Additionally, subtleties of working with custom-designed tabletop hardware are addressed.
@inproceedings{Hochenbaum2010a, author = {Hochenbaum, Jordan and Vallis, Owen and Diakopoulos, Dimitri and Murphy, Jim and Kapur, Ajay}, title = {Designing Expressive Musical Interfaces for Tabletop Surfaces}, pages = {315--318}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177807}, url = {http://www.nime.org/proceedings/2010/nime2010_315.pdf}, keywords = {Bricktable, Multi-touch Interface, Tangible Interface, Generative Music, Music Information Retrieval} }
Wendy Suiter. 2010. Toward Algorithmic Composition of Expression in Music Using Fuzzy Logic. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 319–322. http://doi.org/10.5281/zenodo.1177901
Abstract
Download PDF DOI
This paper introduces the concept of composing expressive music using the principles of Fuzzy Logic. The paper provides a conceptual model of a musical work which follows compositional decision-making processes. Significant features of this Fuzzy Logic framework are its inclusiveness, through the consideration of all the many and varied musical details, and its incorporation of the imprecision that characterises musical terminology and discourse. A significant attribute of my Fuzzy Logic method is that it traces the trajectory of all musical details, since it is both the individual elements and their combination over time which are significant to the effectiveness of a musical work in achieving its goals. The goal of this work is to find a set of elements and rules which will ultimately enable the construction of a generalised algorithmic compositional system which can produce expressive music if so desired.
@inproceedings{Suiter2010, author = {Suiter, Wendy}, title = {Toward Algorithmic Composition of Expression in Music Using Fuzzy Logic}, pages = {319--322}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177901}, url = {http://www.nime.org/proceedings/2010/nime2010_319.pdf}, keywords = {fuzzy logic,music composition,musical expression,nime10} }
Kirsty Beilharz, Andrew Vande Moere, Barbara Stiel, Claudia Calo, Martin Tomitsch, and Adrian Lombard. 2010. Expressive Wearable Sonification and Visualisation : Design and Evaluation of a Flexible Display. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 323–326. http://doi.org/10.5281/zenodo.1177717
Abstract
Download PDF DOI
In this paper we examine a wearable sonification and visualisation display that uses physical analogue visualisation and digital sonification to convey feedback about the wearer’s activity and environment. Intended to bridge a gap between art aesthetics, fashionable technologies and informative physical computing, the user experience evaluation reveals the wearers’ responses and understanding of a novel medium for wearable expression. The study reveals useful insights for wearable device design in general and future iterations of this sonification and visualisation display.
@inproceedings{Beilharz2010, author = {Beilharz, Kirsty and Vande Moere, Andrew and Stiel, Barbara and Calo, Claudia and Tomitsch, Martin and Lombard, Adrian}, title = {Expressive Wearable Sonification and Visualisation : Design and Evaluation of a Flexible Display}, pages = {323--326}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177717}, url = {http://www.nime.org/proceedings/2010/nime2010_323.pdf}, keywords = {Wearable display, sonification, visualisation, design aesthetics, physical computing, multimodal expression, bimodal display} }
Jeremiah Nugroho and Kirsty Beilharz. 2010. Understanding and Evaluating User Centred Design in Wearable Expressions. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 327–330. http://doi.org/10.5281/zenodo.1177867
Abstract
Download PDF DOI
In this paper, we describe the shaping factors, which simplify and help us understand the multi-dimensional aspects of designing Wearable Expressions. These descriptive shaping factors contribute to both the design and user-experience evaluation of Wearable Expressions.
@inproceedings{Nugroho2010, author = {Nugroho, Jeremiah and Beilharz, Kirsty}, title = {Understanding and Evaluating User Centred Design in Wearable Expressions}, pages = {327--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177867}, url = {http://www.nime.org/proceedings/2010/nime2010_327.pdf}, keywords = {Wearable expressions, body, user-centered design.} }
Sihwa Park, Seunghun Kim, Samuel Lee, and Woon Seung Yeo. 2010. Online Map Interface for Creative and Interactive. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 331–334. http://doi.org/10.5281/zenodo.1177877
Abstract
Download PDF DOI
In this paper, we discuss the musical potential of COMPath, an online map-based music-making tool, as a novel and unique interface for interactive music composition and performance. COMPath provides an intuitive environment for creative music making by sonification of georeferenced data. Users can generate musical events with simple and familiar actions on an online map interface; a set of local information is collected along the user-drawn route and then interpreted as sounds of various musical instruments. We discuss the musical interpretation of routes on a map, review the design and implementation of COMPath, and present selected sonification results with a focus on mapping strategies for map-based composition.
@inproceedings{Park2010, author = {Park, Sihwa and Kim, Seunghun and Lee, Samuel and Yeo, Woon Seung}, title = {Online Map Interface for Creative and Interactive}, pages = {331--334}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177877}, url = {http://www.nime.org/proceedings/2010/nime2010_331.pdf}, keywords = {Musical sonification, map interface, online map service, georeferenced data, composition, mashup} }
Aristotelis Hadjakos and Max Mühlhäuser. 2010. Analysis of Piano Playing Movements Spanning Multiple Touches. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 335–338. http://doi.org/10.5281/zenodo.1177791
Abstract
Download PDF DOI
Awareness of playing movements can help a piano student to improve technique. We are developing a piano pedagogy application that uses sensor data of hand and arm movement and generates feedback to increase movement awareness. This paper reports on a method for the analysis of piano playing movements. The method makes it possible to judge whether an active movement in a joint has occurred during a given time interval. This time interval may include one or more touches. The problem is complicated by the fact that the mechanical interaction between the arm and the piano action generates additional movements that are not under direct control of the player. The analysis method is able to ignore these movements and can therefore be used to provide useful feedback.
@inproceedings{Hadjakos2010, author = {Hadjakos, Aristotelis and M\"{u}hlh\"{a}user, Max}, title = {Analysis of Piano Playing Movements Spanning Multiple Touches}, pages = {335--338}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177791}, url = {http://www.nime.org/proceedings/2010/nime2010_335.pdf}, keywords = {nime10} }
Sebastian Heinz and Sile O’Modhrain. 2010. Designing a Shareable Musical TUI. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 339–342. http://doi.org/10.5281/zenodo.1177803
Abstract
Download PDF DOI
This paper proposes a design concept for a tangible interface for collaborative performances that incorporates two social factors present during performance: the individual creation and adaptation of technology, and the sharing of it within a community. These factors are identified using the example of a laptop ensemble and then applied to three existing collaborative performance paradigms. Finally, relevant technology, challenges and the current state of our implementation are discussed.
@inproceedings{Heinz2010, author = {Heinz, Sebastian and O'Modhrain, Sile}, title = {Designing a Shareable Musical TUI}, pages = {339--342}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177803}, url = {http://www.nime.org/proceedings/2010/nime2010_339.pdf}, keywords = {Tangible User Interfaces, collaborative performances, social factors} }
Adrian Freed. 2010. Visualizations and Interaction Strategies for Hybridization Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 343–347. http://doi.org/10.5281/zenodo.1177769
Abstract
Download PDF DOI
We present two complementary approaches for the visualization and interaction of dimensionally reduced data sets using hybridization interfaces. Our implementations privilege syncretic systems, allowing one to explore combinations (hybrids) of disparate elements of a data set through their placement in a 2-D space. The first approach allows for the placement of data points anywhere on the plane according to an anticipated performance strategy. The contribution (weight) of each data point varies according to a power function of the distance from the control cursor. The second approach uses constrained vertex-colored triangulations of manifolds with labels placed at the vertices of triangular tiles. Weights are computed by barycentric projection of the control cursor position.
@inproceedings{Freed2010, author = {Freed, Adrian}, title = {Visualizations and Interaction Strategies for Hybridization Interfaces}, pages = {343--347}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177769}, url = {http://www.nime.org/proceedings/2010/nime2010_343.pdf}, keywords = {Interpolation, dimension reduction, radial basis functions, triangular mesh} }
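The first weighting scheme in the abstract above, where each data point's contribution falls off as a power of its distance from the control cursor, can be written in a few lines. The exponent, the normalization to a convex combination, and the example preset positions are assumptions for illustration, not values from the paper.

# Illustrative inverse-distance-power weighting: each preset point gets a
# weight that falls off with distance from the cursor, then weights are
# normalized so hybrids are convex combinations. Exponent is an assumption.
import numpy as np

def hybrid_weights(points, cursor, power=2.0, eps=1e-9):
    d = np.linalg.norm(points - cursor, axis=1)
    w = 1.0 / (d + eps) ** power
    return w / w.sum()

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # preset positions
print(hybrid_weights(points, np.array([0.4, 0.3])))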
Björn Wöldecke, Christian Geiger, Holger Reckter, and Florian Schulz. 2010. ANTracks 2.0 — Generative Music on Multiple Multitouch Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 348–351. http://doi.org/10.5281/zenodo.1177921
Abstract
Download PDF DOI
In this paper we describe work in progress on generative music making on multi-touch devices. Our goal is to create a musical application framework for multiple casual users using state-of-the-art multitouch devices. We choose the metaphor of ants moving on a hexagonal grid to interact with a pitch pattern. The set of devices used includes a custom-built multitouch table and a number of iPhones to jointly create musical expressions.
@inproceedings{Woldecke2010, author = {W\"{o}ldecke, Bj\"{o}rn and Geiger, Christian and Reckter, Holger and Schulz, Florian}, title = {ANTracks 2.0 --- Generative Music on Multiple Multitouch Devices}, pages = {348--351}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177921}, url = {http://www.nime.org/proceedings/2010/nime2010_348.pdf}, keywords = {Generative music, mobile interfaces, multitouch interaction} }
Laewoo Kang and Hsin-Yi Chien. 2010. Hé : Calligraphy as a Musical Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 352–355. http://doi.org/10.5281/zenodo.1177819
Abstract
Download PDF DOI
The project Hé (和, harmony) is a sound installation that enables a user to play music by writing calligraphy. We developed a system where calligraphic symbols can be detected and converted to a sound composed of pitch, pitch length, and volume through MIDI and serial communication. The Hé sound installation involves a micro-controller, photocells, and multiplexers. A DC motor controls the speed of a spooled paper roll that is capable of setting the music tempo. This paper presents the design concept and implementation of Hé. We discuss the major research issues such as using photocells for detecting components of calligraphy like thickness and location. Hardware and software details are also discussed. Finally, we explore the potential for further extending musical and visual experience through this project's applications and outcomes.
@inproceedings{Kang2010, author = {Kang, Laewoo and Chien, Hsin-Yi}, title = {H\'{e} : Calligraphy as a Musical Interface}, pages = {352--355}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177819}, url = {http://www.nime.org/proceedings/2010/nime2010_352.pdf}, keywords = {Interactive music interface, calligraphy, graphical music composing, sonification} }
Martin Marier. 2010. The Sponge A Flexible Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 356–359. http://doi.org/10.5281/zenodo.1177839
Abstract
Download PDF DOI
The sponge is an interface that allows a clear link to be established between gesture and sound in electroacoustic music. The goals in developing the sponge were to reintroduce the pleasure of playing and to improve the interaction between the composer/performer and the audience. It has been argued that expenditure of effort or energy is required to obtain expressive interfaces. The sponge favors an energy-sound relationship in two ways: 1) it senses acceleration, which is closely related to energy; and 2) it is made out of a flexible material (foam) that requires effort to be squeezed or twisted. Some of the mapping strategies used in a performance context with the sponge are discussed.
@inproceedings{Marier2010, author = {Marier, Martin}, title = {The Sponge A Flexible Interface}, pages = {356--359}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177839}, url = {http://www.nime.org/proceedings/2010/nime2010_356.pdf}, keywords = {Interface, electroacoustic music, performance, expressivity, mapping} }
Lawrence Fyfe, Sean Lynch, Carmen Hull, and Sheelagh Carpendale. 2010. SurfaceMusic : Mapping Virtual Touch-based Instruments to Physical Models. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 360–363. http://doi.org/10.5281/zenodo.1177777
Abstract
Download PDF DOI
In this paper we discuss SurfaceMusic, a tabletop music system in which touch gestures are mapped to physical models of instruments. With physical models, parametric control over the sound allows for a more natural interaction between gesture and sound. We discuss the design and implementation of a simple gestural interface for interacting with virtual instruments and a messaging system that conveys gesture data to the audio system.
@inproceedings{Fyfe2010, author = {Fyfe, Lawrence and Lynch, Sean and Hull, Carmen and Carpendale, Sheelagh}, title = {SurfaceMusic : Mapping Virtual Touch-based Instruments to Physical Models}, pages = {360--363}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177777}, url = {http://www.nime.org/proceedings/2010/nime2010_360.pdf}, keywords = {Tabletop, multi-touch, gesture, physical model, Open Sound Control.} }
Aengus Martin, Sam Ferguson, and Kirsty Beilharz. 2010. Mechanisms for Controlling Complex Sound Sources : Applications to Guitar Feedback Control. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 364–367. http://doi.org/10.5281/zenodo.1177841
Abstract
Download PDF DOI
Many musical instruments have interfaces which emphasise the pitch of the sound produced over other perceptual characteristics, such as its timbre. This is at odds with the musical developments of the last century. In this paper, we introduce a method for replacing the interface of musical instruments (both conventional and unconventional) with a more flexible interface which can present the instrument's available sounds according to a variety of different perceptual characteristics, such as their brightness or roughness. We apply this method to an instrument of our own design which comprises an electro-mechanically controlled electric guitar and amplifier configured to produce feedback tones.
@inproceedings{Martin2010, author = {Martin, Aengus and Ferguson, Sam and Beilharz, Kirsty}, title = {Mechanisms for Controlling Complex Sound Sources : Applications to Guitar Feedback Control}, pages = {364--367}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177841}, url = {http://www.nime.org/proceedings/2010/nime2010_364.pdf}, keywords = {Concatenative Synthesis, Feedback, Guitar} }
Jim Torresen, Eirik Renton, and Alexander R. Jensenius. 2010. Wireless Sensor Data Collection based on ZigBee Communication. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 368–371. http://doi.org/10.5281/zenodo.1177911
Abstract
Download PDF DOI
This paper presents a comparison of different configurations of a wireless sensor system for capturing human motion. The systems consist of sensor elements which wirelessly transfer motion data to a receiver element. The sensor elements consist of a microcontroller, accelerometer(s) and a radio transceiver. The receiver element consists of a radio receiver connected through a microcontroller to a computer for real-time sound synthesis. The wireless transmission between the sensor elements and the receiver element is based on the low-rate IEEE 802.15.4/ZigBee standard. A configuration with several accelerometers connected by wire to a wireless sensor element is compared to using multiple wireless sensor elements with only one accelerometer in each. The study shows that it would be feasible to connect 5-6 accelerometers in the given setups. Sensor data processing can be done in either the receiver element or the sensor element. For various reasons it can be reasonable to implement some sensor data processing in the sensor element. The paper also looks at how much time would typically be needed for a simple pre-processing task.
@inproceedings{Torresen2010, author = {Torresen, Jim and Renton, Eirik and Jensenius, Alexander R.}, title = {Wireless Sensor Data Collection based on {ZigBee} Communication}, pages = {368--371}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177911}, url = {http://www.nime.org/proceedings/2010/nime2010_368.pdf}, keywords = {wireless communication, ZigBee, microcontroller} }
Javier Jaimovich and Benjamin Knapp. 2010. Synchronization of Multimodal Recordings for Musical Performance Research. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 372–374. http://doi.org/10.5281/zenodo.1177815
Abstract
Download PDF DOI
The past decade has seen an increase of low-cost technology for sensor data acquisition, which has been utilized for the expanding field of research in gesture measurement for music performance. Unfortunately, these devices are still far from being compatible with the audiovisual recording platforms which have been used to record synchronized streams of data. In this paper, we describe a practical solution for simultaneous recording of heterogeneous multimodal signals. The recording system presented uses MIDI Time Code to time-stamp sensor data and to synchronize with standard video and audio recording systems. We also present a set of tools for recording sensor data, as well as a set of analysis tools to evaluate in real time the sample rate of different signals and the overall synchronization status of the recording system.
@inproceedings{Jaimovich2010a, author = {Jaimovich, Javier and Knapp, Benjamin}, title = {Synchronization of Multimodal Recordings for Musical Performance Research}, pages = {372--374}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177815}, url = {http://www.nime.org/proceedings/2010/nime2010_372.pdf}, keywords = {Synchronization, Multimodal Signals, Sensor Data Acquisition, Signal Recording.} }
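Time-stamping sensor frames with MIDI Time Code, as described above, amounts to converting an hours:minutes:seconds:frames quadruplet to seconds at the session frame rate. A minimal conversion is sketched below; the 25 fps frame rate is an assumption (MTC also supports 24, 29.97 drop-frame and 30 fps), and the example timestamp is invented.

# Illustrative conversion of a MIDI Time Code position to seconds, for
# aligning sensor samples with audio/video. The 25 fps frame rate is an
# assumption; MTC also supports 24, 29.97 (drop-frame) and 30 fps.
def mtc_to_seconds(hours, minutes, seconds, frames, fps=25.0):
    return hours * 3600 + minutes * 60 + seconds + frames / fps

# e.g. a sensor frame stamped 00:03:12:10 at 25 fps
print(mtc_to_seconds(0, 3, 12, 10))   # 192.4 seconds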
Giuseppe Torre, Mark O’Leary, and Brian Tuohy. 2010. POLLEN A Multimedia Interactive Network Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 375–376. http://doi.org/10.5281/zenodo.1177909
Abstract
Download PDF DOI
This paper describes the development of an interactive 3D audio/visual and network installation entitled POLLEN. Specifically designed for large computer laboratories, the artwork explores the regeneration of those spaces through the creation of a fully immersive multimedia art experience. The paper describes the technical, aesthetic and educational development of the piece.
@inproceedings{Torre2010, author = {Torre, Giuseppe and O'Leary, Mark and Tuohy, Brian}, title = {POLLEN A Multimedia Interactive Network Installation}, pages = {375--376}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177909}, url = {http://www.nime.org/proceedings/2010/nime2010_375.pdf}, keywords = {Interactive, Installation, Network, 3D Physics Emulator, Educational Tools, Public Spaces, Computer Labs, Sound Design, Site-Specific Art} }
Xiaoyang Feng. 2010. Irregular Incurve. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 377–379. http://doi.org/10.5281/zenodo.1177765
Abstract
Download PDF DOI
Irregular Incurve is a MIDI-controllable robotic string instrument. Its twelve independent string units together cover a complete twelve-note musical scale. Each string can be plucked by a motor-controlled guitar pick. A MIDI keyboard is attached to the instrument and serves as an interface for real-time interaction between the instrument and the audience. Irregular Incurve can also play preprogrammed music by itself. This paper presents the design concept and the technical solutions used to realize the functionality of Irregular Incurve. Planned future features are also discussed.
@inproceedings{Feng2010, author = {Feng, Xiaoyang}, title = {Irregular Incurve}, pages = {377--379}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177765}, url = {http://www.nime.org/proceedings/2010/nime2010_377.pdf}, keywords = {NIME, Robotics, Acoustic, Interactive, MIDI, Real time Performance, String Instrument, Arduino, Servo, Motor Control} }
Chikashi Miyama. 2010. Peacock : A Non-Haptic 3D Performance Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 380–382. http://doi.org/10.5281/zenodo.1177859
Abstract
Download PDF DOI
Peacock is a newly designed interface for improvisational performances. The interface is equipped with thirty-five proximity sensors arranged in five rows and seven columns. The sensors detect the movements of a performer’s hands and arms in a three-dimensional space above them. The interface digitizes the output of the sensors into sets of high-precision digital packets, and sends them to a patch running in Pd-extended with a sufficiently high bandwidth for performances, with almost no computational resource consumption in Pd. The precision, speed, and efficiency of the system enable the sonification of hand gestures in realtime without the need to attach any physical devices to the performer’s body. This paper traces the interface’s evolution, discussing relevant technologies, hardware construction, system design, and input monitoring.
@inproceedings{Miyama2010, author = {Miyama, Chikashi}, title = {Peacock : A Non-Haptic {3D} Performance Interface}, pages = {380--382}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177859}, url = {http://www.nime.org/proceedings/2010/nime2010_380.pdf}, keywords = {Musical interface, Sensor technologies, Computer music, Hardware and software design} }
Jukka Holm, Harri Holm, and Jarno Seppänen. 2010. Associating Emoticons with Musical Genres. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 383–386. http://doi.org/10.5281/zenodo.1177809
Abstract
Download PDF DOI
Music recommendation systems can observe a user’s personal preferences and suggest new tracks from a large online catalog. In the case of context-aware recommenders, the user’s current emotional state plays an important role. One simple way to visualize emotions and moods is with graphical emoticons. In this study, we researched a high-level mapping between genres, as descriptions of music, and emoticons, as descriptions of emotions and moods. An online questionnaire with 87 participants was arranged. Based on the results, we present a list of genres that could be used as a starting point for making recommendations that fit the current mood of the user.
@inproceedings{Holm2010, author = {Holm, Jukka and Holm, Harri and Sepp\"{a}nen, Jarno}, title = {Associating Emoticons with Musical Genres}, pages = {383--386}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177809}, url = {http://www.nime.org/proceedings/2010/nime2010_383.pdf}, keywords = {Music, music recommendation, context, facial expression, mood, emotion, emoticon, and musical genre.} }
Yoichi Nagashima. 2010. Untouchable Instrument "Peller-Min". Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 387–390. http://doi.org/10.5281/zenodo.1177865
Abstract
Download PDF DOI
This paper is a report on the development of a new musical instrument whose main concept is "Untouchable". The key ideas of this instrument are "sound generation by body gesture (both hands)" and "sound generation by kneading with hands". The composition project was completed with the premiere of a new work, "controllable untouchableness", performed with this new instrument in December 2009.
@inproceedings{Nagashima2010, author = {Nagashima, Yoichi}, title = {Untouchable Instrument "Peller-Min"}, pages = {387--390}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177865}, url = {http://www.nime.org/proceedings/2010/nime2010_387.pdf}, keywords = {Theremin, untouchable, distance sensor, Propeller processor} }
Javier Jaimovich. 2010. Ground Me ! An Interactive Sound Art Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 391–394. http://doi.org/10.5281/zenodo.1177813
Abstract
Download PDF DOI
This paper describes the design, implementation and outcome of Ground Me!, an interactive sound installation set up in the Sonic Lab of the Sonic Arts Research Centre. The site-specific interactive installation consists of multiple copper poles hanging from the Sonic Lab’s ceiling panels, which trigger samples of electricity sounds when grounded through the visitors’ bodies to the space’s metallic floor.
@inproceedings{Jaimovich2010, author = {Jaimovich, Javier}, title = {Ground Me ! An Interactive Sound Art Installation}, pages = {391--394}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177813}, url = {http://www.nime.org/proceedings/2010/nime2010_391.pdf}, keywords = {Interactive sound installation, body impedance, skin conductivity, site-specific sound installation, human network, Sonic Lab, Arduino.} }
Norma S. Savage, Syed R. Ali, and Norma E. Chavez. 2010. Mmmmm: A Multi-modal Mobile Music Mixer. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 395–398. http://doi.org/10.5281/zenodo.1177887
Abstract
Download PDF DOI
This paper presents Mmmmm, a Multimodal Mobile Music Mixer that provides DJs with a new interface for mixing music on the Nokia N900 phone. Mmmmm presents a novel way for DJs to become more interactive with their audience and vice versa. The software developed for the N900 mobile phone utilizes the phone’s built-in accelerometer and Bluetooth audio streaming capabilities to mix and apply effects to music using hand gestures, and to stream the mixed audio to Bluetooth speakers. This allows the DJ to move about the environment and become familiar with the audience, turning DJing into an interactive, audience-engaging process. Mmmmm is designed so that the DJ can use hand gestures and haptic feedback to perform the various tasks involved in DJing (mixing, applying effects, etc.). This allows the DJ to focus on the crowd, providing a better intuition of what kind of music or mixing style the audience is likely to enjoy and engage with. Additionally, Mmmmm has an Ambient Tempo Detection mode in which the phone’s camera is used to detect the amount of movement in the environment and suggest to the DJ the tempo of music that should be played. This mode uses frame differencing and pixel change over time to estimate how fast the environment is changing, loosely correlating with how fast the audience is dancing or the lights are flashing in the scene. By determining the ambient tempo of the environment, the DJ gains a better sense of the type of music that would best fit the venue. Mmmmm helps novice DJs build a better repertoire by allowing them to interact with their audience and receive direct feedback on their performance. The DJ can choose to use these modes of interaction and performance, or use traditional DJ controls through Mmmmm’s N900 touch-screen graphical user interface.
@inproceedings{Savage2010, author = {Savage, Norma S. and Ali, Syed R. and Chavez, Norma E.}, title = {Mmmmm: A Multi-modal Mobile Music Mixer}, pages = {395--398}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177887}, url = {http://www.nime.org/proceedings/2010/nime2010_395.pdf}, keywords = {Multi-modal, interaction, music, mixer, mobile, interactive, DJ, smart phones, Nokia, n900, touch screen, accelerometer, phone, audience} }
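The Ambient Tempo Detection mode described in the abstract above relies on frame differencing. A minimal sketch of that idea (assumed parameters and mapping, not the paper's implementation) is to average the absolute pixel change between consecutive grayscale frames and map that activity level onto a suggested BPM range.

import numpy as np

# Map average frame-to-frame pixel change to a suggested tempo.
# The BPM range and scaling are illustrative assumptions.

def ambient_activity(prev_frame, frame):
    """Mean absolute difference between two grayscale frames (0..255 arrays)."""
    return np.abs(frame.astype(float) - prev_frame.astype(float)).mean()

def suggest_bpm(activity, low_bpm=80, high_bpm=140, max_activity=50.0):
    level = min(activity / max_activity, 1.0)
    return low_bpm + level * (high_bpm - low_bpm)

prev = np.zeros((120, 160), dtype=np.uint8)
cur = np.random.randint(0, 30, size=(120, 160), dtype=np.uint8)
print(round(suggest_bpm(ambient_activity(prev, cur))))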
Chih-Chieh Tsai, Cha-Lin Liu, and Teng-Wen Chang. 2010. An Interactive Responsive Skin for Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 399–402. http://doi.org/10.5281/zenodo.1177915
Abstract
Download PDF DOI
With the decreasing audience for classical music performance, this research aims to develop a performance-enhancement system, called AIDA, to help classical performers communicate better with their audiences. Through three procedures (Input, Processing, Output), the AIDA system can sense and analyze performers’ body information and reflect it onto a responsive skin. The abstract and intangible emotional expressions of performers are thus transformed into tangible, concrete visual elements, lowering the audience’s threshold for music appreciation.
@inproceedings{Tsai2010, author = {Tsai, Chih-Chieh and Liu, Cha-Lin and Chang, Teng-Wen}, title = {An Interactive Responsive Skin for Music}, pages = {399--402}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177915}, url = {http://www.nime.org/proceedings/2010/nime2010_399.pdf}, keywords = {Interactive Performance, Ambient Environment, Responsive Skin, Music performance.} }
Nick Bryan-Kinns, Robin Fencott, Oussama Metatla, Shahin Nabavian, and Jennifer G. Sheridan. 2010. Interactional Sound and Music : Listening to CSCW, Sonification, and Sound Art. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 403–406. http://doi.org/10.5281/zenodo.1177727
Abstract
Download PDF DOI
In this paper we outline the emerging field of Interactional Sound and Music which concerns itself with multi-person technologically mediated interactions primarily using audio. We present several examples of interactive systems in our group, and reflect on how they were designed and evaluated. Evaluation techniques for collective, performative, and task oriented activities are outlined and compared. We emphasise the importance of designing for awareness in these systems, and provide examples of different awareness mechanisms.
@inproceedings{BryanKinns2010, author = {Bryan-Kinns, Nick and Fencott, Robin and Metatla, Oussama and Nabavian, Shahin and Sheridan, Jennifer G.}, title = {Interactional Sound and Music : Listening to CSCW, Sonification, and Sound Art}, pages = {403--406}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177727}, url = {http://www.nime.org/proceedings/2010/nime2010_403.pdf}, keywords = {Interactional, sound, music, mutual engagement, improvisation, composition, collaboration, awareness.} }
Ståle A. Skogstad, Alexander Refsum Jensenius, and Kristian Nymoen. 2010. Using IR Optical Marker Based Motion Capture for Exploring Musical Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 407–410. http://doi.org/10.5281/zenodo.1177895
Abstract
Download PDF DOI
The paper presents a conceptual overview of how optical infrared marker based motion capture systems (IrMoCap) can be used in musical interaction. First we present a review of related work of using IrMoCap for musical control. This is followed by a discussion of possible features which can be exploited. Finally, the question of mapping movement features to sound features is presented and discussed.
@inproceedings{Skogstad2010, author = {Skogstad, Ståle A. and Jensenius, Alexander Refsum and Nymoen, Kristian}, title = {Using {IR} Optical Marker Based Motion Capture for Exploring Musical Interaction}, pages = {407--410}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177895}, url = {http://www.nime.org/proceedings/2010/nime2010_407.pdf}, keywords = {nime10} }
Benjamin Buch, Pieter Coussement, and Lüder Schmidt. 2010. ”playing robot” : An Interactive Sound Installation in Human-Robot Interaction Design for New Media Art. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 411–414. http://doi.org/10.5281/zenodo.1177729
Abstract
Download PDF DOI
In this study, artistic human-robot interaction design is introduced as a means for scientific research and artistic investigation. It serves as a methodology for situated cognition, integrating empirical methods and computational modeling, and is exemplified by the installation playing robot. Its artistic purpose is to help create and explore robots as a new medium for art and entertainment. We discuss the use of finite state machines to organize robots’ behavioral reactions to sensor data, and give a brief outlook on structured observation as a potential method for data collection.
@inproceedings{Buch2010, author = {Buch, Benjamin and Coussement, Pieter and Schmidt, L\"{u}der}, title = {''playing robot'' : An Interactive Sound Installation in Human-Robot Interaction Design for New Media Art}, pages = {411--414}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177729}, url = {http://www.nime.org/proceedings/2010/nime2010_411.pdf}, keywords = {dynamic mapping, embodiment, finite state automata, human-robot interaction, new media art, nime10, structured observation} }
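As a generic illustration of the finite-state-machine idea mentioned in the abstract above (my sketch, not the installation's code), a robot's behavioral reactions to sensor data can be organized as named states with sensor-driven transitions. The state names and events are invented for illustration.

# Minimal finite state machine driving behavioral reactions to sensor input.
# State names, events and transitions are illustrative assumptions.

TRANSITIONS = {
    ("idle", "visitor_near"): "attend",
    ("attend", "loud_sound"): "play",
    ("play", "silence"): "idle",
}

def step(state, sensor_event):
    """Return the next state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, sensor_event), state)

state = "idle"
for event in ["visitor_near", "loud_sound", "silence"]:
    state = step(state, event)
    print(event, "->", state)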
Loïc Reboursière, Christian Frisson, Otso Lähdeoja, John A. Mills, Cécile Picard-Limpens, and Todor Todoroff. 2010. Multimodal Guitar : A Toolbox For Augmented Guitar Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 415–418. http://doi.org/10.5281/zenodo.1177881
Abstract
Download PDF DOI
This project aims at studying how recent interactive and interaction technologies can help extend how we play the guitar, thus defining the "multimodal guitar". Our contributions target three main axes: audio analysis, gestural control and audio synthesis. For this purpose, we designed and developed a freely available toolbox for augmented guitar performances, compatible with the PureData and Max/MSP environments, gathering tools for polyphonic pitch estimation, fretboard visualization and grouping, pressure sensing, modal synthesis, infinite sustain, rearranging looping and "smart" harmonizing.
@inproceedings{Reboursiere2010, author = {Reboursi\`{e}re, Lo\"{\i}c and Frisson, Christian and L\"{a}hdeoja, Otso and Mills, John A. and Picard-Limpens, C\'{e}cile and Todoroff, Todor}, title = {Multimodal Guitar : A Toolbox For Augmented Guitar Performances}, pages = {415--418}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177881}, url = {http://www.nime.org/proceedings/2010/nime2010_415.pdf}, keywords = {Augmented guitar, audio synthesis, digital audio effects, multimodal interaction, gestural sensing, polyphonic transcription, hexaphonic guitar} }
Michael Berger. 2010. The GRIP MAESTRO : Idiomatic Mappings of Emotive Gestures for Control of Live Electroacoustic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 419–422. http://doi.org/10.5281/zenodo.1177719
Abstract
Download PDF DOI
This paper introduces my research in physical interactive design with my "GRIP MAESTRO" electroacoustic performance interface. It then discusses the considerations involved in creating intuitive software mappings of emotive performative gestures such that they are idiomatic not only of the sounds they create but also of the physical nature of the interface itself.
@inproceedings{Berger2010, author = {Berger, Michael}, title = {The GRIP MAESTRO : Idiomatic Mappings of Emotive Gestures for Control of Live Electroacoustic Music}, pages = {419--422}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177719}, url = {http://www.nime.org/proceedings/2010/nime2010_419.pdf}, keywords = {emotive gesture and music,hall effect,human-controller interaction,musical mapping strategies,nime10,novel musical instrument,passive haptic feedback,sensor-augmented hand-exerciser} }
Kimberlee Headlee, Tatyana Koziupa, and Diana Siwiak. 2010. Sonic Virtual Reality Game : How Does Your Body Sound ? Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 423–426. http://doi.org/10.5281/zenodo.1177801
Abstract
Download PDF DOI
In this paper, we present an interactive system that uses the body as a generative tool for creating music. We explore innovative ways to make music, create self-awareness, and provide the opportunity for unique, interactive social experiences. The system uses a multi-player game paradigm, where players work together to add layers to a soundscape of three distinct environments. Various sensors and hardware are attached to the body and transmit signals to a workstation, where they are processed using Max/MSP. The game is divided into three levels, each of a different soundscape. The underlying purpose of our system is to move the player’s focus away from complexities of the modern urban world toward a more internalized meditative state. The system is currently viewed as an interactive installation piece, but future iterations have potential applications in music therapy, bio games, extended performance art, and as a prototype for new interfaces for musical expression.
@inproceedings{Headlee2010, author = {Headlee, Kimberlee and Koziupa, Tatyana and Siwiak, Diana}, title = {Sonic Virtual Reality Game : How Does Your Body Sound ?}, pages = {423--426}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177801}, url = {http://www.nime.org/proceedings/2010/nime2010_423.pdf}, keywords = {biomusic, collaborative, expressive, hci, interactive, interactivity design, interface for musical expression, multimodal, musical mapping strategies,nime10,performance,sonification} }
Alex Stahl and Patricia Clemens. 2010. Auditory Masquing : Wearable Sound Systems for Diegetic Character Voices. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 427–430. http://doi.org/10.5281/zenodo.1177899
Abstract
Download PDF DOI
Maintaining a sense of personal connection between increasingly synthetic performers and increasingly diffuse audiences is vital to storytelling and entertainment. Sonic intimacy is important, because voice is one of the highest-bandwidth channels for expressing our real and imagined selves. New tools for highly focused spatialization could help improve acoustical clarity, encourage audience engagement, reduce noise pollution and inspire creative expression. We have a particular interest in embodied, embedded systems for vocal performance enhancement and transformation. This short paper describes work in progress on a toolkit for high-quality wearable sound suits. Design goals include tailored directionality and resonance, full bandwidth, and sensible ergonomics. Engineering details to accompany a demonstration of recent prototypes are presented, highlighting a novel magnetostrictive flextensional transducer. Based on initial observations we suggest that vocal acoustic output from the torso, and spatial perception of situated low frequency sources, are two areas deserving greater attention and further study.
@inproceedings{Stahl2010, author = {Stahl, Alex and Clemens, Patricia}, title = {Auditory Masquing : Wearable Sound Systems for Diegetic Character Voices}, pages = {427--430}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177899}, url = {http://www.nime.org/proceedings/2010/nime2010_427.pdf}, keywords = {magnetostrictive flextensional transducer,nime10,paralinguistics,sound reinforcement,spatialization,speech enhancement,transformation,voice,wearable systems} }
Paul Rothman. 2010. The Ghost : An Open-Source, User Programmable MIDI Performance Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 431–435. http://doi.org/10.5281/zenodo.1177885
Abstract
Download PDF DOI
The Ghost has been developed to create a merger between the standard MIDI keyboard controller, MIDI/digital guitars and alternative desktop controllers. Using a custom software editor, The Ghost’s controls can be mapped to suit the user’s performative needs. The interface takes its interaction and gestural cues from the guitar, but it is not a MIDI guitar. The Ghost’s hardware, firmware and software will be open sourced in the hope of creating a community of users who are invested in creating music with the controller.
@inproceedings{Rothman2010, author = {Rothman, Paul}, title = {The Ghost : An Open-Source, User Programmable {MIDI} Performance Controller}, pages = {431--435}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177885}, url = {http://www.nime.org/proceedings/2010/nime2010_431.pdf}, keywords = {Controller, MIDI, Live Performance, Programmable, Open-Source} }
Garth Paine. 2010. Towards a Taxonomy of Realtime Interfaces for Electronic Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 436–439. http://doi.org/10.5281/zenodo.1177873
Abstract
Download PDF DOI
This paper presents a discussion of organology classification and taxonomies for digital musical instruments (DMI), arising from the TIEM (Taxonomy of Interfaces for Electronic Music performance) survey (http://tiem.emf.org/), conducted as part of an Australian Research Council Linkage project titled "Performance Practice in New Interfaces for Realtime Electronic Music Performance". This research is being carried out at the VIPRe Lab at the University of Western Sydney in partnership with the Electronic Music Foundation (EMF), Infusion Systems and The Input Devices and Music Interaction Laboratory (IDMIL) at McGill University. The project seeks to develop a schema of new interfaces for realtime electronic music performance.
@inproceedings{Paine2010, author = {Paine, Garth}, title = {Towards a Taxonomy of Realtime Interfaces for Electronic Music Performance}, pages = {436--439}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177873}, url = {http://www.nime.org/proceedings/2010/nime2010_436.pdf}, keywords = {Instrument, Interface, Organology, Taxonomy.} }
Robyn Taylor, Guy Schofield, John Shearer, Pierre Boulanger, Jayne Wallace, and Patrick Olivier. 2010. humanaquarium : A Participatory Performance System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 88–93. http://doi.org/10.5281/zenodo.1177905
Abstract
Download PDF DOI
humanaquarium is a self-contained, transportable performance environment that is used to stage technology-mediated interactive performances in public spaces. Drawing upon the creative practices of busking and street performance, humanaquarium incorporates live musicians, real-time audiovisual content generation, and frustrated total internal reflection (FTIR) technology to facilitate participatory interaction by members of the public.
@inproceedings{Taylor2010, author = {Taylor, Robyn and Schofield, Guy and Shearer, John and Boulanger, Pierre and Wallace, Jayne and Olivier, Patrick}, title = {humanaquarium : A Participatory Performance System}, pages = {88--93}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177905}, url = {http://www.nime.org/proceedings/2010/nime2010_440.pdf}, keywords = {busking, collaborative interface, creative practice, experience centered design, frustrated total internal reflection (FTIR), multi-touch screen, multimedia, participatory performance} }
Hyun-Soo Kim, Je-Han Yoon, and Moon-Sik Jung. 2010. Interactive Music Studio : The Soloist. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 444–446. http://doi.org/10.5281/zenodo.1177825
Abstract
Download PDF DOI
In this paper, we present and demonstrate Samsung’s new concept music creation engine and music composer application for mobile devices such as touch phones or MP3 players, ‘Interactive Music Studio : the soloist’.
@inproceedings{Kim2010, author = {Kim, Hyun-Soo and Yoon, Je-Han and Jung, Moon-Sik}, title = {Interactive Music Studio : The Soloist}, pages = {444--446}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177825}, url = {http://www.nime.org/proceedings/2010/nime2010_444.pdf}, keywords = {Mobile device, music composer, pattern composing, MIDI} }
Pierre Alexandre Tremblay and Diemo Schwarz. 2010. Surfing the Waves : Live Audio Mosaicing of an Electric Bass Performance as a Corpus Browsing Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 447–450. http://doi.org/10.5281/zenodo.1177913
Abstract
Download PDF DOI
In this paper, the authors describe how they use an electric bass as a subtle, expressive and intuitive interface to browse the rich sample bank available to most laptop owners. This is achieved by audio mosaicing of the live bass performance audio, through corpus-based concatenative synthesis (CBCS) techniques, allowing a mapping of the multi-dimensional expressivity of the performance onto foreign audio material, thus recycling the virtuosity acquired on the electric instrument with a trivial learning curve. This design hypothesis is contextualised and assessed within the Sandbox#n series of bass+laptop meta-instruments, and the authors describe technical means of the implementation through the use of the open-source CataRT CBCS system adapted for live mosaicing. They also discuss their encouraging early results and provide a list of further explorations to be made with that rich new interface.
@inproceedings{Tremblay2010, author = {Tremblay, Pierre Alexandre and Schwarz, Diemo}, title = {Surfing the Waves : Live Audio Mosaicing of an Electric Bass Performance as a Corpus Browsing Interface}, pages = {447--450}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177913}, url = {http://www.nime.org/proceedings/2010/nime2010_447.pdf}, keywords = {laptop improvisation, corpus-based concatenative synthesis, haptic interface, multi-dimensional mapping, audio mosaic} }
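Corpus-based concatenative synthesis of the kind described in the abstract above typically selects corpus units by nearest-neighbour search in a descriptor space. The sketch below is a generic illustration of that selection step (not CataRT's actual code); the descriptor values are made up.

import numpy as np

# Toy corpus: each unit has a descriptor vector, e.g. (pitch, loudness, centroid).
# Values are invented for illustration.
corpus_descriptors = np.array([
    [55.0, -20.0, 1200.0],
    [57.0, -18.0, 1500.0],
    [60.0, -25.0, 900.0],
])

def select_unit(target):
    """Return the index of the corpus unit nearest to the target descriptor vector."""
    dists = np.linalg.norm(corpus_descriptors - np.asarray(target), axis=1)
    return int(np.argmin(dists))

print(select_unit([56.0, -19.0, 1400.0]))  # -> 1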
A. Cavan Fyans, Michael Gurevich, and Paul Stapleton. 2010. Examining the Spectator Experience. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 451–454. http://doi.org/10.5281/zenodo.1177775
Abstract
Download PDF DOI
Drawing on a model of spectator understanding of error in performance in the literature, we document a qualitative experiment that explores the relationships between domain knowledge, mental models, intention and error recognition by spectators of performances with electronic instruments. Participants saw two performances with contrasting instruments, with controls on their mental model and understanding of intention. Based on data from a subsequent structured interview, we identify themes in participants’ judgements and understanding of performance and explanations of their spectator experience. These reveal both elements of similarity and difference between the two performances and instruments, and between domain knowledge groups. From these, we suggest and discuss implications for the design of novel performative interactions with technology.
@inproceedings{Fyans2010, author = {Fyans, A. Cavan and Gurevich, Michael and Stapleton, Paul}, title = {Examining the Spectator Experience}, pages = {451--454}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177775}, url = {http://www.nime.org/proceedings/2010/nime2010_451.pdf}, keywords = {error,intention,mental model,nime10,qualitative,spectator} }
Nick Collins, Chris Kiefer, Zeeshan Patoli, and Martin White. 2010. Musical Exoskeletons : Experiments with a Motion Capture Suit. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 455–458. http://doi.org/10.5281/zenodo.1177749
Abstract
Download PDF DOI
Gaining access to a prototype motion capture suit designed by the Animazoo company, the Interactive Systems group at the University of Sussex have been investigating application areas. This paper describes our initial experiments in mapping the suit control data to sonic attributes for musical purposes. Given the lab conditions under which we worked, an agile design cycle methodology was employed, with live coding of audio software incorporating fast feedback, and more reflective preparations between sessions, exploiting both individual and pair programming. As the suit provides up to 66 channels of information, we confront a challenging mapping problem, and techniques are described for automatic calibration, and the use of echo state networks for dimensionality reduction.
@inproceedings{Collins2010a, author = {Collins, Nick and Kiefer, Chris and Patoli, Zeeshan and White, Martin}, title = {Musical Exoskeletons : Experiments with a Motion Capture Suit}, pages = {455--458}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177749}, url = {http://www.nime.org/proceedings/2010/nime2010_455.pdf}, keywords = {Motion Capture, Musical Controller, Mapping, Agile Design} }
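The echo-state-network idea mentioned in the abstract above can be sketched generically: a fixed random reservoir maps the 66 suit channels into a smaller, temporally smoothed state vector that is easier to map to sound. This is a textbook ESN update with arbitrary sizes and scaling, not the authors' configuration.

import numpy as np

# Generic echo state network reservoir used as a dimensionality reducer.
# Sizes and scaling factors are illustrative assumptions.
rng = np.random.default_rng(0)
n_in, n_res = 66, 12                              # 66 suit channels -> 12-dimensional state
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def update(state, channels):
    """One reservoir step: new state from previous state and current sensor frame."""
    return np.tanh(W_in @ channels + W @ state)

state = np.zeros(n_res)
for _ in range(5):                                # feed a few fake sensor frames
    state = update(state, rng.normal(size=n_in))
print(state.round(2))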
Jim Murphy, Ajay Kapur, and Carl Burgin. 2010. The Helio : A Study of Membrane Potentiometers and Long Force Sensing Resistors for Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 459–462. http://doi.org/10.5281/zenodo.1177863
Abstract
Download PDF DOI
This paper describes a study of membrane potentiometers and long force sensing resistors as tools to enable greater interaction between performers and audiences. This is accomplished through the building of a new interface called the Helio. In preparation for the Helio’s construction, a variety of brands of membrane potentiometers and long force sensing resistors were analyzed for their suitability for use in a performance interface. Analog and digital circuit design considerations are discussed. We discuss in detail the design process and performance scenarios explored with the Helio.
@inproceedings{Murphy2010, author = {Murphy, Jim and Kapur, Ajay and Burgin, Carl}, title = {The Helio : A Study of Membrane Potentiometers and Long Force Sensing Resistors for Musical Interfaces}, pages = {459--462}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177863}, url = {http://www.nime.org/proceedings/2010/nime2010_459.pdf}, keywords = {Force Sensing Resistors, Membrane Potentiometers, Force Sensing Resistors, Haptic Feedback, Helio} }
Stuart Taylor and Jonathan Hook. 2010. FerroSynth : A Ferromagnetic Music Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 463–466. http://doi.org/10.5281/zenodo.1177907
Abstract
Download PDF DOI
We present a novel user interface device based around ferromagnetic sensing. The physical form of the interface can easily be reconfigured by simply adding and removing a variety of ferromagnetic objects to the device’s sensing surface. This allows the user to change the physical form of the interface resulting in a variety of different interaction modes. When used in a musical context, the performer can leverage the physical reconfiguration of the device to affect the method of playing and ultimately the sound produced. We describe the implementation of the sensing system, along with a range of mapping techniques used to transform the sensor data into musical output, including both the direct synthesis of sound and also the generation of MIDI data for use with Ableton Live. We conclude with a discussion of future directions for the device.
@inproceedings{Taylor2010a, author = {Taylor, Stuart and Hook, Jonathan}, title = {FerroSynth : A Ferromagnetic Music Interface}, pages = {463--466}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177907}, url = {http://www.nime.org/proceedings/2010/nime2010_463.pdf}, keywords = {Ferromagnetic sensing, ferrofluid, reconfigurable user interface, wave terrain synthesis, MIDI controller.} }
Josh M. Dubrau and Mark Havryliv. 2010. P[a]ra[pra]xis : Towards Genuine Realtime ’Audiopoetry.’ Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 467–468. http://doi.org/10.5281/zenodo.117777
Abstract
Download PDF DOI
P[a]ra[pra]xis is an ongoing collaborative project incorporating a two-piece software package which explores human relations to language through dynamic sound and text production. Incorporating an exploration of the potential functions and limitations of the ‘sign’ and the intrusions of the Unconscious into the linguistic utterance via parapraxes, or ‘Freudian slips’, our software utilises realtime subject response to automatically generated changes in a narrative of their own writing to create music. This paper considers the relative paucity of truly interactive realtime text and audio works and provides an account of current and future potential for the simultaneous production of realtime poetry and electronic music through the P[a]ra[pra]xis software. It also provides the basis for a demonstration session in which we hope to show users how the program works, discuss possibilities for different applications of the software, and collect data for future collaborative work.
@inproceedings{Dubrau2010, author = {Dubrau, Josh M. and Havryliv, Mark}, title = {P[a]ra[pra]xis : Towards Genuine Realtime 'Audiopoetry'}, pages = {467--468}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.117777}, url = {http://www.nime.org/proceedings/2010/nime2010_467.pdf}, keywords = {language sonification, new media poetry, realtime, Lacan, semiotics, collaborative environment, psychoanalysis, Freud} }
Kris M. Kitani and Hideki Koike. 2010. ImprovGenerator : Online Grammatical Induction for On-the-Fly Improvisation Accompaniment. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 469–472. http://doi.org/10.5281/zenodo.1177827
Abstract
Download PDF DOI
We propose an online generative algorithm to enhance musical expression via intelligent improvisation accompaniment. Our framework, called the ImprovGenerator, takes a live stream of percussion patterns and generates an improvised accompaniment track in real-time to stimulate new expressions in the improvisation. We use a mixture model to generate an accompaniment pattern that takes into account both the hierarchical temporal structure of the live input patterns and the current musical context of the performance. The hierarchical structure is represented as a stochastic context-free grammar, which is used to generate accompaniment patterns based on the history of temporal patterns. We use a transition probability model to augment the grammar-generated pattern to take into account the current context of the performance. In our experiments we show how basic beat patterns performed by a percussionist on a cajon can be used to automatically generate on-the-fly improvisation accompaniment for live performance.
@inproceedings{Kitani2010, author = {Kitani, Kris M. and Koike, Hideki}, title = {ImprovGenerator : Online Grammatical Induction for On-the-Fly Improvisation Accompaniment}, pages = {469--472}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177827}, url = {http://www.nime.org/proceedings/2010/nime2010_469.pdf}, keywords = {Machine Improvisation, Grammatical Induction, Stochastic Context-Free Grammars, Algorithmic Composition} }
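A stochastic context-free grammar of the kind described in the abstract above can be sketched as weighted production rules expanded recursively into a beat pattern. The rules, symbols and probabilities below are invented for illustration and are not taken from the paper.

import random

# Toy stochastic context-free grammar over cajon-like beat symbols.
# Non-terminals expand to (probability, expansion) alternatives; other symbols are terminals.
RULES = {
    "BAR":  [(1.0, ["HALF", "HALF"])],
    "HALF": [(0.6, ["bass", "snare"]), (0.4, ["FILL"])],
    "FILL": [(0.5, ["snare", "snare"]), (0.5, ["bass", "rest"])],
}

def expand(symbol):
    """Recursively expand a symbol by sampling one of its weighted productions."""
    if symbol not in RULES:                 # terminal symbol
        return [symbol]
    r, acc = random.random(), 0.0
    for prob, expansion in RULES[symbol]:
        acc += prob
        if r <= acc:
            return [tok for s in expansion for tok in expand(s)]
    return []

print(expand("BAR"))   # e.g. ['bass', 'snare', 'snare', 'snare']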
Christian Frisson, Benoît Macq, Stéphane Dupont, Xavier Siebert, Damien Tardieu, and Thierry Dutoit. 2010. DeviceCycle : Rapid and Reusable Prototyping of Gestural Interfaces, Applied to Audio Browsing by Similarity. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 473–476. http://doi.org/10.5281/zenodo.1177771
Abstract
Download PDF DOI
This paper presents the development of rapid and reusable gestural interface prototypes for navigation by similarity in an audio database and for sound manipulation, using the AudioCycle application. For this purpose, we propose and follow guidelines for rapid prototyping that we apply using the PureData visual programming environment. We have mainly developed three prototypes of manual control: one combining a 3D mouse and a jog wheel, a second featuring a force-feedback 3D mouse, and a third taking advantage of the multitouch trackpad. We discuss benefits and shortcomings we experienced while prototyping using this approach.
@inproceedings{Frisson2010, author = {Frisson, Christian and Macq, Beno\^{\i}t and Dupont, St\'{e}phane and Siebert, Xavier and Tardieu, Damien and Dutoit, Thierry}, title = {DeviceCycle : Rapid and Reusable Prototyping of Gestural Interfaces, Applied to Audio Browsing by Similarity}, pages = {473--476}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177771}, url = {http://www.nime.org/proceedings/2010/nime2010_473.pdf}, keywords = {Human-computer interaction, gestural interfaces, rapid prototyping, browsing by similarity, audio database} }
Alexander Müller, Fabian Hemmert, Götz Wintergerst, and Ron Jagodzinski. 2010. Reflective Haptics : Resistive Force Feedback for Musical Performances with Stylus-Controlled Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 477–478. http://doi.org/10.5281/zenodo.1177835
Abstract
Download PDF DOI
In this paper we present a novel system for tactile actuation in stylus-based musical interactions. The proposed controller aims to support rhythmical musical performance. The system builds on resistive force feedback, which is achieved through a brake-augmented ball pen stylus on a sticky touch-sensitive surface. Alongside the device itself, we present musical interaction principles that are enabled through the aforementioned tactile response. Further variations of the device and perspectives on the friction-based feedback are outlined.
@inproceedings{Muller2010, author = {M\"{u}ller, Alexander and Hemmert, Fabian and Wintergerst, G\"{o}tz and Jagodzinski, Ron}, title = {Reflective Haptics : Resistive Force Feedback for Musical Performances with Stylus-Controlled Instruments}, pages = {477--478}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177835}, url = {http://www.nime.org/proceedings/2010/nime2010_477.pdf}, keywords = {force feedback, haptic feedback, interactive, pen controller} }
Alison Mattek, Mark Freeman, and Eric Humphrey. 2010. Revisiting Cagean Composition Methodology with a Modern Computational Implementation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 479–480. http://doi.org/10.5281/zenodo.1177847
Abstract
Download PDF DOI
The American experimental tradition in music emphasizes a process-oriented – rather than goal-oriented – composition style. According to this tradition, the composition process is considered an experiment beginning with a problem resolved by the composer. The noted experimental composer John Cage believed that the artist’s role in composition should be one of coexistence, as opposed to the traditional view of directly controlling the process. Consequently, Cage developed methods of composing that upheld this philosophy by utilizing musical charts and the I Ching, also known as the Chinese Book of Changes. This project investigates these methods and models them via an interactive computer system to explore the use of modern interfaces in experimental composition.
@inproceedings{Mattek2010, author = {Mattek, Alison and Freeman, Mark and Humphrey, Eric}, title = {Revisiting Cagean Composition Methodology with a Modern Computational Implementation}, pages = {479--480}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177847}, url = {http://www.nime.org/proceedings/2010/nime2010_479.pdf}, keywords = {Multi-touch Interfaces, Computer-Assisted Composition} }
Sam Ferguson, Emery Schubert, and Catherine Stevens. 2010. Movement in a Contemporary Dance Work and its Relation to Continuous Emotional Response. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 481–484. http://doi.org/10.5281/zenodo.1177767
Abstract
Download PDF DOI
In this paper, we describe a comparison between parameters drawn from 3-dimensional measurement of a dance performance, and continuous emotional response data recorded from an audience present during this performance. A continuous time series representing the mean movement as the dance unfolds is extracted from the 3-dimensional data. The audience’s continuous emotional response data are also represented as a time series, and the series are compared. We concluded that movement in the dance performance directly influences the emotional arousal response of the audience.
@inproceedings{Ferguson2010, author = {Ferguson, Sam and Schubert, Emery and Stevens, Catherine}, title = {Movement in a Contemporary Dance Work and its Relation to Continuous Emotional Response}, pages = {481--484}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177767}, url = {http://www.nime.org/proceedings/2010/nime2010_481.pdf}, keywords = {Dance, Emotion, Motion Capture, Continuous Response.} }
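The comparison described in the abstract above amounts to relating two time series. A minimal illustration of one such comparison (not the authors' analysis) is to z-score both series and inspect their cross-correlation to see at which lag they align; the synthetic data below simply delays the movement signal by three samples.

import numpy as np

# Cross-correlate a movement time series with a continuous arousal response.
# The data are synthetic: arousal is a delayed, noisy copy of movement.

def zscore(x):
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(1)
movement = rng.normal(size=200)
arousal = np.roll(movement, 3) + 0.1 * rng.normal(size=200)

m, a = zscore(movement), zscore(arousal)
lags = np.arange(-10, 11)
corrs = [np.corrcoef(m[max(0, -k):200 - max(0, k)],
                     a[max(0, k):200 - max(0, -k)])[0, 1] for k in lags]
print("best lag:", lags[int(np.argmax(corrs))])   # expected: 3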
Teemu Ahmaniemi. 2010. Gesture Controlled Virtual Instrument with Dynamic Vibrotactile Feedback. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 485–488. http://doi.org/10.5281/zenodo.1177711
Abstract
Download PDF DOI
This paper investigates whether dynamic vibrotactile feedback improves the playability of a gesture-controlled virtual instrument. The instrument described in this study is based on a virtual control surface that the player strikes with a hand-held sensor-actuator device. We designed two tactile cues to augment the stroke across the control surface: a static and a dynamic cue. The static cue was a simple burst of vibration triggered when crossing the control surface. The dynamic cue was a continuous vibration increasing in amplitude when approaching the surface. We arranged an experiment to study the influence of the tactile cues on performance. In a tempo-following task, the dynamic cue yielded significantly better temporal and periodic accuracy, as well as better control of movement velocity and amplitude. The static cue did not significantly improve rhythmic accuracy but assisted the control of movement velocity compared to the condition with no tactile feedback at all. The findings of the study indicate that careful design of dynamic vibrotactile feedback can improve the controllability of a gesture-based virtual instrument.
@inproceedings{Ahmaniemi2010, author = {Ahmaniemi, Teemu}, title = {Gesture Controlled Virtual Instrument with Dynamic Vibrotactile Feedback}, pages = {485--488}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177711}, url = {http://www.nime.org/proceedings/2010/nime2010_485.pdf}, keywords = {Virtual instrument, Gesture, Tactile feedback, Motor control} }
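The two tactile cues described in the abstract above can be contrasted in a small sketch (all thresholds and amplitudes assumed): the static cue fires a fixed burst only at the moment the hand crosses the control surface, while the dynamic cue scales vibration amplitude continuously with the distance to the surface.

# Static vs. dynamic vibrotactile cues for a virtual control surface.
# Distances are in metres; thresholds and amplitudes are illustrative.

def static_cue(crossed):
    """Fixed burst amplitude, only at the moment the surface is crossed."""
    return 1.0 if crossed else 0.0

def dynamic_cue(distance, ramp_start=0.20):
    """Amplitude grows linearly as the hand approaches the surface."""
    if distance >= ramp_start:
        return 0.0
    return 1.0 - distance / ramp_start

for d in [0.25, 0.15, 0.05, 0.0]:
    print(d, round(dynamic_cue(d), 2))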
Jeffrey Hass. 2010. Creating Integrated Music and Video for Dance : Lessons Learned and Lessons Ignored. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 489–492. http://doi.org/10.5281/zenodo.1177793
Abstract
Download PDF DOI
In his demonstration, the author discusses the sequential progress of his technical and aesthetic decisions as composer and videographer for four large-scale works for dance through annotated video examples of live performances and PowerPoint slides. In addition, he discusses his current real-time dance work with wireless sensor interfaces using sewable LilyPad Arduino modules and Xbee radio hardware.
@inproceedings{Hass2010, author = {Hass, Jeffrey}, title = {Creating Integrated Music and Video for Dance : Lessons Learned and Lessons Ignored}, pages = {489--492}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177793}, url = {http://www.nime.org/proceedings/2010/nime2010_489.pdf}, keywords = {dance, video processing, video tracking, LilyPad Arduino.} }
Warren Burt. 2010. Packages for ArtWonk : New Mathematical Tools for Composers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 493–496. http://doi.org/10.5281/zenodo.1177733
Abstract
Download PDF DOI
This paper describes a series of mathematical functions implemented by the author in the commercial algorithmic software language ArtWonk, written by John Dunn, which are offered with that language as resources for composers. It gives a history of the development of the functions, with an emphasis on how I developed them for use in my compositions.
@inproceedings{Burt2010, author = {Burt, Warren}, title = {Packages for ArtWonk : New Mathematical Tools for Composers}, pages = {493--496}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177733}, url = {http://www.nime.org/proceedings/2010/nime2010_493.pdf}, keywords = {Algorithmic composition, mathematical composition, probability distributions, fractals, additive sequences} }
Jace Miller and Tracy Hammond. 2010. Wiiolin : a Virtual Instrument Using the Wii Remote. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 497–500. http://doi.org/10.5281/zenodo.1177853
Abstract
Download PDF DOI
The console gaming industry is experiencing a revolution in terms of user control, in large part due to Nintendo’s introduction of the Wii remote. The online open-source development community has embraced the Wii remote, integrating the inexpensive technology into numerous applications. Some of the more interesting applications demonstrate how the remote hardware can be leveraged for nonstandard uses. In this paper we describe a new way of interacting with the Wii remote and sensor bar to produce music. The Wiiolin is a virtual instrument which can mimic a violin or cello. Sensor bar motion relative to the Wii remote and button presses are analyzed in real-time to generate notes. Our design is novel in that it involves the remote’s infrared camera and sensor bar as an integral part of music production, allowing users to change notes by simply altering the angle of their wrist, and hence their bow. The Wiiolin introduces a more realistic way of instrument interaction than other attempts that rely on button presses and accelerometer data alone.
@inproceedings{Miller2010, author = {Miller, Jace and Hammond, Tracy}, title = {Wiiolin : a Virtual Instrument Using the Wii Remote}, pages = {497--500}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177853}, url = {http://www.nime.org/proceedings/2010/nime2010_497.pdf}, keywords = {Wii remote, virtual instrument, violin, cello, motion recognition, human computer interaction, gesture recognition.} }
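The note-selection idea described in the abstract above can be illustrated with a generic sketch (not the Wiiolin's code): the Wii remote's IR camera reports the two sensor-bar points, the angle of the line between them approximates the bow/wrist angle, and angle bands are mapped to notes. Point coordinates, bands and note names are assumptions.

import math

# Estimate a 'bow angle' from the two IR points seen by the Wii remote camera
# and map angle bands to notes. All values are illustrative.

def bow_angle(p1, p2):
    """Angle in degrees of the line through the two sensor-bar IR points."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def note_for_angle(angle, bands=((-90, -30, "G3"), (-30, 30, "D4"), (30, 90, "A4"))):
    for lo, hi, note in bands:
        if lo <= angle < hi:
            return note
    return None

print(note_for_angle(bow_angle((300, 380), (700, 420))))  # small tilt -> 'D4'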
Max Meier and Max Schranner. 2010. The Planets. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 501–504. http://doi.org/10.5281/zenodo.1177851
Abstract
Download PDF DOI
‘The Planets’ combines a novel approach to algorithmic composition with new human-computer interaction paradigms and realistic painting techniques. The main inspiration for it was the composition ‘The Planets’ by Gustav Holst, who portrayed each planet in our solar system with music. Our application allows users to interactively compose music in real time by arranging planet constellations on an interactive table. The music generation is controlled by painted miniatures of the planets and the sun, which are detected by the table and supplemented with an additional graphical visualization, creating a unique audio-visual experience. A video of the application can be found in [1].
@inproceedings{Meier2010, author = {Meier, Max and Schranner, Max}, title = {The Planets}, pages = {501--504}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2010}, address = {Sydney, Australia}, issn = {2220-4806}, doi = {10.5281/zenodo.1177851}, url = {http://www.nime.org/proceedings/2010/nime2010_501.pdf}, keywords = {algorithmic composition, soft constraints, tangible interaction} }
2009
Mike Collicutt, Carmine Casciato, and Marcelo M. Wanderley. 2009. From Real to Virtual : A Comparison of Input Devices for Percussion Tasks. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 1–6. http://doi.org/10.5281/zenodo.1177491
Abstract
Download PDF DOI
This paper presents an evaluation and comparison of four input devices for percussion tasks: a standard tom drum, a Roland V-Drum, and two established examples of gestural controllers, the Buchla Lightning II and the Radio Baton. The primary goal of this study was to determine how players’ actions changed when moving from an acoustic instrument like the tom drum to a gestural controller like the Buchla Lightning, which bears little resemblance to an acoustic percussion instrument. Motion capture data was analyzed by comparing a subject’s hand height variability and timing accuracy across the four instruments as they performed simple musical tasks. Results suggest that certain gestures such as hand height amplitude can be adapted to these gestural controllers with little change, and that in general subjects’ timing variability is significantly affected when playing the Lightning and Radio Baton compared to the more familiar tom drum and V-Drum. Possible explanations and other observations are also presented.
@inproceedings{Collicutt2009, author = {Collicutt, Mike and Casciato, Carmine and Wanderley, Marcelo M.}, title = {From Real to Virtual : A Comparison of Input Devices for Percussion Tasks}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177491}, url = {http://www.nime.org/proceedings/2009/nime2009_001.pdf}, keywords = {Evaluation of Input Devices, Motion Capture, Buchla Lightning II, Radio Baton. } }
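The timing analysis summarized in the abstract above can be illustrated generically (my sketch, not the study's code): given detected strike onsets and a target tempo, timing variability can be summarized via the inter-onset intervals and their deviation from the target interval. The onset times below are synthetic.

import numpy as np

# Timing variability of percussion strikes against a target tempo.
# Onset times (seconds) are synthetic examples.

def timing_variability(onsets, target_bpm=120):
    target_ioi = 60.0 / target_bpm               # expected inter-onset interval
    iois = np.diff(np.asarray(onsets))
    return {
        "mean_ioi": float(iois.mean()),
        "ioi_sd": float(iois.std()),
        "mean_error": float((iois - target_ioi).mean()),
    }

onsets = [0.00, 0.52, 1.01, 1.49, 2.03, 2.50]
print(timing_variability(onsets))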
Aristotelis Hadjakos, Erwin Aitenbichler, and Max Mühlhäuser. 2009. Probabilistic Model of Pianists’ Arm Touch Movements. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 7–12. http://doi.org/10.5281/zenodo.1177567
Abstract
Download PDF DOI
Measurement of pianists’ arm movement provides a signal which is composed of controlled movements and noise. The noise consists of uncontrolled movement generated by the interaction of the arm with the piano action, and measurement error. We propose a probabilistic model for arm touch movements which allows the amount of noise in a joint to be estimated. This estimation helps to interpret the movement signal, which is of interest for augmented piano and piano pedagogy applications.
@inproceedings{Hadjakos2009, author = {Hadjakos, Aristotelis and Aitenbichler, Erwin and M\"{u}hlh\"{a}user, Max}, title = {Probabilistic Model of Pianists' Arm Touch Movements}, pages = {7--12}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177567}, url = {http://www.nime.org/proceedings/2009/nime2009_007.pdf}, keywords = {Piano, arm movement, gesture, classification, augmented instrument, inertial sensing. } }
Steven Gelineck and Stefania Serafin. 2009. A Quantitative Evaluation of the Differences between Knobs and Sliders. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 13–18. http://doi.org/10.5281/zenodo.1177549
Abstract
Download PDF DOI
This paper presents an HCI-inspired evaluation of simple physical interfaces used to control physical models. Specifically, knobs and sliders are compared in a creative and exploratory framework, which simulates the natural environment in which an electronic musician would normally explore a new instrument. No significant difference was measured between using knobs and sliders for controlling parameters of a physical modeling electronic instrument. The reported differences between the tested instruments were mostly due to the sound synthesis models.
@inproceedings{Gelineck2009, author = {Gelineck, Steven and Serafin, Stefania}, title = {A Quantitative Evaluation of the Differences between Knobs and Sliders}, pages = {13--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177549}, url = {http://www.nime.org/proceedings/2009/nime2009_013.pdf}, keywords = {Evaluation, Interfaces, Sliders, Knobs, Physi- cal Modeling, Electronic Musicians, Exploration, Creativ- ity, Affordances. } }
Ricardo Pedrosa and Karon E. Maclean. 2009. Evaluation of 3D Haptic Target Rendering to Support Timing in Music Tasks. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 19–24. http://doi.org/10.5281/zenodo.1177657
Abstract
Download PDF DOI
Haptic feedback is an important element that needs to be carefully designed in computer music interfaces. This paper presents an evaluation of several force renderings for target acquisition in space when used to support a music related task. The study presented here addresses only one musical aspect: the need to repeat elements accurately in time and in content. Several force scenarios will be rendered over a simple 3D target acquisition task and users’ performance will be quantitatively and qualitatively evaluated. The results show how the users’ subjective preference for a particular kind of force support does not always correlate to a quantitative measurement of performance enhancement. We describe a way in which a control mapping for a musical interface could be achieved without contradicting the users’ preferences as obtained from the study.
@inproceedings{Pedrosa2009, author = {Pedrosa, Ricardo and Maclean, Karon E.}, title = {Evaluation of {3D} Haptic Target Rendering to Support Timing in Music Tasks}, pages = {19--24}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177657}, url = {http://www.nime.org/proceedings/2009/nime2009_019.pdf}, keywords = {music interfaces, force feedback, tempo, comfort, target acquisition. } }
William Hsu and Marc Sosnick. 2009. Evaluating Interactive Music Systems : An HCI Approach. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 25–28. http://doi.org/10.5281/zenodo.1177579
Abstract
Download PDF DOI
In this paper, we discuss a number of issues related to the design of evaluation tests for comparing interactive music systems for improvisation. Our testing procedure covers rehearsal and performance environments, and captures the experiences of a musician/participant as well as an audience member/observer. We attempt to isolate salient components of system behavior, and test whether the musician or audience are able to discern between systems with significantly different behavioral components. We report on our experiences with our testing methodology, in comparative studies of our London and ARHS improvisation systems [1].
@inproceedings{Hsu2009, author = {Hsu, William and Sosnick, Marc}, title = {Evaluating Interactive Music Systems : An HCI Approach}, pages = {25--28}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177579}, url = {http://www.nime.org/proceedings/2009/nime2009_025.pdf}, keywords = {Interactive music systems, human computer interaction, evaluation tests. } }
Neal Spowage. 2009. The Ghetto Bastard : A Portable Noise Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 29–30. http://doi.org/10.5281/zenodo.1177683
Abstract
Download PDF DOI
Due to the accelerating development of ‘rapidly to become redundant’ technologies, there is a growing mountain of perfectly serviceable discarded electronic devices hiding quietly at the bottom of almost every domestic rubbish pile or at the back of nearly every second-hand shop. If you add to this scenario the accelerating nature of our society, where people don’t have the time or the motivation to sell or auction their redundant electronics, one can discover a plethora of discarded materials available for salvage. Using this as a starting point, I have produced a portable noise instrument from recycled materials that is primarily an artistically led venture, built specifically for live performance.
@inproceedings{Spowage2009, author = {Spowage, Neal}, title = {The Ghetto Bastard : A Portable Noise Instrument}, pages = {29--30}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177683}, url = {http://www.nime.org/proceedings/2009/nime2009_029.pdf}, keywords = {nime09} }
Eric Humphrey and Colby Leider. 2009. The Navi Activity Monitor : Toward Using Kinematic Data to Humanize Computer Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 31–32. http://doi.org/10.5281/zenodo.1177581
Abstract
Download PDF DOI
Motivated by previous work aimed at developing mathematical models to describe expressive timing in music, and specifically the final ritardandi, using measured kinematic data, we further investigate the linkage of locomotion and timing in music. The natural running behavior of four subjects is measured with a wearable sensor prototype and analyzed to create normalized tempo curves. The resulting curves are then used to modulate the final ritard of MIDI scores, which are also performed by an expert musician. A Turing-inspired listening test is conducted to observe a human listener’s ability to determine the nature of the performer.
@inproceedings{Humphrey2009, author = {Humphrey, Eric and Leider, Colby}, title = {The Navi Activity Monitor : Toward Using Kinematic Data to Humanize Computer Music}, pages = {31--32}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177581}, url = {http://www.nime.org/proceedings/2009/nime2009_031.pdf}, keywords = {Musical kinematics, expressive tempo, machine music. } }
Alexander Müller and Georg Essl. 2009. Utilizing Tactile Feedback to Guide Movements Between Sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 33–34. http://doi.org/10.5281/zenodo.1177623
Abstract
Download PDF DOI
Vibetone is a musical input device which was built to explore tactile feedback in gesture-based interaction. It is a prototype aimed at allowing the performer to play both continuously and discretely pitched sounds in the same space. Our primary focus is on tactile feedback to guide the artist’s movements during the performance. Thus, even untrained users are enabled to achieve musical expression through bodily actions, and more precisely arm movements, guided through tactile feedback signals.
@inproceedings{Muller2009, author = {M\"{u}ller, Alexander and Essl, Georg}, title = {Utilizing Tactile Feedback to Guide Movements Between Sounds}, pages = {33--34}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177623}, url = {http://www.nime.org/proceedings/2009/nime2009_033.pdf}, keywords = {tactile feedback, intuitive interaction, gestural interaction, MIDI controller } }
Sam Ferguson and Kirsty Beilharz. 2009. An Interface for Live Interactive Sonification. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 35–36. http://doi.org/10.5281/zenodo.1177511
Abstract
Download PDF DOI
Sonification is generally considered in a statistical data analysis context. This research discusses the development of an interface for live control of sonification – for controlling and altering sonifications over the course of their playback. This is designed primarily with real-time sources in mind, rather than with static datasets, and is intended as a performative, live data-art creative activity. The interface enables the performer to use the interface as an instrument for iterative interpretations and variations of sonifications of multiple datastreams. Using the interface, the performer can alter the scale, granularity, timbre, hierarchy of elements, spatialisation, spectral filtering, key/modality, rhythmic distribution and register ‘on-the-fly’ to both perform data-generated music, and investigate data in a live exploratory, interactive manner.
@inproceedings{Ferguson2009, author = {Ferguson, Sam and Beilharz, Kirsty}, title = {An Interface for Live Interactive Sonification}, pages = {35--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177511}, url = {http://www.nime.org/proceedings/2009/nime2009_035.pdf}, keywords = {Sonification, Interactive Sonification, Auditory Display. } }
Alexander Reben, Mat Laibowitz, and Joseph A. Paradiso. 2009. Responsive Music Interfaces for Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 37–38. http://doi.org/10.5281/zenodo.1177663
Abstract
Download PDF DOI
In this project we have developed reactive instruments for performance. Reactive instruments provide feedback for the performer, thereby creating a more dynamic experience. This is achieved through the use of haptics and robotics. Haptics provide a feedback system to the control surface. Robotics provides a way to actuate the instruments and their control surfaces. This allows a highly coordinated "dance" between performer and the instrument. An application for this idea is presented as a linear slide interface. Reactive interfaces represent a dynamic way for music to be portrayed in performance.
@inproceedings{Reben2009, author = {Reben, Alexander and Laibowitz, Mat and Paradiso, Joseph A.}, title = {Responsive Music Interfaces for Performance}, pages = {37--38}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177663}, url = {http://www.nime.org/proceedings/2009/nime2009_037.pdf}, keywords = {haptics, robotics, dynamic interfaces } }
Chi-Hsia Lai. 2009. Hands On Stage : A Sound and Image Performance Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 39–40. http://doi.org/10.5281/zenodo.1177609
Abstract
Download PDF DOI
Hands On Stage, designed from a percussionist’s perspective, is a new performance interface for audiovisual improvisation. It comprises a custom-built table interface and a performance system programmed in two environments, SuperCollider 3 and Isadora. This paper traces the interface’s evolution in terms of relevant technology, concept, construction, system design, and its creative outcomes.
@inproceedings{Lai2009, author = {Lai, Chi-Hsia}, title = {Hands On Stage : A Sound and Image Performance Interface}, pages = {39--40}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177609}, url = {http://www.nime.org/proceedings/2009/nime2009_039.pdf}, keywords = {audiovisual, interface design, performance. } }
Kyle McDonald, Dane Kouttron, Curtis Bahn, Jonas Braasch, and Pauline Oliveros. 2009. The Vibrobyte : A Haptic Interface for Co-Located Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 41–42. http://doi.org/10.5281/zenodo.1177627
Abstract
Download PDF DOI
The Vibrobyte is a wireless haptic interface specialized for co-located musical performance. The hardware is designed around the open source Arduino platform, with haptic control data encapsulated in OSC messages, and OSC/hardware communications handled by Processing. The Vibrobyte was featured at the International Computer Music Conference 2008 (ICMC) in a telematic performance between ensembles in Belfast, Palo Alto (California, USA), and Troy (New York, USA).
@inproceedings{McDonald2009, author = {McDonald, Kyle and Kouttron, Dane and Bahn, Curtis and Braasch, Jonas and Oliveros, Pauline}, title = {The Vibrobyte : A Haptic Interface for Co-Located Performance}, pages = {41--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177627}, url = {http://www.nime.org/proceedings/2009/nime2009_041.pdf}, keywords = {haptics,interface,nime09,performance,telematic} }
Meason Wiley and Ajay Kapur. 2009. Multi-Laser Gestural Interface — Solutions for Cost-Effective and Open Source Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 43–44. http://doi.org/10.5281/zenodo.1177709
Abstract
Download PDF DOI
This paper describes a cost-effective, modular, open source framework for a laser interface design that is open to community development, interaction and user modification. The following paper highlights ways in which we are implementing the multi-laser gestural interface in musical, visual, and robotic contexts.
@inproceedings{Wiley2009, author = {Wiley, Meason and Kapur, Ajay}, title = {Multi-Laser Gestural Interface --- Solutions for Cost-Effective and Open Source Controllers}, pages = {43--44}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177709}, url = {http://www.nime.org/proceedings/2009/nime2009_043.pdf}, keywords = {Lasers, photocell sensor, UltraSound, Open Source controller design, digital gamelan, digital tanpura } }
Ryo Kanda, Mitsuyo Hashida, and Haruhiro Katayose. 2009. Mims : Interactive Multimedia Live Performance System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 45–47. http://doi.org/10.5281/zenodo.1177595
Abstract
Download PDF DOI
We introduce Mims, which is an interactive-multimedia live-performance system, where pieces rendered by a performer’s voice are translated into floating objects called voice objects. The voice objects are generated from the performer’s current position on the screen, and absorbed by another flying object called Mims. Voice sounds are modulated by the behavior of Mims. Performers can control these objects and sound effects by using their own gestures. Mims provides performers and their audiences with expressive visual feedback in terms of sound manipulations and results.
@inproceedings{Kanda2009, author = {Kanda, Ryo and Hashida, Mitsuyo and Katayose, Haruhiro}, title = {Mims : Interactive Multimedia Live Performance System}, pages = {45--47}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177595}, url = {http://www.nime.org/proceedings/2009/nime2009_045.pdf}, keywords = {Interaction, audience, performer, visualize, sensor, physical, gesture. } }
Suguru Goto and Rob Powell. 2009. netBody — "Augmented Body and Virtual Body II" with the System, BodySuit, Powered Suit and Second Life — Its Introduction of an Application of the System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 48–49. http://doi.org/10.5281/zenodo.1177559
Abstract
Download PDF DOI
This paper introduces a system that combines BodySuit, in particular Powered Suit, and Second Life, as well as its possibilities and its uses in a musical performance application. The system we propose contains both a gesture controller and robots at the same time. In this system, the data suit, BodySuit, controls the avatar in Second Life, and Second Life controls the exoskeleton, Powered Suit, in real time. These are related to each other in conjunction with Second Life over the Internet. BodySuit doesn’t contain a hand-held controller. A performer, for example a dancer, wears a suit, and gestures are transformed into electronic signals by sensors. Powered Suit is another suit that a dancer wears, but its gestures are generated by motors; it is a sort of wearable robot. Second Life is software developed by Linden Lab that allows creating a virtual world and a virtual human (avatar) on the Internet. Working together with BodySuit, Powered Suit, and Second Life, the idea behind the system is that a human body is augmented by electronic signals and reflected in a virtual world in order to be able to perform interactively.
@inproceedings{Goto2009a, author = {Goto, Suguru and Powell, Rob}, title = {netBody --- "Augmented Body and Virtual Body II" with the System, BodySuit, Powered Suit and Second Life --- Its Introduction of an Application of the System}, pages = {48--49}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177559}, url = {http://www.nime.org/proceedings/2009/nime2009_048.pdf}, keywords = {artificial intelligence,gesture controller,humanoid robot,interaction,internet,nime09,robot} }
Keisuke Ogawa and Yasuo Kuhara. 2009. Life Game Orchestra as an Interactive Music Composition System Translating Cellular Patterns of Automata into Musical Scales. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 50–51. http://doi.org/10.5281/zenodo.1177647
Abstract
Download PDF DOI
We developed a system called Life Game Orchestra that generates music by translating cellular patterns of Conway’s Game of Life into musical scales. A performer can compose music by controlling varying cell patterns and sounds with visual and auditory fun. A performer assigns the elements of tone to two-dimensional cell patterns in the matrix of the Game of Life. Our system searches defined cell patterns in the varying matrix dynamically. If the patterns are matched, corresponding tones are generated. A performer can make cells in the matrix by moving in front of a camera and interactively influencing the generation of music. The progress of the Game of Life is controlled with a clock defined by the performer to configure the groove of the music. By running multiple matrices with different pattern mapping, clock timing, and instruments, we can perform an ensemble. The Life Game Orchestra is a fusion system of the design of a performer and the emergence of cellular automata as a complex system.
@inproceedings{Ogawa2009, author = {Ogawa, Keisuke and Kuhara, Yasuo}, title = {Life Game Orchestra as an Interactive Music Composition System Translating Cellular Patterns of Automata into Musical Scales}, pages = {50--51}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177647}, url = {http://www.nime.org/proceedings/2009/nime2009_050.pdf}, keywords = {Conway's Game of Life, Cellular automata, Cell pattern, scale, Interactive composition, performance. } }
John Toenjes. 2009. Natural Materials on Stage : Custom Controllers for Aesthetic Effect. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 52–53. http://doi.org/10.5281/zenodo.1177693
Abstract
Download PDF DOI
This article describes the implications of design and materials of computer controllers used in the context of interactive dance performance. Size, shape, and layout all influence audience perception of the performer, and materials imply context for further interpretation of the interactive performance work. It describes the construction of the "Control/Recorder" and the "VideoLyre", two custom computer control surfaces made for Leonardo’s Chimes, a work by Toenjes, Marchant and Smith, and how these controllers contribute to theatrical aesthetic intent.
@inproceedings{Toenjes2009, author = {Toenjes, John}, title = {Natural Materials on Stage : Custom Controllers for Aesthetic Effect}, pages = {52--53}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177693}, url = {http://www.nime.org/proceedings/2009/nime2009_052.pdf}, keywords = {control surface, interface, tactile, natural, organic, interactive dance. } }
Sarah Keith. 2009. Controlling Live Generative Electronic Music with Deviate. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 54–55. http://doi.org/10.5281/zenodo.1177599
Abstract
Download PDF DOI
Deviate generates multiple streams of melodic and rhythmic output in real-time, according to user-specified control parameters. This performance system has been implemented using Max 5 [1] within the genre of popular contemporary electronic music, incorporating techno, IDM, and related forms. The aim of this project is not musical style synthesis, but to construct an environment in which a range of creative and musical goals may be achieved. A key aspect is control over generative processes, as well as consistent yet varied output. An approach is described which frees the user from determining note-level output while allowing control to be maintained over larger structural details, focusing specifically on the melodic aspect of this system. Audio examples are located online at http://www.cetenbaath.com/cb/about-deviate/.
@inproceedings{Keith2009, author = {Keith, Sarah}, title = {Controlling Live Generative Electronic Music with Deviate}, pages = {54--55}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177599}, url = {http://www.nime.org/proceedings/2009/nime2009_054.pdf}, keywords = {generative, performance, laptop, popular music } }
Andy Dolphin. 2009. SpiralSet : A Sound Toy Utilizing Game Engine Technologies. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 56–57. http://doi.org/10.5281/zenodo.1177497
Abstract
Download PDF DOI
SpiralSet is a sound toy incorporating game engine software used in conjunction with a spectral synthesis sound engine constructed in Max/MSP/Jitter. SpiralSet was presented as an interactive installation piece at the Sonic Arts Expo 2008, in Brighton, UK. A custom-made sensor-based interface is used for control of the system. The user interactions are designed to be quickly accessible in an installation context, yet allowing the potential for sonic depth and variation.
@inproceedings{Dolphin2009, author = {Dolphin, Andy}, title = {SpiralSet : A Sound Toy Utilizing Game Engine Technologies}, pages = {56--57}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177497}, url = {http://www.nime.org/proceedings/2009/nime2009_056.pdf}, keywords = {Sound Toys, Game Engines, Animated Interfaces, Spectral Synthesis, Open Work, Max/MSP. } }
Mingfei Gao and Craig Hanson. 2009. LUMI : Live Performance Paradigms Utilizing Software Integrated Touch Screen and Pressure Sensitive Button Matrix. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 58–59. http://doi.org/10.5281/zenodo.1177547
Abstract
Download PDF DOI
This paper explores a rapidly developed, new musical interface involving a touch-screen, 32 pressure sensitive button pads, infrared sensor, 8 knobs and cross-fader. We provide a versatile platform for computer-based music performance and production using a human computer interface that has strong visual and tactile feedback as well as robust software that exploits the strengths of each individual system component.
@inproceedings{Gao2009, author = {Gao, Mingfei and Hanson, Craig}, title = {LUMI : Live Performance Paradigms Utilizing Software Integrated Touch Screen and Pressure Sensitive Button Matrix}, pages = {58--59}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177547}, url = {http://www.nime.org/proceedings/2009/nime2009_058.pdf}, keywords = {live performance interface,lumi,nime09,pressure} }
Nicholas Gillian, Benjamin Knapp, and Sile O’Modhrain. 2009. The SARC EyesWeb Catalog : A Pattern Recognition Toolbox for Musician-Computer Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 60–61. http://doi.org/10.5281/zenodo.1177551
Abstract
Download PDF DOI
This paper presents the SARC EyesWeb Catalog (SEC), a group of blocks designed for real-time gesture recognition that have been developed for the open source program EyesWeb. We describe how the recognition of real-time body movements can be used for musician-computer interaction.
@inproceedings{Gillian2009, author = {Gillian, Nicholas and Knapp, Benjamin and O'Modhrain, Sile}, title = {The {SAR}C EyesWeb Catalog : A Pattern Recognition Toolbox for Musician-Computer Interaction}, pages = {60--61}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177551}, url = {http://www.nime.org/proceedings/2009/nime2009_060.pdf}, keywords = {SARC EyesWeb Catalog, gesture recognition } }
Hiroki Nishino. 2009. A 2D Fiducial Tracking Method based on Topological Region Adjacency and Angle Information. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 62–63. http://doi.org/10.5281/zenodo.1177643
Abstract
Download PDF DOI
We describe a new method for 2D fiducial tracking. We use region adjacency information together with angles between regions to encode IDs inside fiducials, whereas previous research by Kaltenbrunner and Bencina utilizes a region adjacency tree. Our method supports a wide ID range and is fast enough to accommodate real-time video. It is also very robust against false positive detection.
@inproceedings{Nishino2009, author = {Nishino, Hiroki}, title = {A {2D} Fiducial Tracking Method based on Topological Region Adjacency and Angle Information}, pages = {62--63}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177643}, url = {http://www.nime.org/proceedings/2009/nime2009_062.pdf}, keywords = {fiducial tracking, computer vision, tangible user interface, interaction techniques. } }
Jorge Solis, Takeshi Ninomiya, Klaus Petersen, Masaki Takeuchi, and Atsuo Takanishi. 2009. Anthropomorphic Musical Performance Robots at Waseda University : Increasing Understanding of the Nature of Human Musical Interaction Abstract. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 64–69. http://doi.org/10.5281/zenodo.1177681
Abstract
Download PDF DOI
For several decades, research at Waseda University has focused on developing anthropomorphic robots capable of performing on musical instruments. As a result of our research efforts, the Waseda Flutist Robot WF-4RIV and the Waseda Saxophonist Robot WAS-1 have been designed to reproduce the performance of a human player. As a long-term goal, we propose to enable interaction between musical performance robots as well as with human players. In general, the communication of humans within a band is a special case of conventional human social behavior. Rhythm, harmony and timbre of the music played represent the emotional states of the musicians. So the development of an artificial entity that participates in such an interaction may contribute to a better understanding of some of the mechanisms that enable the communication of humans in musical terms. Therefore, we do not consider a musical performance robot (MPR) to be just a sophisticated MIDI instrument. Instead, its human-like design and the integration of perceptual capabilities may enable it to act on its own autonomous initiative based on models which consider its own physical constraints. In this paper, we present an overview of our research approaches towards enabling interaction between musical performance robots as well as with musicians.
@inproceedings{Solis2009, author = {Solis, Jorge and Ninomiya, Takeshi and Petersen, Klaus and Takeuchi, Masaki and Takanishi, Atsuo}, title = {Anthropomorphic Musical Performance Robots at Waseda University : Increasing Understanding of the Nature of Human Musical Interaction Abstract}, pages = {64--69}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177681}, url = {http://www.nime.org/proceedings/2009/nime2009_064.pdf}, keywords = {nime09} }
Gil Weinberg, Brian Blosser, Trishul Mallikarjuna, and Aparna Raman. 2009. The Creation of a Multi-Human, Multi-Robot Interactive Jam Session. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 70–73. http://doi.org/10.5281/zenodo.1177705
Abstract
Download PDF DOI
This paper presents an interactive and improvisational jam session, including human players and two robotic musicians. The project was developed in an effort to create novel and inspiring music through human-robot collaboration. The jam session incorporates Shimon, a newly-developed socially-interactive robotic marimba player, and Haile, a perceptual robotic percussionist developed in previous work. The paper gives an overview of the musical perception modules, adaptive improvisation modes and human-robot musical interaction models that were developed for the session. The paper also addresses the musical output that can be created from increased interconnections in an expanded multiple-robot, multiple-human ensemble, and suggests directions for future work.
@inproceedings{Weinberg2009a, author = {Weinberg, Gil and Blosser, Brian and Mallikarjuna, Trishul and Raman, Aparna}, title = {The Creation of a Multi-Human, Multi-Robot Interactive Jam Session}, pages = {70--73}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177705}, url = {http://www.nime.org/proceedings/2009/nime2009_070.pdf}, keywords = {Robotic musicianship, Shimon, Haile. } }
Nan-Wei Gong, Mat Laibowitz, and Joseph A. Paradiso. 2009. MusicGrip : A Writing Instrument for Music Control. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 74–77. http://doi.org/10.5281/zenodo.1177555
Abstract
Download PDF DOI
In this project, we have developed a real-time writing instrument for music control. The controller, MusicGrip, can capture the subtle dynamics of the user’s grip while writing or drawing and map this to musical control signals and sonic outputs. This paper discusses this conversion of the common motor motion of handwriting into an innovative form of music expression. The presented example instrument can be used to integrate the composing aspect of music with painting and writing, creating a new art form from the resultant aural and visual representation of the collaborative performing process.
@inproceedings{Gong2009, author = {Gong, Nan-Wei and Laibowitz, Mat and Paradiso, Joseph A.}, title = {MusicGrip : A Writing Instrument for Music Control}, pages = {74--77}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177555}, url = {http://www.nime.org/proceedings/2009/nime2009_074.pdf}, keywords = {Interactive music control, writing instrument, pen controller, MIDI, group performing activity. } }
Grant Partridge, Pourang Irani, and Gordon Fitzell. 2009. Let Loose with WallBalls, a Collaborative Tabletop Instrument for Tomorrow. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 78–81. http://doi.org/10.5281/zenodo.1177655
Abstract
Download PDF DOI
Tabletops—and by extension, tabletop computers— naturally facilitate group work. In particular, they provide a fascinating platform for exploring the possibilities of collaborative audio improvisation. Existing tabletop instruments (and digital instruments in general) tend to impose either a steep learning curve on novice players or a frustrating ceiling of expressivity upon experts. We introduce WallBalls, an intuitive tabletop instrument designed to support both novice and expert performance. At first glance, WallBalls resembles a toy, game or whimsical sketchpad, but it quickly reveals itself as a deeply expressive and highly adaptable sample-based instrument capable of facilitating a startling variety of collaborative sound art.
@inproceedings{Partridge2009, author = {Partridge, Grant and Irani, Pourang and Fitzell, Gordon}, title = {Let Loose with WallBalls, a Collaborative Tabletop Instrument for Tomorrow}, pages = {78--81}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177655}, url = {http://www.nime.org/proceedings/2009/nime2009_078.pdf}, keywords = {Tabletop computers, collaborative instruments, collaborative composition, group improvisation, spatial audio interfaces, customizable instruments. } }
Hye Ki Min. 2009. SORISU : Sound with Numbers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 82–85. http://doi.org/10.5281/zenodo.1177631
Abstract
Download PDF DOI
It is surely not difficult for anyone with experience in the subject known as Music Theory to realize that there is a very definite and precise relationship between music and mathematics. This paper describes the SoriSu, a new electronic musical instrument based on Sudoku puzzles, which probes the expressive possibilities of mathematical concepts in music. The concept proposes a new way of mapping numbers to sound. This interface was designed to provide easy and pleasing access to music for users who are unfamiliar or uncomfortable with current musical devices. The motivation behind the project is presented, as well as the hardware and software design.
@inproceedings{Min2009, author = {Min, Hye Ki}, title = {SORISU : Sound with Numbers}, pages = {82--85}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177631}, url = {http://www.nime.org/proceedings/2009/nime2009_082.pdf}, keywords = {Numbers, Game Interfaces, Mathematics and Sound, Mathematics in Music, Puzzles, Tangible User Interfaces. } }
Yotam Mann, Jeff Lubow, and Adrian Freed. 2009. The Tactus : a Tangible , Rhythmic Grid Interface Using Found-Objects. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 86–89. http://doi.org/10.5281/zenodo.1177625
Abstract
Download PDF DOI
This paper describes the inspiration and implementation of a tactile, tabletop synthesizer/step sequencer. The Tactus is an expandable and inexpensive musical interface for the creation of loop-based music inspired by the Bubblegum Sequencer [2]. An optical camera, coupled with a computer running Max/MSP/Jitter, can turn almost any matrix-like object into a step sequencer. The empty cells in the gridded object are filled with a fitting, colored object; the placement of which is analogous to adding an instrument or switching on a box in a step sequencer grid. The color and column position of every element in the matrix are used as parameters for a synthesizer while the row position of that element corresponds to the moment within the loop that entry is sounded. The two dimensional array can be positioned anywhere within the camera’s visibility. Both the translation and rotation of the physical matrix are assigned to global parameters that affect the music while preserving the color and order of the cells. A rotation of 180 degrees, for example, will not reverse the sequence, but instead change an assigned global parameter.
@inproceedings{Mann2009, author = {Mann, Yotam and Lubow, Jeff and Freed, Adrian}, title = {The Tactus : a Tangible , Rhythmic Grid Interface Using Found-Objects}, pages = {86--89}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177625}, url = {http://www.nime.org/proceedings/2009/nime2009_086.pdf}, keywords = {nime09} }
Jason A. Hockman, Marcelo M. Wanderley, and Ichiro Fujinaga. 2009. Real-Time Phase Vocoder Manipulation by Runner’s Pace. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 90–93. http://doi.org/10.5281/zenodo.1177575
Abstract
Download PDF DOI
This paper presents a method for using a runner’s pace for real-time control of the time-scaling facility of a phase vocoder, resulting in the automated synchronization of an audio track tempo to the generated control signal. The increase in usage of portable music players during exercise has given rise to the development of new personal exercise aids, most notably the Nike+iPod system, which relies on embedded sensor technologies to provide kinematic workout statistics. There are also systems that select songs based on the measured step frequency of a runner. The proposed system also uses the pace of a runner, but this information is used to change the tempo of the music.
@inproceedings{Hockman2009, author = {Hockman, Jason A. and Wanderley, Marcelo M. and Fujinaga, Ichiro}, title = {Real-Time Phase Vocoder Manipulation by Runner's Pace}, pages = {90--93}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177575}, url = {http://www.nime.org/proceedings/2009/nime2009_090.pdf}, keywords = {NIME, synchronization, exercise, time-scaling. } }
Kristian Nymoen and Alexander R. Jensenius. 2009. A Discussion of Multidimensional Mapping in Nymophone2. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 94–97. http://doi.org/10.5281/zenodo.1177645
Abstract
Download PDF DOI
The paper presents Nymophone2, an acoustic instrument with a complex relationship between performance actions and emergent sound. A method for describing the multidimensional control actions needed to play the instrument is presented and discussed.
@inproceedings{Nymoen2009, author = {Nymoen, Kristian and Jensenius, Alexander R.}, title = {A Discussion of Multidimensional Mapping in Nymophone2}, pages = {94--97}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177645}, url = {http://www.nime.org/proceedings/2009/nime2009_094.pdf}, keywords = {nime09} }
Daniel Schlessinger and Julius O. Smith. 2009. The Kalichord : A Physically Modeled Electro-Acoustic Plucked String Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 98–101. http://doi.org/10.5281/zenodo.1177671
Abstract
Download PDF DOI
We present the Kalichord: a small, handheld electro/acoustic instrument in which the player’s right hand plucks virtual strings while his left hand uses buttons to play independent bass lines. The Kalichord uses the analog signal from plucked acoustic tines to excite a physical string model, allowing a nuanced and intuitive plucking experience. First, we catalog instruments related to the Kalichord. Then we examine the use of analog signals to excite a physical string model and discuss the expressiveness and form factors that this technique affords. We then describe the overall construction of the Kalichord and possible playing styles, and finally we consider ways we hope to improve upon the current prototype.
@inproceedings{Schlessinger2009, author = {Schlessinger, Daniel and Smith, Julius O.}, title = {The Kalichord : A Physically Modeled Electro-Acoustic Plucked String Instrument}, pages = {98--101}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177671}, url = {http://www.nime.org/proceedings/2009/nime2009_098.pdf}, keywords = {Kalichord, physical model, tine, piezo, plucked string, electro-acoustic instruments, kalimba, accordion } }
Otso Lähdeoja. 2009. Augmenting Chordophones with Hybrid Percussive Sound Possibilities. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 102–105. http://doi.org/10.5281/zenodo.1177607
Abstract
Download PDF DOI
In this paper we describe an approach for introducing new electronic percussive sound possibilities for string instruments by "listening" to the sounds of the instrument’s body and extracting audio and data from the wood’s acoustic vibrations. A method for capturing, localizing and analyzing the percussive hits on the instrument’s body is presented, in connection with an audio-driven electronic percussive sound module. The system introduces a new gesture-sound relationship in the electric string instrument playing environment, namely the use of percussive techniques on the instrument’s body, which are null in regular circumstances due to selective and exclusive microphone use for the strings. Instrument body percussions are widely used in acoustic instrumental praxis. They yield a strong potential for providing an extended soundscape via instrument augmentation, directly controlled by the musician through haptic manipulation of the instrument itself. The research work was carried out on the electric guitar, but the method used can apply to any string instrument with a resonating body.
@inproceedings{Lahdeoja2009, author = {L\"{a}hdeoja, Otso}, title = {Augmenting Chordophones with Hybrid Percussive Sound Possibilities}, pages = {102--105}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177607}, url = {http://www.nime.org/proceedings/2009/nime2009_102.pdf}, keywords = {augmented instrument,chordophone,contact microphone systems,electric,electronic percussion,even with,guitar,leaving the instrument body,nime09,there is always a,trade-off,virtually mute} }
Mark Kahrs, David Skulina, Stefan Bilbao, and Murray Campbell. 2009. An Electroacoustically Controlled Vibrating Plate. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 106–109. http://doi.org/10.5281/zenodo.1177593
Abstract
Download PDF DOI
Large vibrating plates are used as thunder sheets in orchestras. We have extended the use of flat plates by cementing a flat panel electroacoustic transducer on a large brass sheet. Because of the thickness of the panel, the output is subject to nonlinear distortion. When combined with a real-time input and signal processing algorithm, the active brass plate can become an effective musical instrument for the performance of new music.
@inproceedings{Kahrs2009, author = {Kahrs, Mark and Skulina, David and Bilbao, Stefan and Campbell, Murray}, title = {An Electroacoustically Controlled Vibrating Plate}, pages = {106--109}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177593}, url = {http://www.nime.org/proceedings/2009/nime2009_106.pdf}, keywords = {Electroacoustics, flat panel } }
Scott Smallwood, Perry R. Cook, Dan Trueman, and Lawrence McIntyre. 2009. Don’t Forget the Loudspeaker — A History of Hemispherical Speakers at Princeton , Plus a DIY Guide. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 110–115. http://doi.org/10.5281/zenodo.1177679
Abstract
Download PDF DOI
This paper gives a historical overview of the development of alternative sonic display systems at Princeton University; in particular, the design, construction, and use in live performance of a series of spherical and hemispherical speaker systems. We also provide a DIY guide to constructing the latest series of loudspeakers that we are currently using in our research and music making.
@inproceedings{Smallwood2009a, author = {Smallwood, Scott and Cook, Perry R. and Trueman, Dan and McIntyre, Lawrence}, title = {Don't Forget the Loudspeaker --- A History of Hemispherical Speakers at Princeton , Plus a DIY Guide}, pages = {110--115}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177679}, url = {http://www.nime.org/proceedings/2009/nime2009_110.pdf}, keywords = {loudspeakers, hemispherical speakers, sonic display systems, laptop orchestras. } }
Adrian Freed and Andrew Schmeder. 2009. Features and Future of Open Sound Control version 1.1 for NIME. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 116–120. http://doi.org/10.5281/zenodo.1177517
Abstract
Download PDF DOI
The history and future of Open Sound Control (OSC) is discussed and the next iteration of the OSC specification is introduced with discussion of new features to support NIME community activities. The roadmap to a major revision of OSC is developed.
@inproceedings{Freed2009a, author = {Freed, Adrian and Schmeder, Andrew}, title = {Features and Future of Open Sound Control version 1.1 for NIME}, pages = {116--120}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177517}, url = {http://www.nime.org/proceedings/2009/nime2009_116.pdf}, keywords = {Open Sound Control, Time Tag, OSC, Reservation Protocols. } }
Andrew Schmeder and Adrian Freed. 2009. A Low-level Embedded Service Architecture for Rapid DIY Design of Real-time Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 121–124. http://doi.org/10.5281/zenodo.1177673
Abstract
Download PDF DOI
An on-the-fly reconfigurable low-level embedded service architecture is presented as a means to improve scalability, improve conceptual comprehensibility, reduce human error and reduce development time when designing new sensor-based electronic musical instruments with real-time responsiveness. The implementation of the concept in a project called micro-OSC is described. Other sensor interfacing products are evaluated in the context of DIY prototyping of musical instruments. The capabilities of the micro-OSC platform are demonstrated through a set of examples including resistive sensing, mixed digital-analog systems, many-channel sensor interfaces and time-based measurement methods.
@inproceedings{Schmeder2009, author = {Schmeder, Andrew and Freed, Adrian}, title = {A Low-level Embedded Service Architecture for Rapid DIY Design of Real-time Musical Instruments}, pages = {121--124}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177673}, url = {http://www.nime.org/proceedings/2009/nime2009_121.pdf}, keywords = {real-time musical interface, DIY design, embedded web services, rapid prototyping, reconfigurable firmware } }
Hans-Christoph Steiner. 2009. Firmata : Towards Making Microcontrollers Act Like Extensions of the Computer. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 125–130. http://doi.org/10.5281/zenodo.1177689
Abstract
Download PDF DOI
Firmata is a generic protocol for communicating with microcontrollers from software on a host computer. The central goal is to make the microcontroller an extension of the programming environment on the host computer in a manner that feels natural in that programming environment. It was designed to be open and flexible so that any programming environment can support it, and simple to implement both on the microcontroller and the host computer to ensure a wide range of implementations. The current reference implementation is a library for Arduino/Wiring and has been included with the Arduino software package since version 0012. There are matching software modules for a number of languages, like Pd, OpenFrameworks, Max/MSP, and Processing.
@inproceedings{Steiner2009, author = {Steiner, Hans-Christoph}, title = {Firmata : Towards Making Microcontrollers Act Like Extensions of the Computer}, pages = {125--130}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177689}, url = {http://www.nime.org/proceedings/2009/nime2009_125.pdf}, keywords = {arduino,microcontroller,nime09,processing,pure data} }
Marije A. Baalman, Harry C. Smoak, Christopher L. Salter, Joseph Malloch, and Marcelo M. Wanderley. 2009. Sharing Data in Collaborative, Interactive Performances : the SenseWorld DataNetwork. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 131–134. http://doi.org/10.5281/zenodo.1177471
Abstract
Download PDF DOI
The SenseWorld DataNetwork framework addresses the issue of sharing and manipulating multiple data streams among different media systems in a heterogeneous interactive performance environment. It is intended to facilitate the creation, rehearsal process and performance practice of collaborative interactive media art works, by making the sharing of data (from sensors or internal processes) between collaborators easier, faster and more flexible.
@inproceedings{Baalman2009a, author = {Baalman, Marije A. and Smoak, Harry C. and Salter, Christopher L. and Malloch, Joseph and Wanderley, Marcelo M.}, title = {Sharing Data in Collaborative, Interactive Performances : the SenseWorld DataNetwork}, pages = {131--134}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177471}, url = {http://www.nime.org/proceedings/2009/nime2009_131.pdf}, keywords = {Data exchange, collaborative performance, interactive performance, interactive art works, sensor data, OpenSoundControl, SuperCollider, Max/MSP} }
Nicolas Bouillot and Jeremy R. Cooperstock. 2009. Challenges and Performance of High-Fidelity Audio Streaming for Interactive Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 135–140. http://doi.org/10.5281/zenodo.1177485
Abstract
Download PDF DOI
Low-latency streaming of high-quality audio has the potential to dramatically transform the world of interactive musical applications. We provide methods for accurately measuring the end-to-end latency and audio quality of a delivered audio stream and apply these methods to an empirical evaluation of several streaming engines. In anticipation of future demands for emerging applications involving audio interaction, we also review key features of streaming engines and discuss potential challenges that remain to be overcome.
@inproceedings{Bouillot2009, author = {Bouillot, Nicolas and Cooperstock, Jeremy R.}, title = {Challenges and Performance of High-Fidelity Audio Streaming for Interactive Performances}, pages = {135--140}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177485}, url = {http://www.nime.org/proceedings/2009/nime2009_135.pdf}, keywords = {Networked Musical Performance, high-fidelity audio streaming, glitch detection, latency measurement } }
Todor Todoroff, Frédéric Bettens, Loïc Reboursière, and Wen-Yang Chu. 2009. “Extension du Corps Sonore” — Dancing Viola. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 141–146. http://doi.org/10.5281/zenodo.1177691
Abstract
Download PDF DOI
“Extension du corps sonore” is a long-term project initiated by Musiques Nouvelles [4], a contemporary music ensemble in Mons. It aims at giving instrumental music performers extended control over the sound of their instrument by extending the understanding of the sound body from the instrument alone to the combination of the instrument and the whole body of the performer. The development started at ARTeM and benefited from a three-month numediart research project [1] that focused on three axes of research: pre-processing of sensor data, gesture recognition and mapping through interpolation. The objectives were the development of computing methods and flexible Max/MSP externals to be later integrated in the ARTeM software framework for the concerts with viola player Dominica Eyckmans. They could be used in a variety of other artistic works and will be made available on the numediart website [1], where more detailed information can be found in the Quarterly Progress Scientific Report #4.
@inproceedings{Todoroff2009, author = {Todoroff, Todor and Bettens, Fr\'{e}d\'{e}ric and Reboursi\`{e}re, Lo\"{\i}c and Chu, Wen-Yang}, title = {''Extension du Corps Sonore'' --- Dancing Viola}, pages = {141--146}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177691}, url = {http://www.nime.org/proceedings/2009/nime2009_141.pdf}, keywords = {Sensor data pre-processing, gesture recognition, mapping, interpolation, extension du corps sonore } }
Colby Leider, Doug Mann, Daniel Plazas, Michael Battaglia, and Reid Draper. 2009. The elBo and footPad : Toward Personalized Hardware for Audio Manipulation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 147–148. http://doi.org/10.5281/zenodo.1177617
Abstract
Download PDF DOI
We describe initial prototypes and a design strategy for new, user-customized audio-manipulation and editing tools. These tools are designed to enable intuitive control of audio-processing tasks while anthropomorphically matching the target user.
@inproceedings{Leider2009a, author = {Leider, Colby and Mann, Doug and Plazas, Daniel and Battaglia, Michael and Draper, Reid}, title = {The elBo and footPad : Toward Personalized Hardware for Audio Manipulation}, pages = {147--148}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177617}, url = {http://www.nime.org/proceedings/2009/nime2009_147.pdf}, keywords = {user modeling, user customization } }
Langdon Crawford and William D. Fastenow. 2009. The Midi-AirGuitar , A serious Musical Controller with a Funny Name Music Technology Program. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 149–150. http://doi.org/10.5281/zenodo.1177495
Abstract
Download PDF DOI
The MIDI-Airguitar is a hand-held musical controller based on Force Sensing Resistor (FSR) and accelerometer technology. The hardware and software implementation of the MIDI-Airguitar is described below. Current practices of the authors in performance are discussed.
@inproceedings{Crawford2009, author = {Crawford, Langdon and Fastenow, William D.}, title = {The Midi-AirGuitar , A serious Musical Controller with a Funny Name Music Technology Program}, pages = {149--150}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177495}, url = {http://www.nime.org/proceedings/2009/nime2009_149.pdf}, keywords = {nime09} }
Niels Böttcher and Smilen Dimitrov. 2009. An Early Prototype of the Augmented PsychoPhone. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 151–152. http://doi.org/10.5281/zenodo.1177467
Abstract
Download PDF DOI
In this poster we present the early prototype of the augmented Psychophone — a saxophone with various applied sensors, allowing the saxophone player to attach effects like pitch shifting, wah-wah and ring modulation to the saxophone, simply by moving the saxophone as one would do when really being enthusiastic and involved in the performance. Scratching on previously recorded sound is also possible directly on the saxophone.
@inproceedings{Bottcher2009, author = {B\"{o}ttcher, Niels and Dimitrov, Smilen}, title = {An Early Prototype of the Augmented PsychoPhone}, pages = {151--152}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177467}, url = {http://www.nime.org/proceedings/2009/nime2009_151.pdf}, keywords = {Augmented saxophone, Physical computing, hyper instruments, mapping. } }
Diana Siwiak, Jonathan Berger, and Yao Yang. 2009. Catch Your Breath — Musical Biofeedback for Breathing Regulation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 153–154. http://doi.org/10.5281/zenodo.1177675
Abstract
Download PDF DOI
Catch Your Breath is an interactive audiovisual bio-feedback system adapted from a project designed to reduce respiratory irregularity in patients undergoing 4D CT scans for oncological diagnosis. The system is currently implemented and assessed as a potential means to reduce motion-induced distortion in CT images. A museum installation based on the same principle was created in which an inexpensive wall-mounted web camera tracks an IR sensor embedded in a pendant worn by the user. The motion of the subject's breathing is tracked and interpreted as a real-time variable tempo adjustment to a stored musical file. The subject can then adjust his/her breathing to synchronize with a separate accompaniment line. When the breathing is regular and at the desired tempo, the audible result sounds synchronous and harmonious. The accompaniment's tempo progresses and gradually decreases, which causes the breathing to synchronize and slow down, thus increasing relaxation.
@inproceedings{Siwiak2009, author = {Siwiak, Diana and Berger, Jonathan and Yang, Yao}, title = {Catch Your Breath --- Musical Biofeedback for Breathing Regulation}, pages = {153--154}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177675}, url = {http://www.nime.org/proceedings/2009/nime2009_153.pdf}, keywords = {sensor, music, auditory display. } }
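The breathing-to-tempo mapping described in the Catch Your Breath entry above can be illustrated with a small sketch. The code below is not from the paper; it assumes a hypothetical stream of detected inhale onsets (e.g. from the camera tracking of the IR pendant) and simply maps the measured breathing period onto a playback tempo for the stored musical file. The target period and tempo range are invented for the example.

```python
TARGET_PERIOD_S = 6.0   # assumed breath period the accompaniment converges to
BASE_TEMPO_BPM = 72.0   # assumed tempo of the stored musical file at its nominal rate

def tempo_from_breath(period_s, target_s=TARGET_PERIOD_S, base_bpm=BASE_TEMPO_BPM):
    """Map a measured breathing period to a playback tempo.

    Breathing slower than the target lowers the tempo; breathing faster
    raises it, so the accompaniment audibly follows the listener.
    """
    ratio = target_s / max(period_s, 0.5)      # guard against spurious short periods
    ratio = min(max(ratio, 0.5), 2.0)          # clamp to a musically sensible range
    return base_bpm * ratio

def breath_periods(onset_times):
    """Turn a sequence of detected inhale onsets (seconds) into breath periods."""
    return [t1 - t0 for t0, t1 in zip(onset_times, onset_times[1:])]

if __name__ == "__main__":
    # Simulated inhale onsets, standing in for the camera tracker output.
    onsets = [0.0, 5.1, 10.4, 16.2, 22.5]
    for period in breath_periods(onsets):
        print(f"period {period:4.1f} s -> tempo {tempo_from_breath(period):5.1f} BPM")
```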
Lijuan Peng and David Gerhard. 2009. A Wii-Based Gestural Interface for Computer Conducting Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 155–156. http://doi.org/10.5281/zenodo.1177659
Abstract
Download PDF DOI
With the increase of sales of Wii game consoles, it is becoming commonplace for the Wii remote to be used as an alternative input device for other computer systems. In this paper, we present a system which makes use of the infrared camera within the Wii remote to capture the gestures of a conductor using a baton with an infrared LED and battery. Our system then performs data analysis with gesture classification and following, and finally displays the gestures using visual baton trajectories and audio feedback. Gesture trajectories are displayed in real time and can be compared to the corresponding diagram shown in a textbook. In addition, since a conductor normally does not look at a screen while conducting, tones are played to represent a certain beat in a conducting gesture. Further, the system can be controlled entirely with the baton, removing the need to switch from baton to mouse. The interface is intended to be used for pedagogy purposes.
@inproceedings{Peng2009, author = {Peng, Lijuan and Gerhard, David}, title = {A Wii-Based Gestural Interface for Computer Conducting Systems}, pages = {155--156}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177659}, url = {http://www.nime.org/proceedings/2009/nime2009_155.pdf}, keywords = {Conducting, Gesture, Infrared, Learning, Wii. } }
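As a rough illustration of the kind of trajectory analysis described in the conducting-interface entry above (not the authors' implementation), the sketch below marks beat points in a stream of 2D baton positions, such as those reported by the Wii remote's infrared camera, by looking for local minima in the vertical coordinate. The coordinate convention and threshold are assumptions.

```python
def detect_beats(points, min_drop=0.02):
    """Return indices of beat points in a list of (x, y) baton positions.

    A beat is taken to be a local minimum of the vertical coordinate that is
    preceded by a downward movement of at least `min_drop` (normalized units).
    """
    beats = []
    for i in range(1, len(points) - 1):
        prev_y, y, next_y = points[i - 1][1], points[i][1], points[i + 1][1]
        if y < prev_y and y <= next_y and (prev_y - y) >= min_drop:
            beats.append(i)
    return beats

if __name__ == "__main__":
    # Simulated normalized IR positions covering two downward beat gestures.
    trajectory = [(0.5, 0.60), (0.5, 0.45), (0.5, 0.30), (0.5, 0.42),
                  (0.6, 0.55), (0.6, 0.38), (0.6, 0.25), (0.6, 0.40)]
    print("beat frames:", detect_beats(trajectory))
```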
Dale E. Parson. 2009. Chess-Based Composition and Improvisation for Non-Musicians. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 157–158. http://doi.org/10.5281/zenodo.1177653
Abstract
Download PDF DOI
“Music for 32 Chess Pieces” is a software system that supports composing, performing and improvising music by playing a chess game. A game server stores a representation of the state of a game, validates proposed moves by players, updates game state, and extracts a graph of piece-to-piece relationships. It also loads a plugin code module that acts as a composition. A plugin maps pieces and relationships on the board, such as support or attack relationships, to a timed sequence of notes and accents. The server transmits notes in a sequence to an audio renderer process via network datagrams. Two players can perform a composition by playing chess, and a player can improvise by adjusting a plugin’s music mapping parameters via a graphical user interface. A composer can create a new composition by writing a new plugin that uses a distinct algorithm for mapping game rules and states to music. A composer can also write a new note-to-sound mapping program in the audio renderer language. This software is available at http://faculty.kutztown.edu/parson/music/ParsonMusic.html.
@inproceedings{Parson2009, author = {Parson, Dale E.}, title = {Chess-Based Composition and Improvisation for Non-Musicians}, pages = {157--158}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177653}, url = {http://www.nime.org/proceedings/2009/nime2009_157.pdf}, keywords = {algorithmic composition, chess, ChucK, improvisation, Max/MSP, SuperCollider. } }
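To make the plugin idea in the chess entry above concrete, here is a small illustrative mapping (not the author's code) from piece positions and attack relationships to note events; the pitch and accent rules are invented for the example.

```python
# Illustrative chess-to-music mapping: each piece contributes a note whose pitch
# depends on its square, and pieces that attack another piece are accented.
PIECE_BASE = {"P": 60, "N": 62, "B": 64, "R": 67, "Q": 69, "K": 71}  # MIDI roots (assumed)

def square_to_offset(square):
    """Map an algebraic square like 'e4' to a pitch offset (file index + rank index)."""
    file_idx = ord(square[0]) - ord("a")      # 0..7
    rank_idx = int(square[1]) - 1             # 0..7
    return file_idx + rank_idx

def board_to_notes(board, attacking_squares):
    """board: {square: piece letter}; attacking_squares: squares whose piece attacks another."""
    notes = []
    for square, piece in sorted(board.items()):
        pitch = PIECE_BASE[piece.upper()] + square_to_offset(square)
        velocity = 110 if square in attacking_squares else 70   # accent attacking pieces
        notes.append({"square": square, "pitch": pitch, "velocity": velocity})
    return notes

if __name__ == "__main__":
    board = {"e4": "P", "f3": "N", "c4": "B"}
    attacking = {"f3"}          # e.g. the knight currently attacks an enemy piece
    for note in board_to_notes(board, attacking):
        print(note)
```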
Andy Dolphin. 2009. MagNular : Symbolic Control of an External Sound Engine Using an Animated Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 159–160. http://doi.org/10.5281/zenodo.1177499
Abstract
Download PDF DOI
This paper reports on work in progress on the creative project MagNular, part of a wider practical study of the potential collaborative compositional applications of game engine technologies. MagNular is a sound toy utilizing computer game and physics engine technologies to create an animated interface used in conjunction with an external sound engine developed within Max/MSP. The player controls virtual magnets that attract or repel numerous particle objects, moving them freely around the virtual space. Particle object collision data is mapped to control sound onsets and synthesis/DSP (Digital Signal Processing) parameters. The user "composes" by controlling and influencing the simulated physical behaviors of the particle objects within the animated interface.
@inproceedings{Dolphin2009a, author = {Dolphin, Andy}, title = {MagNular : Symbolic Control of an External Sound Engine Using an Animated Interface}, pages = {159--160}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177499}, url = {http://www.nime.org/proceedings/2009/nime2009_159.pdf}, keywords = {Sound Toys, Open Work, Game Engines, Animated Interfaces, Max/MSP. } }
Noah Feehan. 2009. Audio Orienteering – Navigating an Invisible Terrain. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 161–162. http://doi.org/10.5281/zenodo.1177505
Abstract
Download PDF DOI
AUDIO ORIENTEERING is a collaborative performance environment in which physical tokens are used to navigate an invisible sonic landscape. In this paper, I describe the hardware and software used to implement a prototype audio terrain with multiple interaction modes and sonic behaviors mapped onto three-dimensional space.
@inproceedings{Feehan2009, author = {Feehan, Noah}, title = {Audio Orienteering -- Navigating an Invisible Terrain}, pages = {161--162}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177505}, url = {http://www.nime.org/proceedings/2009/nime2009_161.pdf}, keywords = {wii, 3-d positioning, audio terrain, collaborative performance. } }
Staas de Jong. 2009. Developing the Cyclotactor. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 163–164. http://doi.org/10.5281/zenodo.1177591
Abstract
Download PDF DOI
This paper presents developments in the technology underlying the cyclotactor, a finger-based tactile I/O device for musical interaction. These include significant improvements both in the basic characteristics of tactile interaction and in the related (vibro)tactile sample rates, latencies, and timing precision. After presenting the new prototype’s tactile output force landscape, some of the new possibilities for interaction are discussed, especially those for musical interaction with zero audio/tactile latency.
@inproceedings{DeJong2009, author = {de Jong, Staas}, title = {Developing the Cyclotactor}, pages = {163--164}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177591}, url = {http://www.nime.org/proceedings/2009/nime2009_163.pdf}, keywords = {Musical controller, tactile interface. } }
Sébastien Schiesser. 2009. midOSC : a Gumstix-Based MIDI-to-OSC Converter. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 165–168. http://doi.org/10.5281/zenodo.1177669
Abstract
Download PDF DOI
A MIDI-to-OSC converter is implemented on a commercially available embedded Linux system, tightly integrated with a microcontroller. A layered method is developed which permits the conversion of serial data such as MIDI to OSC-formatted network packets with an overall system latency below 5 milliseconds for common MIDI messages. The Gumstix embedded computer provides an interesting and modular platform for the development of such embedded applications. The project shows great potential to evolve into a generic sensors-to-OSC ethernet converter which should be very useful for artistic purposes and could be used as a fast prototyping interface for gesture acquisition devices.
@inproceedings{Schiesser2009, author = {Schiesser, S\'{e}bastien}, title = {midOSC : a Gumstix-Based {MIDI-to-OSC} Converter}, pages = {165--168}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177669}, url = {http://www.nime.org/proceedings/2009/nime2009_165.pdf}, keywords = {MIDI, Open Sound Control, converter, gumstix } }
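The conversion step described in the midOSC entry above can be sketched on a general-purpose computer as follows. This is not the Gumstix firmware; it assumes the third-party `mido` and `python-osc` packages, and the OSC address scheme is invented for the example.

```python
# Minimal MIDI-to-OSC bridge sketch (not the midOSC implementation).
# Requires: pip install mido python-rtmidi python-osc
import mido
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # OSC destination (assumed)

def forward(msg):
    """Translate a few common MIDI message types into OSC messages."""
    if msg.type in ("note_on", "note_off"):
        client.send_message(f"/midi/{msg.type}/{msg.channel}", [msg.note, msg.velocity])
    elif msg.type == "control_change":
        client.send_message(f"/midi/cc/{msg.channel}/{msg.control}", msg.value)

if __name__ == "__main__":
    with mido.open_input() as port:           # opens the default MIDI input port
        for msg in port:                      # blocks, yielding incoming messages
            forward(msg)
```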
Yoichi Nagashima. 2009. Parallel Processing System Design with "Propeller" Processor. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 169–170. http://doi.org/10.5281/zenodo.1177635
Abstract
Download PDF DOI
This is a technical and experimental report of parallel processing using the "Propeller" chip. Its eight 32-bit processors (cogs) can operate simultaneously, either independently or cooperatively, sharing common resources through a central hub. I introduce this unique processor and discuss the possibility of developing interactive systems and smart interfaces in media arts, because NIME-related systems and installations require many kinds of tasks to run at the same time. I will report on (1) the Propeller chip and its powerful IDE, (2) external interfaces for analog/digital inputs/outputs, (3) VGA/NTSC/PAL video generation, (4) audio signal processing, and (5) an originally-developed MIDI input/output method. I also introduce three experimental prototype systems.
@inproceedings{Nagashima2009, author = {Nagashima, Yoichi}, title = {Parallel Processing System Design with "Propeller" Processor}, pages = {169--170}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177635}, url = {http://www.nime.org/proceedings/2009/nime2009_169.pdf}, keywords = {Propeller, parallel processing, MIDI, sensor, interfaces. } }
A. Cavan Fyans, Michael Gurevich, and Paul Stapleton. 2009. Where Did It All Go Wrong ? A Model of Error From the Spectator’s Perspective. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 171–172. http://doi.org/10.5281/zenodo.1177519
Abstract
Download PDF DOI
The development of new interfaces for musical expression has created a need to study how spectators comprehend new performance technologies and practices. As part of a larger project examining how interactions with technology can be communicated with the spectator, we relate our model of spectator understanding of error to the NIME discourse surrounding transparency, mapping, skill and success.
@inproceedings{Fyans2009, author = {Fyans, A. Cavan and Gurevich, Michael and Stapleton, Paul}, title = {Where Did It All Go Wrong ? A Model of Error From the Spectator's Perspective}, pages = {171--172}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177519}, url = {http://www.nime.org/proceedings/2009/nime2009_171.pdf}, keywords = {performance, skill, transparency, design, HCI } }
Nicolas d’Alessandro and Thierry Dutoit. 2009. Advanced Techniques for Vertical Tablet Playing A Overview of Two Years of Practicing the HandSketch 1.x. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 173–174. http://doi.org/10.5281/zenodo.1177465
Abstract
Download PDF DOI
In this paper we present new issues and challenges related to the vertical tablet playing. The approach is based on a previously presented instrument, the HANDSKETCH. This instrument has now been played regularly for more than two years by several performers. Therefore this is an opportunity to propose a better understanding of the performing strategy. We present the behavior of the whole body as an underlying aspect in the manipulation of the instrument.
@inproceedings{dAlessandro2009, author = {d'Alessandro, Nicolas and Dutoit, Thierry}, title = {Advanced Techniques for Vertical Tablet Playing A Overview of Two Years of Practicing the HandSketch 1.x}, pages = {173--174}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177465}, url = {http://www.nime.org/proceedings/2009/nime2009_173.pdf}, keywords = {graphic tablet, playing position, techniques } }
Andreas Höofer, Aristotelis Hadjakos, and Max Mühlhäuser. 2009. Gyroscope-Based Conducting Gesture Recognition. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 175–176. http://doi.org/10.5281/zenodo.1177565
Abstract
Download PDF DOI
This paper describes a method for classification of different beat gestures within traditional beat patterns based on gyroscope data and machine learning techniques and provides a quantitative evaluation.
@inproceedings{Hoofer2009, author = {H\"{o}ofer, Andreas and Hadjakos, Aristotelis and M\"{u}hlh\"{a}user, Max}, title = {Gyroscope-Based Conducting Gesture Recognition}, pages = {175--176}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177565}, url = {http://www.nime.org/proceedings/2009/nime2009_175.pdf}, keywords = {nime09} }
Edgar Berdahl, Günter Niemeyer, and Julius O. Smith. 2009. Using Haptics to Assist Performers in Making Gestures to a Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 177–182. http://doi.org/10.5281/zenodo.1177481
Abstract
Download PDF DOI
Haptic technology, providing force cues and creating a programmable physical instrument interface, can assist musicians in making gestures. The finite reaction time of the human motor control system implies that the execution of a brief musical gesture does not rely on immediate feedback from the senses, rather it is preprogrammed to some degree. Consequently, we suggest designing relatively simple and deterministic interfaces for providing haptic assistance. In this paper, we consider the specific problem of assisting a musician in selecting pitches from a continuous range. We build on a prior study by O’Modhrain of the accuracy of pitches selected by musicians on a Theremin-like haptic interface. To improve the assistance, we augment the interface with programmed detents so that the musician can feel the locations of equal tempered pitches. Nevertheless, the musician can still perform arbitrary pitch inflections such as glissandi, falls, and scoops. We investigate various forms of haptic detents, including fixed detent levels and force-sensitive detent levels. Preliminary results from a subject test confirm improved accuracy in pitch selection brought about by detents.
@inproceedings{Berdahl2009b, author = {Berdahl, Edgar and Niemeyer, G\"{u}nter and Smith, Julius O.}, title = {Using Haptics to Assist Performers in Making Gestures to a Musical Instrument}, pages = {177--182}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177481}, url = {http://www.nime.org/proceedings/2009/nime2009_177.pdf}, keywords = {Haptic, detent, pitch selection, human motor system, feedback control, response time, gravity well } }
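The detent idea in the haptic-assistance entry above can be summarized with a toy force law (my own simplification, not the authors' controller): the rendered force pulls the player's hand toward the nearest equal-tempered pitch position, scaled by a detent strength that a force-sensitive variant could modulate. The spacing and stiffness values are assumptions.

```python
SEMITONE_SPACING_M = 0.02   # assumed physical spacing between adjacent semitone positions

def detent_force(position_m, strength_n_per_m=40.0, spacing=SEMITONE_SPACING_M):
    """Spring-like force (N) pulling toward the nearest equal-tempered detent.

    position_m : hand/end-effector position along the pitch axis, in metres.
    strength_n_per_m : detent stiffness; a force-sensitive variant could scale
                       this value by how hard the player presses.
    """
    nearest_detent = round(position_m / spacing) * spacing
    return -strength_n_per_m * (position_m - nearest_detent)

if __name__ == "__main__":
    for x in (0.000, 0.004, 0.011, 0.019):
        print(f"x = {x:.3f} m -> force {detent_force(x):+6.2f} N")
```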
Edgar Berdahl, Günter Niemeyer, and Julius O. Smith. 2009. Using Haptic Devices to Interface Directly with Digital Waveguide-Based Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 183–186. http://doi.org/10.5281/zenodo.1177479
Abstract
Download PDF DOI
A haptic musical instrument is an electronic musical instrument that provides the musician not only with audio feedback but also with force feedback. By programming feedback controllers to emulate the laws of physics, many haptic musical instruments have been previously designed that mimic real acoustic musical instruments. The controller programs have been implemented using finite difference and (approximate) hybrid digital waveguide models. We present a novel method for constructing haptic musical instruments in which a haptic device is directly interfaced with a conventional digital waveguide model by way of a junction element, improving the quality of the musician’s interaction with the virtual instrument. We introduce both the explicit digital waveguide control junction and the implicit digital waveguide control junction.
@inproceedings{Berdahl2009a, author = {Berdahl, Edgar and Niemeyer, G\"{u}nter and Smith, Julius O.}, title = {Using Haptic Devices to Interface Directly with Digital Waveguide-Based Musical Instruments}, pages = {183--186}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177479}, url = {http://www.nime.org/proceedings/2009/nime2009_183.pdf}, keywords = {haptic musical instrument, digital waveguide, control junction, explicit, implicit, teleoperation } }
Mark Havryliv, Fazel Naghdy, Greg Schiemer, and Timothy Hurd. 2009. Haptic Carillon – Analysis & Design of the Carillon Mechanism. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 187–192. http://doi.org/10.5281/zenodo.1177569
Abstract
Download PDF DOI
The carillon is one of the few instruments that elicit sophisticated haptic interaction from amateur and professional players alike. Like the piano keyboard, the velocity of a player’s impact on each carillon key, or baton, affects the quality of the resultant tone; unlike the piano, each carillon baton returns a different force-feedback. Force-feedback varies widely from one baton to the next across the entire range of the instrument and with further idiosyncratic variation from one instrument to another. This makes the carillon an ideal candidate for haptic simulation. The application of synthesized force-feedback based on an analysis of forces operating in a typical carillon mechanism offers a blueprint for the design of an electronic practice clavier and with it the solution to a problem that has vexed carillonists for centuries, namely the inability to rehearse repertoire in private. This paper will focus on design and implementation of a haptic carillon clavier derived from an analysis of the Australian National Carillon in Canberra.
@inproceedings{Havryliv2009, author = {Havryliv, Mark and Naghdy, Fazel and Schiemer, Greg and Hurd, Timothy}, title = {Haptic Carillon -- Analysis \& Design of the Carillon Mechanism}, pages = {187--192}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177569}, url = {http://www.nime.org/proceedings/2009/nime2009_187.pdf}, keywords = {Haptics, force-feedback, mechanical analysis. } }
Hans Leeuw. 2009. The Electrumpet , a Hybrid Electro-Acoustic Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 193–198. http://doi.org/10.5281/zenodo.1177613
Abstract
Download PDF DOI
The Electrumpet is an enhancement of a normal trumpet with a variety of electronic sensors and buttons. It is a new hybrid instrument that facilitates simultaneous acoustic and electronic playing. The normal playing skills of a trumpet player apply to the new instrument. The placing of the buttons and sensors is not a hindrance to acoustic use of the instrument and they are conveniently located. The device can be easily attached to and detached from a normal Bb-trumpet. The device has a wireless connection with the computer through Bluetooth-serial (Arduino). Audio and data processing in the computer is effected by three separate instances of MAX/MSP connected through OSC (controller data) and Soundflower (sound data). The current prototype consists of 7 analogue sensors (4 valve-like potentiometers, 2 pressure sensors, 1 "Ribbon" controller) and 9 digital switches. An LCD screen that is controlled by a separate Arduino (mini) is attached to the trumpet and displays the current controller settings that are sent through a serial connection.
@inproceedings{Leeuw2009, author = {Leeuw, Hans}, title = {The Electrumpet , a Hybrid Electro-Acoustic Instrument}, pages = {193--198}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177613}, url = {http://www.nime.org/proceedings/2009/nime2009_193.pdf}, keywords = {Trumpet, multiple Arduinos, Bluetooth, LCD, low latency, OSC, MAX/MSP. } }
Emmanuelle Gallin and Marc Sirguy. 2009. Sensor Technology and the Remaking of Instruments from the Past. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 199–202. http://doi.org/10.5281/zenodo.1177521
Abstract
Download PDF DOI
Starting from a parallelism between the effervescence of the 1920s in the exploration of new ways of controlling music and the current revolution in the design of new control possibilities, this paper aims to explore the possibilities of rethinking instruments from the past towards instruments of the future. Through three examples (the experience of the Persephone, the design of the Persephone2 and the four-string ribbon cello project), I will explore the contemporary notion of “instruments of the future” vs. controls that people expect from such instruments nowadays.
@inproceedings{Gallin2009, author = {Gallin, Emmanuelle and Sirguy, Marc}, title = {Sensor Technology and the Remaking of Instruments from the Past}, pages = {199--202}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177521}, url = {http://www.nime.org/proceedings/2009/nime2009_199.pdf}, keywords = {Controller, Sensor, MIDI, USB, Computer Music, ribbon controllers, ribbon cello. } }
Sarah Nicolls. 2009. Twenty-First Century Piano. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 203–206. http://doi.org/10.5281/zenodo.1177641
Abstract
Download PDF DOI
“The reinvigoration of the role of the human body” - as John Richards recently described trends in using homemade electronics to move away from laptop performance [1] - is mirrored in an ambition of instrumentalists to interact more closely with the electronic sounds they are helping to create. For these players, there has often been a one-way street of the ‘instrument feeds MAX patch’ paradigm and arguments are made here for more complete performance feedback systems. Instrumentalists come to the question of interactivity with a whole array of gestures, sounds and associations already in place, so must choose carefully the means by which the instrumental performance is augmented. Frances-Marie Uitti [2] is a pioneer in the field, creating techniques to amplify the cellist’s innate performative gestures and in parallel developing the instrument. This paper intends to give an overview of the author’s work in developing interactivity in piano performance, mechanical augmentation of the piano and possible structural developments of the instrument to bring it into the twenty-first century.
@inproceedings{Nicolls2009, author = {Nicolls, Sarah}, title = {Twenty-First Century Piano}, pages = {203--206}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177641}, url = {http://www.nime.org/proceedings/2009/nime2009_203.pdf}, keywords = {sensor, gestural, technology, performance, piano, motors, interactive } }
Andrew Johnston, Linda Candy, and Ernest Edmonds. 2009. Designing for Conversational Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 207–212. http://doi.org/10.5281/zenodo.1177585
Abstract
Download PDF DOI
In this paper we describe an interaction framework which classifies musicians’ interactions with virtual musical instruments into three modes: instrumental, ornamental and conversational. We argue that conversational interactions are the most difficult to design for, but also the most interesting. To illustrate our approach to designing for conversational interactions we describe the performance work Partial Reflections 3 for two clarinets and interactive software. This software uses simulated physical models to create a virtual sound sculpture which both responds to and produces sounds and visuals.
@inproceedings{Johnston2009, author = {Johnston, Andrew and Candy, Linda and Edmonds, Ernest}, title = {Designing for Conversational Interaction}, pages = {207--212}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177585}, url = {http://www.nime.org/proceedings/2009/nime2009_207.pdf}, keywords = {Music, instruments, interaction. } }
Michael Gurevich, Paul Stapleton, and Peter Bennett. 2009. Designing for Style in New Musical Interactions. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 213–217. http://doi.org/10.5281/zenodo.1177563
Abstract
Download PDF DOI
In this paper we discuss the concept of style, focusing in particular on methods of designing new instruments that facilitate the cultivation and recognition of style. We distinguish between style and structure of an interaction and discuss the significance of this formulation within the context of NIME. Two workshops that were conducted to explore style in interaction design are described, from which we identify elements of style that can inform and influence the design process. From these, we suggest steps toward designing for style in new musical interactions.
@inproceedings{Gurevich2009, author = {Gurevich, Michael and Stapleton, Paul and Bennett, Peter}, title = {Designing for Style in New Musical Interactions}, pages = {213--217}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177563}, url = {http://www.nime.org/proceedings/2009/nime2009_213.pdf}, keywords = {expression, style, structure, skill, virtuosity } }
Perry R. Cook. 2009. Re-Designing Principles for Computer Music Controllers : a Case Study of SqueezeVox Maggie. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 218–221. http://doi.org/10.5281/zenodo.1177493
Abstract
Download PDF DOI
This paper revisits/extends “Principles for Designing Computer Music Controllers” (NIME 2001), subsequently updated in a NIME 2007 keynote address. A redesign of SqueezeVox Maggie (a reoccurring NIME character) is used as an example of which principles have held fast over the years, and which have changed due to advances in technology. A few new principles are also added to the list.
@inproceedings{Cook2009, author = {Cook, Perry R.}, title = {Re-Designing Principles for Computer Music Controllers : a Case Study of SqueezeVox Maggie}, pages = {218--221}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177493}, url = {http://www.nime.org/proceedings/2009/nime2009_218.pdf}, keywords = {HCI, Composed Instruments, Voice Synthesis, Wireless, Batteries, Laptop Orchestras, SenSAs.} }
Jaroslaw Kapuscinski and Javier Sanchez. 2009. Interfacing Graphic and Musical Elements in Counterlines. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 222–225. http://doi.org/10.5281/zenodo.1177597
Abstract
Download PDF DOI
This paper reports on initial stages of research leading to the development of an intermedia performance Counterlines — a duet for Disklavier and Wacom Cintiq, in which both performers generate audiovisual gestures that relate to each other contrapuntally. The pianist generates graphic elements while playing music and the graphic performer generates piano notes by drawing lines. The paper focuses on interfacing sounds and images performed by the pianist. It provides rationale for the choice of materials of great simplicity and describes our approach to mapping.
@inproceedings{Kapuscinski2009, author = {Kapuscinski, Jaroslaw and Sanchez, Javier}, title = {Interfacing Graphic and Musical Elements in Counterlines}, pages = {222--225}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177597}, url = {http://www.nime.org/proceedings/2009/nime2009_222.pdf}, keywords = {intermedia, Disklavier, piano, Wacom Cintiq, mapping, visual music } }
Richard Polfreman. 2009. FrameWorks 3D : Composition in the Third Dimension. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 226–229. http://doi.org/10.5281/zenodo.1177661
Abstract
Download PDF DOI
Music composition on computer is a challenging task, involving a range of data types to be managed within a single software tool. A composition typically comprises a complex arrangement of material, with many internal relationships between data in different locations: repetition, inversion, retrograde, reversal and more sophisticated transformations. The creation of such complex artefacts is labour intensive, and current systems typically place a significant cognitive burden on the composer in terms of maintaining a work as a coherent whole. FrameWorks 3D is an attempt to improve support for composition tasks within a Digital Audio Workstation (DAW) style environment via a novel three-dimensional (3D) user-interface. In addition to the standard paradigm of tracks, regions and tape recording analogy, FrameWorks displays hierarchical and transformational information in a single, fully navigable workspace. The implementation combines Java with Max/MSP to create a cross-platform, user-extensible package and will be used to assess the viability of such a tool and to develop the ideas further.
@inproceedings{Polfreman2009, author = {Polfreman, Richard}, title = {FrameWorks {3D} : Composition in the Third Dimension}, pages = {226--229}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177661}, url = {http://www.nime.org/proceedings/2009/nime2009_226.pdf}, keywords = {Digital Audio Workstation, graphical user-interfaces, 3D graphics, Max/MSP, Java. } }
Adrian Freed. 2009. Novel and Forgotten Current-steering Techniques for Resistive Multitouch, Duotouch, and Polytouch Position Sensing with Pressure. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 230–235. http://doi.org/10.5281/zenodo.1177515
Abstract
Download PDF DOI
A compendium of foundational circuits for interfacing resistive pressure and position sensors is presented with example applications for music controllers and tangible interfaces.
@inproceedings{Freed2009, author = {Freed, Adrian}, title = {Novel and Forgotten Current-steering Techniques for Resistive Multitouch, Duotouch, and Polytouch Position Sensing with Pressure}, pages = {230--235}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177515}, url = {http://www.nime.org/proceedings/2009/nime2009_230.pdf}, keywords = {Piezoresistive Touch Sensor Pressure Sensing Current Steering Multitouch. } }
Randy Jones, Peter Driessen, Andrew Schloss, and George Tzanetakis. 2009. A Force-Sensitive Surface for Intimate Control. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 236–241. http://doi.org/10.5281/zenodo.1177589
Abstract
Download PDF DOI
This paper presents a new force-sensitive surface designed for playing music. A prototype system has been implemented using a passive capacitive sensor, a commodity multichannel audio interface, and decoding software running on a laptop computer. This setup has been a successful, low-cost route to a number of experiments in intimate musical control.
@inproceedings{Jones2009a, author = {Jones, Randy and Driessen, Peter and Schloss, Andrew and Tzanetakis, George}, title = {A Force-Sensitive Surface for Intimate Control}, pages = {236--241}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177589}, url = {http://www.nime.org/proceedings/2009/nime2009_236.pdf}, keywords = {Multitouch, sensors, tactile, capacitive, percussion controllers. } }
Greg Kellum and Alain Crevoisier. 2009. A Flexible Mapping Editor for Multi-touch Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 242–245. http://doi.org/10.5281/zenodo.1177601
Abstract
Download PDF DOI
This paper introduces a flexible mapping editor, which transforms multi-touch devices into musical instruments. The editor enables users to create interfaces by dragging and dropping components onto the interface and attaching actions to them, which will be executed when certain user-defined conditions obtain. The editor receives touch information via the non-proprietary communication protocol, TUIO [9], and can, therefore, be used together with a variety of different multi-touch input devices.
@inproceedings{Kellum2009, author = {Kellum, Greg and Crevoisier, Alain}, title = {A Flexible Mapping Editor for Multi-touch Musical Instruments}, pages = {242--245}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177601}, url = {http://www.nime.org/proceedings/2009/nime2009_242.pdf}, keywords = {NIME, multi-touch, multi-modal interface, sonic interaction design. } }
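Because the mapping editor in the entry above receives touches over TUIO, which is carried as OSC messages, a minimal receiver can be sketched with the `python-osc` package. The `/tuio/2Dcur` address is the standard TUIO 2D cursor profile, but the action function and the note mapping here are invented for illustration and are not part of the paper.

```python
# Minimal TUIO-over-OSC receiver sketch (requires: pip install python-osc).
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def trigger_note(x, y):
    """Invented example action: map a touch position to a pitch/velocity pair."""
    print(f"note: pitch={int(48 + x * 24)} vel={int(y * 127)}")

def tuio_handler(address, *args):
    """Handle TUIO 2D cursor messages; 'set' messages carry session id, x, y, ..."""
    if args and args[0] == "set":
        session_id, x, y = args[1], args[2], args[3]
        trigger_note(x, y)

if __name__ == "__main__":
    dispatcher = Dispatcher()
    dispatcher.map("/tuio/2Dcur", tuio_handler)      # standard TUIO cursor profile
    BlockingOSCUDPServer(("127.0.0.1", 3333), dispatcher).serve_forever()
```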
Chris Kiefer, Nick Collins, and Geraldine Fitzpatrick. 2009. Phalanger : Controlling Music Software With Hand Movement Using A Computer Vision and Machine Learning Approach. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 246–249. http://doi.org/10.5281/zenodo.1177603
Abstract
Download PDF DOI
Phalanger is a system which facilitates the control of music software with hand and finger motion, with the aim of creating a fluid style of interaction that promotes musicality. The system is purely video based, requires no wearables or accessories and uses affordable and accessible technology. It employs a neural network for background segmentation, a combination of imaging techniques for frame analysis, and a support vector machine (SVM) for recognition of hand positions. System evaluation showed the SVM to reliably differentiate between eight different classes. An initial formative user evaluation with ten musicians was carried out to help build a picture of how users responded to the system; this highlighted areas that need improvement and lent some insight into useful features for the next version.
@inproceedings{Kiefer2009, author = {Kiefer, Chris and Collins, Nick and Fitzpatrick, Geraldine}, title = {Phalanger : Controlling Music Software With Hand Movement Using A Computer Vision and Machine Learning Approach}, pages = {246--249}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177603}, url = {http://www.nime.org/proceedings/2009/nime2009_246.pdf}, keywords = {nime09} }
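The recognition stage described in the Phalanger entry above (an SVM classifying features extracted from segmented hand images) can be sketched with scikit-learn; the feature extraction below is a crude stand-in for the paper's imaging pipeline, and the toy training data is synthetic.

```python
# Sketch of an SVM hand-posture classifier (requires: pip install scikit-learn numpy).
import numpy as np
from sklearn.svm import SVC

def hand_features(mask):
    """Stand-in features for a segmented binary hand mask: area, centroid, aspect ratio."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros(4)
    area = len(xs) / mask.size
    cx, cy = xs.mean() / mask.shape[1], ys.mean() / mask.shape[0]
    aspect = (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1)
    return np.array([area, cx, cy, aspect])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = [1] * 20 + [0] * 20          # toy classes standing in for hand positions
    masks = [rng.random((32, 32)) < (0.6 if label else 0.2) for label in labels]
    X = np.stack([hand_features(m) for m in masks])
    clf = SVC(kernel="rbf").fit(X, labels)
    print("predicted class:", clf.predict([hand_features(masks[0])])[0])
```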
Teresa M. Nakra, Yuri Ivanov, Paris Smaragdis, and Chris Ault. 2009. The UBS Virtual Maestro : an Interactive Conducting System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 250–255. http://doi.org/10.5281/zenodo.1177637
Abstract
Download PDF DOI
The UBS Virtual Maestro is an interactive conducting system designed by Immersion Music to simulate the experience of orchestral conducting for the general public attending a classical music concert. The system utilizes the Wii Remote, which users hold and move like a conducting baton to affect the tempo and dynamics of an orchestral video/audio recording. The accelerometer data from the Wii Remote is used to control playback speed and volume in real-time. The system is housed in a UBS-branded kiosk that has toured classical performing arts venues throughout the United States and Europe in 2007 and 2008. In this paper we share our experiences in designing this standalone system for thousands of users, and lessons that we learned from the project.
@inproceedings{Nakra2009, author = {Nakra, Teresa M. and Ivanov, Yuri and Smaragdis, Paris and Ault, Chris}, title = {The UBS Virtual Maestro : an Interactive Conducting System}, pages = {250--255}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177637}, url = {http://www.nime.org/proceedings/2009/nime2009_250.pdf}, keywords = {conducting, gesture, interactive installations, Wii Remote } }
Elena Jessop. 2009. The Vocal Augmentation and Manipulation Prosthesis (VAMP): A Conducting-Based Gestural Controller for Vocal Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 256–259. http://doi.org/10.5281/zenodo.1177583
Abstract
Download PDF DOI
This paper describes The Vocal Augmentation and Manipulation Prosthesis (VAMP), a gesture-based wearable controller for live vocal performance. This controller allows a singer to capture and manipulate single notes that he or she sings, using a gestural vocabulary developed from that of choral conducting. By drawing from a familiar gestural vocabulary, this controller and the associated mappings can be more intuitive and expressive for both performer and audience.
@inproceedings{Jessop2009, author = {Jessop, Elena}, title = {The Vocal Augmentation and Manipulation Prosthesis (VAMP): A Conducting-Based Gestural Controller for Vocal Performance}, pages = {256--259}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177583}, url = {http://www.nime.org/proceedings/2009/nime2009_256.pdf}, keywords = {musical expressivity, vocal performance, gestural control, conducting. } }
Tomás Henriques. 2009. Double Slide Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 260–261. http://doi.org/10.5281/zenodo.1177571
Abstract
Download PDF DOI
The Double Slide Controller is a new electronic music instrument that departs from the slide trombone as a model for its design. Going much beyond a mere simulation of its acoustic counterpart, it introduces truly innovative features: two powerful and versatile sets of gesture-driven interfaces actuated by the hands of the performer, as well as two independent slides, one for each hand/arm of the musician. The combination of these features makes this instrument a great tool to explore new avenues in musical expression, given the many degrees of technical and musical complexity that can be achieved during its performance.
@inproceedings{Henriques2009, author = {Henriques, Tom\'{a}s}, title = {Double Slide Controller}, pages = {260--261}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177571}, url = {http://www.nime.org/proceedings/2009/nime2009_260.pdf}, keywords = {Musical Instrument, Sensor technologies, Computer Music, Hardware and Software Design.} }
Edgar Berdahl, Günter Niemeyer, and Julius O. Smith. 2009. HSP : A Simple and Effective Open-Source Platform for Implementing Haptic Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 262–263. http://doi.org/10.5281/zenodo.1177477
Abstract
Download PDF DOI
When we asked a colleague of ours why people do not make more haptic musical instruments, he replied that he thought they were “too hard to program and too expensive.” We decided to solve these perceived problems by introducing HSP, a simple platform for implementing haptic musical instruments. HSP obviates the need for employing low-level embedded control software because the haptic device is controlled directly from within the Pure Data (Pd) software running on a general purpose computer. Positions can be read from the haptic device, and forces can be written to the device using messages in Pd. Various additional objects have been created to facilitate rapid prototyping of useful haptic musical instruments in Pd. HSP operates under Linux, OS X, and Windows and supports the mass-produced Falcon haptic device from NovInt, which can currently be obtained for as little as US$150. All of the above make HSP an especially excellent choice for pedagogical environments where multiple workstations are required and example programs should be complete yet simple.
@inproceedings{Berdahl2009, author = {Berdahl, Edgar and Niemeyer, G\"{u}nter and Smith, Julius O.}, title = {HSP : A Simple and Effective Open-Source Platform for Implementing Haptic Musical Instruments}, pages = {262--263}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177477}, url = {http://www.nime.org/proceedings/2009/nime2009_262.pdf}, keywords = { haptic musical instrument, HSP, haptics, computer music, physical modeling, Pure Data (Pd), NovInt} }
Tarik Barri. 2009. Versum : Audiovisual Composing in 3d. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 264–265. http://doi.org/10.5281/zenodo.1177473
Abstract
Download PDF DOI
This paper introduces the new audiovisual sequencing system "Versum" that allows users to compose in three dimensions. In the present paper the conceptual soil from which this system has sprung is discussed first. Secondly, the basic concepts with which Versum operates are explained, providing a general idea of what is meant by sequencing in three dimensions and explaining what compositions made in Versum can look and sound like. Thirdly, the practical ways in which a composer can use Versum to make his own audiovisual compositions are presented by means of a more detailed description of the different graphical user interface elements. Fourthly, a short description is given of the modular structure of the software underlying Versum. Finally, several foresights regarding the directions in which Versum will continue to develop in the near future are presented.
@inproceedings{Barri2009, author = {Barri, Tarik}, title = {Versum : Audiovisual Composing in 3d}, pages = {264--265}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177473}, url = {http://www.nime.org/proceedings/2009/nime2009_264.pdf}, keywords = {audiovisual, sequencing, collaboration. } }
Jamie Bullock and Lamberto Coccioli. 2009. Towards a Humane Graphical User Interface for Live Electronic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 266–267. http://doi.org/10.5281/zenodo.1177489
Abstract
Download PDF DOI
In this paper we describe findings related to user interface requirements for live electronic music arising from research conducted as part of the first three-year phase of the EU-funded Integra project. A number of graphical user interface (GUI) prototypes developed during the initial phase of the Integra project are described and conclusions drawn about their design and implementation.
@inproceedings{Bullock2009, author = {Bullock, Jamie and Coccioli, Lamberto}, title = {Towards a Humane Graphical User Interface for Live Electronic Music}, pages = {266--267}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177489}, url = {http://www.nime.org/proceedings/2009/nime2009_266.pdf}, keywords = {Integra, User Interface, Usability, Design, Live Electronics, Music Technology } }
Tomas Laurenzo, Ernesto Rodríguez, and Juan Fabrizio Castro. 2009. YARMI : an Augmented Reality Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 268–269. http://doi.org/10.5281/zenodo.1177611
Abstract
Download PDF DOI
In this paper, we present YARMI, a collaborative, networked, tangible, musical instrument. YARMI operates on augmented-reality space (shared between the performers and the public), presenting a multiple tabletop interface where several musical sequencers and real–time effects machines can be operated.
@inproceedings{Laurenzo2009, author = {Laurenzo, Tomas and Rodr\'{\i}guez, Ernesto and Castro, Juan Fabrizio}, title = {YARMI : an Augmented Reality Musical Instrument}, pages = {268--269}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177611}, url = {http://www.nime.org/proceedings/2009/nime2009_268.pdf}, keywords = {Interactive music instruments, visual interfaces, visual feedback, tangible interfaces, augmented reality, collaborative music, networked musical instruments, real-time musical systems, musical sequencer. } }
Georg Essl. 2009. SpeedDial : Rapid and On-The-Fly Mapping of Mobile Phone Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 270–273. http://doi.org/10.5281/zenodo.1177503
Abstract
Download PDF DOI
When creating new musical instruments on a mobile phone platform one has to map sensory input to synthesis algorithms. We propose that the very task of this mapping belongs in the creative process and to this end we develop a way to rapidly and on-the-fly edit the mapping of mobile phone instruments. The result is that the meaning of the instruments can continuously be changed during a live performance.
@inproceedings{Essl2009, author = {Essl, Georg}, title = {SpeedDial : Rapid and On-The-Fly Mapping of Mobile Phone Instruments}, pages = {270--273}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177503}, url = {http://www.nime.org/proceedings/2009/nime2009_270.pdf}, keywords = {mobile phone instruments,nime,nime09,on-the-fly} }
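The on-the-fly remapping idea in the SpeedDial entry above can be illustrated with a tiny mapping table that is edited while the instrument runs; the sensor names and synthesis parameters here are invented placeholders, not the paper's implementation.

```python
# Illustrative editable sensor-to-parameter mapping (placeholder names throughout).
class LiveMapping:
    """Routes named sensor values to named synthesis parameters; edits apply instantly."""

    def __init__(self, routes):
        self.routes = dict(routes)          # sensor name -> synth parameter name

    def remap(self, sensor, parameter):
        self.routes[sensor] = parameter     # can be performed live, mid-performance

    def apply(self, sensor_values):
        return {self.routes[s]: v for s, v in sensor_values.items() if s in self.routes}

if __name__ == "__main__":
    mapping = LiveMapping({"accel_x": "pitch", "key_1": "trigger"})
    print(mapping.apply({"accel_x": 0.42, "key_1": 1}))
    mapping.remap("accel_x", "filter_cutoff")   # change the instrument's meaning on the fly
    print(mapping.apply({"accel_x": 0.42, "key_1": 1}))
```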
Sidney S. Fels, Bob Pritchard, and Allison Lenters. 2009. ForTouch : A Wearable Digital Ventriloquized Actor. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 274–275. http://doi.org/10.5281/zenodo.1177509
Abstract
Download PDF DOI
We have constructed an easy-to-use portable, wearable gesture-to-speech system based on the Glove-TalkII [1] and GRASSP [2] Digital Ventriloquized Actors (DIVAs). Our new portable system, called a ForTouch, is a specific model of a DIVA and refines the use of a formant speech synthesizer. Using ForTouch, a user can speak using hand gestures mapped to synthetic sound using a mapping function that preserves gesture trajectories. By making ForTouch portable and self-contained, speakers can communicate with others in the community and perform in new music/theatre stage productions. Figure 1 shows one performer using the ForTouch. ForTouch performers also allow us to study the relation between gestures and speech/song production.
@inproceedings{Fels2009, author = {Fels, Sidney S. and Pritchard, Bob and Lenters, Allison}, title = {ForTouch : A Wearable Digital Ventriloquized Actor}, pages = {274--275}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177509}, url = {http://www.nime.org/proceedings/2009/nime2009_274.pdf}, keywords = {nime09} }
Alex Mclean and Geraint Wiggins. 2009. Words , Movement and Timbre. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 276–279. http://doi.org/10.5281/zenodo.1177629
Abstract
Download PDF DOI
Phonetic symbols describe movements of the vocal tract, tongue and lips, and are combined into complex movements forming the words of language. In music, vocables are words that describe musical sounds, by relating vocal movements to articulations of a musical instrument. We posit that vocable words allow the composers and listeners to engage closely with dimensions of timbre, and that vocables could see greater use in electronic music interfaces. A preliminary system for controlling percussive physical modelling synthesis with textual words is introduced, with particular application in expressive specification of timbre during computer music performances.
@inproceedings{Mclean2009, author = {Mclean, Alex and Wiggins, Geraint}, title = {Words , Movement and Timbre}, pages = {276--279}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177629}, url = {http://www.nime.org/proceedings/2009/nime2009_276.pdf}, keywords = {nime09,timbre,vocable synthesis} }
Rebecca Fiebrink, Dan Trueman, and Perry R. Cook. 2009. A Meta-Instrument for Interactive, On-the-Fly Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 280–285. http://doi.org/10.5281/zenodo.1177513
Abstract
Download PDF DOI
Supervised learning methods have long been used to allow musical interface designers to generate new mappings by example. We propose a method for harnessing machine learning algorithms within a radically interactive paradigm, in which the designer may repeatedly generate examples, train a learner, evaluate outcomes, and modify parameters in real-time within a single software environment. We describe our meta-instrument, the Wekinator, which allows a user to engage in on-the-fly learning using arbitrary control modalities and sound synthesis environments. We provide details regarding the system implementation and discuss our experiences using the Wekinator for experimentation and performance.
@inproceedings{Fiebrink2009, author = {Fiebrink, Rebecca and Trueman, Dan and Cook, Perry R.}, title = {A Meta-Instrument for Interactive, On-the-Fly Machine Learning}, pages = {280--285}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177513}, url = {http://www.nime.org/proceedings/2009/nime2009_280.pdf}, keywords = {Machine learning, mapping, tools. } }
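The interactive train-and-evaluate cycle described in the Wekinator entry above can be reduced to a small sketch: collect (controller, synthesis-parameter) example pairs, fit a model, and immediately use it to map new controller input. The code below uses scikit-learn's k-nearest-neighbours regressor as a stand-in for the toolkit's learners; the controller and parameter values are invented.

```python
# Sketch of an on-the-fly mapping-by-example loop (requires scikit-learn, numpy).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

class OnTheFlyMapper:
    """Collect examples, retrain on demand, and map controller input to synth params."""

    def __init__(self):
        self.examples_in, self.examples_out = [], []
        self.model = None

    def add_example(self, controller, params):
        self.examples_in.append(controller)
        self.examples_out.append(params)

    def train(self):
        n_neighbors = min(3, len(self.examples_in))
        self.model = KNeighborsRegressor(n_neighbors=n_neighbors)
        self.model.fit(np.array(self.examples_in), np.array(self.examples_out))

    def map(self, controller):
        return self.model.predict(np.array([controller]))[0]

if __name__ == "__main__":
    mapper = OnTheFlyMapper()
    # Example pairs: 2-D controller position -> (filter cutoff Hz, gain) parameters.
    mapper.add_example([0.1, 0.2], [300.0, 0.2])
    mapper.add_example([0.8, 0.3], [2500.0, 0.7])
    mapper.add_example([0.5, 0.9], [1200.0, 1.0])
    mapper.train()
    print("mapped params:", mapper.map([0.6, 0.5]))
```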
Jan C. Schacher. 2009. Action and Perception in Interactive Sound Installations : An Ecological Approach. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 286–289. http://doi.org/10.5281/zenodo.1177667
Abstract
Download PDF DOI
In this paper, mappings and adaptation in the context of interactive sound installations are discussed. Starting from an ecological perspective on non-expert audience interaction, a brief overview and discussion of mapping strategies is given, with a special focus on adaptive systems using machine learning algorithms. An audio-visual interactive installation is analyzed and its implementation used to illustrate the issues of audience engagement and to discuss the efficiency of adaptive mappings.
@inproceedings{Schacher2009, author = {Schacher, Jan C.}, title = {Action and Perception in Interactive Sound Installations : An Ecological Approach}, pages = {286--289}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177667}, url = {http://www.nime.org/proceedings/2009/nime2009_286.pdf}, keywords = {Interaction, adaptive mapping, machine learning, audience engagement } }
Jonathon Kirk and Lee Weisert. 2009. The Argus Project : Underwater Soundscape Composition with Laser-Controlled Modulation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 290–292. http://doi.org/10.5281/zenodo.1177605
Abstract
Download PDF DOI
In this paper we describe and analyze The Argus Project, a sound installation involving the real-time processing and spatialized projection of sound sources from beneath a pond’s surface. The primary aim of The Argus Project is to project the natural sound sources from below the pond’s surface while tracking the changes in the environmental factors above the surface so as to map this data onto the real-time audio processing. The project takes as its conceptual model that of a feedback network, or, a process in which the factors that produce a result are themselves modified and reinforced by that result. Examples are given of the compositional process, the execution, and processing techniques.
@inproceedings{Kirk2009, author = {Kirk, Jonathon and Weisert, Lee}, title = {The Argus Project : Underwater Soundscape Composition with Laser- Controlled Modulation}, pages = {290--292}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177605}, url = {http://www.nime.org/proceedings/2009/nime2009_290.pdf}, keywords = {nime09} }
Michael St. Clair and Sasha Leitman. 2009. PlaySoundGround : An Interactive Musical Playground. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 293–296. http://doi.org/10.5281/zenodo.1177685
Abstract
Download PDF DOI
We describe a novel transformation of a playground – merry-go-round, teeter-totter (also referred to as a see-saw), swings, and climbing structure – from its traditional purpose to a collaborative and interactive musical performance system by equipping key structures with sensors that communicate with a computer. A set of Max/MSP patches translates the physical gestures of playground play into a variety of performer-selected musical mappings. In addition to the electro-acoustic interactivity, the climbing structure incorporates acoustic musical instruments.
@inproceedings{StClair2009, author = {St. Clair, Michael and Leitman, Sasha}, title = {PlaySoundGround : An Interactive Musical Playground}, pages = {293--296}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177685}, url = {http://www.nime.org/proceedings/2009/nime2009_293.pdf}, keywords = {Real-time, Music, Playground, Interactive, Installation, Radical Collaboration, Play.} }
Daniel Jones, Tim Hodgson, Jane Grant, John Matthias, Nicholas Outram, and Nick Ryan. 2009. The Fragmented Orchestra. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 297–302. http://doi.org/10.5281/zenodo.1177587
Abstract
Download PDF DOI
The Fragmented Orchestra is a distributed musical instrument which combines live audio streams from geographically disparate sites, and granulates each according to the spike timings of an artificial spiking neural network. This paper introduces the work, outlining its historical context, technical architecture, neuronal model and network infrastructure, making specific reference to modes of interaction with the public.
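As a deliberately simplified stand-in for the installation's actual neuronal model, the granulate-on-spike idea can be sketched with a leaky integrate-and-fire neuron driven by the level of an incoming stream; all constants below are illustrative.

# Simplified sketch: a leaky integrate-and-fire neuron driven by the level
# of an incoming audio stream; each spike marks where a grain would be taken.
def run_neuron(levels, threshold=1.0, leak=0.9, gain=0.5):
    """Return the frame indices at which grains would be triggered."""
    potential = 0.0
    spikes = []
    for frame, level in enumerate(levels):
        potential = potential * leak + gain * level  # integrate with leak
        if potential >= threshold:
            spikes.append(frame)        # spike -> granulate the stream here
            potential = 0.0             # reset after firing
    return spikes

# Example: a stream whose level rises and falls.
stream_levels = [0.1, 0.3, 0.8, 0.9, 0.2, 0.1, 0.7, 0.9, 0.9, 0.1]
print(run_neuron(stream_levels))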
@inproceedings{Jones2009, author = {Jones, Daniel and Hodgson, Tim and Grant, Jane and Matthias, John and Outram, Nicholas and Ryan, Nick}, title = {The Fragmented Orchestra}, pages = {297--302}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177587}, url = {http://www.nime.org/proceedings/2009/nime2009_297.pdf}, keywords = {distributed,emergent,environmental,installation,neural network,nime09,sound,streaming audio} }
Ge Wang. 2009. Designing Smule’s Ocarina : The iPhone’s Magic Flute. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 303–307. http://doi.org/10.5281/zenodo.1177697
Abstract
Download PDF DOI
The Smule Ocarina is a wind instrument designed for the iPhone, fully leveraging its wide array of technologies: microphone input (for breath input), multitouch (for fingering), accelerometer, real-time sound synthesis, high-performance graphics, GPS/location, and persistent data connection. In this mobile musical artifact, the interactions of the ancient flute-like instrument are both preserved and transformed via breath-control and multitouch finger-holes, while the onboard global positioning and persistent data connection provide the opportunity to create a new social experience, allowing the users of Ocarina to listen to one another. In this way, Ocarina is also a type of social instrument that enables a different, perhaps even magical, sense of global connectivity.
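The breath-plus-fingering interaction can be pictured with the sketch below; the real instrument runs on the iPhone (the entry's keywords mention ChucK and real-time synthesis), so this Python fragment only illustrates the mapping idea, and the fingering table and threshold are hypothetical.

# Illustrative sketch (not Smule's code): breath pressure from the mic gates
# the amplitude, while the covered "finger holes" select the pitch.
FINGERINGS = {            # (hole1, hole2, hole3, hole4) -> MIDI note
    (1, 1, 1, 1): 60,     # all holes covered -> lowest note
    (0, 1, 1, 1): 62,
    (0, 0, 1, 1): 64,
    (0, 0, 0, 1): 65,
    (0, 0, 0, 0): 67,     # all holes open -> highest note
}

def ocarina_state(mic_level, holes, breath_threshold=0.05):
    """Map mic level and touch state to (midi_note, amplitude)."""
    note = FINGERINGS.get(tuple(holes), 60)
    amp = max(0.0, mic_level - breath_threshold)  # only sound when blowing
    return note, min(1.0, amp * 4.0)

print(ocarina_state(0.3, [0, 0, 1, 1]))  # -> (64, 1.0)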
@inproceedings{Wang2009, author = {Wang, Ge}, title = {Designing Smule's Ocarina : The iPhone's Magic Flute}, pages = {303--307}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177697}, url = {http://www.nime.org/proceedings/2009/nime2009_303.pdf}, keywords = {chuck,design,in,in real-time,interface,iphone,mobile music,multitouch,nime09,ocarina,pulsing waves,social,sonically and onscreen and,sound synthesis takes place,the breath is visualized} }
Nicholas Gillian, Sile O’Modhrain, and Georg Essl. 2009. Scratch-Off : A Gesture Based Mobile Music Game with Tactile Feedback. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 308–311. http://doi.org/10.5281/zenodo.1177553
Abstract
Download PDF DOI
This paper presents Scratch-Off, a new musical multiplayer DJ game designed for a mobile phone. We describe how the game is used as a test platform for experimenting with various types of multimodal feedback. The game uses movement gestures made by the players to scratch a record and control crossfades between tracks, the objective being to make the correct scratch at the correct time in relation to the music. Gestures are detected using the device's built-in tri-axis accelerometer and multi-touch screen display. The players receive visual, audio and various types of vibrotactile feedback to help them make the correct scratch on the beat of the music track. We also discuss the results of a pilot study using this interface.
@inproceedings{Gillian2009a, author = {Gillian, Nicholas and O'Modhrain, Sile and Essl, Georg}, title = {Scratch-Off : A Gesture Based Mobile Music Game with Tactile Feedback}, pages = {308--311}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177553}, url = {http://www.nime.org/proceedings/2009/nime2009_308.pdf}, keywords = {Mobile devices, gesture, audio games. } }
Gil Weinberg, Andrew Beck, and Mark Godfrey. 2009. ZooZBeat : a Gesture-based Mobile Music Studio. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 312–315. http://doi.org/10.5281/zenodo.1177703
Abstract
Download PDF DOI
ZooZBeat is a gesture-based mobile music studio. It is designed to provide users with expressive and creative access to music making on the go. ZooZBeat users shake the phone or tap the screen to enter notes. The result is quantized, mapped onto a musical scale, and looped. Users can then use tilt and shake movements to manipulate and share their creation in a group. Emphasis is placed on finding intuitive metaphors for mobile music creation and maintaining a balance between control and ease-of-use that allows non-musicians to begin creating music with the application immediately.
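The quantize-and-snap step described above might look like this in outline; the grid, scale and input values are illustrative and are not ZooZBeat's actual code.

# Minimal sketch of the quantize-and-scale idea: note onsets entered by
# shaking or tapping are snapped to a rhythmic grid, pitches are snapped
# to a musical scale, and the result loops.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]

def quantize_time(t, grid=0.25):
    return round(t / grid) * grid

def snap_to_scale(midi_note, scale=C_MAJOR):
    octave, pc = divmod(midi_note, 12)
    nearest = min(scale, key=lambda s: abs(s - pc))
    return octave * 12 + nearest

raw_taps = [(0.07, 61), (0.52, 66), (1.13, 70)]   # (seconds, rough pitch)
loop = [(quantize_time(t), snap_to_scale(n)) for t, n in raw_taps]
print(loop)   # e.g. [(0.0, 60), (0.5, 65), (1.25, 69)]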
@inproceedings{Weinberg2009, author = {Weinberg, Gil and Beck, Andrew and Godfrey, Mark}, title = {ZooZBeat : a Gesture-based Mobile Music Studio}, pages = {312--315}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177703}, url = {http://www.nime.org/proceedings/2009/nime2009_312.pdf}, keywords = {mobile music, gestural control } }
Andrea Bianchi and Woon Seung Yeo. 2009. The Drummer : a Collaborative Musical Interface with Mobility. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 316–319. http://doi.org/10.5281/zenodo.1177483
Abstract
Download PDF DOI
It has been shown that collaborative musical interfaces encourage novice users to explore the sound space and promote their participation as music performers. Nevertheless, such interfaces are generally physically situated and can limit the possibility of movements on the stage, a critical factor in live music performance. In this paper we introduce the Drummer, a networked digital musical interface that allows multiple performers to design and play drum kits simultaneously while, at the same time, keeping their ability to freely move on the stage. The system consists of multiple Nintendo DS clients with an intuitive, user-configurable interface and a server computer which plays drum sounds. The Drummer Machine, a small piece of hardware to augment the performance of the Drummer, is also introduced.
@inproceedings{Bianchi2009, author = {Bianchi, Andrea and Yeo, Woon Seung}, title = {The Drummer : a Collaborative Musical Interface with Mobility}, pages = {316--319}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177483}, url = {http://www.nime.org/proceedings/2009/nime2009_316.pdf}, keywords = {collaborative interface, multiplayer, musical expression, musical control, game control, Nintendo DS.} }
Robert Wechsler. 2009. The Oklo Phenomenon. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 320–320. http://doi.org/10.5281/zenodo.1177701
BibTeX
Download PDF DOI
@inproceedings{Wechsler2009, author = {Wechsler, Robert}, title = {The Oklo Phenomenon}, pages = {320--320}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177701}, url = {http://www.nime.org/proceedings/2009/nime2009_320.pdf}, keywords = {nime09} }
David Lieberman. 2009. Anigraphical Etude 9. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 321–321. http://doi.org/10.5281/zenodo.1177619
BibTeX
Download PDF DOI
@inproceedings{Lieberman2009, author = {Lieberman, David}, title = {Anigraphical Etude 9}, pages = {321--321}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177619}, url = {http://www.nime.org/proceedings/2009/nime2009_321.pdf}, keywords = {nime09} }
Min Eui Hong. 2009. Cosmic Strings II. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 322–322. http://doi.org/10.5281/zenodo.1177577
BibTeX
Download PDF DOI
@inproceedings{Hong2009, author = {Hong, Min Eui}, title = {Cosmic Strings II}, pages = {322--322}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177577}, url = {http://www.nime.org/proceedings/2009/nime2009_322.pdf}, keywords = {nime09} }
Troy Rogers, Steven Kemper, and Scott Barton. 2009. Study no. 1 for PAM and MADI. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 323–323. http://doi.org/10.5281/zenodo.1177665
BibTeX
Download PDF DOI
@inproceedings{Rogers2009, author = {Rogers, Troy and Kemper, Steven and Barton, Scott}, title = {Study no. 1 for {PAM} and MADI}, pages = {323--323}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177665}, url = {http://www.nime.org/proceedings/2009/nime2009_323.pdf}, keywords = {nime09} }
Garth Paine and Michael Atherton. 2009. Fue Sho – Electrofusion. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 324–324. http://doi.org/10.5281/zenodo.1177651
BibTeX
Download PDF DOI
@inproceedings{Paine2009, author = {Paine, Garth and Atherton, Michael}, title = {Fue Sho -- Electrofusion}, pages = {324--324}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177651}, url = {http://www.nime.org/proceedings/2009/nime2009_324.pdf}, keywords = {nime09} }
Tarik Barri. 2009. Versum – Fluor. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 325–325. http://doi.org/10.5281/zenodo.1177475
BibTeX
Download PDF DOI
@inproceedings{Barri2009a, author = {Barri, Tarik}, title = {Versum -- Fluor}, pages = {325--325}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177475}, url = {http://www.nime.org/proceedings/2009/nime2009_325.pdf}, keywords = {nime09} }
Chikashi Miyama. 2009. Angry Sparrow. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 326–326. http://doi.org/10.5281/zenodo.1177633
BibTeX
Download PDF DOI
@inproceedings{Miyama2009, author = {Miyama, Chikashi}, title = {Angry Sparrow}, pages = {326--326}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177633}, url = {http://www.nime.org/proceedings/2009/nime2009_326.pdf}, keywords = {nime09} }
Eric Lyon, Benjamin Knapp, and Gascia Ouzounian. 2009. Biomuse Trio. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 327–327. http://doi.org/10.5281/zenodo.1177621
BibTeX
Download PDF DOI
@inproceedings{Lyon2009, author = {Lyon, Eric and Knapp, Benjamin and Ouzounian, Gascia}, title = {Biomuse Trio}, pages = {327--327}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177621}, url = {http://www.nime.org/proceedings/2009/nime2009_327.pdf}, keywords = {nime09} }
Suguru Goto. 2009. BodyJack. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 328–328. http://doi.org/10.5281/zenodo.1177557
BibTeX
Download PDF DOI
@inproceedings{Goto2009, author = {Goto, Suguru}, title = {BodyJack}, pages = {328--328}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177557}, url = {http://www.nime.org/proceedings/2009/nime2009_328.pdf}, keywords = {nime09} }
Marije A. Baalman. 2009. Code LiveCode Live, or livecode Embodied. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 329–329. http://doi.org/10.5281/zenodo.1177469
BibTeX
Download PDF DOI
@inproceedings{Baalman2009, author = {Baalman, Marije A.}, title = {Code LiveCode Live, or livecode Embodied}, pages = {329--329}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177469}, url = {http://www.nime.org/proceedings/2009/nime2009_329.pdf}, keywords = {nime09} }
Giuseppe Torre, Robert Sazdov, and Dorota Konczewska. 2009. MOLITVA — Composition for Voice, Live Electronics, Pointing-At Glove Device and 3-D Setup of Speakers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 330–330. http://doi.org/10.5281/zenodo.1177695
BibTeX
Download PDF DOI
@inproceedings{Torre2009, author = {Torre, Giuseppe and Sazdov, Robert and Konczewska, Dorota}, title = {MOLITVA --- Composition for Voice, Live Electronics, Pointing-At Glove Device and {3-D} Setup of Speakers}, pages = {330--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177695}, url = {http://www.nime.org/proceedings/2009/nime2009_330.pdf}, keywords = {nime09} }
Ben Neill and Eric Singer. 2009. Ben Neill and LEMUR. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 331–331. http://doi.org/10.5281/zenodo.1177639
BibTeX
Download PDF DOI
@inproceedings{Neill2009, author = {Neill, Ben and Singer, Eric}, title = {Ben Neill and LEMUR}, pages = {331--331}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177639}, url = {http://www.nime.org/proceedings/2009/nime2009_331.pdf}, keywords = {nime09} }
David Hindman and Evan Drummond. 2009. Performance: Modal Kombat Plays PONG. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 332–332. http://doi.org/10.5281/zenodo.1177573
BibTeX
Download PDF DOI
@inproceedings{Hindman2009, author = {Hindman, David and Drummond, Evan}, title = {Performance: Modal Kombat Plays {PON}G}, pages = {332--332}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177573}, url = {http://www.nime.org/proceedings/2009/nime2009_332.pdf}, keywords = {nime09} }
Colby Leider. 2009. Afflux/Reflux. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 333–333. http://doi.org/10.5281/zenodo.1177615
BibTeX
Download PDF DOI
@inproceedings{Leider2009, author = {Leider, Colby}, title = {Afflux/Reflux}, pages = {333--333}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177615}, url = {http://www.nime.org/proceedings/2009/nime2009_333.pdf}, keywords = {nime09} }
Ge Wang and Rebecca Fiebrink. 2009. PLOrk Beat Science 2.0. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 334–334. http://doi.org/10.5281/zenodo.1177699
BibTeX
Download PDF DOI
@inproceedings{Wang2009a, author = {Wang, Ge and Fiebrink, Rebecca}, title = {PLOrk Beat Science 2.0}, pages = {334--334}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177699}, url = {http://www.nime.org/proceedings/2009/nime2009_334.pdf}, keywords = {nime09} }
David Wessel. 2009. Hands On — A New Work from SLABS Controller and Generative Algorithms. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 335–335. http://doi.org/10.5281/zenodo.1177707
BibTeX
Download PDF DOI
@inproceedings{Wessel2009, author = {Wessel, David}, title = {Hands On --- A New Work from SLABS Controller and Generative Algorithms}, pages = {335--335}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177707}, url = {http://www.nime.org/proceedings/2009/nime2009_335.pdf}, keywords = {nime09} }
R. Luke Dubois and Lesley Flanigan. 2009. Bioluminescence. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 336–336. http://doi.org/10.5281/zenodo.1177501
BibTeX
Download PDF DOI
@inproceedings{Dubois2009, author = {Dubois, R. Luke and Flanigan, Lesley}, title = {Bioluminescence}, pages = {336--336}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177501}, url = {http://www.nime.org/proceedings/2009/nime2009_336.pdf}, keywords = {nime09} }
Ivika Bukvic and Eric Standley. 2009. Elemental & Cyrene Reefs. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 337–337. http://doi.org/10.5281/zenodo.1177487
BibTeX
Download PDF DOI
@inproceedings{Bukvic2009, author = {Bukvic, Ivika and Standley, Eric}, title = {Elemental \& Cyrene Reefs}, pages = {337--337}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177487}, url = {http://www.nime.org/proceedings/2009/nime2009_337.pdf}, keywords = {nime09} }
Scot Gresham-Lancaster and Steve Bull. 2009. Cellphonia: 4’33. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 338–338. http://doi.org/10.5281/zenodo.1177561
BibTeX
Download PDF DOI
@inproceedings{GreshamLancaster2009, author = {Gresham-Lancaster, Scot and Bull, Steve}, title = {Cellphonia: 4'33}, pages = {338--338}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177561}, url = {http://www.nime.org/proceedings/2009/nime2009_338.pdf}, keywords = {nime09} }
Dan Overholt, Byron Lahey, Anne-Marie Skriver Hansen, Winslow Burleson, and Camilla Norrgaard Jensen. 2009. Pendaphonics. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 339–339. http://doi.org/10.5281/zenodo.1177649
BibTeX
Download PDF DOI
@inproceedings{Overholt2009, author = {Overholt, Dan and Lahey, Byron and Skriver Hansen, Anne-Marie and Burleson, Winslow and Norrgaard Jensen, Camilla}, title = {Pendaphonics}, pages = {339--339}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177649}, url = {http://www.nime.org/proceedings/2009/nime2009_339.pdf}, keywords = {nime09} }
Scott Smallwood. 2009. Sound Lanterns. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 340–340. http://doi.org/10.5281/zenodo.1177677
BibTeX
Download PDF DOI
@inproceedings{Smallwood2009, author = {Smallwood, Scott}, title = {Sound Lanterns}, pages = {340--340}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177677}, url = {http://www.nime.org/proceedings/2009/nime2009_340.pdf}, keywords = {nime09} }
Phillip Stearns. 2009. AANN: Artificial Analog Neural Network. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 341–341. http://doi.org/10.5281/zenodo.1177687
BibTeX
@inproceedings{Stearns2009, author = {Stearns, Phillip}, title = {AANN: Artificial Analog Neural Network}, pages = {341--341}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2009}, address = {Pittsburgh, PA, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177687}, url = {http://www.nime.org/proceedings/2009/nime2009_341.pdf}, keywords = {nime09} }
2008
David Kim-Boyle. 2008. Network Musics — Play , Engagement and the Democratization of Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 3–8. http://doi.org/10.5281/zenodo.1179579
Abstract
Download PDF DOI
The rapid development of network communication technologies has allowed composers to create new ways in which to directly engage participants in the exploration of new musical environments. A number of distinctive aesthetic approaches to the musical application of networks will be outlined in this paper, each of which is mediated and conditioned by the technical and aesthetic foundations of the network technologies themselves. Recent work in the field by artists such as Atau Tanaka and Metraform will be examined, as will some of the earlier pioneering work in the genre by Max Neuhaus. While recognizing the historical context of collaborative work, the author will examine how the strategies employed in the work of these artists have helped redefine a new aesthetics of engagement in which play, spatial and temporal dislocation are amongst the genre's defining characteristics.
@inproceedings{KimBoyle2008, author = {Kim-Boyle, David}, title = {Network Musics --- Play , Engagement and the Democratization of Performance}, pages = {3--8}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179579}, url = {http://www.nime.org/proceedings/2008/nime2008_003.pdf}, keywords = {Networks, collaborative, open-form, play, interface. } }
Àlvaro Barbosa. 2008. Ten-Hand Piano : A Networked Music Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 9–12. http://doi.org/10.5281/zenodo.1179487
Abstract
Download PDF DOI
This paper presents the latest developments of the Public Sound Objects (PSOs) system, an experimental framework to implement and test new concepts for Networked Music. The project of a public interactive installation using the PSOs system was commissioned in 2007 by Casa da Musica, the main concert hall in Porto. It resulted in a distributed musical structure with up to ten interactive performance terminals distributed along Casa da Musica's hallways, collectively controlling a shared acoustic piano. The installation allows visitors to collaborate remotely with each other, within the building, using a software interface custom-developed to facilitate collaborative music practices and with no requirements in terms of previous knowledge of musical performance.
@inproceedings{Barbosa2008, author = {Barbosa, \`{A}lvaro}, title = {Ten-Hand Piano : A Networked Music Installation}, pages = {9--12}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179487}, url = {http://www.nime.org/proceedings/2008/nime2008_009.pdf}, keywords = {algorithmic composition,behavioral driven,electronic music instruments,interfaces,network music instruments,nime08,performance,public music,real-time collaborative,sound} }
Mike Wozniewski, Nicolas Bouillot, Zack Settel, and Jeremy R. Cooperstock. 2008. Large-Scale Mobile Audio Environments for Collaborative Musical Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 13–18. http://doi.org/10.5281/zenodo.1179651
Abstract
Download PDF DOI
New application spaces and artistic forms can emerge when users are freed from constraints. In the general case of human-computer interfaces, users are often confined to a fixed location, severely limiting mobility. To overcome this constraint in the context of musical interaction, we present a system to manage large-scale collaborative mobile audio environments, driven by user movement. Multiple participants navigate through physical space while sharing overlaid virtual elements. Each user is equipped with a mobile computing device, GPS receiver, orientation sensor, microphone, headphones, or various combinations of these technologies. We investigate methods of location tracking, wireless audio streaming, and state management between mobile devices and centralized servers. The result is a system that allows mobile users, with subjective 3-D audio rendering, to share virtual scenes. The audio elements of these scenes can be organized into large-scale spatial audio interfaces, thus allowing for immersive mobile performance, locative audio installations, and many new forms of collaborative sonic activity.
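One ingredient of such a system, rendering a virtual source's gain and pan from the listener's position and heading, can be sketched as follows; the function name, constants and attenuation law are illustrative assumptions, not the paper's renderer.

# Hedged sketch: gain and stereo pan of a virtual sound source computed
# from a listener's position (e.g. from GPS) and heading.
import math

def render_source(listener_xy, heading_deg, source_xy, ref_dist=5.0):
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    dist = math.hypot(dx, dy)
    gain = min(1.0, ref_dist / max(dist, ref_dist))    # distance attenuation
    bearing = math.degrees(math.atan2(dx, dy)) - heading_deg
    pan = math.sin(math.radians(bearing))              # -1 = left, +1 = right
    return gain, pan

# Listener at the origin facing north; source 10 m to the east.
print(render_source((0.0, 0.0), 0.0, (10.0, 0.0)))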
@inproceedings{Wozniewski2008, author = {Wozniewski, Mike and Bouillot, Nicolas and Settel, Zack and Cooperstock, Jeremy R.}, title = {Large-Scale Mobile Audio Environments for Collaborative Musical Interaction}, pages = {13--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179651}, url = {http://www.nime.org/proceedings/2008/nime2008_013.pdf}, keywords = {sonic navigation, mobile music, spatial interaction, wireless audio streaming, locative media, collaborative interfaces } }
Angelo Fraietta. 2008. Open Sound Control : Constraints and Limitations. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 19–23. http://doi.org/10.5281/zenodo.1179537
Abstract
Download PDF DOI
Open Sound Control (OSC) is being used successfully as a messaging protocol among many computers, gestural controllers and multimedia systems. Although OSC has addressed some of the shortcomings of MIDI, OSC cannot deliver on its promises as a real-time communication protocol for constrained embedded systems. This paper will examine some of the advantages but also dispel some of the myths concerning OSC. The paper will also describe how some of the best features of OSC can be used to develop a lightweight protocol that is microcontroller friendly.
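The overhead argument is easy to see by packing a single OSC message by hand. The sketch below follows the standard OSC 1.0 layout (null-terminated address string padded to four bytes, padded type-tag string, 4-byte-aligned big-endian arguments) and compares the result with a 3-byte MIDI control change; the address and value are invented.

# Sketch: size of one OSC float message versus a 3-byte MIDI message.
import struct

def osc_pad(b):
    """Null-terminate and pad a byte string to a multiple of 4 bytes."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def pack_osc_float(address, value):
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

msg = pack_osc_float("/synth/filter/cutoff", 0.75)
print(len(msg), "bytes for OSC vs 3 bytes for a MIDI control change")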
@inproceedings{Fraietta2008, author = {Fraietta, Angelo}, title = {Open Sound Control : Constraints and Limitations}, pages = {19--23}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179537}, url = {http://www.nime.org/proceedings/2008/nime2008_019.pdf}, keywords = {a,data transmission protocols,gestural controllers,has been implemented as,midi,nime08,open sound control,osc} }
Matteo Bozzolan and Giovanni Cospito. 2008. SMuSIM : a Prototype of Multichannel Spatialization System with Multimodal Interaction Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 24–27. http://doi.org/10.5281/zenodo.1179501
Abstract
Download PDF DOI
Continuous evolution in the field of human-computer interfaces has allowed the development of control devices that enable increasingly intuitive, gestural and non-invasive interaction. Such devices find natural applications in music informatics, and in particular in electronic music, which is always searching for new expressive means. This paper presents a prototype system for the real-time control of sound spatialization in a multichannel configuration through a multimodal interaction interface. The spatializer, called SMuSIM, employs interaction devices ranging from the simple and well-established mouse and keyboard to a gaming joystick (gamepad), and also exploits more advanced and innovative input based on image analysis (via a webcam).
@inproceedings{Bozzolan2008, author = {Bozzolan, Matteo and Cospito, Giovanni}, title = {SMuSIM : a Prototype of Multichannel Spatialization System with Multimodal Interaction Interface}, pages = {24--27}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179501}, url = {http://www.nime.org/proceedings/2008/nime2008_024.pdf}, keywords = {Sound spatialization, multimodal interaction, interaction interfaces, EyesWeb, Pure data. } }
Chris Nash and Alan Blackwell. 2008. Realtime Representation and Gestural Control of Musical Polytempi. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 28–33. http://doi.org/10.5281/zenodo.1179603
Abstract
Download PDF DOI
Over the last century, composers have made increasingly ambitious experiments with musical time, but have been impeded in expressing more temporally-complex musical processes by the limitations of both music notations and human performers. In this paper, we describe a computer-based notation and gestural control system for independently manipulating the tempi of musical parts within a piece, at performance time. We describe how the problem was approached, drawing upon feedback and suggestions from consultations across multiple disciplines, seeking analogous problems in other fields. Throughout, our approach is guided and, ultimately, assessed by an established professional composer, who was able to interact with a working prototype of the system.
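One way to picture the underlying scheduling problem (this is an illustrative sketch, not the authors' notation or control system) is to compute when beats fall in parts whose tempi change independently, here with a simple linear tempo ramp integrated numerically.

# Hedged sketch: beat onset times for a part whose tempo ramps linearly.
def beat_times(bpm_start, bpm_end, duration_s, dt=0.001):
    """Times of beats after the initial downbeat at t = 0 (numerical integration)."""
    times, phase, t = [], 0.0, 0.0
    while t < duration_s:
        bpm = bpm_start + (bpm_end - bpm_start) * (t / duration_s)
        phase += (bpm / 60.0) * dt          # beats elapsed in this step
        if phase >= 1.0:
            times.append(round(t, 3))
            phase -= 1.0
        t += dt
    return times

# Two parts of one piece drifting apart: one steady, one accelerating.
print(beat_times(60, 60, 4.0))    # steady quarter notes
print(beat_times(60, 120, 4.0))   # same duration, accelerating to double tempo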
@inproceedings{Nash2008, author = {Nash, Chris and Blackwell, Alan}, title = {Realtime Representation and Gestural Control of Musical Polytempi}, pages = {28--33}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179603}, url = {http://www.nime.org/proceedings/2008/nime2008_028.pdf}, keywords = {composition,gesture,nime08,performance,polytempi,realtime,tempo} }
Mikael Laurson and Mika Kuuskankare. 2008. Towards Idiomatic and Flexible Score-based Gestural Control with a Scripting Language. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 34–37. http://doi.org/10.5281/zenodo.1179589
BibTeX
Download PDF DOI
@inproceedings{Laurson2008, author = {Laurson, Mikael and Kuuskankare, Mika}, title = {Towards Idiomatic and Flexible Score-based Gestural Control with a Scripting Language}, pages = {34--37}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179589}, url = {http://www.nime.org/proceedings/2008/nime2008_034.pdf}, keywords = {synthesis control, expressive timing, playing styles } }
Alexandre Bouënard, Sylvie Gibet, and Marcelo M. Wanderley. 2008. Enhancing the Visualization of Percussion Gestures by Virtual Character Animation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 38–43. http://doi.org/10.5281/zenodo.1179497
Abstract
Download PDF DOI
A new interface for visualizing and analyzing percussion gestures is presented, proposing enhancements to existing motion capture analysis tools. This is achieved by offering a percussion gesture analysis protocol using motion capture. A virtual character dynamic model is then designed to take advantage of gesture characteristics, helping to improve gesture analysis with visualization and interaction cues of different types.
@inproceedings{Bouenard2008, author = {Bou\"{e}nard, Alexandre and Gibet, Sylvie and Wanderley, Marcelo M.}, title = {Enhancing the Visualization of Percussion Gestures by Virtual Character Animation}, pages = {38--43}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179497}, url = {http://www.nime.org/proceedings/2008/nime2008_038.pdf}, keywords = {Gesture and sound, interface, percussion gesture, virtual character, interaction. } }
Diana Young. 2008. Classification of Common Violin Bowing Techniques Using Gesture Data from a Playable Measurement System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 44–48. http://doi.org/10.5281/zenodo.1177457
BibTeX
Download PDF DOI
@inproceedings{Young2008, author = {Young, Diana}, title = {Classification of Common Violin Bowing Techniques Using Gesture Data from a Playable Measurement System}, pages = {44--48}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1177457}, url = {http://www.nime.org/proceedings/2008/nime2008_044.pdf}, keywords = {bowing, gesture, playing technique, principal component analysis, classification } }
Jyri Pakarinen, Vesa Välimäki, and Tapio Puputti. 2008. Slide Guitar Synthesizer with Gestural Control. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 49–52. http://doi.org/10.5281/zenodo.1179607
Abstract
Download PDF DOI
This article discusses a virtual slide guitar instrument, recently introduced in [7]. The instrument consists of a novel physics-based synthesis model and a gestural user interface. The synthesis engine uses energy-compensated time-varying digital waveguides. The string algorithm also contains a parametric model for synthesizing the tube-string contact sounds. The real-time virtual slide guitar user interface employs optical gesture recognition, so that the user can play this virtual instrument simply by making slide guitar playing gestures in front of a camera.
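For orientation, a heavily simplified relative of such string models is the classic Karplus-Strong loop; the sketch below omits the energy compensation, time-varying pitch and tube-string contact model that the paper describes, and only shows the basic delay-line-plus-lowpass idea.

# Much-simplified plucked-string sketch (basic Karplus-Strong loop).
import random

def pluck_string(freq_hz, sample_rate=44100, seconds=0.5, damping=0.996):
    delay = int(sample_rate / freq_hz)                   # loop length sets pitch
    line = [random.uniform(-1, 1) for _ in range(delay)] # noise burst = pluck
    out = []
    for _ in range(int(sample_rate * seconds)):
        s = line.pop(0)
        nxt = damping * 0.5 * (s + line[0])   # averaging lowpass in the loop
        line.append(nxt)
        out.append(s)
    return out

samples = pluck_string(196.0)   # roughly the guitar G string
print(len(samples), max(abs(s) for s in samples))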
@inproceedings{Pakarinen2008, author = {Pakarinen, Jyri and V\"{a}lim\"{a}ki, Vesa and Puputti, Tapio}, title = {Slide Guitar Synthesizer with Gestural Control}, pages = {49--52}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179607}, url = {http://www.nime.org/proceedings/2008/nime2008_049.pdf}, keywords = {Sound synthesis, slide guitar, gesture control, physical modeling } }
Otso Lähdeoja. 2008. An Approach to Instrument Augmentation : the Electric Guitar. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 53–56. http://doi.org/10.5281/zenodo.1179585
BibTeX
Download PDF DOI
@inproceedings{Lahdeoja2008, author = {L\"{a}hdeoja, Otso}, title = {An Approach to Instrument Augmentation : the Electric Guitar}, pages = {53--56}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179585}, url = {http://www.nime.org/proceedings/2008/nime2008_053.pdf}, keywords = {Augmented instrument, electric guitar, gesture-sound relationship } }
Juhani Räisänen. 2008. Sormina – a New Virtual and Tangible Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 57–60. http://doi.org/10.5281/zenodo.1179617
Abstract
Download PDF DOI
This paper describes the Sormina, a new virtual and tangible instrument, which has its origins in both virtual technology and the heritage of traditional instrument design. The motivation behind the project is presented, as well as hardware and software design. Insights gained through collaboration with acoustic musicians are presented, as well as comparison to historical instrument design.
@inproceedings{Raisanen2008, author = {R\"{a}is\"{a}nen, Juhani}, title = {Sormina -- a New Virtual and Tangible Instrument}, pages = {57--60}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179617}, url = {http://www.nime.org/proceedings/2008/nime2008_057.pdf}, keywords = {Gestural controller, digital musical instrument, usability, music history, design. } }
Edgar Berdahl, Hans-Christoph Steiner, and Collin Oldham. 2008. Practical Hardware and Algorithms for Creating Haptic Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 61–66. http://doi.org/10.5281/zenodo.1179495
Abstract
Download PDF DOI
The music community has long had a strong interest in haptic technology. Recently, more effort has been put into making it more and more accessible to instrument designers. This paper covers some of these technologies with the aim of helping instrument designers add haptic feedback to their instruments. We begin by giving a brief overview of practical actuators. Next, we compare and contrast using embedded microcontrollers versus general purpose computers as controllers. Along the way, we mention some common software environments for implementing control algorithms. Then we discuss the fundamental haptic control algorithms as well as some more complex ones. Finally, we present two practical and effective haptic musical instruments: the haptic drum and the Cellomobo.
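A typical example of the fundamental haptic control algorithms referred to here is a virtual spring-damper wall rendered in a fast control loop; the sketch below stubs out the hardware I/O and uses illustrative gains rather than anything from the paper.

# Sketch: one step of a virtual spring-damper wall, as run at e.g. 1 kHz.
def spring_damper_force(position, velocity, wall_pos=0.0, k=400.0, b=2.0):
    """Return actuator force; push back only while inside the virtual wall."""
    penetration = position - wall_pos
    if penetration <= 0.0:
        return 0.0
    return -k * penetration - b * velocity

pos, vel = 0.004, 0.05                 # 4 mm into the wall, moving further in
print(spring_damper_force(pos, vel))   # negative = force pushing back out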
@inproceedings{Berdahl2008a, author = {Berdahl, Edgar and Steiner, Hans-Christoph and Oldham, Collin}, title = {Practical Hardware and Algorithms for Creating Haptic Musical Instruments}, pages = {61--66}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179495}, url = {http://www.nime.org/proceedings/2008/nime2008_061.pdf}, keywords = {haptic, actuator, practical, immersion, embedded, sampling rate, woofer, haptic drum, Cellomobo } }
Amit Zoran and Pattie Maes. 2008. Considering Virtual & Physical Aspects in Acoustic Guitar Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 67–70. http://doi.org/10.5281/zenodo.1177463
BibTeX
Download PDF DOI
@inproceedings{Zoran2008, author = {Zoran, Amit and Maes, Pattie}, title = {Considering Virtual \& Physical Aspects in Acoustic Guitar Design}, pages = {67--70}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1177463}, url = {http://www.nime.org/proceedings/2008/nime2008_067.pdf}, keywords = {nime08} }
Dylan Menzies. 2008. Virtual Intimacy : Phya as an Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 71–76. http://doi.org/10.5281/zenodo.1179599
Abstract
Download PDF DOI
Phya is an open source C++ library originally designed for adding physically modeled contact sounds into computer game environments equipped with physics engines. We review some aspects of this system, and also consider it from the purely aesthetic perspective of musical expression.
@inproceedings{Menzies2008, author = {Menzies, Dylan}, title = {Virtual Intimacy : Phya as an Instrument}, pages = {71--76}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179599}, url = {http://www.nime.org/proceedings/2008/nime2008_071.pdf}, keywords = {NIME, musical expression, virtual reality, physical model- ing, audio synthesis } }
Jennifer Butler. 2008. Creating Pedagogical Etudes for Interactive Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 77–80. http://doi.org/10.5281/zenodo.1179503
Abstract
Download PDF DOI
In this paper I discuss the importance of and need for pedagogical materials to support the development of new interfaces and new instruments for electronic music. I describe my method for creating a graduated series of pedagogical etudes composed using Max/MSP. The etudes will help performers and instrument designers learn the most commonly used basic skills necessary to perform with interactive electronic music instruments. My intention is that the final series will guide a beginner from these initial steps through a graduated method, eventually incorporating some of the more advanced techniques regularly used by electronic music composers. I describe the order of the series, and discuss the benefits (both to performers and to composers) of having a logical sequence of skill-based etudes. I also connect the significance of skilled performers to the development of two essential areas that I perceive are still just emerging in this field: the creation of a composed repertoire and an increase in musical expression during performance.
@inproceedings{Butler2008, author = {Butler, Jennifer}, title = {Creating Pedagogical Etudes for Interactive Instruments}, pages = {77--80}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179503}, url = {http://www.nime.org/proceedings/2008/nime2008_077.pdf}, keywords = {composition,etudes,max,msp,musical controllers,musical expression,nime08,pedagogy,repertoire} }
Dan Stowell, Mark D. Plumbley, and Nick Bryan-Kinns. 2008. Discourse Analysis Evaluation Method for Expressive Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 81–86. http://doi.org/10.5281/zenodo.1179631
Abstract
Download PDF DOI
The expressive and creative affordances of an interface are difficult to evaluate, particularly with quantitative methods. However, rigorous qualitative methods do exist and can be used to investigate such topics. We present a methodology based around user studies involving Discourse Analysis of speech. We also present an example of the methodology in use: we evaluate a musical interface which utilises vocal timbre, with a user group of beatboxers.
@inproceedings{Stowell2008, author = {Stowell, Dan and Plumbley, Mark D. and Bryan-Kinns, Nick}, title = {Discourse Analysis Evaluation Method for Expressive Musical Interfaces}, pages = {81--86}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179631}, url = {http://www.nime.org/proceedings/2008/nime2008_081.pdf}, keywords = {discourse analysis,evaluation,nime08,qualitative methods,voice} }
Chris Kiefer, Nick Collins, and Geraldine Fitzpatrick. 2008. HCI Methodology For Evaluating Musical Controllers : A Case Study. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 87–90. http://doi.org/10.5281/zenodo.1179577
Abstract
Download PDF DOI
There is a small but useful body of research concerning the evaluation of musical interfaces with HCI techniques. In this paper, we present a case study in implementing these techniques; we describe a usability experiment which evaluated the Nintendo Wiimote as a musical controller, and reflect on the effectiveness of our choice of HCI methodologies in this context. The study offered some valuable results, but our picture of the Wiimote was incomplete as we lacked data concerning the participants' instantaneous musical experience. Recent trends in HCI are leading researchers to tackle this problem of evaluating user experience; we review some of their work and suggest that with some adaptation it could provide useful new tools and methodologies for computer musicians.
@inproceedings{Kiefer2008, author = {Kiefer, Chris and Collins, Nick and Fitzpatrick, Geraldine}, title = {HCI Methodology For Evaluating Musical Controllers : A Case Study}, pages = {87--90}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179577}, url = {http://www.nime.org/proceedings/2008/nime2008_087.pdf}, keywords = {HCI Methodology, Wiimote, Evaluating Musical Interaction } }
Olivier Bau, Atau Tanaka, and Wendy E. Mackay. 2008. The A20 : Musical Metaphors for Interface Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 91–96. http://doi.org/10.5281/zenodo.1179489
Abstract
Download PDF DOI
We combine two concepts, the musical instrument as metaphor and technology probes, to explore how tangible interfaces can exploit the semantic richness of sound. Using participatory design methods from Human-Computer Interaction (HCI), we designed and tested the A20, a polyhedron-shaped, multichannel audio input/output device. The software maps sound around the edges and responds to the user's gestural input, allowing both aural and haptic modes of interaction as well as direct manipulation of media content. The software is designed to be very flexible and can be adapted to a wide range of shapes. Our tests of the A20's perceptual and interaction properties showed that users can successfully detect sound placement, movement and haptic effects on this device. Our participatory design workshops explored the possibilities of the A20 as a generative tool for the design of an extended, collaborative personal music player. The A20 helped users to enact scenarios of everyday mobile music player use and to generate new design ideas.
@inproceedings{Bau2008, author = {Bau, Olivier and Tanaka, Atau and Mackay, Wendy E.}, title = {The A20 : Musical Metaphors for Interface Design}, pages = {91--96}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179489}, url = {http://www.nime.org/proceedings/2008/nime2008_091.pdf}, keywords = {Generative design tools, Instrument building, Multi-faceted audio, Personal music devices, Tangible user interfaces, Technology probes } }
Tobias Grosshauser. 2008. Low Force Pressure Measurement : Pressure Sensor Matrices for Gesture Analysis , Stiffness Recognition and Augmented Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 97–102. http://doi.org/10.5281/zenodo.1179551
Abstract
Download PDF DOI
The described project is a new approach that uses highly sensitive, low-force pressure sensor matrices to detect malposition, cramping and tension of the hands and fingers, for gesture and keystroke analysis, and for new musical expression. In the latter, sensors are used as additional touch-sensitive switches and keys. For pedagogy, new ways of technology-enhanced teaching, self-teaching and exercising are described. The sensors used are custom made in collaboration with the ReactiveS Sensorlab.
@inproceedings{Grosshauser2008, author = {Grosshauser, Tobias}, title = {Low Force Pressure Measurement : Pressure Sensor Matrices for Gesture Analysis , Stiffness Recognition and Augmented Instruments}, pages = {97--102}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179551}, url = {http://www.nime.org/proceedings/2008/nime2008_097.pdf}, keywords = {Pressure Measurement, Force, Sensor, Finger, Violin, Strings, Piano, Left Hand, Right Hand, Time Line, Cramping, Gesture and Posture Analysis. } }
Giuseppe Torre, Javier Torres, and Mikael Fernström. 2008. The Development of Motion Tracking Algorithms for Low Cost Inertial Measurement Units. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 103–106. http://doi.org/10.5281/zenodo.1179641
Abstract
Download PDF DOI
In this paper, we describe an algorithm for the numerical evaluation of the orientation of an object to which a cluster of accelerometers, gyroscopes and magnetometers has been attached. The algorithm is implemented through a set of new Max/MSP and Pd externals. Through the successful implementation of the algorithm, we introduce Pointing-at, a new gesture device for the control of sound in a 3D environment. This work has been at the core of the Celeritas Project, an interdisciplinary research project on motion tracking technology and multimedia live performances between the Tyndall Institute of Cork and the Interaction Design Centre of Limerick.
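The paper's exact sensor-fusion algorithm is not reproduced here; as a generic illustration of the problem it addresses, the sketch below fuses gyroscope integration with the tilt angle implied by the accelerometer using a single-axis complementary filter, with invented gains and data.

# Hedged sketch: single-axis complementary filter for pitch estimation.
import math

def complementary_pitch(prev_pitch, gyro_rate_dps, accel_xyz, dt, alpha=0.98):
    ax, ay, az = accel_xyz
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    gyro_pitch = prev_pitch + gyro_rate_dps * dt        # integrate the gyro
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# Sensor held still and level: the estimate converges toward 0 degrees.
pitch = 10.0
for _ in range(200):
    pitch = complementary_pitch(pitch, 0.0, (0.0, 0.0, 1.0), dt=0.01)
print(round(pitch, 2))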
@inproceedings{Torre2008, author = {Torre, Giuseppe and Torres, Javier and Fernstr\"{o}m, Mikael}, title = {The Development of Motion Tracking Algorithms for Low Cost Inertial Measurement Units}, pages = {103--106}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179641}, url = {http://www.nime.org/proceedings/2008/nime2008_103.pdf}, keywords = {eu-,ler,max,micro-electro-mechanical,msp,nime08,orientation matrix,pd,pitch yaw and roll,quaternion,sensors,surement unit,tracking orientation,wimu,wireless inertial mea-} }
Adrian Freed. 2008. Application of new Fiber and Malleable Materials for Agile Development of Augmented Instruments and Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 107–112. http://doi.org/10.5281/zenodo.1179539
Abstract
Download PDF DOI
The paper introduces new fiber and malleable materials, including piezoresistive fabric and conductive heat-shrink tubing, and shows techniques and examples of how they may be used for rapid prototyping and agile development of musical instrument controllers. New implementations of well-known designs are covered as well as enhancements of existing controllers. Finally, two new controllers are introduced that are made possible by these recently available materials and construction techniques.
@inproceedings{Freed2008, author = {Freed, Adrian}, title = {Application of new Fiber and Malleable Materials for Agile Development of Augmented Instruments and Controllers}, pages = {107--112}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179539}, url = {http://www.nime.org/proceedings/2008/nime2008_107.pdf}, keywords = {Agile Development, Rapid Prototyping, Conductive fabric, Piezoresistive fabric, conductive heatshrink tubing, augmented instruments. } }
Alain Crevoisier and Greg Kellum. 2008. Transforming Ordinary Surfaces into Multi-touch Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 113–116. http://doi.org/10.5281/zenodo.1179517
Abstract
Download PDF DOI
In this paper, we describe a set of hardware and software tools for creating musical controllers with any flat surface or simple object, such as tables, walls, metallic plates, wood boards, etc. The system makes it possible to transform such physical objects and surfaces into virtual control interfaces, by using computer vision technologies to track the interaction made by the musician, either with the hands, mallets or sticks. These new musical interfaces, freely reconfigurable, can be used to control standard sound modules or effect processors, by defining zones on their surface and assigning them musical commands, such as the triggering of notes or the modulation of parameters.
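The zone idea can be pictured as a simple lookup from a tracked touch position to a user-defined region of the surface; the zone layout, names and note numbers below are invented for illustration.

# Illustrative sketch: map a tracked touch position to a user-defined zone.
ZONES = [
    {"name": "kick",  "rect": (0.0, 0.0, 0.5, 0.5), "note": 36},
    {"name": "snare", "rect": (0.5, 0.0, 1.0, 0.5), "note": 38},
    {"name": "hat",   "rect": (0.0, 0.5, 1.0, 1.0), "note": 42},
]

def zone_for_touch(x, y, zones=ZONES):
    for z in zones:
        x0, y0, x1, y1 = z["rect"]
        if x0 <= x < x1 and y0 <= y < y1:
            return z
    return None

hit = zone_for_touch(0.7, 0.2)          # touch detected on the table surface
if hit:
    print("trigger note", hit["note"], "from zone", hit["name"])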
@inproceedings{Crevoisier2008, author = {Crevoisier, Alain and Kellum, Greg}, title = {Transforming Ordinary Surfaces into Multi-touch Controllers}, pages = {113--116}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179517}, url = {http://www.nime.org/proceedings/2008/nime2008_113.pdf}, keywords = {Computer Vision, Multi-touch Interaction, Musical Interfaces. } }
Nicholas Ward, Kedzie Penfield, Sile O’Modhrain, and Benjamin Knapp. 2008. A Study of Two Thereminists : Towards Movement Informed Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 117–121. http://doi.org/10.5281/zenodo.1179649
Abstract
Download PDF DOI
This paper presents a comparison of the movement styles of two theremin players based on observation and analysis of video recordings. The premise behind this research is that a consideration of musicians’ movements could form the basis for a new framework for the design of new instruments. Laban Movement Analysis is used to qualitatively analyse the movement styles of the musicians and to argue that the Recuperation phase of their phrasing is essential to achieve satisfactory performance.
@inproceedings{Ward2008, author = {Ward, Nicholas and Penfield, Kedzie and O'Modhrain, Sile and Knapp, Benjamin}, title = {A Study of Two Thereminists : Towards Movement Informed Instrument Design}, pages = {117--121}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179649}, url = {http://www.nime.org/proceedings/2008/nime2008_117.pdf}, keywords = {Effort Phrasing, Recuperation, Laban Movement Analysis, Theremin } }
Vassilios-Fivos A. Maniatakos and Christian Jacquemin. 2008. Towards an Affective Gesture Interface for Expressive Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 122–127. http://doi.org/10.5281/zenodo.1179595
BibTeX
Download PDF DOI
@inproceedings{Maniatakos2008, author = {Maniatakos, Vassilios-Fivos A. and Jacquemin, Christian}, title = {Towards an Affective Gesture Interface for Expressive Music Performance}, pages = {122--127}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179595}, url = {http://www.nime.org/proceedings/2008/nime2008_122.pdf}, keywords = {affective computing, interactive performance, HMM, gesture recognition, intelligent mapping, affective interface } }
Anna Källblad, Anders Friberg, Karl Svensson, and Elisabet S. Edelholm. 2008. Hoppsa Universum – An Interactive Dance Installation for Children. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 128–133. http://doi.org/10.5281/zenodo.1179573
Abstract
Download PDF DOI
It started with an idea to create an empty space in which you activated music and light as you moved around. In responding to the music and lighting you would activate more or different sounds and thereby communicate with the space through your body. This led to an artistic research project in which children's spontaneous movement was observed, a choreography was made based on the children's movements, and music was written and recorded for the choreography. This music was then decomposed and choreographed into an empty space at Botkyrka konsthall, creating an interactive dance installation. It was realized using an interactive sound and light system in which 5 video cameras detecting motion in the room were connected to a 4-channel sound system and a set of 14 light modules. For five weeks, people of all ages came to dance and move around in the installation. The installation attracted a wide range of people of all ages, and the tentative evaluation indicates that it was very positively received and that it encouraged free movement in the intended way. Besides observation of the activity in the installation, interviews were conducted with schoolchildren aged 7 who had participated in the installation.
@inproceedings{Kallblad2008, author = {K\"{a}llblad, Anna and Friberg, Anders and Svensson, Karl and Edelholm, Elisabet S.}, title = {Hoppsa Universum -- An Interactive Dance Installation for Children}, pages = {128--133}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179573}, url = {http://www.nime.org/proceedings/2008/nime2008_128.pdf}, keywords = {Installation, dance, video recognition, children's movement, interactive multimedia } }
Antonio Camurri, Corrado Canepa, Paolo Coletta, Barbara Mazzarino, and Gualtiero Volpe. 2008. Mappe per Affetti Erranti : a Multimodal System for Social Active Listening and Expressive Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 134–139. http://doi.org/10.5281/zenodo.1179505
BibTeX
Download PDF DOI
@inproceedings{Camurri2008, author = {Camurri, Antonio and Canepa, Corrado and Coletta, Paolo and Mazzarino, Barbara and Volpe, Gualtiero}, title = {Mappe per Affetti Erranti : a Multimodal System for Social Active Listening and Expressive Performance}, pages = {134--139}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179505}, url = {http://www.nime.org/proceedings/2008/nime2008_134.pdf}, keywords = {Active listening of music, expressive interfaces, full-body motion analysis and expressive gesture processing, multimodal interactive systems for music and performing arts applications, collaborative environments, social interaction. } }
Sergio Canazza and Antonina Dattolo. 2008. New Data Structure for Old Musical Open Works. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 140–143. http://doi.org/10.5281/zenodo.1179507
Abstract
Download PDF DOI
Musical open works can often be thought of as sequences of musical structures, which can be arranged by anyone who has access to them and who wishes to realize the work. This paper proposes an innovative agent-based system to model the information and organize it into structured knowledge; to create effective, graph-centric browsing perspectives and views for the user; and to use authoring tools for the performance of open works of electro-acoustic music.
@inproceedings{Canazza2008, author = {Canazza, Sergio and Dattolo, Antonina}, title = {New Data Structure for Old Musical Open Works}, pages = {140--143}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179507}, url = {http://www.nime.org/proceedings/2008/nime2008_140.pdf}, keywords = {Musical Open Work, Multimedia Information Systems, Software Agents, zz-structures. } }
Arne Eigenfeldt and Ajay Kapur. 2008. An Agent-based System for Robotic Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 144–149. http://doi.org/10.5281/zenodo.1179527
Abstract
Download PDF DOI
This paper presents an agent-based architecture for robotic musical instruments that generate polyphonic rhythmic patterns that continuously evolve and develop in a musically "intelligent" manner. Agent-based software offers a new method for real-time composition that allows for complex interactions between individual voices while requiring very little user interaction or supervision. The system described, Kinetic Engine, is an environment in which individual software agents emulate drummers improvising within a percussion ensemble. Player agents assume roles and personalities within the ensemble, and communicate with one another to create complex rhythmic interactions. In this project, the ensemble is comprised of a 12-armed musical robot, MahaDeviBot, in which each limb has its own software agent controlling what it performs.
@inproceedings{Eigenfeldt2008, author = {Eigenfeldt, Arne and Kapur, Ajay}, title = {An Agent-based System for Robotic Musical Performance}, pages = {144--149}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179527}, url = {http://www.nime.org/proceedings/2008/nime2008_144.pdf}, keywords = {Robotic Musical Instruments, Agents, Machine Musicianship. } }
Maurizio Goina and Pietro Polotti. 2008. Elementary Gestalts for Gesture Sonification. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 150–153. http://doi.org/10.5281/zenodo.1179549
Abstract
Download PDF DOI
In this paper, we investigate the relationships between gesture and sound by means of an elementary gesture sonification. This work takes inspiration from Bauhaus’ ideals and Paul Klee’s investigation into forms and pictorial representation. In line with these ideas, the main aim of this work is to reduce gesture to a combination of a small number of elementary components (gestalts) used to control a corresponding small set of sounds. By means of a demonstrative tool, we introduce here a line of research that is at its initial stage. The envisaged goal of future developments is a novel system that could be a composing/improvising tool as well as an interface for interactive dance and performance.
@inproceedings{Goina2008, author = {Goina, Maurizio and Polotti, Pietro}, title = {Elementary Gestalts for Gesture Sonification}, pages = {150--153}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179549}, url = {http://www.nime.org/proceedings/2008/nime2008_150.pdf}, keywords = {Bauhaus, Klee, gesture analysis, sonification. } }
Stefano Delle Monache, Pietro Polotti, Stefano Papetti, and Davide Rocchesso. 2008. Sonically Augmented Found Objects. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 154–157. http://doi.org/10.5281/zenodo.1179519
Abstract
Download PDF DOI
We present our work with augmented everyday objects transformed into sound sources for music generation. The idea is to give voice to objects through technology. More specifically, the paradigm of the birth of musical instruments as a sonification of objects used in domestic or work everyday environments is here considered and transposed into the technologically augmented scenarios of our contemporary world.
@inproceedings{DelleMonache2008, author = {Delle Monache, Stefano and Polotti, Pietro and Papetti, Stefano and Rocchesso, Davide}, title = {Sonically Augmented Found Objects}, pages = {154--157}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179519}, url = {http://www.nime.org/proceedings/2008/nime2008_154.pdf}, keywords = {Rag-time washboard, sounding objects, physics-based sound synthesis, interactivity, sonification, augmented everyday objects. } }
Jean-Marc Pelletier. 2008. Sonified Motion Flow Fields as a Means of Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 158–163. http://doi.org/10.5281/zenodo.1179611
Abstract
Download PDF DOI
This paper describes a generalized motion-based framework for the generation of large musical control fields from imaging data. The framework is general in the sense that it does not depend on a particular source of sensing data. Real-time images of stage performers, pre-recorded and live video, as well as more exotic data from imaging systems such as thermography, pressure sensor arrays, etc. can be used as a source of control. Feature points are extracted from the candidate images, from which motion vector fields are calculated. After some processing, these motion vectors are mapped individually to sound synthesis parameters. Suitable synthesis techniques include granular and microsonic algorithms, additive synthesis and micro-polyphonic orchestration. Implementation details of this framework are discussed, as well as suitable creative and artistic uses and approaches.
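As a rough illustration of the kind of processing the abstract describes (not code from the paper), the following Python sketch computes a dense motion field from two frames with OpenCV and reduces it to two control values; the frames are random stand-ins for a real imaging source, and the granular-synthesis parameter names are invented.

# Illustrative sketch only: a dense motion field from two video frames, reduced
# to a handful of control values. The mapping targets are hypothetical.
import cv2
import numpy as np

prev = np.random.randint(0, 256, (120, 160), dtype=np.uint8)   # stand-ins for two frames
curr = np.random.randint(0, 256, (120, 160), dtype=np.uint8)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
magnitude = np.linalg.norm(flow, axis=2)

grain_density = float(magnitude.mean())        # hypothetical granular-synthesis control
grain_spread = float(magnitude.std())
print(grain_density, grain_spread)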
@inproceedings{Pelletier2008, author = {Pelletier, Jean-Marc}, title = {Sonified Motion Flow Fields as a Means of Musical Expression}, pages = {158--163}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179611}, url = {http://www.nime.org/proceedings/2008/nime2008_158.pdf}, keywords = {Computer vision, control field, image analysis, imaging, mapping, microsound, motion flow, sonification, synthesis } }
Josh Dubrau and Mark Havryliv. 2008. P[a]ra[pra]xis : Poetry in Motion. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 164–167. http://doi.org/10.5281/zenodo.1179525
BibTeX
Download PDF DOI
@inproceedings{Dubrau2008, author = {Dubrau, Josh and Havryliv, Mark}, title = {P[a]ra[pra]xis : Poetry in Motion}, pages = {164--167}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179525}, url = {http://www.nime.org/proceedings/2008/nime2008_164.pdf}, keywords = {Poetry, language sonification, psychoanalysis, linguistics, Freud, realtime poetry. } }
Jan C. Schacher. 2008. Davos Soundscape, a Location Based Interactive Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 168–171. http://doi.org/10.5281/zenodo.1179623
Abstract
Download PDF DOI
Moving out of doors with digital tools and electronic music and creating musically rich experiences is made possible by the increased availability of ever smaller and more powerful mobile computers. Composing music for and in a landscape instead of for a closed architectural space offers new perspectives but also raises questions about interaction and composition of electronic music. The work we present here was commissioned by a festival and ran on a daily basis over a period of three months. A GPS-enabled embedded Linux system is assembled to serve as a location-aware sound platform. Several challenges have to be overcome both technically and artistically to achieve a seamless experience and provide a simple device to be handed to the public. By building this interactive experience, which relies as much on the user’s willingness to explore the invisible sonic landscape as on the ability to deploy the technology, a number of new avenues for exploring electronic music and interactivity in location-based media open up. New ways of composing music for and in a landscape and for creating audience interaction are explored.
@inproceedings{Schacher2008, author = {Schacher, Jan C.}, title = {Davos Soundscape, a Location Based Interactive Composition}, pages = {168--171}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179623}, url = {http://www.nime.org/proceedings/2008/nime2008_168.pdf}, keywords = {Location-based, electronic music, composition, embedded Linux, GPS, Pure Data, interaction, mapping, soundscape } }
Andrew Schmeder and Adrian Freed. 2008. uOSC : The Open Sound Control Reference Platform for Embedded Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 175–180. http://doi.org/10.5281/zenodo.1179627
Abstract
Download PDF DOI
A general-purpose firmware for a low cost microcontroller is described that employs the Open Sound Control protocol over USB. The firmware is designed with considerations for integration in new musical interfaces and embedded devices. Features of note include stateless design, efficient floating-point support, temporally correct data handling, and protocol completeness. A timing performance analysis is conducted.
@inproceedings{Schmeder2008, author = {Schmeder, Andrew and Freed, Adrian}, title = {uOSC : The Open Sound Control Reference Platform for Embedded Devices}, pages = {175--180}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179627}, url = {http://www.nime.org/proceedings/2008/nime2008_175.pdf}, keywords = {jitter,latency,nime08,open sound control,pic microcontroller,usb} }
Timothy Place, Trond Lossius, Alexander R. Jensenius, and Nils Peters. 2008. Addressing Classes by Differentiating Values and Properties in OSC. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 181–184. http://doi.org/10.5281/zenodo.1179613
Abstract
Download PDF DOI
An approach for creating structured Open Sound Control (OSC) messages by separating the addressing of node values and node properties is suggested. This includes a method for querying values and properties. As a result, it is possible to address complex nodes as classes inside of more complex tree structures using an OSC namespace. This is particularly useful for creating flexible communication in modular systems. A prototype implementation is presented and discussed.
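A minimal sketch of the general idea of addressing a node's value and its properties separately over OSC, written with the python-osc library; the address scheme, host, port and property names below are hypothetical examples and are not the syntax proposed in the paper.

# Illustrative sketch only: a hypothetical OSC address scheme that separates a
# node's value from its properties. All names here are invented for the example.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)            # assumed host/port of a modular system

client.send_message("/mixer/channel.1/gain", 0.8)      # set the node's value
client.send_message("/mixer/channel.1/gain/ramp", 200) # set a property of the same node (ms)

# Query the current value and a property (hypothetical "get" convention).
client.send_message("/mixer/channel.1/gain/get", [])
client.send_message("/mixer/channel.1/gain/ramp/get", [])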
@inproceedings{Place2008, author = {Place, Timothy and Lossius, Trond and Jensenius, Alexander R. and Peters, Nils}, title = {Addressing Classes by Differentiating Values and Properties in OSC}, pages = {181--184}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179613}, url = {http://www.nime.org/proceedings/2008/nime2008_181.pdf}, keywords = {jamoma,namespace,nime08,osc,standardization} }
Ananya Misra, Georg Essl, and Michael Rohs. 2008. Microphone as Sensor in Mobile Phone Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 185–188. http://doi.org/10.5281/zenodo.1179485
Abstract
Download PDF DOI
Many mobile devices, specifically mobile phones, come equipped with a microphone. Microphones are high-fidelity sensors that can pick up sounds relating to a range of physical phenomena. Using simple feature extraction methods, parameters can be found that sensibly map to synthesis algorithms to allow expressive and interactive performance. For example, blowing noise can be used as a wind instrument excitation source. Also other types of interactions can be detected via microphones, such as striking. Hence the microphone, in addition to allowing literal recording, serves as an additional source of input to the developing field of mobile phone performance.
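As an illustrative sketch of the kind of feature extraction the abstract mentions (not the paper's implementation), the following Python fragment computes RMS energy and spectral centroid from one audio buffer and maps them to two made-up synthesis controls.

# Illustrative sketch only: simple features from a microphone buffer mapped to
# hypothetical synthesis parameters.
import numpy as np

def extract_features(buf, sr=44100):
    """Return RMS energy and spectral centroid of one audio buffer."""
    rms = float(np.sqrt(np.mean(buf ** 2)))
    spectrum = np.abs(np.fft.rfft(buf))
    freqs = np.fft.rfftfreq(len(buf), 1.0 / sr)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return rms, centroid

buf = np.random.randn(1024) * 0.1          # fake buffer standing in for live mic input
rms, centroid = extract_features(buf)

breath_pressure = min(1.0, rms * 10.0)     # excitation strength for a wind model (made up)
brightness = centroid / 22050.0            # normalised filter cutoff (made up)
print(breath_pressure, brightness)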
@inproceedings{Platz2008, author = {Misra, Ananya and Essl, Georg and Rohs, Michael}, title = {Microphone as Sensor in Mobile Phone Performance}, pages = {185--188}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179485}, url = {http://www.nime.org/proceedings/2008/nime2008_185.pdf}, keywords = {mobile music making, microphone, mobile-stk } }
Nicolas Bouillot, Mike Wozniewski, Zack Settel, and Jeremy R. Cooperstock. 2008. A Mobile Wireless Augmented Guitar. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 189–192. http://doi.org/10.5281/zenodo.1179499
BibTeX
Download PDF DOI
@inproceedings{Bouillot2008, author = {Bouillot, Nicolas and Wozniewski, Mike and Settel, Zack and Cooperstock, Jeremy R.}, title = {A Mobile Wireless Augmented Guitar}, pages = {189--192}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179499}, url = {http://www.nime.org/proceedings/2008/nime2008_189.pdf}, keywords = {nime08} }
Robert Jacobs, Mark Feldmeier, and Joseph A. Paradiso. 2008. A Mobile Music Environment Using a PD Compiler and Wireless Sensors. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 193–196. http://doi.org/10.5281/zenodo.1179567
Abstract
Download PDF DOI
None
@inproceedings{Jacobs2008, author = {Jacobs, Robert and Feldmeier, Mark and Paradiso, Joseph A.}, title = {A Mobile Music Environment Using a PD Compiler and Wireless Sensors}, pages = {193--196}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179567}, url = {http://www.nime.org/proceedings/2008/nime2008_193.pdf}, keywords = {None} }
Ross Bencina, Danielle Wilde, and Somaya Langley. 2008. Gesture=Sound Experiments : Process and Mappings. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 197–202. http://doi.org/10.5281/zenodo.1179491
BibTeX
Download PDF DOI
@inproceedings{Bencina2008, author = {Bencina, Ross and Wilde, Danielle and Langley, Somaya}, title = {Gesture=Sound Experiments : Process and Mappings}, pages = {197--202}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179491}, url = {http://www.nime.org/proceedings/2008/nime2008_197.pdf}, keywords = {gestural control,mapping,nime08,prototyping,three-axis accelerometers,vocal,wii remote} }
Miha Ciglar. 2008. "3rd. Pole" – A Composition Performed via Gestural Cues. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 203–206. http://doi.org/10.5281/zenodo.1179511
BibTeX
Download PDF DOI
@inproceedings{Ciglar2008, author = {Ciglar, Miha}, title = {"3rd. Pole" -- A Composition Performed via Gestural Cues}, pages = {203--206}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179511}, url = {http://www.nime.org/proceedings/2008/nime2008_203.pdf}, keywords = {dancer, fig, from the system in, gesture recognition, haptic feedback, in, markers attached to the, motion tracking, nime08, s limbs, the dancer receives feedback, two ways} }
Kjetil F. Hansen and Marcos Alonso. 2008. More DJ Techniques on the reactable. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 207–210. http://doi.org/10.5281/zenodo.1179555
Abstract
Download PDF DOI
This paper describes a project started for implementing DJ scratching techniques on the reactable. By interacting with objects representing scratch patterns commonly performed on the turntable and the crossfader, the musician can play with DJ techniques and manipulate how they are executed in a performance. This is a novel approach to digital DJ applications and hardware. Two expert musicians practised and performed on the reactable in order to both evaluate the playability and improve the design of the DJ techniques.
@inproceedings{Hansen2008, author = {Hansen, Kjetil F. and Alonso, Marcos}, title = {More DJ Techniques on the reactable}, pages = {207--210}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179555}, url = {http://www.nime.org/proceedings/2008/nime2008_207.pdf}, keywords = {dj scratch techniques,interfaces,nime08,playability,reactable} }
Smilen Dimitrov, Marcos Alonso, and Stefania Serafin. 2008. Developing Block-Movement, Physical-Model Based Objects for the Reactable. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 211–214. http://doi.org/10.5281/zenodo.1179523
Abstract
Download PDF DOI
This paper reports on a Short-Term Scientific Mission (STSM) sponsored by the Sonic Interaction Design (SID) European COST Action IC601. Prototypes of objects for the novel instrument Reactable were developed, with the goal of studying sonification of movements on this platform using physical models. A physical model of frictional interactions between rubbed dry surfaces was used as an audio generation engine, which allowed development in two directions: a set of objects that affords motions similar to sliding, and a single object aiming to sonify contact friction sound. Informal evaluation was obtained from a Reactable expert user regarding these sets of objects. Experiments with the objects were also performed, related to both audio filtering and interfacing with other objects for the Reactable.
@inproceedings{Dimitrov2008, author = {Dimitrov, Smilen and Alonso, Marcos and Serafin, Stefania}, title = {Developing Block-Movement, Physical-Model Based Objects for the Reactable}, pages = {211--214}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179523}, url = {http://www.nime.org/proceedings/2008/nime2008_211.pdf}, keywords = {Reactable, physical model, motion sonification, contact fric- tion } }
Jean-Baptiste Thiebaut, Samer Abdallah, Andrew Robertson, Nick Bryan-Kinns, and Mark D. Plumbley. 2008. Real Time Gesture Learning and Recognition : Towards Automatic Categorization. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 215–218. http://doi.org/10.5281/zenodo.1179639
Abstract
Download PDF DOI
This research focuses on real-time gesture learning and recognition. Events arrive in a continuous stream without explicitly given boundaries. To obtain temporal accuracy, we need to consider the lag between the detection of an event and any effects we wish to trigger with it. Two methods for real-time gesture recognition using a Nintendo Wii controller are presented. The first detects gestures similar to a given template using either a Euclidean distance or a cosine similarity measure. The second method uses novel information-theoretic methods to detect and categorize gestures in an unsupervised way. The role of supervision, detection lag and the importance of haptic feedback are discussed.
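A minimal sketch of the first method described above, matching an incoming window against a stored template with Euclidean distance and cosine similarity; the template, window length and trigger threshold are invented for the example.

# Illustrative sketch only: template matching of a gesture window against a
# stored example. Window length and threshold are arbitrary.
import numpy as np

def euclidean_distance(a, b):
    return float(np.linalg.norm(a - b))

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

template = np.sin(np.linspace(0, np.pi, 32))          # stored example gesture
incoming = template + 0.05 * np.random.randn(32)      # live window from the controller

if cosine_similarity(incoming, template) > 0.95:      # hypothetical trigger threshold
    print("gesture recognised, distance =", euclidean_distance(incoming, template))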
@inproceedings{Thiebaut2008, author = {Thiebaut, Jean-Baptiste and Abdallah, Samer and Robertson, Andrew and Bryan-Kinns, Nick and Plumbley, Mark D.}, title = {Real Time Gesture Learning and Recognition : Towards Automatic Categorization}, pages = {215--218}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179639}, url = {http://www.nime.org/proceedings/2008/nime2008_215.pdf}, keywords = {Gesture recognition, supervised and unsupervised learning, interaction, haptic feedback, information dynamics, HMMs } }
Mari Kimura. 2008. Making of VITESSIMO for Augmented Violin : Compositional Process and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 219–220. http://doi.org/10.5281/zenodo.1179581
Abstract
Download PDF DOI
This paper describes the compositional process for creating the interactive work for violin entitled VITESSIMO using the Augmented Violin [1].
@inproceedings{Kimura2008, author = {Kimura, Mari}, title = {Making of VITESSIMO for Augmented Violin : Compositional Process and Performance}, pages = {219--220}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179581}, url = {http://www.nime.org/proceedings/2008/nime2008_219.pdf}, keywords = {Augmented Violin, gesture tracking, interactive performance } }
Jörn Loviscach. 2008. Programming a Music Synthesizer through Data Mining. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 221–224. http://doi.org/10.5281/zenodo.1179591
Abstract
Download PDF DOI
Sound libraries for music synthesizers easily comprise one thousand or more programs ("patches"). Thus, there are enough raw data to apply data mining to reveal typical settings and to extract dependencies. Intelligent user interfaces for music synthesizers can be based on such statistics. This paper proposes two approaches: First, the user sets any number of parameters and then lets the system find the nearest sounds in the database, a kind of patch autocompletion. Second, all parameters are "live" as usual, but turning one knob or setting a switch will also change the settings of other, statistically related controls. Both approaches can be used with the standard interface of the synthesizer. On top of that, this paper introduces alternative or additional interfaces based on data visualization.
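The "patch autocompletion" idea can be illustrated with a toy nearest-neighbour search over a patch library; the parameter layout and example patches below are invented, not data from the paper.

# Illustrative sketch only: complete a partially specified patch from its nearest
# neighbour in a (tiny, made-up) patch library.
import numpy as np

# Each patch is a vector of normalised parameters (cutoff, resonance, attack, release).
library = np.array([
    [0.20, 0.70, 0.05, 0.30],   # "bass"
    [0.80, 0.10, 0.40, 0.80],   # "pad"
    [0.60, 0.50, 0.01, 0.10],   # "pluck"
])
names = ["bass", "pad", "pluck"]

partial = np.array([0.75, np.nan, 0.45, np.nan])   # the user has set only two parameters

known = ~np.isnan(partial)
distances = np.linalg.norm(library[:, known] - partial[known], axis=1)
nearest = int(np.argmin(distances))
completed = np.where(known, partial, library[nearest])
print("nearest patch:", names[nearest], "completed settings:", completed)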
@inproceedings{Loviscach2008, author = {Loviscach, J\"{o}rn}, title = {Programming a Music Synthesizer through Data Mining}, pages = {221--224}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179591}, url = {http://www.nime.org/proceedings/2008/nime2008_221.pdf}, keywords = {Information visualization, mutual information, intelligent user interfaces } }
Kia Ng and Paolo Nesi. 2008. i-Maestro : Technology-Enhanced Learning and Teaching for Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 225–228. http://doi.org/10.5281/zenodo.1179605
Abstract
Download PDF DOI
This paper presents a project called i-Maestro (www.i-maestro.org) which develops interactive multimedia environments for technology enhanced music education. The project explores novel solutions for music training in both theory and performance, building on recent innovations resulting from the development of computer and information technologies, by exploiting new pedagogical paradigms with cooperative and interactive self-learning environments, gesture interfaces, and augmented instruments. This paper discusses the general context along with the background and current developments of the project, together with an overview of the framework and discussions on a number of selected tools to support technology-enhanced music learning and teaching.
@inproceedings{Ng2008, author = {Ng, Kia and Nesi, Paolo}, title = {i-Maestro : Technology-Enhanced Learning and Teaching for Music}, pages = {225--228}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179605}, url = {http://www.nime.org/proceedings/2008/nime2008_225.pdf}, keywords = {augmented instrument,education,gesture,interactive,interface,motion,multimedia,music,nime08,notation,sensor,sonification,technology-enhanced learning,visualisation} }
Bart Kuyken, Wouter Verstichel, Frederick Bossuyt, Jan Vanfleteren, Michiel Demey, and Marc Leman. 2008. The HOP Sensor : Wireless Motion Sensor. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 229–232. http://doi.org/10.5281/zenodo.1179583
Abstract
Download PDF DOI
This paper describes the HOP system. It consists of a wireless module built up by multiple nodes and a base station. The nodes detect acceleration of e.g. human movement. At a rate of 100 Hertz the base station collects the acceleration samples. The data can be acquired in real-time software like Pure Data and Max/MSP. The data can be used to analyze and/or sonify movement.
@inproceedings{Kuyken2008, author = {Kuyken, Bart and Verstichel, Wouter and Bossuyt, Frederick and Vanfleteren, Jan and Demey, Michiel and Leman, Marc}, title = {The HOP Sensor : Wireless Motion Sensor}, pages = {229--232}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179583}, url = {http://www.nime.org/proceedings/2008/nime2008_229.pdf}, keywords = {Digital Musical Instrument, Wireless Sensors, Inertial Sensing, Hop Sensor } }
Niall Coghlan and Benjamin Knapp. 2008. Sensory Chairs : A System for Biosignal Research and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 233–236. http://doi.org/10.5281/zenodo.1179513
BibTeX
Download PDF DOI
@inproceedings{Coghlan2008, author = {Coghlan, Niall and Knapp, Benjamin}, title = {Sensory Chairs : A System for Biosignal Research and Performance}, pages = {233--236}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179513}, url = {http://www.nime.org/proceedings/2008/nime2008_233.pdf}, keywords = {Ubiquitous computing, context -awareness, networking, embedded systems, chairs, digital artefacts, emotional state sensing, affective computing, biosignals. } }
Andrew B. Godbehere and Nathan J. Ward. 2008. Wearable Interfaces for Cyberphysical Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 237–240. http://doi.org/10.5281/zenodo.1179547
Abstract
Download PDF DOI
We present examples of a wireless sensor network as applied to wearable digital music controllers. Recent advances in wireless Personal Area Networks (PANs) have precipitated the IEEE 802.15.4 standard for low-power, low-cost wireless sensor networks. We have applied this new technology to create a fully wireless, wearable network of accelerometers which are small enough to be hidden under clothing. Various motion analysis and machine learning techniques are applied to the raw accelerometer data in real-time to generate and control music on the fly.
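As a rough illustration only, a simple peak detector that turns a stream of accelerometer samples into note triggers; the threshold and simulated data are invented, and the actual system applies richer motion analysis and machine learning.

# Illustrative sketch only: naive hit detection on a stream of 3-axis accelerometer samples.
import numpy as np

def detect_hits(samples, threshold=1.8):
    """Return sample indices where acceleration magnitude crosses the threshold upward."""
    mags = np.linalg.norm(samples, axis=1)
    above = mags > threshold
    return np.nonzero(above[1:] & ~above[:-1])[0] + 1

stream = np.random.randn(200, 3) * 0.4
stream[50] = [2.5, 0.1, 0.3]          # a simulated sharp arm movement
for idx in detect_hits(stream):
    print("trigger note at sample", idx)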
@inproceedings{Godbehere2008, author = {Godbehere, Andrew B. and Ward, Nathan J.}, title = {Wearable Interfaces for Cyberphysical Musical Expression}, pages = {237--240}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179547}, url = {http://www.nime.org/proceedings/2008/nime2008_237.pdf}, keywords = {Wearable computing, personal area networks, accelerometers, 802.15.4, motion analysis, human-computer interaction, live performance, digital musical controllers, gestural control } }
Kouki Hayafuchi and Kenji Suzuki. 2008. MusicGlove: A Wearable Musical Controller for Massive Media Library. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 241–244. http://doi.org/10.5281/zenodo.1179561
Abstract
Download PDF DOI
This research aims to develop a wearable musical interface that enables the control of audio and video signals using hand gestures and body motions. We have been developing an audio-visual manipulation system that realizes track control, time-based operations and searching for tracks in a massive music library. It aims to build an emotional and affecting musical interaction, and to provide people with a better way of listening to music. A sophisticated glove-like device with an acceleration sensor and several strain sensors has been developed. Real-time signal processing and musical control are executed as a result of gesture recognition. We also developed a stand-alone device that performs as a musical controller and player at the same time. In this paper, we describe the development of a compact and sophisticated sensor device, and demonstrate its performance in controlling audio and video signals.
@inproceedings{Hayafuchi2008, author = {Hayafuchi, Kouki and Suzuki, Kenji}, title = {MusicGlove: A Wearable Musical Controller for Massive Media Library}, pages = {241--244}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179561}, url = {http://www.nime.org/proceedings/2008/nime2008_241.pdf}, keywords = {Embodied Sound Media, Music Controller, Gestures, Body Motion, Musical Interface } }
Michael Zbyszynski. 2008. An Elementary Method for Tablet. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 245–248. http://doi.org/10.5281/zenodo.1177461
Abstract
Download PDF DOI
This paper proposes the creation of a method book for tablet-based instruments, evaluating pedagogical materials for traditional instruments as well as research in human-computer interaction and tablet interfaces.
@inproceedings{Zbyszynski2008, author = {Zbyszynski, Michael}, title = {An Elementary Method for Tablet}, pages = {245--248}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1177461}, url = {http://www.nime.org/proceedings/2008/nime2008_245.pdf}, keywords = {Wacom tablet, digitizing tablet, expressivity, gesture, mapping, pedagogy, practice } }
Gerard Roma and Anna Xambó. 2008. A Tabletop Waveform Editor for Live Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 249–252. http://doi.org/10.5281/zenodo.1179621
Abstract
Download PDF DOI
We present an audio waveform editor that can be operated in real time through a tabletop interface. The system combines multi-touch and tangible interaction techniques in order to implement the metaphor of a toolkit that allows direct manipulation of a sound sample. The resulting instrument is well suited for live performance based on evolving loops.
@inproceedings{Roma2008, author = {Roma, Gerard and Xamb\'{o}, Anna}, title = {A Tabletop Waveform Editor for Live Performance}, pages = {249--252}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179621}, url = {http://www.nime.org/proceedings/2008/nime2008_249.pdf}, keywords = {tangible interface, tabletop interface, musical performance, interaction techniques } }
Andrea Valle. 2008. Integrated Algorithmic Composition Fluid systems for including notation in music composition cycle. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 253–256. http://doi.org/10.5281/zenodo.1179645
BibTeX
Download PDF DOI
@inproceedings{Valle2008a, author = {Valle, Andrea}, title = {Integrated Algorithmic Composition Fluid systems for including notation in music composition cycle}, pages = {253--256}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179645}, url = {http://www.nime.org/proceedings/2008/nime2008_253.pdf}, keywords = {algorithmic composition,automatic notation,nime08} }
Andrea Valle. 2008. GeoGraphy : a Real-Time, Graph-Based Composition Environment. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 257–260. http://doi.org/10.5281/zenodo.1179643
Abstract
Download PDF DOI
This paper is about GeoGraphy, a graph-based system for the control of both musical composition and interactive performance, and its implementation in a real-time, interactive application. The implementation includes a flexible user interface system.
@inproceedings{Valle2008, author = {Valle, Andrea}, title = {GeoGraphy : a Real-Time, Graph-Based Composition Environment}, pages = {257--260}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179643}, url = {http://www.nime.org/proceedings/2008/nime2008_257.pdf}, keywords = {a graph,composition,figure 1,interfaces,left,live coding,musical algorithmic composition,nime08,performance,vertex durations and coor-} }
Iannis Zannos. 2008. Multi-Platform Development of Audiovisual and Kinetic Installations. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 261–264. http://doi.org/10.5281/zenodo.1177459
Abstract
Download PDF DOI
In this paper, we describe the development of multi-platform tools for audiovisual and kinetic installations. These involve the connection of three development environments, Python, SuperCollider and Processing, in order to drive kinetic art installations and to combine these with digital synthesis of sound and image in real time. By connecting these three platforms via the OSC protocol, we enable the real-time control of analog physical media (a device that draws figures on sand), sound synthesis and image synthesis. We worked on the development of algorithms for drawing figures and synthesizing images and sound on all three platforms, and experimented with various mechanisms for coordinating synthesis and rendering in different media. Several problems were addressed: How to coordinate timing between different platforms? Which configuration to use: client-server (and if so, which part is the client and which the server?), equal partners, or mixed configurations? A library was developed in SuperCollider to enable the packaging of algorithms into modules with automatic generation of GUIs from specifications, and the saving of configurations of modules into session files as scripts in SuperCollider code. The application of this library as a framework for both driving graphic synthesis in Processing and receiving control data from it resulted in an environment for experimentation that is also being used successfully in teaching interactive audiovisual media.
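A minimal sketch of one leg of such a setup, assuming a running scsynth server with the stock "default" synthdef loaded: Python sends an OSC /s_new message to SuperCollider via the python-osc library. The paper's own SuperCollider library and module system are not reproduced here.

# Illustrative sketch only: controlling SuperCollider's synthesis server from Python over OSC.
from pythonosc.udp_client import SimpleUDPClient

sc = SimpleUDPClient("127.0.0.1", 57110)   # scsynth's default UDP port

# /s_new: synthdef name, node id (-1 = auto), add action, target group, then parameter pairs.
sc.send_message("/s_new", ["default", -1, 0, 0, "freq", 440, "amp", 0.2])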
@inproceedings{Zannos2008, author = {Zannos, Iannis}, title = {Multi-Platform Development of Audiovisual and Kinetic Installations}, pages = {261--264}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1177459}, url = {http://www.nime.org/proceedings/2008/nime2008_261.pdf}, keywords = {kinetic art, audiovisual installations, python, SuperCollider, Processing, algorithmic art, tools for multi-platform development } }
Greg Corness. 2008. Performer Model : Towards a Framework for Interactive Performance Based on Perceived Intention. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 265–268. http://doi.org/10.5281/zenodo.1179515
Abstract
Download PDF DOI
Through the development of tools for analyzing the performer's sonic and movement-based gestures, research into system-performer interaction has focused on the computer's ability to respond to the performer. Whereas such work shows interest within the community in developing an interaction paradigm modeled on the player, by focusing on the perception and reasoning of the system, this research assumes that the performer's manner of interaction is in agreement with this computational model. My study presents an alternative model of interaction designed for improvisatory performance, centered on the perception of the performer as understood by theories taken from performance practices and cognitive science.
@inproceedings{Corness2008, author = {Corness, Greg}, title = {Performer Model : Towards a Framework for Interactive Performance Based on Perceived Intention}, pages = {265--268}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179515}, url = {http://www.nime.org/proceedings/2008/nime2008_265.pdf}, keywords = {Interactive performance, Perception, HCI } }
Paulo C. Teles and Aidan Boyle. 2008. Developing an "Antigenous" Art Installation Based on a Touchless Endosystem Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 269–272. http://doi.org/10.5281/zenodo.1179637
BibTeX
Download PDF DOI
@inproceedings{Teles2008, author = {Teles, Paulo C. and Boyle, Aidan}, title = {Developing an "Antigenous" Art Installation Based on a Touchless Endosystem Interface}, pages = {269--272}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179637}, url = {http://www.nime.org/proceedings/2008/nime2008_269.pdf}, keywords = {nime08} }
Silvia Lanzalone. 2008. The ’Suspended Clarinet’ with the ’Uncaused Sound’ : Description of a Renewed Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 273–276. http://doi.org/10.5281/zenodo.1179587
BibTeX
Download PDF DOI
@inproceedings{Lanzalone2008, author = {Lanzalone, Silvia}, title = {The 'Suspended Clarinet' with the 'Uncaused Sound' : Description of a Renewed Musical Instrument}, pages = {273--276}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179587}, url = {http://www.nime.org/proceedings/2008/nime2008_273.pdf}, keywords = {nime08} }
Mitsuyo Hashida, Yosuke Ito, and Haruhiro Katayose. 2008. A Directable Performance Rendering System: Itopul. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 277–280. http://doi.org/10.5281/zenodo.1179559
Abstract
Download PDF DOI
One of the advantages of case-based systems is that they can generate expressions even if the user doesn't know how the system applies expression rules. However, such systems cannot avoid the problem of data sparseness and do not permit a user to improve the expression of a certain part of a melody directly. After discussing the functions required of a user-oriented interface for performance rendering systems, this paper proposes a directable case-based performance rendering system called Itopul. Itopul is characterized by 1) a combination of the phrasing model and the pulse model, 2) the use of a hierarchical music structure for avoiding the data sparseness problem, 3) visualization of the processing progress, and 4) music structures directly modifiable by the user.
@inproceedings{Hashida2008, author = {Hashida, Mitsuyo and Ito, Yosuke and Katayose, Haruhiro}, title = {A Directable Performance Rendering System: Itopul}, pages = {277--280}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179559}, url = {http://www.nime.org/proceedings/2008/nime2008_277.pdf}, keywords = {Performance Rendering, User Interface, Case-based Approach } }
William R. Hazlewood and Ian Knopke. 2008. Designing Ambient Musical Information Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 281–284. http://doi.org/10.5281/zenodo.1179563
Abstract
Download PDF DOI
In this work we describe our initial explorations in building a musical instrument specifically for providing listeners with simple, but useful, ambient information. The term Ambient Musical Information Systems (AMIS) is proposed to describe this kind of research. Instruments like these differ from standard musical instruments in that they are to be perceived indirectly from outside one's primary focus of attention. We describe our rationale for creating such a device, a discussion on the appropriate qualities of sound for delivering ambient information, and a description of an instrument created for use in a series of experiments that we will use to test out ideas. We conclude with a discussion of our initial findings, and some further directions we wish to explore.
@inproceedings{Hazlewood2008, author = {Hazlewood, William R. and Knopke, Ian}, title = {Designing Ambient Musical Information Systems}, pages = {281--284}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179563}, url = {http://www.nime.org/proceedings/2008/nime2008_281.pdf}, keywords = {Ambient Musical Information Systems, musical instruments, human computer interaction, Markov chain, probability, al- gorithmic composition } }
Aristotelis Hadjakos, Erwin Aitenbichler, and Max Mühlhäuser. 2008. The Elbow Piano : Sonification of Piano Playing Movements. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 285–288. http://doi.org/10.5281/zenodo.1179553
Abstract
Download PDF DOI
The Elbow Piano distinguishes two types of piano touch: a touch with movement in the elbow joint and a touch without. A played note is first mapped to the left or right hand by visual tracking. Custom-built goniometers attached to the player's arms are used to detect the type of touch. The two different types of touches are sonified by different instrument sounds. This gives the player an increased awareness of his elbow movements, which is considered valuable for piano education. We have implemented the system and evaluated it with a group of music students.
@inproceedings{Hadjakos2008, author = {Hadjakos, Aristotelis and Aitenbichler, Erwin and M\"{u}hlh\"{a}user, Max}, title = {The Elbow Piano : Sonification of Piano Playing Movements}, pages = {285--288}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179553}, url = {http://www.nime.org/proceedings/2008/nime2008_285.pdf}, keywords = {Piano, education, sonification, feedback, gesture. } }
Yoshinari Takegawa and Masahiko Tsukamoto. 2008. UnitKeyboard : An Easily Configurable Compact Clavier. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 289–292. http://doi.org/10.5281/zenodo.1179635
Abstract
Download PDF DOI
Musical keyboard instruments have a long history, which resulted in many kinds of keyboards (claviers) today. Since the hardware of conventional musical keyboards cannot be changed, such as the number of keys, musicians have to carry these large keyboards for playing music that requires only a small diapason. To solve this problem, the goal of our study is to construct UnitKeyboard, which has only 12 keys (7 white keys and 5 black keys) and connectors for docking with other UnitKeyboards. We can build various kinds of musical keyboard configurations by connecting one UnitKeyboard to others, since they have automatic settings for multiple keyboard instruments. We discuss the usability of the UnitKeyboard from reviews by several amateur and professional pianists who used the UnitKeyboard.
@inproceedings{Takegawa2008, author = {Takegawa, Yoshinari and Tsukamoto, Masahiko}, title = {UnitKeyboard : An Easily Configurable Compact Clavier}, pages = {289--292}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179635}, url = {http://www.nime.org/proceedings/2008/nime2008_289.pdf}, keywords = {Portable keyboard instruments, block interface, Automatic settings } }
Cléo Palacio-Quintin. 2008. Eight Years of Practice on the Hyper-Flute : Technological and Musical Perspectives. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 293–298. http://doi.org/10.5281/zenodo.1179609
Abstract
Download PDF DOI
After eight years of practice on the first hyper-flute prototype (a flute extended with sensors), this article presents a retrospective of its instrumental practice and the new developments planned from both technological and musical perspectives. Design, performance skills, and mapping strategies are discussed, as well as interactive composition and improvisation.
@inproceedings{PalacioQuintin2008, author = {Palacio-Quintin, Cl\'{e}o}, title = {Eight Years of Practice on the Hyper-Flute : Technological and Musical Perspectives}, pages = {293--298}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179609}, url = {http://www.nime.org/proceedings/2008/nime2008_293.pdf}, keywords = {composition,gestural control,hyper-flute,hyper-instruments,improvisation,interactive music,mapping,nime08,sensors} }
Edgar Berdahl and Julius O. Smith. 2008. A Tangible Virtual Vibrating String : A Physically Motivated Virtual Musical Instrument Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 299–302. http://doi.org/10.5281/zenodo.1179493
Abstract
Download PDF DOI
We introduce physically motivated interfaces for playing virtual musical instruments, and we suggest that they lie somewhere in between commonplace interfaces and haptic interfaces in terms of their complexity. Next, we review guitar-like interfaces, and we design an interface to a virtual string. The excitation signal and pitch are sensed separately using two independent string segments. These parameters control a two-axis digital waveguide virtual string, which models vibrations in the horizontal and vertical transverse axes as well as the coupling between them. Finally, we consider the advantages of using a multi-axis pickup for measuring the excitation signal.
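For orientation, a much-simplified single-axis relative of the two-axis digital waveguide described above, in the Karplus-Strong style; the constants are arbitrary and this is not the paper's model.

# Illustrative sketch only: a minimal single-axis plucked-string model.
import numpy as np

def plucked_string(freq=220.0, sr=44100, seconds=1.0, damping=0.996):
    n = int(sr / freq)                         # delay-line length sets the pitch
    delay = np.random.uniform(-1, 1, n)        # noise burst as excitation signal
    out = np.zeros(int(sr * seconds))
    for i in range(len(out)):
        out[i] = delay[i % n]
        # Averaging filter in the feedback loop damps high frequencies over time.
        delay[i % n] = damping * 0.5 * (delay[i % n] + delay[(i + 1) % n])
    return out

samples = plucked_string()
print(samples[:5])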
@inproceedings{Berdahl2008, author = {Berdahl, Edgar and Smith, Julius O.}, title = {A Tangible Virtual Vibrating String : A Physically Motivated Virtual Musical Instrument Interface}, pages = {299--302}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179493}, url = {http://www.nime.org/proceedings/2008/nime2008_299.pdf}, keywords = {physically motivated, physical, models, modeling, vibrating string, guitar, pitch detection, interface, excitation, coupled strings, haptic } }
Christian Geiger, Holger Reckter, David Paschke, Florian Schulz, and Cornelius Poepel. 2008. Towards Participatory Design and Evaluation of Theremin-based Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 303–306. http://doi.org/10.5281/zenodo.1179545
Abstract
Download PDF DOI
Being one of the earliest electronic instruments the basic principles of the Theremin have often been used to design new musical interfaces. We present the structured design and evaluation of a set of 3D interfaces for a virtual Theremin, the VRemin. The variants differ in the size of the interaction space, the interface complexity, and the applied IO devices. We conducted a formal evaluation based on the well-known AttrakDiff questionnaire for evaluating the hedonic and pragmatic quality of interactive products. The presented work is a first approach towards a participatory design process for musical interfaces that includes user evaluation at early design phases.
@inproceedings{Geiger2008, author = {Geiger, Christian and Reckter, Holger and Paschke, David and Schulz, Florian and Poepel, Cornelius}, title = {Towards Participatory Design and Evaluation of Theremin-based Musical Interfaces}, pages = {303--306}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179545}, url = {http://www.nime.org/proceedings/2008/nime2008_303.pdf}, keywords = {3d interaction techniques,an important concept for,both hands,evaluation,few wimp interface concepts,in contrast the use,make efficient use of,nime08,of both hands is,theremin-based interfaces} }
Tomás Henriques. 2008. META-EVI Innovative Performance Paths with a Wind Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 307–310. http://doi.org/10.5281/zenodo.1179565
BibTeX
Download PDF DOI
@inproceedings{Henriques2008, author = {Henriques, Tom\'{a}s}, title = {META-{EV}I Innovative Performance Paths with a Wind Controller}, pages = {307--310}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179565}, url = {http://www.nime.org/proceedings/2008/nime2008_307.pdf}, keywords = {computer music,musical instrument,nime08,sensor technologies} }
Robin Price and Pedro Rebelo. 2008. Database and Mapping Design for Audiovisual Prepared Radio Set Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 311–314. http://doi.org/10.5281/zenodo.1179615
BibTeX
Download PDF DOI
@inproceedings{Price2008, author = {Price, Robin and Rebelo, Pedro}, title = {Database and Mapping Design for Audiovisual Prepared Radio Set Installation}, pages = {311--314}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179615}, url = {http://www.nime.org/proceedings/2008/nime2008_311.pdf}, keywords = {Mapping, database, audiovisual, radio, installation art. } }
Kazuhiro Jo and Norihisa Nagano. 2008. Monalisa : "See the Sound , Hear the Image". Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 315–318. http://doi.org/10.5281/zenodo.1179569
Abstract
Download PDF DOI
Monalisa is a software platform that enables users to "see the sound, hear the image". It consists of three pieces of software, Monalisa Application, Monalisa-Audio Unit, and Monalisa-Image Unit, and an installation: Monalisa "shadow of the sound". In this paper, we describe the implementation of each piece of software and of the installation, with an explanation of the basic algorithms used to treat image data and sound data transparently.
@inproceedings{Jo2008, author = {Jo, Kazuhiro and Nagano, Norihisa}, title = {Monalisa : "See the Sound , Hear the Image"}, pages = {315--318}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179569}, url = {http://www.nime.org/proceedings/2008/nime2008_315.pdf}, keywords = {Sound and Image Processing Software, Plug-in, Installation } }
Andrew Robertson, Mark D. Plumbley, and Nick Bryan-Kinns. 2008. A Turing Test for B-Keeper : Evaluating an Interactive. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 319–324. http://doi.org/10.5281/zenodo.1179619
BibTeX
Download PDF DOI
@inproceedings{Robertson2008, author = {Robertson, Andrew and Plumbley, Mark D. and Bryan-Kinns, Nick}, title = {A Turing Test for B-Keeper : Evaluating an Interactive}, pages = {319--324}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179619}, url = {http://www.nime.org/proceedings/2008/nime2008_319.pdf}, keywords = {Automatic Accompaniment, Beat Tracking, Human-Computer Interaction, Musical Interface Evaluation } }
Gabriel Gatzsche, Markus Mehnert, and Christian Stöcklmeier. 2008. Interaction with Tonal Pitch Spaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 325–330. http://doi.org/10.5281/zenodo.1179541
Abstract
Download PDF DOI
In this paper, we present a pitch space based musical interface approach. A pitch space arranges tones in such a way that meaningful tone combinations can be generated easily. Using a touch-sensitive surface or a 3D joystick, a player can move through the pitch space and create the desired sound by selecting tones. The better the tones are arranged geometrically, the fewer control parameters are required to move through the space and to select the desired pitches. The quality of pitch space based musical interfaces therefore depends on two factors: first, how the tones are organized within the pitch space, and second, how the parameters of a given controller are used to move through the space and to select pitches. This paper presents a musical interface based on a tonal pitch space derived from a four-dimensional model found by music psychologists [11], [2]. The proposed pitch space particularly eases the creation of tonal harmonic music and, at the same time, reflects principles of music psychology and music theory.
@inproceedings{Gatzsche2008, author = {Gatzsche, Gabriel and Mehnert, Markus and St\"{o}cklmeier, Christian}, title = {Interaction with Tonal Pitch Spaces}, pages = {325--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179541}, url = {http://www.nime.org/proceedings/2008/nime2008_325.pdf}, keywords = {Pitch space, musical interface, Carol L. Krumhansl, music psychology, music theory, western tonal music, 3D tonality model, spiral of thirds, 3D, Hardware controller, Symmetry model } }
Parag Chordia and Alex Rae. 2008. Real-Time Raag Recognition for Interactive Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 331–334. http://doi.org/10.5281/zenodo.1179509
Abstract
Download PDF DOI
We describe a system that can listen to a performance of Indian music and recognize the raag, the fundamental melodic framework that Indian classical musicians improvise within. In addition to determining the most likely raag being performed, the system displays the estimated likelihood of each of the other possible raags, visualizing the changes over time. The system computes the pitch-class distribution and uses a Bayesian decision rule to classify the resulting twelve-dimensional feature vector, where each feature represents the relative use of each pitch class. We show that the system achieves high performance on a variety of sources, making it a viable tool for interactive performance.
@inproceedings{Chordia2008, author = {Chordia, Parag and Rae, Alex}, title = {Real-Time Raag Recognition for Interactive Music}, pages = {331--334}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179509}, url = {http://www.nime.org/proceedings/2008/nime2008_331.pdf}, keywords = {automatic recognition,indian music,nime08,raag,raga} }
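The Chordia and Rae abstract above sketches a concrete pipeline: a twelve-dimensional pitch-class distribution classified with a Bayesian decision rule. The short Python sketch below illustrates that general idea only; the profile numbers and labels are invented placeholders rather than the paper's data, and the multinomial log-likelihood with equal priors is an assumed stand-in for the authors' exact decision rule.

import math
from collections import Counter

# Toy reference pitch-class profiles (probabilities over the 12 pitch classes).
# These numbers are invented for illustration and do not come from the paper.
RAAG_PROFILES = {
    "raag_a": [0.20, 0.01, 0.14, 0.01, 0.13, 0.02, 0.12, 0.15, 0.01, 0.10, 0.01, 0.10],
    "raag_b": [0.18, 0.12, 0.02, 0.13, 0.02, 0.12, 0.02, 0.15, 0.12, 0.02, 0.10, 0.00],
}

def classify_raag(pitch_classes, profiles=RAAG_PROFILES, eps=1e-6):
    """Return (best_label, log_likelihoods) for a list of pitch classes (0-11)."""
    counts = Counter(pitch_classes)
    scores = {}
    for name, profile in profiles.items():
        # Multinomial log-likelihood of the observed counts under this profile;
        # with equal priors this amounts to a simple Bayesian decision rule.
        scores[name] = sum(counts[pc] * math.log(profile[pc] + eps) for pc in range(12))
    return max(scores, key=scores.get), scores

observed = [0, 4, 7, 11, 7, 4, 0, 2, 4, 7, 9, 11, 0]  # toy stream of detected pitch classes
print(classify_raag(observed))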
Anders Vinjar. 2008. Bending Common Music with Physical Models. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 335–338. http://doi.org/10.5281/zenodo.1179647
Abstract
Download PDF DOI
A general CAC environment charged with physical-modelling capabilities is described. It combines CommonMusic, ODE and Fluxus in a modular way, making a powerful and flexible environment for experimenting with physical models in composition. Composition in this respect refers to the generation and manipulation of structure, typically at or above the level of a note, phrase or voice. Compared to efforts in synthesis and performance, little work has gone into applying physical models to composition, and the potential of such applications is presumably large. The implementation of the physically equipped CAC environment is described in detail.
@inproceedings{Vinjar2008, author = {Vinjar, Anders}, title = {Bending Common Music with Physical Models}, pages = {335--338}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179647}, url = {http://www.nime.org/proceedings/2008/nime2008_335.pdf}, keywords = {Physical Models in composition, CommonMusic, Musical mapping } }
Margaret Schedel, Alison Rootberg, and Elizabeth de Martelly. 2008. Scoring an Interactive, Multimedia Performance Work. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 339–342. http://doi.org/10.5281/zenodo.1179625
Abstract
Download PDF DOI
The Color of Waiting is an interactive theater work with music, dance, and video which was developed at STEIM in Amsterdam and further refined at CMMAS in Morelia, Mexico with funding from Meet the Composer. Using Max/MSP/Jitter, a cellist is able to control sound and video during the performance while performing a structured improvisation in response to the dancer's movement. In order to ensure repeated performances of The Color of Waiting, Kinesthetech Sense created the score contained in this paper. Performance is essential to the practice of time-based art as a living form, but has been complicated by the unique challenges in interpretation and re-creation posed by works incorporating technology. Creating a detailed score is one of the ways artists working with technology can combat obsolescence.
@inproceedings{Schedel2008, author = {Schedel, Margaret and Rootberg, Alison and de Martelly, Elizabeth}, title = {Scoring an Interactive, Multimedia Performance Work}, pages = {339--342}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179625}, url = {http://www.nime.org/proceedings/2008/nime2008_339.pdf}, keywords = {nime08} }
Ayaka Endo and Yasuo Kuhara. 2008. Rhythmic Instruments Ensemble Simulator Generating Animation Movies Using Bluetooth Game Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 345–346. http://doi.org/10.5281/zenodo.1179529
Abstract
Download PDF DOI
We developed a rhythmic instruments ensemble simulator that generates animation using game controllers. The motion of a player is transformed into MIDI musical expression data to generate sounds, and the MIDI data are transformed into animation control parameters to generate movies. These animations and the music are presented as a reflection of the players' performance. Multiple players can perform a musical ensemble to create more varied patterns of animation. Our system is simple enough that anyone can enjoy performing a fusion of music and animation.
@inproceedings{Endo2008, author = {Endo, Ayaka and Kuhara, Yasuo}, title = {Rhythmic Instruments Ensemble Simulator Generating Animation Movies Using {Bluetooth} Game Controller}, pages = {345--346}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179529}, url = {http://www.nime.org/proceedings/2008/nime2008_345.pdf}, keywords = {Wii Remote, Wireless game controller, MIDI, Max/MSP, Flash movie, Gesture music and animation. } }
Keith A. McMillen. 2008. Stage-Worthy Sensor Bows for Stringed Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 347–348. http://doi.org/10.5281/zenodo.1179597
Abstract
Download PDF DOI
We demonstrate a series of properly weighted and balanced Bluetooth sensor bows for violin, viola, cello, and bass.
@inproceedings{McMillen2008, author = {McMillen, Keith A.}, title = {Stage-Worthy Sensor Bows for Stringed Instruments}, pages = {347--348}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179597}, url = {http://www.nime.org/proceedings/2008/nime2008_347.pdf}, keywords = {Sensor bow, stringed instruments, bluetooth } }
Lesley Flanigan and Andrew Doro. 2008. Plink Jet. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 349–351. http://doi.org/10.5281/zenodo.1179533
Abstract
Download PDF DOI
Plink Jet is a robotic musical instrument made from scavenged inkjet printers and guitar parts. We investigate the expressive capabilities of everyday machine technology by recontextualizing the relatively high-tech mechanisms of typical office debris into an electro-acoustic musical instrument. We also explore the performative relationship between human and machine.
@inproceedings{Flanigan2008, author = {Flanigan, Lesley and Doro, Andrew}, title = {Plink Jet}, pages = {349--351}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179533}, url = {http://www.nime.org/proceedings/2008/nime2008_349.pdf}, keywords = {Interaction Design, Repurposing of Consumer Technology, DIY, Performing Technology, Robotics, Automation, Infra-Instrument } }
Yusuke Kamiyama, Mai Tanaka, and Hiroya Tanaka. 2008. Oto-Shigure : An Umbrella-Shaped Sound Generator for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 352–353. http://doi.org/10.5281/zenodo.1179575
BibTeX
Download PDF DOI
@inproceedings{Kamiyama2008, author = {Kamiyama, Yusuke and Tanaka, Mai and Tanaka, Hiroya}, title = {Oto-Shigure : An Umbrella-Shaped Sound Generator for Musical Expression}, pages = {352--353}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179575}, url = {http://www.nime.org/proceedings/2008/nime2008_352.pdf}, keywords = {umbrella, musical expression, sound generating device, 3D sound system, sound-field arrangement. } }
Sean Follmer, Chris Warren, and Adnan Marquez-Borbon. 2008. The Pond : Interactive Multimedia Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 354–355. http://doi.org/10.5281/zenodo.1179535
BibTeX
Download PDF DOI
@inproceedings{Follmer2008, author = {Follmer, Sean and Warren, Chris and Marquez-Borbon, Adnan}, title = {The Pond : Interactive Multimedia Installation}, pages = {354--355}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179535}, url = {http://www.nime.org/proceedings/2008/nime2008_354.pdf}, keywords = {nime08} }
Ethan Hartman, Jeff Cooper, and Kyle Spratt. 2008. Swing Set : Musical Controllers with Inherent Physical Dynamics. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 356–357. http://doi.org/10.5281/zenodo.1179557
BibTeX
Download PDF DOI
@inproceedings{Hartman2008, author = {Hartman, Ethan and Cooper, Jeff and Spratt, Kyle}, title = {Swing Set : Musical Controllers with Inherent Physical Dynamics}, pages = {356--357}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179557}, url = {http://www.nime.org/proceedings/2008/nime2008_356.pdf}, keywords = {nime08} }
Paul Modler and Tony Myatt. 2008. Video Based Recognition of Hand Gestures by Neural Networks for the Control of Sound and Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 358–359. http://doi.org/10.5281/zenodo.1179601
BibTeX
Download PDF DOI
@inproceedings{Modler2008, author = {Modler, Paul and Myatt, Tony}, title = {Video Based Recognition of Hand Gestures by Neural Networks for the Control of Sound and Music}, pages = {358--359}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179601}, url = {http://www.nime.org/proceedings/2008/nime2008_358.pdf}, keywords = {nime08} }
Kenji Suzuki, Miho Kyoya, Takahiro Kamatani, and Toshiaki Uchiyama. 2008. beacon : Embodied Sound Media Environment for Socio-Musical Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 360–361. http://doi.org/10.5281/zenodo.1179633
Abstract
Download PDF DOI
This research aims to develop a novel instrument for socio-musical interaction in which a number of participants can produce sounds with their feet in collaboration with each other. The developed instrument, beacon, is regarded as an embodied sound media product that provides an interactive environment around it. The beacon produces laser beams that lie on the ground and rotate, and audio sounds are produced when the beams pass an individual performer's foot. As the performers are able to control the pitch and sound length according to the location and angle of the foot facing the instrument, the performer's body motion and foot behavior can be translated into sound and music in an intuitive manner.
@inproceedings{Suzuki2008, author = {Suzuki, Kenji and Kyoya, Miho and Kamatani, Takahiro and Uchiyama, Toshiaki}, title = {beacon : Embodied Sound Media Environment for Socio-Musical Interaction}, pages = {360--361}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179633}, url = {http://www.nime.org/proceedings/2008/nime2008_360.pdf}, keywords = {Embodied sound media, Hyper-instrument, Laser beams } }
Eva Sjuve. 2008. Prototype GO : Wireless Controller for Pure Data. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 362–363. http://doi.org/10.5281/zenodo.1179629
Abstract
Download PDF DOI
This paper describes the development of GO, a wireless wearable controller for both sound processing and interaction with wearable lights. Pure Data is used for sound processing. The GO prototype is built around a PIC microcontroller and uses various sensors to capture information from physical movements.
@inproceedings{Sjuve2008, author = {Sjuve, Eva}, title = {Prototype GO : Wireless Controller for Pure Data}, pages = {362--363}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179629}, url = {http://www.nime.org/proceedings/2008/nime2008_362.pdf}, keywords = {Wireless controller, Pure Data, Gestural interface, Interactive Lights. } }
Robert Macrae and Simon Dixon. 2008. From Toy to Tutor : Note-Scroller is a Game to Teach Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 364–365. http://doi.org/10.5281/zenodo.1179593
BibTeX
Download PDF DOI
@inproceedings{Macrae2008, author = {Macrae, Robert and Dixon, Simon}, title = {From Toy to Tutor : Note-Scroller is a Game to Teach Music}, pages = {364--365}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179593}, url = {http://www.nime.org/proceedings/2008/nime2008_364.pdf}, keywords = {Graphical Interface, Computer Game, MIDI Display } }
Stuart Favilla, Joanne Cannon, Tony Hicks, Dale Chant, and Paris Favilla. 2008. Gluisax : Bent Leather Band’s Augmented Saxophone Project. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 366–369. http://doi.org/10.5281/zenodo.1179531
Abstract
Download PDF DOI
This demonstration presents three new augmented and meta-saxophone interfaces/instruments built by the Bent Leather Band. The instruments are designed for virtuosic live performance and make use of Sukandar Kartadinata's Gluion [OSC] interfaces. The project rationale and the research outcomes from the first twelve months are discussed. The instruments/interfaces described include the Gluisop, Gluialto and Leathersop.
@inproceedings{Favilla2008, author = {Favilla, Stuart and Cannon, Joanne and Hicks, Tony and Chant, Dale and Favilla, Paris}, title = {Gluisax : Bent Leather Band's Augmented Saxophone Project}, pages = {366--369}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179531}, url = {http://www.nime.org/proceedings/2008/nime2008_366.pdf}, keywords = {Augmented saxophone, Gluion, OSC, virtuosic performance systems } }
Staas de Jong. 2008. The Cyclotactor : Towards a Tactile Platform for Musical Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 370–371. http://doi.org/10.5281/zenodo.1179571
BibTeX
Download PDF DOI
@inproceedings{DeJong2008, author = {de Jong, Staas}, title = {The Cyclotactor : Towards a Tactile Platform for Musical Interaction}, pages = {370--371}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179571}, url = {http://www.nime.org/proceedings/2008/nime2008_370.pdf}, keywords = {nime08} }
Michiel Demey, Marc Leman, Frederick Bossuyt, and Jan Vanfleteren. 2008. The Musical Synchrotron : Using Wireless Motion Sensors to Study How Social Interaction Affects Synchronization with Musical Tempo. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 372–373. http://doi.org/10.5281/zenodo.1179521
Abstract
Download PDF DOI
The Musical Synchrotron is a software interface that connects wireless motion sensors to a real-time interactive environment (Pure Data, Max/MSP). In addition to measuring movement, the system provides audio playback and visual feedback. The Musical Synchrotron outputs a score indicating the degree to which synchronization with the presented music is successful. The interface has been used to measure how people move in response to music, and the system has been used for experiments at public events.
@inproceedings{Demey2008, author = {Demey, Michiel and Leman, Marc and Bossuyt, Frederick and Vanfleteren, Jan}, title = {The Musical Synchrotron : Using Wireless Motion Sensors to Study How Social Interaction Affects Synchronization with Musical Tempo}, pages = {372--373}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2008}, address = {Genoa, Italy}, issn = {2220-4806}, doi = {10.5281/zenodo.1179521}, url = {http://www.nime.org/proceedings/2008/nime2008_372.pdf}, keywords = {Wireless sensors, tempo perception, social interaction, music and movement, embodied music cognition } }
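The Musical Synchrotron abstract mentions that the system outputs a score for how well participants synchronize with the music. One generic way to compute such a score, assumed here purely for illustration and not claimed to be the paper's metric, is the mean resultant length of movement onsets mapped to phase within the beat:

import math

def sync_score(onset_times, beat_period, beat_phase=0.0):
    """Synchronization score in [0, 1]: the mean resultant length of the
    movement onsets mapped to phase within the beat (1 = perfectly locked)."""
    if not onset_times:
        return 0.0
    angles = [2 * math.pi * ((t - beat_phase) % beat_period) / beat_period
              for t in onset_times]
    x = sum(math.cos(a) for a in angles) / len(angles)
    y = sum(math.sin(a) for a in angles) / len(angles)
    return math.hypot(x, y)

# Toy usage: taps that land almost exactly on a 0.5 s beat score close to 1.
print(sync_score([0.01, 0.51, 1.00, 1.52, 2.01], beat_period=0.5))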
2007
Randy Jones and Andrew Schloss. 2007. Controlling a Physical Model with a 2D Force Matrix. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 27–30. http://doi.org/10.5281/zenodo.1177131
Abstract
Download PDF DOI
Physical modeling has proven to be a successful method of synthesizing highly expressive sounds. However, providing deep methods of real-time musical control remains a major challenge. In this paper we describe our work towards an instrument for percussion synthesis, in which a waveguide mesh is both excited and damped by a 2D matrix of forces from a sensor. By emulating a drum skin both as controller and sound generator, our instrument has reproduced some of the expressive qualities of hand drumming. Details of our implementation are discussed, as well as qualitative results and experience gleaned from live performances.
@inproceedings{Jones2007, author = {Jones, Randy and Schloss, Andrew}, title = {Controlling a Physical Model with a {2D} Force Matrix}, pages = {27--30}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177131}, url = {http://www.nime.org/proceedings/2007/nime2007_027.pdf}, keywords = {Physical modeling, instrument design, expressive control, multi-touch, performance } }
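The Jones and Schloss abstract describes a waveguide mesh that is both excited and damped by a 2D matrix of sensed forces. The sketch below illustrates that coupling with a plain finite-difference membrane used as a stand-in for the paper's waveguide mesh; the constants, mesh size, periodic boundaries and force-to-damping scaling are arbitrary assumptions, not the authors' implementation.

import numpy as np

def mesh_step(u, u_prev, force, c2=0.25, base_damp=0.001, touch_damp=0.05):
    """One finite-difference update of a 2D membrane, excited and locally
    damped by a 2D force matrix (periodic boundaries, for brevity)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    damp = base_damp + touch_damp * force       # pressing harder damps more
    u_next = (2.0 * u - u_prev + c2 * lap) * (1.0 - damp)
    u_next += 0.01 * force                      # the same force also injects energy
    return u_next, u

# Toy usage: a single 'finger' pressing near the centre of a 16x16 mesh.
u, u_prev = np.zeros((16, 16)), np.zeros((16, 16))
force = np.zeros((16, 16))
force[8, 8] = 1.0
for _ in range(200):
    u, u_prev = mesh_step(u, u_prev, force)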
Niels Bottcher, Steven Gelineck, and Stefania Serafin. 2007. PHYSMISM : A Control Interface for Creative Exploration of Physical Models. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 31–36. http://doi.org/10.5281/zenodo.1177051
Abstract
Download PDF DOI
In this paper we describe the design and implementation of the PHYSMISM: an interface for exploring the possibilities for improving the creative use of physical modelling sound synthesis. The PHYSMISM is implemented in a software and hardware version. Moreover, four different physical modelling techniques are implemented, to explore the implications of using and combining different techniques. In order to evaluate the creative use of physical models, a test was performed using 11 experienced musicians as test subjects. Results show that the capability of combining the physical models and the use of a physical interface engaged the musicians in creative exploration of physical models.
@inproceedings{B2007, author = {Bottcher, Niels and Gelineck, Steven and Serafin, Stefania}, title = {{PHY}SMISM : A Control Interface for Creative Exploration of Physical Models}, pages = {31--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177051}, url = {http://www.nime.org/proceedings/2007/nime2007_031.pdf}, keywords = {Physical models, hybrid instruments, excitation, resonator. } }
Katarzyna Chuchacz, Sile O’Modhrain, and Roger Woods. 2007. Physical Models and Musical Controllers – Designing a Novel Electronic Percussion Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 37–40. http://doi.org/10.5281/zenodo.1177071
Abstract
Download PDF DOI
A novel electronic percussion synthesizer prototype is presented. Our ambition is to design an instrument that will produce a high quality, realistic sound based on a physical modelling sound synthesis algorithm. This is achieved using a real-time Field Programmable Gate Array (FPGA) implementation of the model coupled to an interface that aims to make efficient use of all the subtle nuanced gestures of the instrumentalist. It is based on a complex physical model of the vibrating plate — the source of sound in the majority of percussion instruments. A Xilinx Virtex II Pro FPGA core handles the sound synthesis computations with a performance of 8 billion operations per second and has been designed to allow a high level of control and flexibility. Strategies are also presented that allow the parametric space of the model to be mapped to the playing gestures of the percussionist.
@inproceedings{Chuchacz2007, author = {Chuchacz, Katarzyna and O'Modhrain, Sile and Woods, Roger}, title = {Physical Models and Musical Controllers -- Designing a Novel Electronic Percussion Instrument}, pages = {37--40}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177071}, url = {http://www.nime.org/proceedings/2007/nime2007_037.pdf}, keywords = {Physical Model, Electronic Percussion Instrument, FPGA. } }
David Wessel, Rimas Avizienis, Adrian Freed, and Matthew Wright. 2007. A Force Sensitive Multi-Touch Array Supporting Multiple 2-D Musical Control Structures. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 41–45. http://doi.org/10.5281/zenodo.1179479
Abstract
Download PDF DOI
We describe the design, implementation, and evaluation with musical applications of force sensitive multi-touch arrays of touchpads. Each of the touchpads supports a three dimensional representation of musical material: two spatial dimensions plus a force measurement we typically use to control dynamics. We have developed two pad systems, one with 24 pads and a second with 2 arrays of 16 pads each. We emphasize the treatment of gestures as sub-sampled audio signals. This tight coupling of gesture with audio provides for a high degree of control intimacy. Our experiments with the pad arrays demonstrate that we can efficiently deal with large numbers of audio encoded gesture channels – 72 for the 24 pad array and 96 for the two 16 pad arrays.
@inproceedings{Wessel2007, author = {Wessel, David and Avizienis, Rimas and Freed, Adrian and Wright, Matthew}, title = {A Force Sensitive Multi-Touch Array Supporting Multiple {2-D} Musical Control Structures}, pages = {41--45}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1179479}, url = {http://www.nime.org/proceedings/2007/nime2007_041.pdf}, keywords = {Pressure and force sensing, High-resolution gestural signals, Touchpad, VersaPad.} }
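A central point in the Wessel et al. abstract is treating gesture channels as sub-sampled audio signals. The fragment below shows one minimal way to realize that idea, simple linear interpolation of a low-rate gesture stream onto the audio sample grid; the function name and sample rates are illustrative assumptions, not CNMAT's implementation.

import numpy as np

def gesture_to_audio_rate(gesture, gesture_rate, audio_rate=48000):
    """Linearly interpolate a low-rate gesture stream (e.g. per-pad force
    samples) onto the audio sample grid, so that it can be filtered, mixed
    and routed exactly like an audio signal."""
    t_gesture = np.arange(len(gesture)) / gesture_rate
    t_audio = np.arange(0.0, t_gesture[-1], 1.0 / audio_rate)
    return np.interp(t_audio, t_gesture, gesture)

# Toy usage: 200 Hz force readings become a 48 kHz control signal.
force_readings = np.abs(np.sin(np.linspace(0.0, 3.0, 200)))
control = gesture_to_audio_rate(force_readings, gesture_rate=200)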
Angela Chang and Hiroshi Ishii. 2007. Zstretch : A Stretchy Fabric Music Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 46–49. http://doi.org/10.5281/zenodo.1177067
Abstract
Download PDF DOI
We present Zstretch, a textile music controller that supports expressive haptic interactions. The musical controller takes advantage of the fabric's topological constraints to enable proportional control of musical parameters. This novel interface explores ways in which one might treat music as a sheet of cloth. This paper proposes an approach to engage simple technologies for supporting ordinary hand interactions. We show that this combination of basic technology with general tactile movements can result in an expressive musical interface.
@inproceedings{Chang2007, author = {Chang, Angela and Ishii, Hiroshi}, title = {Zstretch : A Stretchy Fabric Music Controller}, pages = {46--49}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177067}, url = {http://www.nime.org/proceedings/2007/nime2007_046.pdf}, keywords = {Tangible interfaces, textiles, tactile design, musical expressivity } }
Juno Kim, Greg Schiemer, and Terumi Narushima. 2007. Oculog : Playing with Eye Movements. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 50–55. http://doi.org/10.5281/zenodo.1177145
Abstract
Download PDF DOI
In this paper, we describe the musical development of a new system for performing electronic music in which a video-based eye movement recording system, known as Oculog, is used to control sound. Its development is discussed against a background that includes a brief history of biologically based interfaces for performing music, together with a survey of various recording systems currently in use for monitoring eye movement in clinical applications. Oculog is discussed with specific reference to its implementation as a performance interface for electronic music. A new work features algorithms driven by eye movement responses, allows the user to interact with audio synthesis, and introduces new possibilities for microtonal performance. The discussion reflects on an earlier technological paradigm and concludes by reviewing possibilities for future development.
@inproceedings{Kim2007, author = {Kim, Juno and Schiemer, Greg and Narushima, Terumi}, title = {Oculog : Playing with Eye Movements}, pages = {50--55}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177145}, url = {http://www.nime.org/proceedings/2007/nime2007_050.pdf}, keywords = {1,algorithmic composition,expressive control interfaces,eye movement recording,microtonal tuning,midi,nime07,pure data,video} }
Antonio Camurri, Corrado Canepa, and Gualtiero Volpe. 2007. Active Listening to a Virtual Orchestra Through an Expressive Gestural Interface : The Orchestra Explorer. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 56–61. http://doi.org/10.5281/zenodo.1177059
Abstract
Download PDF DOI
In this paper, we present a new system, the Orchestra Explorer, enabling a novel paradigm for active fruition of sound and music content. The Orchestra Explorer allows users to physically navigate inside a virtual orchestra, to actively explore the music piece the orchestra is playing, to modify and mold the sound and music content in real-time through their expressive full-body movement and gesture. An implementation of the Orchestra Explorer was developed and presented in the framework of the science exhibition Cimenti di Invenzione e Armonia, held at Casa Paganini, Genova, from October 2006 to January 2007.
@inproceedings{Camurri2007, author = {Camurri, Antonio and Canepa, Corrado and Volpe, Gualtiero}, title = {Active Listening to a Virtual Orchestra Through an Expressive Gestural Interface : The Orchestra Explorer}, pages = {56--61}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177059}, url = {http://www.nime.org/proceedings/2007/nime2007_056.pdf}, keywords = {Active listening of music, expressive interfaces, full-body motion analysis and expressive gesture processing, multimodal interactive systems for music and performing arts applications. } }
Bo Bell, Jim Kleban, Dan Overholt, Lance Putnam, John Thompson, and JoAnn Morin-Kuchera. 2007. The Multimodal Music Stand. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 62–65. http://doi.org/10.5281/zenodo.1177039
Abstract
Download PDF DOI
We present the Multimodal Music Stand (MMMS) for the untethered sensing of performance gestures and the interactive control of music. Using e-field sensing, audio analysis, and computer vision, the MMMS captures a performer’s continuous expressive gestures and robustly identifies discrete cues in a musical performance. Continuous and discrete gestures are sent to an interactive music system featuring custom designed software that performs real-time spectral transformation of audio.
@inproceedings{Bell2007, author = {Bell, Bo and Kleban, Jim and Overholt, Dan and Putnam, Lance and Thompson, John and Morin-Kuchera, JoAnn}, title = {The Multimodal Music Stand}, pages = {62--65}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177039}, url = {http://www.nime.org/proceedings/2007/nime2007_062.pdf}, keywords = {Multimodal, interactivity, computer vision, e-field sensing, untethered control. } }
Joseph Malloch and Marcelo M. Wanderley. 2007. The T-Stick : From Musical Interface to Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 66–69. http://doi.org/10.5281/zenodo.1177175
Abstract
Download PDF DOI
This paper describes the T-Stick, a new family of digital musical instruments. It presents the motivation behind the project, the hardware and software design, and insights gained through collaboration with performers who have collectively practised and performed with the T-Stick for hundreds of hours, and with composers who have written pieces for the instrument in the context of McGill University's Digital Orchestra project. Each of the T-Sticks is based on the same general structure and sensing platform, but each also differs from its siblings in size, weight, timbre and range.
@inproceedings{Malloch2007, author = {Malloch, Joseph and Wanderley, Marcelo M.}, title = {The T-Stick : From Musical Interface to Musical Instrument}, pages = {66--69}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177175}, url = {http://www.nime.org/proceedings/2007/nime2007_066.pdf}, keywords = {gestural controller, digital musical instrument, families of instruments } }
Garth Paine, Ian Stevenson, and Angela Pearce. 2007. The Thummer Mapping Project (ThuMP). Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 70–77. http://doi.org/10.5281/zenodo.1177217
Abstract
Download PDF DOI
This paper presents the Thummer Mapping Project (ThuMP), an industry partnership project between ThumMotion P/L and The University of Western Sydney (UWS). ThuMP sought to develop mapping strategies for new interfaces for musical expression (NIME), specifically the ThummerTM, which provides thirteen simultaneous degrees of freedom. This research presents a new approach to the mapping problem, resulting from a primary design research phase and a prototype testing and evaluation phase. In order to establish an underlying design approach for the ThummerTM mapping strategies, a number of interviews were carried out with high-level acoustic instrumental performers, the majority of whom play with the Sydney Symphony Orchestra, Sydney, Australia. Mapping strategies were developed from analysis of these interviews and then evaluated in trial usability testing.
@inproceedings{Paine2007, author = {Paine, Garth and Stevenson, Ian and Pearce, Angela}, title = {The Thummer Mapping Project (ThuMP)}, pages = {70--77}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177217}, url = {http://www.nime.org/proceedings/2007/nime2007_070.pdf}, keywords = {Musical Instrument Design, Mapping, Musicianship, evaluation, testing. } }
Nicolas d’Alessandro and Thierry Dutoit. 2007. HandSketch Bi-Manual Controller Investigation on Expressive Control Issues of an Augmented Tablet. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 78–81. http://doi.org/10.5281/zenodo.1177027
Abstract
Download PDF DOI
In this paper, we present a new bi-manual gestural controller, called HandSketch, composed of readily purchasable devices: a pen tablet and pressure-sensing surfaces. It aims at achieving real-time manipulation of several continuous and articulated aspects of pitched sound synthesis, with a focus on expressive voice. Both preferred and non-preferred hand issues are discussed. Concrete playing diagrams and mapping strategies are described. These results are integrated and a compact controller is proposed.
@inproceedings{dAlessandro2007, author = {d'Alessandro, Nicolas and Dutoit, Thierry}, title = {HandSketch Bi-Manual Controller Investigation on Expressive Control Issues of an Augmented Tablet}, pages = {78--81}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177027}, url = {http://www.nime.org/proceedings/2007/nime2007_078.pdf}, keywords = {Pen tablet, FSR, bi-manual gestural control. } }
Yoshinari Takegawa and Tsutomu Terada. 2007. Mobile Clavier : New Music Keyboard for Flexible Key Transpose. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 82–87. http://doi.org/10.5281/zenodo.1177255
Abstract
Download PDF DOI
Musical performers need to show off their virtuosity for self-expression and to communicate with other people; therefore, they need to be prepared to perform at any time and anywhere. However, a musical keyboard of 88 keys is too large and too heavy to carry around, and when a portable keyboard small enough to carry is played over a wide range, the notes being played frequently fall outside the diapason of the keyboard. Conventional portable keyboards commonly offer a Key Transpose function, which shifts the diapason of the keyboard, but this function creates several problems, such as the feeling of discomfort caused by the misalignment between the keying positions and their output sounds. The goal of our study is therefore to construct Mobile Clavier, which enables the diapason to be changed smoothly. Mobile Clavier resolves the problems with Key Transpose by inserting black keys between any two side-by-side white keys. This paper also discusses how effective Mobile Clavier was in an experiment conducted with professional pianists. With Mobile Clavier, we can play music at any time and anywhere.
@inproceedings{Takegawa2007, author = {Takegawa, Yoshinari and Terada, Tsutomu}, title = {Mobile Clavier : New Music Keyboard for Flexible Key Transpose}, pages = {82--87}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177255}, url = {http://www.nime.org/proceedings/2007/nime2007_082.pdf}, keywords = {Portable keyboard, Additional black keys, Diapason change } }
Mikko Ojanen, Jari Suominen, Titti Kallio, and Kai Lassfolk. 2007. Design Principles and User Interfaces of Erkki Kurenniemi’s Electronic Musical Instruments of the 1960’s and 1970’s. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 88–93. http://doi.org/10.5281/zenodo.1177211
Abstract
Download PDF DOI
This paper presents a line of historic electronic musical instruments designed by Erkki Kurenniemi in the 1960's and 1970's. Kurenniemi's instruments were influenced by digital logic and an experimental attitude towards user interface design. The paper presents an overview of Kurenniemi's instruments and a detailed description of selected devices. Emphasis is put on user interface issues such as unconventional interactive real-time control and programming methods.
@inproceedings{Ojanen2007, author = {Ojanen, Mikko and Suominen, Jari and Kallio, Titti and Lassfolk, Kai}, title = {Design Principles and User Interfaces of Erkki Kurenniemi's Electronic Musical Instruments of the 1960's and 1970's}, pages = {88--93}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177211}, url = {http://www.nime.org/proceedings/2007/nime2007_088.pdf}, keywords = {Erkki Kurenniemi, Dimi, Synthesizer, Digital electronics, User interface design } }
Thor Magnusson and Enrike H. Mendieta. 2007. The Acoustic, the Digital and the Body : A Survey on Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 94–99. http://doi.org/10.5281/zenodo.1177171
Abstract
Download PDF DOI
This paper reports on a survey conducted in the autumn of 2006 with the objective to understand people’s relationship to their musical tools. The survey focused on the question of embodiment and its different modalities in the fields of acoustic and digital instruments. The questions of control, instrumental entropy, limitations and creativity were addressed in relation to people’s activities of playing, creating or modifying their instruments. The approach used in the survey was phenomenological, i.e. we were concerned with the experience of playing, composing for and designing digital or acoustic instruments. At the time of analysis, we had 209 replies from musicians, composers, engineers, designers, artists and others interested in this topic. The survey was mainly aimed at instrumentalists and people who create their own instruments or compositions in flexible audio programming environments such as SuperCollider, Pure Data, ChucK, Max/MSP, CSound, etc.
@inproceedings{Magnusson2007, author = {Magnusson, Thor and Mendieta, Enrike H.}, title = {The Acoustic, the Digital and the Body : A Survey on Musical Instruments}, pages = {94--99}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177171}, url = {http://www.nime.org/proceedings/2007/nime2007_094.pdf}, keywords = {Survey, musical instruments, usability, ergonomics, embodiment, mapping, affordances, constraints, instrumental entropy, audio programming. } }
Michael Zbyszynski, Matthew Wright, Ali Momeni, and Daniel Cullen. 2007. Ten Years of Tablet Musical Interfaces at CNMAT. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 100–105. http://doi.org/10.5281/zenodo.1179483
Abstract
Download PDF DOI
We summarize a decade of musical projects and research employing Wacom digitizing tablets as musical controllers, discussing general implementation schemes using Max/MSP and OpenSoundControl, and specific implementations in musical improvisation, interactive sound installation, interactive multimedia performance, and as a compositional assistant. We examine two-handed sensing strategies and schemes for gestural mapping.
@inproceedings{Zbyszynski2007, author = {Zbyszynski, Michael and Wright, Matthew and Momeni, Ali and Cullen, Daniel}, title = {Ten Years of Tablet Musical Interfaces at CNMAT}, pages = {100--105}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1179483}, url = {http://www.nime.org/proceedings/2007/nime2007_100.pdf}, keywords = {1,algorithmic composition,digitizing tablet,expressivity,gesture,mapping,nime07,position sensing,wacom tablet,why the wacom tablet} }
Michael Gurevich and Jeffrey Treviño. 2007. Expression and Its Discontents : Toward an Ecology of Musical Creation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 106–111. http://doi.org/10.5281/zenodo.1177107
Abstract
Download PDF DOI
We describe the prevailing model of musical expression, which assumes a binary formulation of "the text" and "the act", along with its implied roles of composer and performer. We argue that this model not only excludes some contemporary aesthetic values but also limits the communicative ability of new music interfaces. As an alternative, an ecology of musical creation accounts for both a diversity of aesthetic goals and the complex interrelation of human and non-human agents. An ecological perspective on several approaches to musical creation with interactive technologies reveals an expanded, more inclusive view of artistic interaction that facilitates novel, compelling ways to use technology for music. This paper is fundamentally a call to consider the role of aesthetic values in the analysis of artistic processes and technologies.
@inproceedings{Gurevich2007, author = {Gurevich, Michael and Trevi\~{n}o, Jeffrey}, title = {Expression and Its Discontents : Toward an Ecology of Musical Creation}, pages = {106--111}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177107}, url = {http://www.nime.org/proceedings/2007/nime2007_106.pdf}, keywords = {Expression, expressivity, non-expressive, emotion, discipline, model, construct, discourse, aesthetic goal, experience, transparency, evaluation, communication } }
Click Nilson. 2007. Live Coding Practice. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 112–117. http://doi.org/10.5281/zenodo.1177209
Abstract
Download PDF DOI
Live coding is almost the antithesis of immediate physical musicianship, and yet, has attracted the attentions of a number of computer-literate musicians, as well as the music-savvy programmers that might be more expected. It is within the context of live coding that I seek to explore the question of practising a contemporary digital musical instrument, which is often raised as an aside but more rarely carried out in research (though see [12]). At what stage of expertise are the members of the live coding movement, and what practice regimes might help them to find their true potential?
@inproceedings{Nilson2007, author = {Nilson, Click}, title = {Live Coding Practice}, pages = {112--117}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177209}, url = {http://www.nime.org/proceedings/2007/nime2007_112.pdf}, keywords = {Practice, practising, live coding } }
Steve Mann. 2007. Natural Interfaces for Musical Expression : Physiphones and a Physics-Based Organology. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 118–123. http://doi.org/10.5281/zenodo.1177181
Abstract
Download PDF DOI
This paper presents two main ideas: (1) Various newly invented liquid-based or underwater musical instruments are proposed that function like woodwind instruments but use water instead of air. These “woodwater” instruments expand the space of known instruments to include all three states of matter: solid (strings, percussion); liquid (the proposed instruments); and gas (brass and woodwinds). Instruments that use the fourth state of matter (plasma) are also proposed. (2) Although the current trend in musical interfaces has been to expand versatility and generality by separating the interface from the sound-producing medium, this paper identifies an opposite trend in musical interface design inspired by instruments such as the harp, the acoustic or electric guitar, the tin whistle, and the Neanderthal flute, that have a directness of user-interface, where the fingers of the musician are in direct physical contact with the sound-producing medium. The newly invented instruments are thus designed to have this sensually tempting intimacy not be lost behind layers of abstraction, while also allowing for a high degree of virtuosity. Examples presented include the poseidophone, an instrument made from an array of ripple tanks, each tuned for a particular note, and the hydraulophone, an instrument in which sound is produced by pressurized hydraulic fluid that is in direct physical contact with the fingers of the player. Instruments based on these primordial media tend to fall outside existing classifications and taxonomies of known musical instruments which only consider instruments that make sound with solid or gaseous states of matter. To better understand and contextualize some of the new primordial user interfaces, a broader concept of musical instrument classification is proposed that considers the states of matter of both the user-interface and the sound production medium.
@inproceedings{Mann2007, author = {Mann, Steve}, title = {Natural Interfaces for Musical Expression : Physiphones and a Physics-Based Organology}, pages = {118--123}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177181}, url = {http://www.nime.org/proceedings/2007/nime2007_118.pdf}, keywords = {all or part of,ethnomusicology,hydraulophone,is granted without fee,nime07,or hard copies of,permission to make digital,personal or classroom use,provided that copies are,tangible user interface,this work for} }
Frédéric Bevilacqua, Fabrice Guédy, Norbert Schnell, Emmanuel Fléty, and Nicolas Leroy. 2007. Wireless Sensor Interface and Gesture-Follower for Music Pedagogy. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 124–129. http://doi.org/10.5281/zenodo.1177045
Abstract
Download PDF DOI
We present in this paper a complete gestural interface built to support music pedagogy. The development of this prototype concerned both hardware and software components: a small wireless sensor interface including accelerometers and gyroscopes, and an analysis system enabling gesture following and recognition. A first set of experiments was conducted with teenagers in a music theory class. The preliminary results were encouraging concerning the suitability of these developments in music education.
@inproceedings{Bevilacqua2007, author = {Bevilacqua, Fr\'{e}d\'{e}ric and Gu\'{e}dy, Fabrice and Schnell, Norbert and Fl\'{e}ty, Emmanuel and Leroy, Nicolas}, title = {Wireless Sensor Interface and Gesture-Follower for Music Pedagogy}, pages = {124--129}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177045}, url = {http://www.nime.org/proceedings/2007/nime2007_124.pdf}, keywords = {Technology-enhanced learning, music pedagogy, wireless interface, gesture-follower, gesture recognition } }
Roger B. Dannenberg. 2007. New Interfaces for Popular Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 130–135. http://doi.org/10.5281/zenodo.1177081
Abstract
Download PDF DOI
Augmenting performances of live popular music with computer systems poses many new challenges. Here, "popular music" is taken to mean music with a mostly steady tempo, some improvisational elements, and largely predetermined melodies, harmonies, and other parts. The overall problem is studied by developing a framework consisting of constraints and subproblems that any solution should address. These problems include beat acquisition, beat phase, score location, sound synthesis, data preparation, and adaptation. A prototype system is described that offers a set of solutions to the problems posed by the framework, and future work is suggested.
@inproceedings{Dannenberg2007, author = {Dannenberg, Roger B.}, title = {New Interfaces for Popular Music Performance}, pages = {130--135}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177081}, url = {http://www.nime.org/proceedings/2007/nime2007_130.pdf}, keywords = {accompaniment,beat,conducting,intelligent,music synchronization,nime07,synthetic performer,tracking,virtual orchestra} }
Eric Lee, Urs Enke, Jan Borchers, and Leo de Jong. 2007. Towards Rhythmic Analysis of Human Motion Using Acceleration-Onset Times. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 136–141. http://doi.org/10.5281/zenodo.1177159
Abstract
Download PDF DOI
We present a system for rhythmic analysis of human motion in real-time. Using a combination of both spectral (Fourier) and spatial analysis of onsets, we are able to extract repeating rhythmic patterns from data collected using accelerometers. These extracted rhythmic patterns show the relative magnitudes of accentuated movements and their spacing in time. Inspired by previous work in automatic beat detection of audio recordings, we designed our algorithms to be robust to changes in timing using multiple analysis techniques and methods for sensor fusion, filtering and clustering. We tested our system using a limited set of movements, as well as dance movements collected from a professional, both with promising results.
@inproceedings{Lee2007, author = {Lee, Eric and Enke, Urs and Borchers, Jan and de Jong, Leo}, title = {Towards Rhythmic Analysis of Human Motion Using Acceleration-Onset Times}, pages = {136--141}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177159}, url = {http://www.nime.org/proceedings/2007/nime2007_136.pdf}, keywords = {rhythm analysis, dance movement analysis, onset analysis } }
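The Lee et al. abstract combines onset analysis of accelerometer data with periodicity analysis. The sketch below is a heavily simplified reading of that pipeline: a crude peak picker on the positive difference of the acceleration magnitude, followed by autocorrelation of the resulting onset train; the threshold, window length and minimum-lag cutoff are arbitrary assumptions rather than the authors' algorithm.

import numpy as np

def detect_onsets(accel_magnitude, threshold=1.5):
    """Crude onset picker: local maxima of the positive first difference of
    the acceleration magnitude that exceed `threshold`."""
    diff = np.maximum(np.diff(accel_magnitude), 0.0)
    is_peak = (diff[1:-1] > threshold) & (diff[1:-1] >= diff[:-2]) & (diff[1:-1] >= diff[2:])
    return np.where(is_peak)[0] + 1

def estimate_period(onsets, fs, window=4096, min_lag_s=0.2):
    """Estimate the dominant repetition period (in seconds) by autocorrelating
    a binary onset train and picking the strongest lag above `min_lag_s`."""
    train = np.zeros(window)
    train[onsets[onsets < window]] = 1.0
    ac = np.correlate(train, train, mode="full")[window - 1:]
    min_lag = int(min_lag_s * fs)
    return (np.argmax(ac[min_lag:]) + min_lag) / fs

# Toy usage: a bump every 0.5 s, sampled at 100 Hz, yields a period of ~0.5 s.
fs = 100
t = np.arange(0.0, 10.0, 1.0 / fs)
accel = 2.0 * (np.sin(2 * np.pi * 2.0 * t) > 0.95)
print(estimate_period(detect_onsets(accel), fs))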
Nicolas Bouillot. 2007. nJam User Experiments : Enabling Remote Musical Interaction from Milliseconds to Seconds. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 142–147. http://doi.org/10.5281/zenodo.1177055
Abstract
Download PDF DOI
Remote real-time musical interaction is a domain where end-to-end latency is a well-known problem. Today, the main approach explored aims to keep it below the musicians' perception threshold. In this paper, we explore another approach, in which end-to-end delays rise to several seconds but are computed in a controlled (and synchronized) way depending on the structure of the musical pieces. Using our fully distributed prototype called nJam, we perform user experiments to show how this new kind of interactivity breaks the usual end-to-end latency bounds.
@inproceedings{Bouillot2007, author = {Bouillot, Nicolas}, title = {nJam User Experiments : Enabling Remote Musical Interaction from Milliseconds to Seconds}, pages = {142--147}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177055}, url = {http://www.nime.org/proceedings/2007/nime2007_142.pdf}, keywords = {Remote real-time musical interaction, end-to-end delays, syn- chronization, user experiments, distributed metronome, NMP. } }
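nJam's key idea, as summarized above, is to let end-to-end delay grow to several seconds while scheduling it in a musically synchronized way. The toy function below illustrates one such policy, deferring each remote event to the next bar boundary after its worst-case arrival time; the shared clock, the fixed bar duration and the function itself are illustrative assumptions, not nJam's actual scheduling code.

import math

def scheduled_playback_time(send_time, worst_case_delay, bar_duration):
    """Defer a remote musical event to the start of the next full bar after the
    latest moment it could arrive, so that every site (sharing a synchronized
    clock) renders it at the same musically aligned instant."""
    earliest_safe = send_time + worst_case_delay
    next_bar = math.ceil(earliest_safe / bar_duration)
    return next_bar * bar_duration

# Toy usage: an event sent at t = 3.2 s with up to 1.5 s of network delay,
# in a piece whose bars last 2 s, is scheduled for the bar starting at t = 6 s.
print(scheduled_playback_time(3.2, 1.5, 2.0))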
Niall Moody, Nick Fells, and Nicholas Bailey. 2007. Ashitaka : An Audiovisual Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 148–153. http://doi.org/10.5281/zenodo.1177199
Abstract
Download PDF DOI
This paper describes the Ashitaka audiovisual instrument and the process used to develop it. The main idea guiding the design of the instrument is that motion can be used to connect audio and visuals, and the first part of the paper consists of an exploration of this idea. The issue of mappings is raised, discussing both audio-visual mappings and the mappings between the interface and synthesis methods. The paper concludes with a detailed look at the instrument itself, including the interface, synthesis methods, and mappings used.
@inproceedings{Moody2007, author = {Moody, Niall and Fells, Nick and Bailey, Nicholas}, title = {Ashitaka : An Audiovisual Instrument}, pages = {148--153}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177199}, url = {http://www.nime.org/proceedings/2007/nime2007_148.pdf}, keywords = {audiovisual,instrument,mappings,nime07,synchresis,x3d} }
Roberto Aimi. 2007. Percussion Instruments Using Realtime Convolution : Physical Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 154–159. http://doi.org/10.5281/zenodo.1177033
Abstract
Download PDF DOI
This paper describes several example hybrid acoustic/electronic percussion instruments that use realtime convolution to augment and modify the apparent acoustics of damped physical objects. Examples of cymbal, frame drum, practice pad, brush, and bass drum controllers are described.
@inproceedings{Aimi2007, author = {Aimi, Roberto}, title = {Percussion Instruments Using Realtime Convolution : Physical Controllers}, pages = {154--159}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177033}, url = {http://www.nime.org/proceedings/2007/nime2007_154.pdf}, keywords = {Musical controllers, extended acoustic instruments } }
Michael Rohs and Georg Essl. 2007. CaMus 2 – Optical Flow and Collaboration in Camera Phone Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 160–163. http://doi.org/10.5281/zenodo.1177233
Abstract
Download PDF DOI
CaMus2 allows collaborative performance with mobile camera phones. The original CaMus project was extended to support multiple phones performing in the same space and generating MIDI signals to control sound generation and manipulation software or hardware. Through optical flow technology, the system can be used without a reference marker grid. When using a marker grid, the use of dynamic digital zoom extends the range of performance. A semantic information display helps guide the performer visually.
@inproceedings{Rohs2007, author = {Rohs, Michael and Essl, Georg}, title = {CaMus 2 -- Optical Flow and Collaboration in Camera Phone Music Performance}, pages = {160--163}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177233}, url = {http://www.nime.org/proceedings/2007/nime2007_160.pdf}, keywords = {Camera phone, mobile phone, music performance, mobile sound generation, sensing-based interaction, collaboration } }
Rebecca Fiebrink, Ge Wang, and Perry R. Cook. 2007. Don’t Forget the Laptop : Using Native Input Capabilities for Expressive Musical Control. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 164–167. http://doi.org/10.5281/zenodo.1177087
Abstract
Download PDF DOI
We draw on our experiences with the Princeton Laptop Orchestra to discuss novel uses of the laptop’s native physical inputs for flexible and expressive control. We argue that instruments designed using these built-in inputs offer benefits over custom standalone controllers, particularly in certain group performance settings; creatively thinking about native capabilities can lead to interesting and unique new interfaces. We discuss a variety of example instruments that use the laptop’s native capabilities and suggest avenues for future work. We also describe a new toolkit for rapidly experimenting with these capabilities.
@inproceedings{Fiebrink2007, author = {Fiebrink, Rebecca and Wang, Ge and Cook, Perry R.}, title = {Don't Forget the Laptop : Using Native Input Capabilities for Expressive Musical Control}, pages = {164--167}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177087}, url = {http://www.nime.org/proceedings/2007/nime2007_164.pdf}, keywords = {Mapping strategies. Laptop-based physical interfaces. Collaborative laptop performance.} }
Katherine Moriwaki and Jonah Brucker-Cohen. 2007. MIDI Scrapyard Challenge Workshops. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 168–172. http://doi.org/10.5281/zenodo.1177201
Abstract
Download PDF DOI
In this paper the authors present the MIDI Scrapyard Challenge (MSC) workshop, a one-day hands-on experience which asks participants to create musical controllers out of cast-off electronics, found materials and junk. The workshop experience, principles, and considerations are detailed, along with sample projects which have been created in various MSC workshops. Observations and implications as well as future developments for the workshop are discussed.
@inproceedings{Moriwaki2007, author = {Moriwaki, Katherine and Brucker-Cohen, Jonah}, title = {MIDI Scrapyard Challenge Workshops}, pages = {168--172}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177201}, url = {http://www.nime.org/proceedings/2007/nime2007_168.pdf}, keywords = {Workshop, MIDI, Interaction Design, Creativity, Performance} }
Eric Lee, Marius Wolf, Yvonne Jansen, and Jan Borchers. 2007. REXband : A Multi-User Interactive Exhibit for Exploring Medieval Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 172–177. http://doi.org/10.5281/zenodo.1177163
Abstract
Download PDF DOI
We present REXband, an interactive music exhibit for collaborative improvisation to medieval music. This audio-only system consists of three digitally augmented medieval instrument replicas: the hurdy gurdy, harp, and frame drum. The instruments communicate with software that provides users with both musical support and feedback on their performance using a "virtual audience" set in a medieval tavern. REXband builds upon previous work in interactive music exhibits by incorporating aspects of e-learning to educate, in addition to interaction design patterns to entertain; care was also taken to ensure historic authenticity. Feedback from user testing in both controlled (laboratory) and public (museum) environments has been extremely positive. REXband is part of the Regensburg Experience, an exhibition scheduled to open in July 2007 to showcase the rich history of Regensburg, Germany.
@inproceedings{Lee2007a, author = {Lee, Eric and Wolf, Marius and Jansen, Yvonne and Borchers, Jan}, title = {REXband : A Multi-User Interactive Exhibit for Exploring Medieval Music}, pages = {172--177}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177163}, url = {http://www.nime.org/proceedings/2007/nime2007_172.pdf}, keywords = {interactive music exhibits, medieval music, augmented instruments, e-learning, education } }
Marije A. Baalman, Daniel Moody-Grigsby, and Christopher L. Salter. 2007. Schwelle : Sensor Augmented, Adaptive Sound Design for Live Theatrical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 178–184. http://doi.org/10.5281/zenodo.1177035
Abstract
Download PDF DOI
This paper describes work on a newly created large-scale interactive theater performance entitled Schwelle (Thresholds). The authors discuss an innovative approach towards the conception, development and implementation of dynamic and responsive audio scenography: a constantly evolving, multi-layered sound design generated by continuous input from a series of distributed wireless sensors deployed both on the body of a performer and placed within the physical stage environment. The paper is divided into conceptual and technological parts. We first describe the project’s dramaturgical and conceptual context in order to situate the artistic framework that has guided the technological system design. Specifically, this framework discusses the team’s approach in combining techniques from situated computing, theatrical sound design practice and dynamical systems in order to create a new kind of adaptive audio scenographic environment augmented by wireless, distributed sensing for use in live theatrical performance. The goal of this adaptive sound design is to move beyond both existing playback models used in theatre sound and the purely human-centered, controller-instrument approach used in much current interactive performance practice.
@inproceedings{Baalman2007, author = {Baalman, Marije A. and Moody-Grigsby, Daniel and Salter, Christopher L.}, title = {Schwelle : Sensor Augmented, Adaptive Sound Design for Live Theatrical Performance}, pages = {178--184}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177035}, url = {http://www.nime.org/proceedings/2007/nime2007_178.pdf}, keywords = {Interactive performance, dynamical systems, wireless sensing, adaptive audio scenography, audio dramaturgy, situated computing, sound design } }
Joanne Jakovich and Kirsty Beilharz. 2007. ParticleTecture : Interactive Granular Soundspaces for Architectural Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 185–190. http://doi.org/10.5281/zenodo.1177127
Abstract
Download PDF DOI
Architectural space is a key contributor to the perceptual world we experience daily. We present ‘ParticleTecture’, a soundspace installation system that extends spatial perception of ordinary architectural space through gestural interaction with sound in space. ParticleTecture employs a particle metaphor to produce granular synthesis soundspaces in response to video-tracking of human movement. It incorporates an adaptive mechanism that utilizes a measure of engagement to inform ongoing audio patterns in response to human activity. By identifying engaging features in its response, the system is able to predict, pre-empt and shape its evolving responses in accordance with the most engaging, compelling, interesting attributes of the active environment. An implementation of ParticleTecture for gallery installation is presented and discussed as one form of architectural space.
@inproceedings{Jakovich2007, author = {Jakovich, Joanne and Beilharz, Kirsty}, title = {ParticleTecture : Interactive Granular Soundspaces for Architectural Design}, pages = {185--190}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177127}, url = {http://www.nime.org/proceedings/2007/nime2007_185.pdf}, keywords = {Architecture, installation, interaction, granular synthesis, adaptation, engagement. } }
Karmen Franinovic and Yon Visell. 2007. New Musical Interfaces in Context : Sonic Interaction Design in the Urban Setting. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 191–196. http://doi.org/10.5281/zenodo.1177093
Abstract
Download PDF DOI
The distinctive features of interactive sound installations in public space are considered, with special attention to the rich, if undoubtedly difficult, environments in which they exist. It is argued that such environments, and the social contexts that they imply, are among the most valuable features of these works for the approach that we have adopted to creation as research practice. The discussion is articulated through case studies drawn from two of our installations, Recycled Soundscapes (2004) and Skyhooks (2006). Implications for the broader design of new musical instruments are presented.
@inproceedings{Franinovic2007, author = {Franinovic, Karmen and Visell, Yon}, title = {New Musical Interfaces in Context : Sonic Interaction Design in the Urban Setting}, pages = {191--196}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177093}, url = {http://www.nime.org/proceedings/2007/nime2007_191.pdf}, keywords = {architecture,interaction,music,nime07,sound in-,urban design} }
Marcelo Gimenes, Eduardo Miranda, and Chris Johnson. 2007. Musicianship for Robots with Style. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 197–202. http://doi.org/10.5281/zenodo.1177099
Abstract
Download PDF DOI
In this paper we introduce a System conceived to serve as the "musical brain" of autonomous musical robots or agent-based software simulations of robotic systems. Our research goal is to provide robots with the ability to integrate with the musical culture of their surroundings. In a multi-agent configuration, the System can simulate an environment in which autonomous agents interact with each other as well as with external agents (e.g., robots, human beings or other systems). The main outcome of these interactions is the transformation and development of their musical styles as well as the musical style of the environment in which they live.
@inproceedings{Gimenes2007, author = {Gimenes, Marcelo and Miranda, Eduardo and Johnson, Chris}, title = {Musicianship for Robots with Style}, pages = {197--202}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177099}, url = {http://www.nime.org/proceedings/2007/nime2007_197.pdf}, keywords = {artificial life,musical style,musicianship,nime07} }
David Topper. 2007. Extended Applications of the Wireless Sensor Array (WISEAR). Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 203–204. http://doi.org/10.5281/zenodo.1177261
Abstract
Download PDF DOI
WISEAR (Wireless Sensor Array) provides a robust and scalable platform for virtually limitless types of data input to software synthesis engines. It is essentially a Linux-based SBC (Single Board Computer) with 802.11a/b/g wireless capability. The device, with batteries, weighs only a few pounds and can be worn by a dancer or other live performer. Past work has focused on connecting "conventional" sensors (e.g., bend sensors, accelerometers, FSRs, etc.) to the board and using it as a data relay, sending the data as real-time control messages to synthesis engines like Max/MSP and RTcmix. Current research has extended the abilities of the device to take real-time audio and video data from USB cameras and audio devices, as well as running synthesis engines on board the device itself. Given its generic network ability (being an 802.11a/b/g device), there is theoretically no limit to the number of WISEAR boxes that can be used simultaneously in a performance, facilitating multi-performer compositions. This paper will present the basic design philosophy behind WISEAR, explain some of the basic concepts and methods, and provide a live demonstration of the running device, worn by the author.
@inproceedings{Topper2007, author = {Topper, David}, title = {Extended Applications of the Wireless Sensor Array (WISEAR)}, pages = {203--204}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177261}, url = {http://www.nime.org/proceedings/2007/nime2007_203.pdf}, keywords = {Wireless, sensors, embedded devices, linux, real-time audio, real-time video } }
Giuseppe Torre, Mikael Fernström, Brendan O’Flynn, and Philip Angove. 2007. Celeritas : Wearable Wireless System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 205–208. http://doi.org/10.5281/zenodo.1179463
Abstract
Download PDF DOI
In this paper, we describe a new wearable wireless sensor system for solo or group dance performances. The system consists of a number of 25mm Wireless Inertial Measurement Unit (WIMU) nodes designed at the Tyndall National Institute. Each sensor node has two dual-axis accelerometers, three single-axis gyroscopes and two dual-axis magnetometers, providing 6 Degrees of Freedom (DOF) movement tracking. All sensors transmit data wirelessly to a basestation at a frequency band and power that do not require licensing. The interface process has been developed at the Interaction Design Center of the University of Limerick (Ireland). The data are acquired and manipulated in well-known real-time software like pd and Max/MSP. This paper presents the new system, describes the interface design and outlines the main achievements of this collaborative research, which has been named ‘Celeritas’.
@inproceedings{Fernstrom2007, author = {Torre, Giuseppe and Fernstr\"{o}m, Mikael and O'Flynn, Brendan and Angove, Philip}, title = {Celeritas : Wearable Wireless System}, pages = {205--208}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1179463}, url = {http://www.nime.org/proceedings/2007/nime2007_205.pdf}, keywords = {Inertial Measurement Unit, IMU, Position Tracking, Interactive Dance Performance, Graphical Object, Mapping. } }
Stephen Sinclair and Marcelo M. Wanderley. 2007. Defining a Control Standard for Easily Integrating Haptic Virtual Environments with Existing Audio / Visual Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 209–212. http://doi.org/10.5281/zenodo.1177245
Abstract
Download PDF DOI
This paper presents an approach to audio-haptic integration that utilizes Open Sound Control, an increasingly well-supported standard for audio communication, to initialize and communicate with dynamic virtual environments that work with off-the-shelf force-feedback devices.
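A minimal sketch of the initialize-then-stream pattern described above, using the python-osc package. The host, port, and OSC address space shown here (/world/prism/create, /object/key1/force) are illustrative assumptions for this sketch, not the protocol defined in the paper.

# Sketch: initialize a haptic scene object over OSC, then stream updates to it.
# Address names and port are hypothetical; only the pattern is illustrated.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7770)  # host/port are assumptions

# Initialization: describe a dynamic object in the haptic virtual environment.
client.send_message("/world/prism/create", ["key1", 0.0, 0.0, 0.1])

# Runtime: stream a control-rate force update to that object.
client.send_message("/object/key1/force", [0.0, 0.0, -1.5])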
@inproceedings{Sinclair2007, author = {Sinclair, Stephen and Wanderley, Marcelo M.}, title = {Defining a Control Standard for Easily Integrating Haptic Virtual Environments with Existing Audio / Visual Systems}, pages = {209--212}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177245}, url = {http://www.nime.org/proceedings/2007/nime2007_209.pdf}, keywords = {Haptics, control, multi-modal, audio, force-feedback } }
Justin Donaldson, Ian Knopke, and Chris Raphael. 2007. Chroma Palette : Chromatic Maps of Sound As Granular Synthesis Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 213–219. http://doi.org/10.5281/zenodo.1177085
Abstract
Download PDF DOI
Chroma based representations of acoustic phenomenon are representations of sound as pitched acoustic energy. A framewise chroma distribution over an entire musical piece is a useful and straightforward representation of its musical pitch over time. This paper examines a method of condensing the block-wise chroma information of a musical piece into a two dimensional embedding. Such an embedding is a representation or map of the different pitched energies in a song, and how these energies relate to each other in the context of the song. The paper presents an interactive version of this representation as an exploratory analytical tool or instrument for granular synthesis. Pointing and clicking on the interactive map recreates the acoustical energy present in the chroma blocks at that location, providing an effective way of both exploring the relationships between sounds in the original piece, and recreating a synthesized approximation of these sounds in an instrumental fashion.
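For readers unfamiliar with chroma features, the sketch below computes a standard frame-wise chroma distribution with librosa. The file name is a placeholder, and the paper's two-dimensional embedding and granular-synthesis interface are not reproduced here.

# Sketch: frame-wise chroma (pitched energy per pitch class) for an audio file.
import librosa

y, sr = librosa.load("song.wav")                  # placeholder audio file
chroma = librosa.feature.chroma_stft(y=y, sr=sr)  # shape: (12, n_frames)
print(chroma.shape)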
@inproceedings{Donaldson2007, author = {Donaldson, Justin and Knopke, Ian and Raphael, Chris}, title = {Chroma Palette : Chromatic Maps of Sound As Granular Synthesis Interface}, pages = {213--219}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177085}, url = {http://www.nime.org/proceedings/2007/nime2007_213.pdf}, keywords = {Chroma, granular synthesis, dimensionality reduction } }
Nick Collins. 2007. Matching Parts : Inner Voice Led Control for Symbolic and Audio Accompaniment. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 220–223. http://doi.org/10.5281/zenodo.1177075
BibTeX
Download PDF DOI
@inproceedings{Collins2007, author = {Collins, Nick}, title = {Matching Parts : Inner Voice Led Control for Symbolic and Audio Accompaniment}, pages = {220--223}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177075}, url = {http://www.nime.org/proceedings/2007/nime2007_220.pdf}, keywords = {accompaniment, concatenative sound synthesis, feature matching, inner parts, interactive music, melodic similarity, nime07} }
Mark Cartwright, Matt Jones, and Hiroko Terasawa. 2007. Rage in Conjunction with the Machine. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 224–227. http://doi.org/10.5281/zenodo.1177063
Abstract
Download PDF DOI
This report presents the design and construction of Rage in Conjunction with the Machine, a simple but novel pairing of musical interface and sound sculpture. The authors discuss the design and creation of this instrument, focusing on its unique aspects, including the use of physical systems, large gestural input, scale, and the electronic coupling of a physical input to a physical output.
@inproceedings{Cartwright2007, author = {Cartwright, Mark and Jones, Matt and Terasawa, Hiroko}, title = {Rage in Conjunction with the Machine}, pages = {224--227}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177063}, url = {http://www.nime.org/proceedings/2007/nime2007_224.pdf}, keywords = {audience participation,inflatable,instrument design,instrument size,mapping,musical,new musical instrument,nime07,physical systems,sound sculpture} }
Gil Weinberg and Scott Driscoll. 2007. The Design of a Robotic Marimba Player – Introducing Pitch into Robotic Musicianship. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 228–233. http://doi.org/10.5281/zenodo.1179477
Abstract
Download PDF DOI
The paper presents the theoretical background and the design scheme for a perceptual and improvisational robotic marimba player that interacts with human musicians in a visual and acoustic manner. Informed by an evaluation of a previously developed robotic percussionist, we present the extension of our work to melodic and harmonic realms with the design of a robotic player that listens to, analyzes and improvises pitch-based musical materials. After a presentation of the motivation for the project, theoretical background and related work, we present a set of research questions followed by a description of hardware and software approaches that address these questions. The paper concludes with a description of our plans to implement and embed these approaches in a robotic marimba player that will be used in workshops and concerts.
@inproceedings{Weinberg2007, author = {Weinberg, Gil and Driscoll, Scott}, title = {The Design of a Robotic Marimba Player -- Introducing Pitch into Robotic Musicianship}, pages = {228--233}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1179477}, url = {http://www.nime.org/proceedings/2007/nime2007_228.pdf}, keywords = {human-machine interaction,improvisation,nime07,perceptual modeling,robotic musicianship} }
Andrew Robertson and Mark D. Plumbley. 2007. B-Keeper : A Beat-Tracker for Live Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 234–237. http://doi.org/10.5281/zenodo.1177231
Abstract
Download PDF DOI
This paper describes the development of B-Keeper, a real-time beat tracking system implemented in Java and Max/MSP, which is capable of maintaining synchronisation between an electronic sequencer and a drummer. This enables musicians to interact with electronic parts which are triggered automatically by the computer from performance information. We describe an implementation which functions with the sequencer Ableton Live.
@inproceedings{Robertson2007, author = {Robertson, Andrew and Plumbley, Mark D.}, title = {B-Keeper : A Beat-Tracker for Live Performance}, pages = {234--237}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177231}, url = {http://www.nime.org/proceedings/2007/nime2007_234.pdf}, keywords = {Human-Computer Interaction, Automatic Accompaniment, Performance } }
Ajay Kapur, Eric Singer, Manjinder S. Benning, George Tzanetakis, and Trimpin Trimpin. 2007. Integrating HyperInstruments , Musical Robots & Machine Musicianship for North Indian Classical Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 238–241. http://doi.org/10.5281/zenodo.1177137
Abstract
Download PDF DOI
This paper describes a system enabling a human to perform music with a robot in real-time, in the context of North Indian classical music. We modify a traditional acoustic sitar into a hyperinstrument in order to capture performance gestures for musical analysis. A custom-built four-armed robotic Indian drummer was constructed using a microchip, solenoids, aluminum and folk frame drums. Algorithms written towards "intelligent" machine musicianship are described. The final goal of this research is to have a robotic drummer accompany a professional human sitar player live in performance.
@inproceedings{Kapur2007, author = {Kapur, Ajay and Singer, Eric and Benning, Manjinder S. and Tzanetakis, George and Trimpin, Trimpin}, title = {Integrating HyperInstruments , Musical Robots \& Machine Musicianship for North {India}n Classical Music}, pages = {238--241}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177137}, url = {http://www.nime.org/proceedings/2007/nime2007_238.pdf}, keywords = {Musical Robotics, Electronic Sitar, Hyperinstruments, Music Information Retrieval (MIR). } }
Arthur Clay and Dennis Majoe. 2007. The Wrist-Conductor. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 242–245. http://doi.org/10.5281/zenodo.1177073
Abstract
Download PDF DOI
The starting point for this project is the desire to produce a music controller that could be employed in such a manner that even the lay public could enjoy the possibilities of mobile art. All of the works discussed here relate to a new GPS-based controller, the Wrist-Conductor. The works are technically based around the synchronizing possibilities of the GPS Time Mark and are aesthetically rooted in works that function in an open public space such as a city or a forest. One of the works intended for the controller, China Gates, is discussed here in detail in order to describe how the GPS Wrist-Conductor is actually used in a public art context. The other works, CitySonics, The Enchanted Forest and Get a Pot & a Spoon, are described briefly in order to demonstrate that even a simple controller can be used to create a body of works. This paper also addresses the breaking of the media bubble via the concept of the “open audience”, or how mobile art can engage pedestrians as viewers or listeners within public space rather than remaining an isolated experience for performers only.
@inproceedings{Clay2007, author = {Clay, Arthur and Majoe, Dennis}, title = {The Wrist-Conductor}, pages = {242--245}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177073}, url = {http://www.nime.org/proceedings/2007/nime2007_242.pdf}, keywords = {Mobile Music, GPS, Controller, Collaborative Performance } }
Avrum Hollinger, Christopher Steele, Virginia Penhune, Robert Zatorre, and Marcelo M. Wanderley. 2007. fMRI-Compatible Electronic Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 246–249. http://doi.org/10.5281/zenodo.1177119
Abstract
Download PDF DOI
This paper presents an electronic piano keyboard and computer mouse designed for use in a magnetic resonance imaging scanner. The interface allows neuroscientists studying motor learning of musical tasks to perform functional scans of a subject’s brain while synchronizing the scanner, auditory and visual stimuli, and auditory feedback with the onset, offset, and velocity of the piano keys. The design of the initial prototype and environment-specific issues are described, as well as prior work in the field. Preliminary results are positive: no image artifacts caused by the interface could be detected. Recommendations to improve the optical assembly are provided in order to increase the robustness of the design.
@inproceedings{Hollinger2007, author = {Hollinger, Avrum and Steele, Christopher and Penhune, Virginia and Zatorre, Robert and Wanderley, Marcelo M.}, title = {fMRI-Compatible Electronic Controllers}, pages = {246--249}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177119}, url = {http://www.nime.org/proceedings/2007/nime2007_246.pdf}, keywords = {Input device, MRI-compatible, fMRI, motor learning, optical sensing. } }
Yoichi Nagashima. 2007. GHI project and "Cyber Kendang". Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 250–253. http://doi.org/10.5281/zenodo.1177205
Abstract
Download PDF DOI
This is a report on a research project about developing novel musical instruments for interactive computer music. The project’s name, "GHI project", means roughly "It might be good for a musical instrument to shine, mightn't it?" in Japanese. I re-examined the essences of musical instruments, following the proverb "taking a lesson from the past". As a first step, the project targeted and chose the "Kendang", a traditional musical instrument of Indonesia.
@inproceedings{Nagashima2007, author = {Nagashima, Yoichi}, title = {GHI project and "Cyber Kendang"}, pages = {250--253}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177205}, url = {http://www.nime.org/proceedings/2007/nime2007_250.pdf}, keywords = {kendang, media arts, new instruments, sound and light} }
Shinichiro Toyoda. 2007. Sensillum : An Improvisational Approach to Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 254–255. http://doi.org/10.5281/zenodo.1179465
Abstract
Download PDF DOI
This study proposes new possibilities for interaction design pertaining to music piece creation. Specifically, the study created an environment wherein a wide range of users are able to easily experience new musical expressions via a combination of newly developed software and the Nintendo Wii Remote controller.
@inproceedings{Toyoda2007, author = {Toyoda, Shinichiro}, title = {Sensillum : An Improvisational Approach to Composition}, pages = {254--255}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1179465}, url = {http://www.nime.org/proceedings/2007/nime2007_254.pdf}, keywords = {Interactive systems, improvisation, gesture, composition} }
Leon Gruenbaum. 2007. The Samchillian Tip Tip Tip Cheeepeeeee : A Relativistic Keyboard Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 256–259. http://doi.org/10.5281/zenodo.1177103
Abstract
Download PDF DOI
Almost all traditional musical instruments have a one-to-one correspondence between a given fingering and the pitch that sounds for that fingering. The Samchillian Tip Tip Tip Cheeepeeeee does not — it is a keyboard MIDI controller that is based on intervals rather than fixed pitches. That is, a given keypress will sound a pitch a number of steps away from the last note sounded (within the key signature and scale selected) according to the ’delta’ value assigned to that key. The advantages of such a system are convenience, speed, and the ability to play difficult, unusual and/or unintended passages extemporaneously.
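A minimal sketch of the relative ("delta") keyboard idea described above; the key-to-delta assignments and the C major scale used here are hypothetical examples, not the Samchillian's actual implementation.

# Sketch: each keypress moves a number of scale steps from the last note sounded.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees as semitone offsets

def next_pitch(last_degree, delta, scale=C_MAJOR, base_midi=60):
    """Move `delta` scale steps from the last note; return (new_degree, MIDI note)."""
    new_degree = last_degree + delta
    octave, step = divmod(new_degree, len(scale))
    return new_degree, base_midi + 12 * octave + scale[step]

# Hypothetical key-to-delta assignments; a delta of 0 repeats the last pitch.
key_deltas = {'j': +1, 'k': +2, 'f': -1, 'd': -2, 'space': 0}
degree = 0
for key in ['j', 'j', 'k', 'f', 'space']:
    degree, note = next_pitch(degree, key_deltas[key])
    print(key, '->', note)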
@inproceedings{Gruenbaum2007, author = {Gruenbaum, Leon}, title = {The Samchillian Tip Tip Tip Cheeepeeeee : A Relativistic Keyboard Instrument}, pages = {256--259}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177103}, url = {http://www.nime.org/proceedings/2007/nime2007_256.pdf}, keywords = {samchillian, keyboard, MIDI controller, relative, interval, microtonal, computer keyboard, pitch, musical instrument } }
Jason Freeman. 2007. Graph Theory : Interfacing Audiences Into the Compositional Process. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 260–263. http://doi.org/10.5281/zenodo.1177095
Abstract
Download PDF DOI
Graph Theory links the creative music-making activities of web site visitors to the dynamic generation of an instrumental score for solo violin. Participants use a web-based interface to navigate among short, looping musical fragments to create their own unique path through the open-form composition. Before each concert performance, the violinist prints out a new copy of the score that orders the fragments based on the decisions made by web visitors.
@inproceedings{Freeman2007, author = {Freeman, Jason}, title = {Graph Theory : Interfacing Audiences Into the Compositional Process}, pages = {260--263}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177095}, url = {http://www.nime.org/proceedings/2007/nime2007_260.pdf}, keywords = {Music, Composition, Residency, Audience Interaction, Collaboration, Violin, Graph, Flash, Internet, Traveling Salesman. } }
Nicolas Villar, Hans Gellersen, Matt Jervis, and Alexander Lang. 2007. The ColorDex DJ System : A New Interface for Live Music Mixing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 264–269. http://doi.org/10.5281/zenodo.1179475
Abstract
Download PDF DOI
This paper describes the design and implementation of a new interface prototype for live music mixing. The ColorDex system employs a completely new operational metaphor which allows the mix DJ to prepare up to six tracks at once, and perform mixes between up to three of those at a time. The basic premises of the design are: 1) Build a performance tool that multiplies the possible choices a DJ has with respect to how and when tracks are prepared and mixed; 2) Design the system in such a way that the tool does not overload the performer with unnecessary complexity, and 3) Make use of novel technology to make the performance of live music mixing more engaging for both the performer and the audience. The core components of the system are: a software program to load, visualize and playback digitally encoded tracks; the HDDJ device (built chiefly out of a repurposed hard disk drive), which provides tactile manipulation of the playback speed and position of tracks; and the Cubic Crossfader, a wireless sensor cube that controls the volume of individual tracks and allows the DJ to mix these in interesting ways.
@inproceedings{Villar2007, author = {Villar, Nicolas and Gellersen, Hans and Jervis, Matt and Lang, Alexander}, title = {The ColorDex DJ System : A New Interface for Live Music Mixing}, pages = {264--269}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1179475}, url = {http://www.nime.org/proceedings/2007/nime2007_264.pdf}, keywords = {Novel interfaces, live music-mixing, cube-based interfaces, crossfading, repurposing HDDs, accelerometer-based cubic control } }
Luke Dahl, Nathan Whetsell, and John Van Stoecker. 2007. The WaveSaw : A Flexible Instrument for Direct Timbral Manipulation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 270–272. http://doi.org/10.5281/zenodo.1177079
Abstract
Download PDF DOI
In this paper, we describe a musical controller – the WaveSaw – for directly manipulating a wavetable. The WaveSaw consists of a long, flexible metal strip with handles on either end, somewhat analogous to a saw. The user plays the WaveSaw by holding the handles and bending the metal strip. We use sensors to measure the strip’s curvature and reconstruct its shape as a wavetable stored in a computer. This provides a direct gestural mapping from the shape of the WaveSaw to the timbral characteristics of the computer-generated sound. Additional sensors provide control of pitch, amplitude, and other musical parameters.
@inproceedings{Dahl2007, author = {Dahl, Luke and Whetsell, Nathan and Van Stoecker, John}, title = {The WaveSaw : A Flexible Instrument for Direct Timbral Manipulation}, pages = {270--272}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177079}, url = {http://www.nime.org/proceedings/2007/nime2007_270.pdf}, keywords = {Musical controller, Puredata, scanned synthesis, flex sensors. } }
Peter Bennett, Nicholas Ward, Sile O’Modhrain, and Pedro Rebelo. 2007. DAMPER : A Platform for Effortful Interface Development. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 273–276. http://doi.org/10.5281/zenodo.1177041
Abstract
Download PDF DOI
This paper proposes that the physicality of an instrument be considered an important aspect in the design of new interfaces for musical expression. The use of Laban’s theory of effort in the design of new effortful interfaces, in particular looking at effortspace modulation, is investigated, and a platform for effortful interface development (named the DAMPER) is described. Finally, future work is described and further areas of research are highlighted.
@inproceedings{Bennett2007, author = {Bennett, Peter and Ward, Nicholas and O'Modhrain, Sile and Rebelo, Pedro}, title = {DAMPER : A Platform for Effortful Interface Development}, pages = {273--276}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177041}, url = {http://www.nime.org/proceedings/2007/nime2007_273.pdf}, keywords = {Effortful Interaction. Haptics. Laban Analysis. Physicality. HCI. } }
Alexandre R. François, Elaine Chew, and Dennis Thurmond. 2007. Visual Feedback in Performer-Machine Interaction for Musical Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 277–280. http://doi.org/10.5281/zenodo.1177091
Abstract
Download PDF DOI
This paper describes the design of Mimi, a multi-modal interactive musical improvisation system that explores the potential and powerful impact of visual feedback in performer-machine interaction. Mimi is a performer-centric tool designed for use in performance and teaching. Its key and novel component is its visual interface, designed to provide the performer with instantaneous and continuous information on the state of the system. For human improvisation, in which context and planning are paramount, the relevant state of the system extends to the near future and recent past. Mimi’s visual interface allows for a peculiar blend of raw reflex typically associated with improvisation, and preparation and timing more closely affiliated with score-based reading. Mimi is not only an effective improvisation partner, it has also proven itself to be an invaluable platform through which to interrogate the mental models necessary for successful improvisation.
@inproceedings{Francois2007, author = {Fran\c{c}ois, Alexandre R. and Chew, Elaine and Thurmond, Dennis}, title = {Visual Feedback in Performer-Machine Interaction for Musical Improvisation}, pages = {277--280}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177091}, url = {http://www.nime.org/proceedings/2007/nime2007_277.pdf}, keywords = {Performer-machine interaction, visualization design, machine improvisation } }
Cornelius Poepel and Günter Marx. 2007. >hot_strings SIG. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 281–284. http://doi.org/10.5281/zenodo.1177221
Abstract
Download PDF DOI
Many fascinating new developments in the area of bowed string instruments have emerged in recent years. However, the majority of these new applications are not well known, used or considered in a broader context by their target users. The necessary exchange between the world of developers and the players is rather limited. A group of performers, researchers, instrument developers and composers was founded in order to share expertise and experiences and to give each other feedback on the work done to develop new instruments. Instruments incorporating new interfaces, synthesis methods, sensor technology, and new materials such as carbon fiber, wood composites and other composite materials, as well as research outcomes, are presented and discussed in the group. This paper gives an introduction to the group and reports on its activities and outcomes over the last two years.
@inproceedings{Poepel2007, author = {Poepel, Cornelius and Marx, G\"{u}nter}, title = {>hot\_strings SIG}, pages = {281--284}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177221}, url = {http://www.nime.org/proceedings/2007/nime2007_281.pdf}, keywords = {Interdisciplinary user group, electronic bowed string instrument, evaluation of computer based musical instruments } }
Andrew A. Cook and Graham Pullin. 2007. Tactophonics : Your Favourite Thing Wants to Sing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 285–288. http://doi.org/10.5281/zenodo.1177077
Abstract
Download PDF DOI
This paper describes a project, inspired by the theory of affordance, exploring the issues of visceral expression and audience engagement in the realm of computer performance. It describes interaction design research techniques used in a novel application to engage with and gain insight into the culture and mindset of the improvising musician. This research leads to the design and implementation of a prototype system that allows musicians to play an object of their choice as a musical instrument.
@inproceedings{Cook2007, author = {Cook, Andrew A. and Pullin, Graham}, title = {Tactophonics : Your Favourite Thing Wants to Sing}, pages = {285--288}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177077}, url = {http://www.nime.org/proceedings/2007/nime2007_285.pdf}, keywords = {affordance,background and problem space,cultural probes,design research,improvisation,interaction design,nime07,performance} }
Miguel A. Pérez, Benjamin Knapp, and Michael Alcorn. 2007. Díamair : Composing for Choir and Integral Music Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 289–292. http://doi.org/10.5281/zenodo.1177215
Abstract
Download PDF DOI
In this paper, we describe the composition of a piece for choir and Integral Music Controller. We focus more on the aesthetic, conceptual, and practical aspects of the interface and less on the technological details. We especially stress the influence that the designed interface has on the compositional process and how we approach the expressive organisation of musical materials during the composition of the piece, as well as the addition of nuances (personal real-time expression) by the musicians at performance time.
@inproceedings{Perez2007, author = {P\'{e}rez, Miguel A. and Knapp, Benjamin and Alcorn, Michael}, title = {D\'{\i}amair : Composing for Choir and Integral Music Controller}, pages = {289--292}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177215}, url = {http://www.nime.org/proceedings/2007/nime2007_289.pdf}, keywords = {Composition, Integral Music Controller, Emotion measurement, Physiological Measurement, Spatialisation. } }
Jose Fornari, Adolfo Jr. Maia, and Jonatas Manzolli. 2007. Interactive Spatialization and Sound Design using an Evolutionary System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 293–298. http://doi.org/10.5281/zenodo.1177089
Abstract
Download PDF DOI
We present an interactive sound spatialization and synthesis system based on an Interaural Time Difference (ITD) model and Evolutionary Computation. We define a Sonic Localization Field using sound attenuation and ITD azimuth angle parameters and, in order to control an adaptive algorithm, we use pairs of these parameters as Spatial Sound Genotypes (SSG). They are extracted from waveforms, which are considered individuals of a Population Set. A user interface receives input from a generic gesture interface (such as a NIME device) and interprets it as ITD cues. Trajectories provided by these signals are used as Target Sets of an evolutionary algorithm. A Fitness procedure locally optimizes the distance between the Target Set and the SSG pairs. Through a parametric score the user controls dynamic changes in the sound output.
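As background for the ITD azimuth parameter mentioned above, the sketch below computes a textbook interaural time difference using the Woodworth approximation. The head radius and speed of sound are standard assumed constants; the paper's exact ITD model and the evolutionary layer on top of it are not reproduced here.

# Sketch: Woodworth ITD approximation for a source at a given azimuth.
import math

HEAD_RADIUS = 0.0875    # metres, a typical assumed value
SPEED_OF_SOUND = 343.0  # m/s

def itd_seconds(azimuth_rad):
    """Approximate arrival-time difference between the two ears."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

# Example: a source 45 degrees off-centre arrives about 0.4 ms earlier at the near ear.
print(itd_seconds(math.radians(45)) * 1000, "ms")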
@inproceedings{Fornari2007, author = {Fornari, Jose and Maia, Adolfo Jr. and Manzolli, Jonatas}, title = {Interactive Spatialization and Sound Design using an Evolutionary System}, pages = {293--298}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177089}, url = {http://www.nime.org/proceedings/2007/nime2007_293.pdf}, keywords = {interactive, sound, spatialization, evolutionary, adaptation. } }
Anthony J. Hornof, Troy Rogers, and Tim Halverson. 2007. EyeMusic : Performing Live Music and Multimedia Compositions with Eye Movements. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 299–300. http://doi.org/10.5281/zenodo.1177121
Abstract
Download PDF DOI
In this project, eye tracking researchers and computer music composers collaborate to create musical compositions that are played with the eyes. A commercial eye tracker (LC Technologies Eyegaze) is connected to a music and multimedia authoring environment (Max/MSP/Jitter). The project addresses issues of both noise and control: How will the performance benefit from the noise inherent in eye trackers and eye movements, and to what extent should the composition encourage the performer to try to control a specific musical outcome? Providing one set of answers to these two questions, the authors create an eye-controlled composition, EyeMusic v1.0, which was selected by juries for live performance at computer music conferences.
@inproceedings{Hornof2007, author = {Hornof, Anthony J. and Rogers, Troy and Halverson, Tim}, title = {EyeMusic : Performing Live Music and Multimedia Compositions with Eye Movements}, pages = {299--300}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177121}, url = {http://www.nime.org/proceedings/2007/nime2007_299.pdf}, keywords = {H.5.2 [Information Interfaces and Presentation] User Interfaces --- input devices and strategies, interaction styles. J.5 [Arts and Humanities] Fine arts, performing arts. } }
Turner Kirk and Colby Leider. 2007. The FrankenPipe : A Novel Bagpipe Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 301–304. http://doi.org/10.5281/zenodo.1177151
Abstract
Download PDF DOI
The FrankenPipe project is an attempt to convert a traditional Highland Bagpipe into a controller capable of driving both real-time synthesis on a laptop and a radio-controlled (RC) car. Doing so engages musical creativity while enabling novel, often humorous, performance art. The chanter is outfitted with photoresistors (CdS photoconductive cells) underneath each hole, allowing a full range of MIDI values to be produced with each finger and giving the player a natural feel. An air-pressure sensor is also deployed in the bag to provide another element of control while capturing a fundamental element of bagpipe performance. The final product navigates the realm of both musical instrument and toy, allowing the performer to create a novel yet rich performance experience for the audience.
@inproceedings{Kirk2007, author = {Kirk, Turner and Leider, Colby}, title = {The FrankenPipe : A Novel Bagpipe Controller}, pages = {301--304}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177151}, url = {http://www.nime.org/proceedings/2007/nime2007_301.pdf}, keywords = {FrankenPipe, alternate controller, MIDI, bagpipe, photoresistor, chanter. } }
Antonio Camurri, Paolo Coletta, Giovanna Varni, and Simone Ghisio. 2007. Developing Multimodal Interactive Systems with EyesWeb XMI. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 305–308. http://doi.org/10.5281/zenodo.1177061
Abstract
Download PDF DOI
EyesWeb XMI (for eXtended Multimodal Interaction) is the new version of the well-known EyesWeb platform. It has a main focus on multimodality and the main design target of this new release has been to improve the ability to process and correlate several streams of data. It has been used extensively to build a set of interactive systems for performing arts applications for Festival della Scienza 2006, Genoa, Italy. The purpose of this paper is to describe the developed installations as well as the new EyesWeb features that helped in their development.
@inproceedings{Camurri2007a, author = {Camurri, Antonio and Coletta, Paolo and Varni, Giovanna and Ghisio, Simone}, title = {Developing Multimodal Interactive Systems with EyesWeb XMI}, pages = {305--308}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177061}, url = {http://www.nime.org/proceedings/2007/nime2007_305.pdf}, keywords = {EyesWeb, multimodal interactive systems, performing arts. } }
Matt Hoffman and Perry R. Cook. 2007. Real-Time Feature-Based Synthesis for Live Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 309–312. http://doi.org/10.5281/zenodo.1177117
Abstract
Download PDF DOI
A crucial set of decisions in digital musical instrument design deals with choosing mappings between parameters controlled by the performer and the synthesis algorithms that actually generate sound. Feature-based synthesis offers a way to parameterize audio synthesis in terms of the quantifiable perceptual characteristics, or features, the performer wishes the sound to take on. Techniques for accomplishing such mappings and enabling feature-based synthesis to be performed in real time are discussed. An example is given of how a real-time performance system might be designed to take advantage of feature-based synthesis’s ability to provide perceptually meaningful control over a large number of synthesis parameters.
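One common way to realize such a feature-to-parameter mapping is a nearest-neighbour lookup over a precomputed table of synthesis presets and their measured features. The sketch below illustrates that general idea with made-up parameter and feature values; it should not be read as the paper's actual method.

# Sketch: pick the synthesis parameters whose measured features are closest
# to the target features requested by the performer. All values are invented.
import numpy as np

params   = np.array([[0.1, 0.5], [0.8, 0.2], [0.4, 0.9]])   # hypothetical synthesis parameters
features = np.array([[0.2, 0.1], [0.7, 0.6], [0.5, 0.9]])   # e.g. brightness, noisiness per preset

def params_for(target_features):
    """Return the parameter row whose measured features are nearest the target."""
    dists = np.linalg.norm(features - np.asarray(target_features), axis=1)
    return params[np.argmin(dists)]

print(params_for([0.6, 0.5]))  # -> the second preset's parameters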
@inproceedings{Hoffman2007, author = {Hoffman, Matt and Cook, Perry R.}, title = {Real-Time Feature-Based Synthesis for Live Musical Performance}, pages = {309--312}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177117}, url = {http://www.nime.org/proceedings/2007/nime2007_309.pdf}, keywords = {Feature, Synthesis, Analysis, Mapping, Real-time. } }
Mitsuyo Hashida, Noriko Nagata, and Haruhiro Katayose. 2007. jPop-E : An Assistant System for Performance Rendering of Ensemble Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 313–316. http://doi.org/10.5281/zenodo.1177111
Abstract
Download PDF DOI
This paper introduces jPop-E (java-based PolyPhrase Ensemble), an assistant system for the Pop-E performance rendering system. Using this assistant system, MIDI data including expressive tempo changes or velocity control can be created based on the user’s musical intention. Pop-E (PolyPhrase Ensemble) is one of the few machine systems devoted to creating expressive musical performances that can deal with the structure of polyphonic music and the user’s interpretation of the music. A well-designed graphical user interface is required to make full use of the potential ability of Pop-E. In this paper, we discuss the necessary elements of the user interface for Pop-E, and describe the implemented system, jPop-E.
@inproceedings{Hashida2007, author = {Hashida, Mitsuyo and Nagata, Noriko and Katayose, Haruhiro}, title = {jPop-E : An Assistant System for Performance Rendering of Ensemble Music}, pages = {313--316}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177111}, url = {http://www.nime.org/proceedings/2007/nime2007_313.pdf}, keywords = {Performance Rendering, User Interface, Ensemble Music Expression } }
Mihir Sarkar and Barry Vercoe. 2007. Recognition and Prediction in a Network Music Performance System for Indian Percussion. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 317–320. http://doi.org/10.5281/zenodo.1177239
Abstract
Download PDF DOI
Playing music over the Internet, whether for real-time jamming, network performance or distance education, is constrained by the speed of light, which introduces, over long distances, time delays unsuitable for musical applications. Current musical collaboration systems generally transmit compressed audio streams over low-latency and high-bandwidth networks to optimize musician synchronization. This paper proposes an alternative approach based on pattern recognition and music prediction. Trained for a particular type of music, here the Indian tabla drum, the system called TablaNet identifies rhythmic patterns by recognizing individual strokes played by a musician and mapping them dynamically to known musical constructs. Symbols representing these musical structures are sent over the network to a corresponding computer system. The computer at the receiving end anticipates incoming events by analyzing previous phrases and synthesizes an estimated audio output. Although such a system may introduce variants due to prediction approximations, resulting in a slightly different musical experience at both ends, we find that it demonstrates a high level of playability with an immediacy not present in other systems, and functions well as an educational tool.
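The predict-at-the-receiver idea can be illustrated with a toy bigram model over stroke symbols: the receiver keeps simple transition statistics over incoming tabla bols and guesses the next one so it can be synthesized before the real event arrives. The code below is an illustrative stand-in with invented bol sequences, not TablaNet's actual recognizer or predictor.

# Sketch: bigram counts over received stroke symbols, used to predict the next stroke.
from collections import Counter, defaultdict

bigrams = defaultdict(Counter)

def observe(prev_bol, bol):
    """Update transition counts after each received stroke symbol."""
    bigrams[prev_bol][bol] += 1

def predict(prev_bol, default="na"):
    """Return the most likely next stroke given the previous one."""
    counts = bigrams.get(prev_bol)
    return counts.most_common(1)[0][0] if counts else default

# Example: after hearing a repeating phrase, the receiver anticipates "dha" after "ge".
phrase = ["dha", "ge", "dha", "ge", "dha", "tin", "na", "ge", "dha"]
for a, b in zip(phrase, phrase[1:]):
    observe(a, b)
print(predict("ge"))  # -> "dha"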
@inproceedings{Sarkar2007, author = {Sarkar, Mihir and Vercoe, Barry}, title = {Recognition and Prediction in a Network Music Performance System for {India}n Percussion}, pages = {317--320}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177239}, url = {http://www.nime.org/proceedings/2007/nime2007_317.pdf}, keywords = {network music performance, real-time online musical collaboration, Indian percussions, tabla bols, strokes recognition, music prediction} }
Benjamin Vigoda and David Merrill. 2007. JamiOki-PureJoy : A Game Engine and Instrument for Electronically-Mediated Musical Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 321–326. http://doi.org/10.5281/zenodo.1179473
Abstract
Download PDF DOI
JamiOki-PureJoy is a novel electronically mediated musical performance system. PureJoy is a musical instrument: a highly flexible looper, sampler, effects processor and sound manipulation interface based on Pure Data, with input from a joystick controller and headset microphone. PureJoy allows the player to essentially sculpt their voice with their hands. JamiOki is an engine for running group-player musical game pieces. JamiOki helps each player by ‘whispering instructions’ in their ear. Players track and control their progress through the game using a graphical display and a touch-sensitive footpad. JamiOki is an architecture for bringing groups of players together to express themselves musically in a way that is both spontaneous and formally satisfying. The flexibility of the PureJoy instrument gives JamiOki the ability for any player to take on any requested role in the music at any time. The musical structure provided by JamiOki helps PureJoy players create more complex pieces of music on the fly with spontaneous sounds, silences, themes, recapitulation, tight transitions, structural hierarchy, interesting interactions, and even friendly competition. As a combined system, JamiOki-PureJoy is exciting and fun to play.
@inproceedings{Vigoda2007, author = {Vigoda, Benjamin and Merrill, David}, title = {JamiOki-PureJoy : A Game Engine and Instrument for Electronically-Mediated Musical Improvisation}, pages = {321--326}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1179473}, url = {http://www.nime.org/proceedings/2007/nime2007_321.pdf}, keywords = {JamiOki, PureJoy, collaborative performance, structured improvisation, electronically-mediated performance, found sound} }
Daniel Gómez, Tjebbe Donner, and Andrés Posada. 2007. A Look at the Design and Creation of a Graphically Controlled Digital Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 327–329. http://doi.org/10.5281/zenodo.1177097
Abstract
Download PDF DOI
In this article we want to show how graphical languages can be used successfully for monitoring and controlling a digital musical instrument. An overview of the design and development stages of this instrument shows how we can create models which will simplify the control and use of different kinds of musical algorithms for synthesis and sequencing.
@inproceedings{Gomez2007, author = {G\'{o}mez, Daniel and Donner, Tjebbe and Posada, Andr\'{e}s}, title = {A Look at the Design and Creation of a Graphically Controlled Digital Musical Instrument}, pages = {327--329}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177097}, url = {http://www.nime.org/proceedings/2007/nime2007_327.pdf}, keywords = {nime07} }
Roy Vanegas. 2007. The MIDI Pick : Trigger Serial Data, Samples, and MIDI from a Guitar Pick. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 330–333. http://doi.org/10.5281/zenodo.1179471
Abstract
Download PDF DOI
The guitar pick has traditionally been used to strike or rake the strings of a guitar or bass, and in rarer instances, a shamisen, lute, or other stringed instrument. The pressure exerted on it, however, has until now been ignored. The MIDI Pick, an enhanced guitar pick, embraces this dimension, acting as a trigger for serial data, audio samples, MIDI messages, Max/MSP patches, and on/off messages. This added scope expands greatly the stringed instrument player’s musical dynamic in the studio or on stage.
@inproceedings{Vanegas2007, author = {Vanegas, Roy}, title = {The {MIDI} Pick : Trigger Serial Data , Samples, and {MIDI} from a Guitar Pick}, pages = {330--333}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1179471}, url = {http://www.nime.org/proceedings/2007/nime2007_330.pdf}, keywords = {guitar, MIDI, pick, plectrum, wireless, bluetooth, ZigBee, Arduino, NIME, ITP } }
Manjinder S. Benning, Michael McGuire, and Peter Driessen. 2007. Improved Position Tracking of a 3-D Gesture-Based Musical Controller Using a Kalman Filter. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 334–337. http://doi.org/10.5281/zenodo.1177043
Abstract
Download PDF DOI
This paper describes the design and experimentation of a Kalman Filter used to improve position tracking of a 3-D gesture-based musical controller known as the Radiodrum. The Singer dynamic model for target tracking is used to describe the evolution of a Radiodrum’s stick position in time. The autocorrelation time constant of a gesture’s acceleration and the variance of the gesture acceleration are used to tune the model to various performance modes. Multiple Kalman Filters tuned to each gesture type are run in parallel and an Interacting Multiple Model (IMM) is implemented to decide on the best combination of filter outputs to track the current gesture. Our goal is to accurately track Radiodrum gestures through noisy measurement signals.
@inproceedings{Benning2007, author = {Benning, Manjinder S. and McGuire, Michael and Driessen, Peter}, title = {Improved Position Tracking of a {3-D} Gesture-Based Musical Controller Using a {Kalman} Filter}, pages = {334--337}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177043}, url = {http://www.nime.org/proceedings/2007/nime2007_334.pdf}, keywords = {Kalman Filtering, Radiodrum, Gesture Tracking, Interacting Multiple Model} }
Noah H. Keating. 2007. The Lambent Reactive : An Audiovisual Environment for Kinesthetic Playforms. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 338–343. http://doi.org/10.5281/zenodo.1177141
Abstract
Download PDF DOI
In this paper, design scenarios made possible by the use of an interactive illuminated floor as the basis of an audiovisual environment are presented. By interfacing a network of pressure sensitive, light-emitting tiles with a 7.1 channel speaker system and requisite audio software, many avenues for collaborative expression emerge, as do heretofore unexplored modes of multiplayer music and dance gaming. By giving users light and sound cues that both guide and respond to their movement, a rich environment is created that playfully integrates the auditory, the visual, and the kinesthetic into a unified interactive experience.
@inproceedings{Keating2007, author = {Keating, Noah H.}, title = {The Lambent Reactive : An Audiovisual Environment for Kinesthetic Playforms}, pages = {338--343}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177141}, url = {http://www.nime.org/proceedings/2007/nime2007_338.pdf}, keywords = {Responsive Environments, Audiovisual Play, Kinetic Games, Movement Rich Game Play, Immersive Dance, Smart Floor } }
Adam M. Stark, Mark D. Plumbley, and Matthew E. Davies. 2007. Real-Time Beat-Synchronous Audio Effects. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 344–345. http://doi.org/10.5281/zenodo.1177249
Abstract
Download PDF DOI
We present a new group of audio effects that use beat tracking, the detection of beats in an audio signal, to relate effect parameters to the beats in an input signal. Conventional audio effects are augmented so that their operation is related to the output of a beat tracking system. We present a tempo-synchronous delay effect and a set of beat-synchronous low frequency oscillator effects including tremolo, vibrato and auto-wah. All effects are implemented in real-time as VST plug-ins to allow for their use in live performance.
@inproceedings{Stark2007, author = {Stark, Adam M. and Plumbley, Mark D. and Davies, Matthew E.}, title = {Real-Time Beat-Synchronous Audio Effects}, pages = {344--345}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177249}, url = {http://www.nime.org/proceedings/2007/nime2007_344.pdf}, keywords = {audio effects, beat tracking, beat-synchronous tremolo effect, real-time, VST plug-in, nime07} }
Harris Wulfson, G. Douglas Barrett, and Michael Winter. 2007. Automatic Notation Generators. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 346–351. http://doi.org/10.5281/zenodo.1176861
Abstract
Download PDF DOI
This article presents various custom software tools called Automatic Notation Generators (ANGs) developed by the authors to aid in the creation of algorithmic instrumental compositions. The unique possibilities afforded by ANG software are described, along with relevant examples of their compositional output. These avenues of exploration include: mappings of spectral data directly into notated music, the creation of software transcribers that enable users to generate multiple realizations of algorithmic compositions, and new types of spontaneous performance with live generated screen-based music notation. The authors present their existing software tools along with suggestions for future research and artistic inquiry.
@inproceedings{Wulfson2007, author = {Wulfson, Harris and Barrett, G. Douglas and Winter, Michael}, title = {Automatic Notation Generators}, pages = {346--351}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1176861}, url = {http://www.nime.org/proceedings/2007/nime2007_346.pdf}, keywords = {nime07} }
Diana Young and Anagha Deshmane. 2007. Bowstroke Database : A Web-Accessible Archive of Violin Bowing Data. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 352–357. http://doi.org/10.5281/zenodo.1179481
Abstract
Download PDF DOI
This paper presents a newly created database containing calibrated gesture and audio data corresponding to various violin bowstrokes, as well as video and motion capture data in some cases. The database is web-accessible and searchable by keywords and subject. It also has several important features designed to improve accessibility to the data and to foster collaboration between researchers in fields related to bowed string synthesis, acoustics, and gesture.
@inproceedings{Young2007, author = {Young, Diana and Deshmane, Anagha}, title = {Bowstroke Database : A Web-Accessible Archive of Violin Bowing Data}, pages = {352--357}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1179481}, url = {http://www.nime.org/proceedings/2007/nime2007_352.pdf}, keywords = {violin, bowed string, bowstroke, bowing, bowing parameters, technique, gesture, audio } }
Jan C. Schacher. 2007. Gesture Control of Sounds in 3D Space. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 358–362. http://doi.org/10.5281/zenodo.1177241
Abstract
Download PDF DOI
This paper presents a methodology and a set of tools for gesture control of sources in 3D surround sound. The techniques for rendering acoustic events on multi-speaker or headphone-based surround systems have evolved considerably, making it possible to use them in real-time performances on light equipment. Controlling the placement of sound sources is usually done in idiosyncratic ways and has not yet been fully explored and formalized. This issue is addressed here with the proposition of a methodical approach. The mapping of gestures to source motion is implemented by giving the sources physical object properties and manipulating these characteristics with standard geometrical transforms through hierarchical or emergent relationships.
@inproceedings{Schacher2007, author = {Schacher, Jan C.}, title = {Gesture Control of Sounds in {3D} Space}, pages = {358--362}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177241}, url = {http://www.nime.org/proceedings/2007/nime2007_358.pdf}, keywords = {Gesture, Surround Sound, Mapping, Trajectory, Transform Matrix, Tree Hierarchy, Emergent Structures. } }
Alexandre T. Porres and Jonatas Manzolli. 2007. Adaptive Tuning Using Theremin as Gestural Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 363–366. http://doi.org/10.5281/zenodo.1177223
Abstract
Download PDF DOI
This work presents an interactive device to control an adaptive tuning and synthesis system. The gestural controller is based on the theremin concept in which only an antenna is used as a proximity sensor. This interactive process is guided by sensorial consonance curves and adaptive tuning related to psychoacoustical studies. We used an algorithm to calculate the dissonance values according to amplitudes and frequencies of a given sound spectrum. The theoretical background is presented followed by interactive composition strategies and sound results.
@inproceedings{Porres2007, author = {Porres, Alexandre T. and Manzolli, Jonatas}, title = {Adaptive Tuning Using Theremin as Gestural Controller}, pages = {363--366}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177223}, url = {http://www.nime.org/proceedings/2007/nime2007_363.pdf}, keywords = {Interaction, adaptive tuning, theremin, sensorial dissonance, synthesis. } }
William Hsu. 2007. Design Issues in Interaction Modeling for Free Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 367–370. http://doi.org/10.5281/zenodo.1177123
Abstract
Download PDF DOI
In previous publications (see for example [2] and [3]), we described an interactive music system, designed to improvise with saxophonist John Butcher; our system analyzes timbral and gestural features in real-time, and uses this information to guide response generation. This paper overviews our recent work with the system’s interaction management component (IMC). We explore several options for characterizing improvisation at a higher level, and managing decisions for interactive performance in a rich timbral environment. We developed a simple, efficient framework using a small number of features suggested by recent work in mood modeling in music. We describe and evaluate the first version of the IMC, which was used in performance at the Live Algorithms for Music (LAM) conference in December 2006. We touch on developments on the system since LAM, and discuss future plans to address perceived shortcomings in responsiveness, and the ability of the system to make long-term adaptations.
@inproceedings{Hsu2007, author = {Hsu, William}, title = {Design Issues in Interaction Modeling for Free Improvisation}, pages = {367--370}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177123}, url = {http://www.nime.org/proceedings/2007/nime2007_367.pdf}, keywords = {Interactive music systems, timbral analysis, free improvisation. } }
Sylvain le Groux, Jonatas Manzolli, and Paul F. Verschure. 2007. VR-RoBoser : Real-Time Adaptive Sonification of Virtual Environments Based on Avatar Behavior. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 371–374. http://doi.org/10.5281/zenodo.1177101
Abstract
Download PDF DOI
Until recently, the sonification of Virtual Environments had often been reduced to its simplest expression. Too often soundscapes and background music are predetermined, repetitive and somewhat predictable. Yet, there is room for more complex and interesting sonification schemes that can improve the sensation of presence in a Virtual Environment. In this paper we propose a system that automatically generates original background music in real-time called VR-RoBoser. As a test case we present the application of VR-RoBoser to a dynamic avatar that explores its environment. We show that the musical events are directly and continuously generated and influenced by the behavior of the avatar in three-dimensional virtual space, generating a context dependent sonification.
@inproceedings{Groux2007, author = {le Groux, Sylvain and Manzolli, Jonatas and Verschure, Paul F.}, title = {VR-RoBoser : Real-Time Adaptive Sonification of Virtual Environments Based on Avatar Behavior}, pages = {371--374}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177101}, url = {http://www.nime.org/proceedings/2007/nime2007_371.pdf}, keywords = {Real-time Composition, Interactive Sonification, Real-time Neural Processing, Multimedia, Virtual Environment, Avatar. } }
Hans-Christoph Steiner, David Merrill, and Olaf Matthes. 2007. A Unified Toolkit for Accessing Human Interface Devices in Pure Data and Max / MSP. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 375–378. http://doi.org/10.5281/zenodo.1177251
Abstract
Download PDF DOI
In this paper we discuss our progress on the HID toolkit, a collection of software modules for the Pure Data and Max/MSP programming environments that provide unified, user-friendly and cross-platform access to human interface devices (HIDs) such as joysticks, digitizer tablets, and stomp-pads. These HIDs are ubiquitous, inexpensive and capable of sensing a wide range of human gesture, making them appealing interfaces for interactive media control. However, it is difficult to utilize many of these devices for custom-made applications, particularly for novices. The modules we discuss in this paper are [hidio], which handles incoming and outgoing data between a patch and a HID, and [input noticer], which monitors HID plug/unplug events. The goal in creating these modules is to preserve maximal flexibility in accessing the input and output capabilities of HIDs, in a manner that is approachable for both sophisticated and beginning designers. This paper documents our design notes and implementation considerations, current progress, and ideas for future extensions to the HID toolkit.
@inproceedings{Steiner2007, author = {Steiner, Hans-Christoph and Merrill, David and Matthes, Olaf}, title = {A Unified Toolkit for Accessing Human Interface Devices in Pure Data and Max / MSP}, pages = {375--378}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177251}, url = {http://www.nime.org/proceedings/2007/nime2007_375.pdf}, keywords = {nime07} }
Doug Van Nort and Marcelo M. Wanderley. 2007. Control Strategies for Navigation of Complex Sonic Spaces Transformation of Resonant Models. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 379–383. http://doi.org/10.5281/zenodo.1179469
Abstract
Download PDF DOI
This paper describes musical experiments aimed at designing control structures for navigating complex and continuous sonic spaces. The focus is on sound processing techniques which contain a high number of control parameters, and which exhibit subtle and interesting micro-variations and textural qualities when controlled properly. The examples all use a simple low-dimensional controller — a standard graphics tablet — and the task of intimate and subtle textural manipulations is left to the design of proper mappings, created using a custom toolbox of mapping functions. This work further acts to contextualize past theoretical results by the given musical presentations, and arrives at some conclusions about the interplay between musical intention, control strategies and the process of their design.
@inproceedings{Nort2007, author = {Van Nort, Doug and Wanderley, Marcelo M.}, title = {Control Strategies for Navigation of Complex Sonic Spaces Transformation of Resonant Models}, pages = {379--383}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1179469}, url = {http://www.nime.org/proceedings/2007/nime2007_379.pdf}, keywords = {Mapping, Control, Sound Texture, Musical Gestures } }
André Knörig, Boris Müller, and Reto Wettach. 2007. Articulated Paint : Musical Expression for Non-Musicians. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 384–385. http://doi.org/10.5281/zenodo.1177155
Abstract
Download PDF DOI
In this paper we present the concept and prototype of a new musical interface that utilizes the close relationship between gestural expression in the act of painting and that of playing a musical instrument in order to provide non-musicians the opportunity to create musical expression. A physical brush on a canvas acts as the instrument. The characteristics of its stroke are intuitively mapped to a conductor program, defining expressive parameters of the tone in real-time. Two different interaction modes highlight the importance of bodily expression in making music as well as the value of a metaphorical visual representation.
@inproceedings{Knorig2007, author = {Kn\"{o}rig, Andr\'{e} and M\"{u}ller, Boris and Wettach, Reto}, title = {Articulated Paint : Musical Expression for Non-Musicians}, pages = {384--385}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177155}, url = {http://www.nime.org/proceedings/2007/nime2007_384.pdf}, keywords = {musical interface, musical expression, expressive gesture, musical education, natural interface} }
Tetsuaki Baba, Taketoshi Ushiama, and Kiyoshi Tomimatsu. 2007. Freqtric Drums : A Musical Instrument that Uses Skin Contact as an Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 386–387. http://doi.org/10.5281/zenodo.1177037
Abstract
Download PDF DOI
Freqtric Drums is a new musical, corporal electronic instrument that allows us not only to recover face-to-face communication, but also makes body-to-body communication possible, so that a self-image based on the sense of being a separate body can be significantly altered through an openness to, and even a sense of becoming part of, another body. Freqtric Drums is a device that turns audiences surrounding a performer into drums so that the performer, as a drummer, can communicate with audience members as if they were a set of drums. We describe our concept and the implementation and process of evolution of Freqtric Drums.
@inproceedings{Baba2007, author = {Baba, Tetsuaki and Ushiama, Taketoshi and Tomimatsu, Kiyoshi}, title = {Freqtric Drums : A Musical Instrument that Uses Skin Contact as an Interface}, pages = {386--387}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177037}, url = {http://www.nime.org/proceedings/2007/nime2007_386.pdf}, keywords = {interpersonal communication, musical instrument, interaction design, skin contact, touch } }
Chang Min Han. 2007. Project Scriabin v.3. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 388–389. http://doi.org/10.5281/zenodo.1177109
Abstract
Download PDF DOI
Project Scriabin is an interactive implementation of Alexander Scriabin’s experimentation with “opposite mapping direction”, that is, mapping from hue (colour) to pitch (sound). The main colour-to-sound coding was implemented using Scriabin’s colour scale.
@inproceedings{Han2007, author = {Han, Chang Min}, title = {Project Scriabin v.3}, pages = {388--389}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177109}, url = {http://www.nime.org/proceedings/2007/nime2007_388.pdf}, keywords = {Synaesthesia, Sonification, Touch Screen} }
Ginevra Castellano, Roberto Bresin, Antonio Camurri, and Gualtiero Volpe. 2007. Expressive Control of Music and Visual Media by Full-Body Movement. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 390–391. http://doi.org/10.5281/zenodo.1177065
Abstract
Download PDF DOI
In this paper we describe a system which allows users to use their full-body for controlling in real-time the generation of an expressive audio-visual feedback. The system extracts expressive motion features from the user’s full-body movements and gestures. The values of these motion features are mapped both onto acoustic parameters for the real-time expressive rendering of a piece of music, and onto real-time generated visual feedback projected on a screen in front of the user.
@inproceedings{Castellano2007, author = {Castellano, Ginevra and Bresin, Roberto and Camurri, Antonio and Volpe, Gualtiero}, title = {Expressive Control of Music and Visual Media by Full-Body Movement}, pages = {390--391}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177065}, url = {http://www.nime.org/proceedings/2007/nime2007_390.pdf}, keywords = {Expressive interaction; multimodal environments; interactive music systems } }
Adam R. Tindale. 2007. A Hybrid Method for Extended Percussive Gesture. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 392–393. http://doi.org/10.5281/zenodo.1177259
Abstract
Download PDF DOI
This paper describes a hybrid method to allow drummers to expressively utilize electronics. Commercial electronic drum hardware is made more expressive by replacing the sample playback “drum brain” with a physical modeling algorithm implemented in Max/MSP. Timbre recognition techniques identify striking implement and location as symbolic data that can be used to modify the parameters of the physical model.
@inproceedings{Tindale2007, author = {Tindale, Adam R.}, title = {A Hybrid Method for Extended Percussive Gesture}, pages = {392--393}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177259}, url = {http://www.nime.org/proceedings/2007/nime2007_392.pdf}, keywords = {electronic percussion,nime07,physical modeling,timbre recognition} }
Paolo Bottoni, Riccardo Caporali, Daniele Capuano, Stefano Faralli, Anna Labella, and Mario Pierro. 2007. Use of a Dual-Core DSP in a Low-Cost, Touch-Screen Based Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 394–395. http://doi.org/10.5281/zenodo.1177053
Abstract
Download PDF DOI
This paper reports our experiments on using a dual-core DSP processor in the construction of a user-programmable musical instrument and controller called the TouchBox.
@inproceedings{Bottoni2007, author = {Bottoni, Paolo and Caporali, Riccardo and Capuano, Daniele and Faralli, Stefano and Labella, Anna and Pierro, Mario}, title = {Use of a Dual-Core {DSP} in a Low-Cost, Touch-Screen Based Musical Instrument}, pages = {394--395}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177053}, url = {http://www.nime.org/proceedings/2007/nime2007_394.pdf}, keywords = {dual-core, DSP, touch-screen, synthesizer, controller } }
Junichi Kanebako, James Gibson, and Laurent Mignonneau. 2007. Mountain Guitar : a Musical Instrument for Everyone. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 396–398. http://doi.org/10.5281/zenodo.1177133
Abstract
Download PDF DOI
This instrument is part of the “Gangu Project” at IAMAS, which aims to develop digital toys for improving children’s social behavior in the future. It was further developed as part of the IAMAS-Interface Cultures exchange program. “Mountain Guitar” is a new musical instrument that enables musical expression through a custom-made sensor technology, which captures the height at which the instrument is held and maps it to the musical outcome during the playing session. One of the goals of “Mountain Guitar” is to let untrained users easily and intuitively play guitar through their body movements. In addition to capturing the users’ body movements, “Mountain Guitar” also simulates standard guitar playing techniques such as vibrato, choking, and mute. “Mountain Guitar’s” goal is to provide playing pleasure for guitar training sessions. This poster describes the “Mountain Guitar’s” fundamental principles and its mode of operation.
@inproceedings{Kanebako2007, author = {Kanebako, Junichi and Gibson, James and Mignonneau, Laurent}, title = {Mountain Guitar : a Musical Instrument for Everyone}, pages = {396--398}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177133}, url = {http://www.nime.org/proceedings/2007/nime2007_396.pdf}, keywords = {Musical Expression, Guitar Instrument, MIDI to sensor mapping, Physical Computing, Intuitive Interaction} }
Marc Sirguy and Emmanuelle Gallin. 2007. Eobody2 : A Follow-up to Eobody’s Technology. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 401–402. http://doi.org/10.5281/zenodo.1177247
Abstract
Download PDF DOI
Eowave and Ircam have been deeply involved in gesture analysis and sensing for several years now, as several artistic projects demonstrate (1). In 2004, Eowave began working with Ircam on the development of the Eobody sensor system, and since then, Eowave’s range of sensors has grown with new sensors, sometimes developed in close collaboration with artists, for custom sensor systems for installations and performances. This demo-paper describes the recent design of a new USB/MIDI-to-sensor interface called Eobody2.
@inproceedings{Sirguy2007, author = {Sirguy, Marc and Gallin, Emmanuelle}, title = {Eobody2 : A Follow-up to Eobody's Technology}, pages = {401--402}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177247}, url = {http://www.nime.org/proceedings/2007/nime2007_401.pdf}, keywords = {Gestural controller, Sensor, MIDI, USB, Computer music, Relays, Motors, Robots, Wireless. } }
Bernie C. Till, Manjinder S. Benning, and Nigel Livingston. 2007. Wireless Inertial Sensor Package (WISP). Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 403–404. http://doi.org/10.5281/zenodo.1177257
Abstract
Download PDF DOI
The WISP is a novel wireless sensor that uses 3 axis magnetometers, accelerometers, and rate gyroscopes to provide a real-time measurement of its own orientation in space. Orientation data are transmitted via the Open Sound Control protocol (OSC) to a synthesis engine for interactive live dance performance.
@inproceedings{Till2007, author = {Till, Bernie C. and Benning, Manjinder S. and Livingston, Nigel}, title = {Wireless Inertial Sensor Package (WISP)}, pages = {403--404}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177257}, url = {http://www.nime.org/proceedings/2007/nime2007_403.pdf}, keywords = {Music Controller, Human-Computer Interaction, Wireless Sensing, Inertial Sensing. } }
Stefan Loewenstein. 2007. "Acoustic Map" – An Interactive Cityportrait. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 405–406. http://doi.org/10.5281/zenodo.1177167
Abstract
Download PDF DOI
The “Acoustic Map” is an interactive sound installation developed for the “Hallakustika” Festival in Hall (Tyrol, Austria) using the motion tracking software EyesWeb and Max/MSP. For NIME 07, a simulation of the motion tracking part of the original work will be shown. Its aim was to create an interactive city portrait of the city of Hall and to offer the possibility to enhance six sites of the city on an acoustical basis with what I called an “acoustic zoom”.
@inproceedings{Loewenstein2007, author = {Loewenstein, Stefan}, title = {"Acoustic Map" -- An Interactive Cityportrait}, pages = {405--406}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177167}, url = {http://www.nime.org/proceedings/2007/nime2007_405.pdf}, keywords = {nime07} }
Tomoko Hashida, Takeshi Naemura, and Takao Sato. 2007. A System for Improvisational Musical Expression Based on Player’s Sense of Tempo. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 407–408. http://doi.org/10.5281/zenodo.1177113
Abstract
Download PDF DOI
This paper introduces a system for improvisational musical expression that enables all users, novice and experienced, to perform intuitively and expressively. Users can generate musically consistent results through intuitive actions, inputting rhythms at a decent tempo. We demonstrate novel mapping methods that reflect the user’s input information more interactively and effectively in generating the music. We also present various input devices that allow users more creative liberty.
@inproceedings{Hashida2007a, author = {Hashida, Tomoko and Naemura, Takeshi and Sato, Takao}, title = {A System for Improvisational Musical Expression Based on Player's Sense of Tempo}, pages = {407--408}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177113}, url = {http://www.nime.org/proceedings/2007/nime2007_407.pdf}, keywords = {Improvisation, interactive music, a sense of tempo } }
Misako Nakamoto and Yasuo Kuhara. 2007. Circle Canon Chorus System Used To Enjoy A Musical Ensemble Singing "Frog Round". Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 409–410. http://doi.org/10.5281/zenodo.1177207
Abstract
Download PDF DOI
We propose a circle canon system for enjoying a musical ensemble supported by a computer and network. Using the song Frog Round, a popular circle canon chorus originating from a German folk song, we created a singing ensemble opportunity in which everyone plays the music together at the same time. The aim of our system is that anyone can experience the joyful feeling of actually playing the music as well as sharing it with others.
@inproceedings{Nakamoto2007, author = {Nakamoto, Misako and Kuhara, Yasuo}, title = {Circle Canon Chorus System Used To Enjoy A Musical Ensemble Singing "Frog Round"}, pages = {409--410}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177207}, url = {http://www.nime.org/proceedings/2007/nime2007_409.pdf}, keywords = {Circle canon, Chorus, Song, Frog round, Ensemble, Internet, Max/MSP, MySQL database. } }
Rui Pereira. 2007. Loop-R : Real-Time Video Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 411–414. http://doi.org/10.5281/zenodo.1177219
Abstract
Download PDF DOI
Loop-R is a real-time video performance tool, based on the exploration of low-tech, used technology and human engineering research. With this tool its author is giving a shout to industry, using existing and mistreated technology in innovative ways, combining concepts and interfaces: blending segregated interfaces (GUI and physical) into one. After graspable interfaces and the “end” of WIMP interfaces, hardware and software blend into a new genre, providing free control of video loops in an expressive hybrid tool.
@inproceedings{Estrada2007, author = {Pereira, Rui}, title = {Loop-R : Real-Time Video Interface}, pages = {411--414}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177219}, url = {http://www.nime.org/proceedings/2007/nime2007_411.pdf}, keywords = {Real-time; video; interface; live-visuals; loop; } }
Jane Rigler and Zachary Seldess. 2007. The Music Cre8tor : an Interactive System for Musical Exploration and Education. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 415–416. http://doi.org/10.5281/zenodo.1177227
Abstract
Download PDF DOI
The Music Cre8tor is an interactive music composition system controlled by motion sensors, specifically designed for children with disabilities although not exclusively for this population. The player(s) of the Music Cre8tor can either hold or attach accelerometer sensors to trigger a variety of computer-generated sounds, MIDI instruments and/or pre-recorded sound files. The sensitivity of the sensors can be modified for each unique individual so that even the smallest movement can control a sound. The flexibility of the system is such that either four people can play simultaneously and/or one or more players can use up to four sensors. The original goal of this program was to empower students with disabilities to create music and encourage them to perform with other musicians; however, this same goal has expanded to include other populations.
@inproceedings{Rigler2007, author = {Rigler, Jane and Seldess, Zachary}, title = {The Music Cre8tor : an Interactive System for Musical Exploration and Education}, pages = {415--416}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177227}, url = {http://www.nime.org/proceedings/2007/nime2007_415.pdf}, keywords = {Music Education, disabilities, special education, motion sensors, music composition, interactive performance. } }
Carlos Guedes. 2007. Establishing a Musical Channel of Communication between Dancers and Musicians in Computer-Mediated Collaborations in Dance Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 417–419. http://doi.org/10.5281/zenodo.1177105
Abstract
Download PDF DOI
In this demonstration, I exemplify how a musical channel of communication can be established in computer-mediated interaction between musicians and dancers in real time. This channel of communication uses a software library, implemented as a library of external objects for Max/MSP [1], that processes data from an object or library that performs frame-differencing analysis of a video stream in real time in this programming environment.
@inproceedings{Guedes2007, author = {Guedes, Carlos}, title = {Establishing a Musical Channel of Communication between Dancers and Musicians in Computer-Mediated Collaborations in Dance Performance}, pages = {417--419}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177105}, url = {http://www.nime.org/proceedings/2007/nime2007_417.pdf}, keywords = {dance,in dance,interaction between music and,interactive,interactive dance,interactive performance,musical rhythm and rhythm,nime07,performance systems} }
Steve Bull, Scot Gresham-Lancaster, Kalin Mintchev, and Terese Svoboda. 2007. Cellphonia : WET. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 420–420. http://doi.org/10.5281/zenodo.1177057
BibTeX
Download PDF DOI
@inproceedings{Bull2007, author = {Bull, Steve and Gresham-Lancaster, Scot and Mintchev, Kalin and Svoboda, Terese}, title = {Cellphonia : WET}, pages = {420--420}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177057}, url = {http://www.nime.org/proceedings/2007/nime2007_420.pdf}, keywords = {nime07} }
Collective Dearraindrop. 2007. Miller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 421–421. http://doi.org/10.5281/zenodo.1177083
BibTeX
Download PDF DOI
@inproceedings{Court2007, author = {Dearraindrop, Collective}, title = {{Miller}}, pages = {421--421}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177083}, url = {http://www.nime.org/proceedings/2007/nime2007_421.pdf}, keywords = {nime07} }
Sibylle Hauert, Daniel Reichmuth, and Volker Böhm. 2007. Instant City, a Music Building Game Table. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 422–422. http://doi.org/10.5281/zenodo.1177115
BibTeX
Download PDF DOI
@inproceedings{Hauert2007, author = {Hauert, Sibylle and Reichmuth, Daniel and B\"{o}hm, Volker}, title = {Instant City, a Music Building Game Table}, pages = {422--422}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177115}, url = {http://www.nime.org/proceedings/2007/nime2007_422.pdf}, keywords = {nime07} }
Andrew Milmoe. 2007. NIME Performance & Installation : Sonic Pong V3.0. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 423–423. http://doi.org/10.5281/zenodo.1177197
BibTeX
Download PDF DOI
@inproceedings{Milmoe2007, author = {Milmoe, Andrew}, title = {NIME Performance \& Installation : Sonic Pong V3.0}, pages = {423--423}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177197}, url = {http://www.nime.org/proceedings/2007/nime2007_423.pdf}, keywords = {nime07} }
Betsey Biggs. 2007. The Tipping Point. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 424–424. http://doi.org/10.5281/zenodo.1177047
BibTeX
Download PDF DOI
@inproceedings{Biggs2007, author = {Biggs, Betsey}, title = {The Tipping Point}, pages = {424--424}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177047}, url = {http://www.nime.org/proceedings/2007/nime2007_424.pdf}, keywords = {nime07} }
Simon Morris. 2007. Musique Concrete : Transforming Space, Sound and the City Through Skateboarding. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 425–425. http://doi.org/10.5281/zenodo.1177203
BibTeX
Download PDF DOI
@inproceedings{Morris2007, author = {Morris, Simon}, title = {Musique Concrete : Transforming Space , Sound and the City Through Skateboarding}, pages = {425--425}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177203}, url = {http://www.nime.org/proceedings/2007/nime2007_425.pdf}, keywords = {nime07} }
Yuta Uozumi, Masato Takahashi, and Ryoho Kobayashi. 2007. Bd : A Sound Installation with Swarming Robots. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 426–426. http://doi.org/10.5281/zenodo.1179467
BibTeX
Download PDF DOI
@inproceedings{Uozumi2007, author = {Uozumi, Yuta and Takahashi, Masato and Kobayashi, Ryoho}, title = {Bd : A Sound Installation with Swarming Robots}, pages = {426--426}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1179467}, url = {http://www.nime.org/proceedings/2007/nime2007_426.pdf}, keywords = {nime07} }
Stanza. 2007. Sensity. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 427–427. http://doi.org/10.5281/zenodo.1177029
BibTeX
Download PDF DOI
@inproceedings{Stanza2007, author = {Stanza}, title = {Sensity}, pages = {427--427}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177029}, url = {http://www.nime.org/proceedings/2007/nime2007_427.pdf}, keywords = {nime07} }
Adriana Sa. 2007. Thresholds. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 428–428. http://doi.org/10.5281/zenodo.1177235
BibTeX
Download PDF DOI
@inproceedings{Sa2007, author = {Sa, Adriana}, title = {Thresholds}, pages = {428--428}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177235}, url = {http://www.nime.org/proceedings/2007/nime2007_428.pdf}, keywords = {nime07} }
Masato Takahashi and Hiroya Tanaka. 2007. bog : Instrumental Aliens. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 429–429. http://doi.org/10.5281/zenodo.1177253
BibTeX
Download PDF DOI
@inproceedings{Takahashi2007, author = {Takahashi, Masato and Tanaka, Hiroya}, title = {bog : Instrumental Aliens}, pages = {429--429}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177253}, url = {http://www.nime.org/proceedings/2007/nime2007_429.pdf}, keywords = {nime07} }
Julian Oliver and Steven Pickles. 2007. Fijuu2 : A Game-Based Audio-Visual Performance and Composition Engine. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 430–430. http://doi.org/10.5281/zenodo.1177213
BibTeX
Download PDF DOI
@inproceedings{Oliver2007, author = {Oliver, Julian and Pickles, Steven}, title = {Fijuu2 : A Game-Based Audio-Visual Performance and Composition Engine}, pages = {430--430}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177213}, url = {http://www.nime.org/proceedings/2007/nime2007_430.pdf}, keywords = {nime07} }
Jinsil Seo and Greg Corness. 2007. nite_aura : An Audio-Visual Interactive Immersive Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 431–431. http://doi.org/10.5281/zenodo.1177243
BibTeX
Download PDF DOI
@inproceedings{Corness2007, author = {Seo, Jinsil and Corness, Greg}, title = {nite\_aura : An Audio-Visual Interactive Immersive Installation}, pages = {431--431}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177243}, url = {http://www.nime.org/proceedings/2007/nime2007_431.pdf}, keywords = {nime07} }
Miguel Álvarez-Fernández, Stefan Kersten, and Asia Piascik. 2007. Soundanism. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 432–432. http://doi.org/10.5281/zenodo.1177031
BibTeX
Download PDF DOI
@inproceedings{Historia2007, author = {\'{A}lvarez-Fern\'{a}ndez, Miguel and Kersten, Stefan and Piascik, Asia}, title = {Soundanism}, pages = {432--432}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177031}, url = {http://www.nime.org/proceedings/2007/nime2007_432.pdf}, keywords = {nime07} }
Alexandre Quessy. 2007. Human Sequencer. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 433–433. http://doi.org/10.5281/zenodo.1177225
BibTeX
@inproceedings{Quessy2007, author = {Quessy, Alexandre}, title = {Human Sequencer}, pages = {433--433}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2007}, address = {New York City, NY, United States}, issn = {2220-4806}, doi = {10.5281/zenodo.1177225}, url = {http://www.nime.org/proceedings/2007/nime2007_433.pdf}, keywords = {nime07} }
2006
Lalya Gaye, Lars E. Holmquist, Frauke Behrendt, and Atau Tanaka. 2006. Mobile Music Technology: Report on an Emerging Community. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 22–25. http://doi.org/10.5281/zenodo.1176909
BibTeX
Download PDF DOI
@inproceedings{Gaye2006, author = {Gaye, Lalya and Holmquist, Lars E. and Behrendt, Frauke and Tanaka, Atau}, title = {Mobile Music Technology: Report on an Emerging Community}, pages = {22--25}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176909}, url = {http://www.nime.org/proceedings/2006/nime2006_022.pdf} }
Atau Tanaka and Petra Gemeinboeck. 2006. A Framework for Spatial Interaction in Locative Media. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 26–30. http://doi.org/10.5281/zenodo.1177013
Abstract
Download PDF DOI
This paper presents the concepts and techniques used in a family of location based multimedia works. The paper has three main sections: 1.) to describe the architecture of an audio-visual hardware/software framework we have developed for the realization of a series of locative media artworks, 2.) to discuss the theoretical and conceptual underpinnings motivating the design of the technical framework, and 3.) to elicit from this, fundamental issues and questions that can be generalized and applicable to the growing practice of locative media.
@inproceedings{Tanaka2006, author = {Tanaka, Atau and Gemeinboeck, Petra}, title = {A Framework for Spatial Interaction in Locative Media}, pages = {26--30}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177013}, url = {http://www.nime.org/proceedings/2006/nime2006_026.pdf}, keywords = {Mobile music, urban fiction, locative media. } }
Michael Rohs, Georg Essl, and Martin Roth. 2006. CaMus: Live Music Performance using Camera Phones and Visual Grid Tracking. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 31–36. http://doi.org/10.5281/zenodo.1176997
BibTeX
Download PDF DOI
@inproceedings{Rohs2006, author = {Rohs, Michael and Essl, Georg and Roth, Martin}, title = {CaMus: Live Music Performance using Camera Phones and Visual Grid Tracking}, pages = {31--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176997}, url = {http://www.nime.org/proceedings/2006/nime2006_031.pdf} }
Greg Schiemer and Mark Havryliv. 2006. Pocket Gamelan: Tuneable Trajectories for Flying Sources in Mandala 3 and Mandala 4. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 37–42. http://doi.org/10.5281/zenodo.1176999
Abstract
Download PDF DOI
This paper describes two new live performance scenarios for performing music using bluetooth-enabled mobile phones. Interaction between mobile phones via wireless link is a key feature of the performance interface for each scenario. Both scenarios are discussed in the context of two publicly performed works for an ensemble of players in which mobile phone handsets are used both as sound sources and as hand-held controllers. In both works mobile phones are mounted in a specially devised pouch attached to a cord and physically swung to produce audio chorusing. During performance some players swing phones while others operate phones as hand-held controllers. Wireless connectivity enables interaction between flying and hand-held phones. Each work features different bluetooth implementations. In one a dedicated mobile phone acts as a server that interconnects multiple clients, while in the other point to point communication takes place between clients on an ad hoc basis. The paper summarises bluetooth tools designed for live performance realisation and concludes with a comparative evaluation of both scenarios for future implementation of performance by large ensembles of nonexpert players performing microtonal music using ubiquitous technology.
@inproceedings{Schiemer2006, author = {Schiemer, Greg and Havryliv, Mark}, title = {Pocket Gamelan: Tuneable Trajectories for Flying Sources in Mandala 3 and Mandala 4}, pages = {37--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176999}, url = {http://www.nime.org/proceedings/2006/nime2006_037.pdf}, keywords = {Java 2 Micro Edition; j2me; Pure Data; PD; Real-Time Media Performance; Just Intonation. } }
David Birchfield, Kelly Phillips, Assegid Kidané, and David Lorig. 2006. Interactive Public Sound Art: a case study. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 43–48. http://doi.org/10.5281/zenodo.1176873
Abstract
Download PDF DOI
Physically situated public art poses significant challenges for the design and realization of interactive, electronic sound works. Consideration of diverse audiences, environmental sensitivity, exhibition conditions, and logistics must guide the artwork. We describe our work in this area, using a recently installed public piece, Transition Soundings, as a case study that reveals a specialized interface and open-ended approach to interactive music making. This case study serves as a vehicle for examination of the real world challenges posed by public art and its outcomes.
@inproceedings{Birchfield2006, author = {Birchfield, David and Phillips, Kelly and Kidan\'{e}, Assegid and Lorig, David}, title = {Interactive Public Sound Art: a case study}, pages = {43--48}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176873}, url = {http://www.nime.org/proceedings/2006/nime2006_043.pdf}, keywords = {Music, Sound, Interactivity, Arts, Public Art, Network Systems, Sculpture, Installation Art, Embedded Electronics. } }
Ge Wang, Ananya Misra, and Perry R. Cook. 2006. Building Collaborative Graphical interFaces in the Audicle. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 49–52. http://doi.org/10.5281/zenodo.1177017
BibTeX
Download PDF DOI
@inproceedings{Wang2006, author = {Wang, Ge and Misra, Ananya and Cook, Perry R.}, title = {Building Collaborative Graphical interFaces in the Audicle}, pages = {49--52}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177017}, url = {http://www.nime.org/proceedings/2006/nime2006_049.pdf}, keywords = {Graphical interfaces, collaborative performance, networking, computer music ensemble, emergence, visualization, education. } }
Pedro Rebelo and Alain B. Renaud. 2006. The Frequencyliator – Distributing Structures for Networked Laptop Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 53–56. http://doi.org/10.5281/zenodo.1176993
Abstract
Download PDF DOI
The culture of laptop improvisation has grown tremendously in recent years. The development of personalized software instruments presents interesting issues in the context of improvised group performances. This paper examines an approach that is aimed at increasing the modes of interactivity between laptop performers and at the same time suggests ways in which audiences can better discern and identify the sonic characteristics of each laptop performer. We refer to software implementation that was developed for the BLISS networked laptop ensemble with view to designing a shared format for the exchange of messages within local and internet based networks.
@inproceedings{Rebelo2006, author = {Rebelo, Pedro and Renaud, Alain B.}, title = {The Frequencyliator -- Distributing Structures for Networked Laptop Improvisation}, pages = {53--56}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176993}, url = {http://www.nime.org/proceedings/2006/nime2006_053.pdf}, keywords = {Networked audio technologies, laptop ensemble, centralized audio server, improvisation } }
Martin Naef and Daniel Collicott. 2006. A VR Interface for Collaborative 3D Audio Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 57–60. http://doi.org/10.5281/zenodo.1176975
BibTeX
Download PDF DOI
@inproceedings{Naef2006, author = {Naef, Martin and Collicott, Daniel}, title = {A VR Interface for Collaborative {3D} Audio Performance}, pages = {57--60}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176975}, url = {http://www.nime.org/proceedings/2006/nime2006_057.pdf} }
Günter Geiger. 2006. Using the Touch Screen as a Controller for Portable Computer Music Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 61–64. http://doi.org/10.5281/zenodo.1176911
BibTeX
Download PDF DOI
@inproceedings{Geiger2006, author = {Geiger, G\"{u}nter}, title = {Using the Touch Screen as a Controller for Portable Computer Music Instruments}, pages = {61--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176911}, url = {http://www.nime.org/proceedings/2006/nime2006_061.pdf}, keywords = {touch screen, PDA, Pure Data, controller, mobile musical instrument, human computer interaction } }
Jukka Holm, Juha Arrasvuori, and Kai Havukainen. 2006. Using MIDI to Modify Video Game Content. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 65–70. http://doi.org/10.5281/zenodo.1176925
Abstract
Download PDF DOI
This paper discusses the concept of using background music to control video game parameters and thus actions on the screen. Each song selected by the player makes the game look different and behave variedly. The concept is explored by modifying an existing video game and playtesting it with different kinds of MIDI music. Several examples of mapping MIDI parameters to game events are presented. As mobile phones’ MIDI players do not usually have a dedicated callback API, a real-time MIDI analysis software for Symbian OS was implemented. Future developments including real-time group performance as a way to control game content are also considered.
@inproceedings{Holm2006, author = {Holm, Jukka and Arrasvuori, Juha and Havukainen, Kai}, title = {Using {MIDI} to Modify Video Game Content}, pages = {65--70}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176925}, url = {http://www.nime.org/proceedings/2006/nime2006_065.pdf}, keywords = {Games, MIDI, music, rhythm games, background music reactive games, musically controlled games, MIDI-controlled games, Virtual Sequencer. } }
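The mapping idea summarized in the abstract above — deriving game parameters from properties of the background music — can be pictured with a small, purely hypothetical sketch. The event format and the parameter names below are invented for illustration and are not taken from the authors' Symbian implementation.

```python
# Hypothetical sketch: derive two game parameters from a window of MIDI events.
# Event format and parameter names are invented for illustration only.

def map_midi_to_game(events, window_s=1.0):
    """events: list of (time_s, msg_type, velocity) tuples seen in the window."""
    notes = [e for e in events if e[1] == "note_on"]
    density = len(notes) / window_s                      # notes per second
    mean_vel = sum(e[2] for e in notes) / max(len(notes), 1)
    return {
        "enemy_spawn_rate": min(density / 10.0, 1.0),    # busier music -> more action
        "scroll_speed": 0.5 + mean_vel / 254.0,          # louder notes -> faster scrolling
    }

print(map_midi_to_game([(0.1, "note_on", 90), (0.4, "note_on", 70)]))
```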
Takuro M. Lippit. 2006. Turntable Music in the Digital Era: Designing Alternative Tools for New Turntable Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 71–74. http://doi.org/10.5281/zenodo.1176965
Abstract
Download PDF DOI
Turntable musicians have yet to explore new expressions with digital technology. New higher-level development tools open possibilities for these artists to build their own instruments that can achieve artistic goals commercial products cannot. This paper will present a rough overview on the practice and recent development of turntable music, followed by descriptions of two projects by the author.
@inproceedings{Lippit2006, author = {Lippit, Takuro M.}, title = {Turntable Music in the Digital Era: Designing Alternative Tools for New Turntable Expression}, pages = {71--74}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176965}, url = {http://www.nime.org/proceedings/2006/nime2006_071.pdf}, keywords = {Turntable music, DJ, turntablist, improvisation, Max/MSP, PIC Microcontroller, Physical Computing } }
Spencer Kiser. 2006. spinCycle: a Color-Tracking Turntable Sequencer. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 75–76. http://doi.org/10.5281/zenodo.1176941
Abstract
Download PDF DOI
This report presents an interface for musical performance called the spinCycle. spinCycle enables performers to make visual patterns with brightly colored objects on a spinning turntable platter that get translated into musical arrangements in realtime. I will briefly describe the hardware implementation and the sound generation logic used, as well as provide a historical background for the project.
@inproceedings{Kiser2006, author = {Kiser, Spencer}, title = {spinCycle: a Color-Tracking Turntable Sequencer}, pages = {75--76}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176941}, url = {http://www.nime.org/proceedings/2006/nime2006_075.pdf}, keywords = {Color-tracking, turntable, visualization, interactivity, synesthesia } }
Jason Lee. 2006. The Chopping Board: Real-time Sample Editor. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 77–78. http://doi.org/10.5281/zenodo.1176959
BibTeX
Download PDF DOI
@inproceedings{Lee2006a, author = {Lee, Jason}, title = {The Chopping Board: Real-time Sample Editor}, pages = {77--78}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176959}, url = {http://www.nime.org/proceedings/2006/nime2006_077.pdf} }
Staas de Jong. 2006. A Tactile Closed-Loop Device for Musical Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 79–80. http://doi.org/10.5281/zenodo.1176935
BibTeX
Download PDF DOI
@inproceedings{DeJong2006, author = {de Jong, Staas}, title = {A Tactile Closed-Loop Device for Musical Interaction}, pages = {79--80}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176935}, url = {http://www.nime.org/proceedings/2006/nime2006_079.pdf} }
Peter Bennett. 2006. PETECUBE: a Multimodal Feedback Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 81–84. http://doi.org/10.5281/zenodo.1176869
Abstract
Download PDF DOI
The PETECUBE project consists of a series of musical interfaces designed to explore multi-modal feedback. This paper will briefly describe the definition of multimodal feedback, the aim of the project, the development of the first PETECUBE and proposed further work.
@inproceedings{Bennett2006, author = {Bennett, Peter}, title = {{PET}ECUBE: a Multimodal Feedback Interface}, pages = {81--84}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176869}, url = {http://www.nime.org/proceedings/2006/nime2006_081.pdf}, keywords = {Multi-modal Feedback. Haptics. Musical Instrument. } }
Denis Lebel and Joseph Malloch. 2006. The G-Spring Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 85–88. http://doi.org/10.5281/zenodo.1176955
BibTeX
Download PDF DOI
@inproceedings{Lebel2006, author = {Lebel, Denis and Malloch, Joseph}, title = {The G-Spring Controller}, pages = {85--88}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176955}, url = {http://www.nime.org/proceedings/2006/nime2006_085.pdf}, keywords = {Digital musical instrument, kinesthetic feedback } }
Damien Lock and Greg Schiemer. 2006. Orbophone: a New Interface for Radiating Sound and Image. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 89–92. http://doi.org/10.5281/zenodo.1176967
Abstract
Download PDF DOI
The Orbophone is a new interface that radiates rather than projects sound and image. It provides a cohesive platform for audio and visual presentation in situations where both media are transmitted from the same location and localization in both media is perceptually correlated. This paper discusses the advantages of radiation over conventional sound and image projection for certain kinds of interactive public multimedia exhibits and describes the artistic motivation for its development against a historical backdrop of sound systems used in public spaces. One exhibit using the Orbophone is described in detail together with description and critique of the prototype, discussing aspects of its design and construction. The paper concludes with an outline of the Orbophone version 2.
@inproceedings{Lock2006, author = {Lock, Damien and Schiemer, Greg}, title = {Orbophone: a New Interface for Radiating Sound and Image}, pages = {89--92}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176967}, url = {http://www.nime.org/proceedings/2006/nime2006_089.pdf}, keywords = {Immersive Sound; Multi-channel Sound; Loud-speaker Array; Multimedia; Streaming Media; Real-Time Media Performance; Sound Installation. } }
Sukandar Kartadinata. 2006. The Gluion Advantages of an FPGA-based Sensor Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 93–96. http://doi.org/10.5281/zenodo.1176937
Abstract
Download PDF DOI
The gluion is a sensor interface that was designed to overcome some of the limitations of more traditional designs based on microcontrollers, which only provide a small, fixed number of digital modules such as counters and serial interfaces. These are often required to handle sensors where the physical parameter cannot easily be converted into a voltage. Other sensors are packed into modules that include converters and communicate via SPI or I2C. Finally, many designs require output capabilities beyond simple on/off. The gluion approaches these challenges through its FPGA-based design, which allows for a large number of digital I/O modules. It also provides superior flexibility regarding their configuration, resolution, and functionality. In addition, the FPGA enables a software implementation of the host link — in the case of the gluion the OSC protocol as well as the underlying Ethernet layers.
@inproceedings{Kartadinata2006, author = {Kartadinata, Sukandar}, title = {The Gluion Advantages of an {FPGA}-based Sensor Interface}, pages = {93--96}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176937}, url = {http://www.nime.org/proceedings/2006/nime2006_093.pdf}, keywords = {actuators,digital sensors,fpga,osc,sensor interfaces} }
Adrian Freed, Rimas Avizienis, and Matthew Wright. 2006. Beyond 0-5V: Expanding Sensor Integration Architectures. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 97–100. http://doi.org/10.5281/zenodo.1176903
Abstract
Download PDF DOI
A new sensor integration system and its first incarnation is described. As well as supporting existing analog sensor arrays, a new architecture allows for easy integration of the new generation of low-cost digital sensors used in computer music performance instruments and installation art.
@inproceedings{Freed2006, author = {Freed, Adrian and Avizienis, Rimas and Wright, Matthew}, title = {Beyond 0-5{V}: Expanding Sensor Integration Architectures}, pages = {97--100}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176903}, url = {http://www.nime.org/proceedings/2006/nime2006_097.pdf}, keywords = {Gesture, sensor, MEMS, FPGA, network, OSC, configurability } }
Colin G. Johnson and Alex Gounaropoulos. 2006. Timbre Interfaces using Adjectives and Adverbs. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 101–102. http://doi.org/10.5281/zenodo.1176933
Abstract
Download PDF DOI
How can we provide interfaces to synthesis algorithms that will allow us to manipulate timbre directly, using the same timbre-words that are used by human musicians to communicate about timbre? This paper describes ongoing work that uses machine learning methods (principally genetic algorithms and neural networks) to learn (1) to recognise timbral characteristics of sound and (2) to adjust timbral characteristics of existing synthesized sounds.
@inproceedings{Johnson2006, author = {Johnson, Colin G. and Gounaropoulos, Alex}, title = {Timbre Interfaces using Adjectives and Adverbs}, pages = {101--102}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176933}, url = {http://www.nime.org/proceedings/2006/nime2006_101.pdf}, keywords = {timbre; natural language; neural networks } }
D. Andrew Stewart. 2006. SonicJumper Composer. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 103–105. http://doi.org/10.5281/zenodo.1177011
BibTeX
Download PDF DOI
@inproceedings{Stewart2006, author = {Stewart, D. Andrew}, title = {SonicJumper Composer}, pages = {103--105}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177011}, url = {http://www.nime.org/proceedings/2006/nime2006_103.pdf}, keywords = {composition, process, materials, gesture, controller, cross- modal interaction } }
Hans-Christoph Steiner. 2006. Towards a Catalog and Software Library of Mapping Methods. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 106–109. http://doi.org/10.5281/zenodo.1177009
BibTeX
Download PDF DOI
@inproceedings{Steiner2006, author = {Steiner, Hans-Christoph}, title = {Towards a Catalog and Software Library of Mapping Methods}, pages = {106--109}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177009}, url = {http://www.nime.org/proceedings/2006/nime2006_106.pdf} }
Daisuke Kobori, Kojiro Kagawa, Makoto Iida, and Chuichi Arakawa. 2006. LINE: Interactive Sound and Light Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 110–113. http://doi.org/10.5281/zenodo.1176947
BibTeX
Download PDF DOI
@inproceedings{Kobori2006, author = {Kobori, Daisuke and Kagawa, Kojiro and Iida, Makoto and Arakawa, Chuichi}, title = {LINE: Interactive Sound and Light Installation}, pages = {110--113}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176947}, url = {http://www.nime.org/proceedings/2006/nime2006_110.pdf} }
Nick Bryan-Kinns and Patrick G. Healey. 2006. Decay in Collaborative Music Making. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 114–117. http://doi.org/10.5281/zenodo.1176885
Abstract
Download PDF DOI
This paper reports on ongoing studies of the design and use of support for remote group music making. In this paper we outline the initial findings of a recent study focusing on the function of decay of contributions in collaborative music making. Findings indicate that persistent contributions lend themselves to individual musical composition and learning novel interfaces, whilst contributions that quickly decay engender a more focused musical interaction in experienced participants.
@inproceedings{BryanKinns2006, author = {Bryan-Kinns, Nick and Healey, Patrick G.}, title = {Decay in Collaborative Music Making}, pages = {114--117}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176885}, url = {http://www.nime.org/proceedings/2006/nime2006_114.pdf}, keywords = {creativity,design,group interaction,music improvisation} }
Michael Gurevich. 2006. JamSpace: Designing A Collaborative Networked Music Space for Novices. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 118–123. http://doi.org/10.5281/zenodo.1176915
BibTeX
Download PDF DOI
@inproceedings{Gurevich2006, author = {Gurevich, Michael}, title = {JamSpace: Designing A Collaborative Networked Music Space for Novices}, pages = {118--123}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176915}, url = {http://www.nime.org/proceedings/2006/nime2006_118.pdf}, keywords = {Collaborative interface, remote jamming, network music, interaction design, novice, media space} }
Benjamin Knapp and Perry R. Cook. 2006. Creating a Network of Integral Music Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 124–128. http://doi.org/10.5281/zenodo.1176943
Abstract
Download PDF DOI
In this paper, we describe the networking of multiple Integral Music Controllers (IMCs) to enable an entirely new method for creating music by tapping into the composite gestures and emotions of not just one, but many performers. The concept and operation of an IMC is reviewed as well as its use in a network of IMC controllers. We then introduce a new technique of Integral Music Control by assessing the composite gesture(s) and emotion(s) of a group of performers through the use of a wireless mesh network. The Telemuse, an IMC designed precisely for this kind of performance, is described and its use in a new musical performance project under development by the authors is discussed.
@inproceedings{Knapp2006, author = {Knapp, Benjamin and Cook, Perry R.}, title = {Creating a Network of Integral Music Controllers}, pages = {124--128}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176943}, url = {http://www.nime.org/proceedings/2006/nime2006_124.pdf} }
Matthew Burtner. 2006. Perturbation Techniques for Multi-Performer or Multi-Agent Interactive Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 129–133. http://doi.org/10.5281/zenodo.1176887
Abstract
Download PDF DOI
This paper explores the use of perturbation in designing multi-performer or multi-agent interactive musical interfaces. A problem with the multi-performer approach is how to cohesively organize the independent data inputs into useable control information for synthesis engines. Perturbation has proven useful for navigating multi-agent NIMEs. The author's Windtree is discussed as an example multi-performer instrument in which perturbation is used for multichannel ecological modeling. The Windtree uses a physical system turbulence model controlled in real time by four performers.
@inproceedings{Burtner2006, author = {Burtner, Matthew}, title = {Perturbation Techniques for Multi-Performer or Multi-Agent Interactive Musical Interfaces}, pages = {129--133}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176887}, url = {http://www.nime.org/proceedings/2006/nime2006_129.pdf}, keywords = {interface,mapping,movement,multi-agent,multi-performer,music composition,perturbation} }
Ryan Aylward and Joseph A. Paradiso. 2006. Sensemble: A Wireless, Compact, Multi-User Sensor System for Interactive Dance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 134–139. http://doi.org/10.5281/zenodo.1176865
Abstract
Download PDF DOI
We describe the design of a system of compact, wireless sensor modules meant to capture expressive motion when worn at the wrists and ankles of a dancer. The sensors form a high-speed RF network geared toward real-time data acquisition from multiple devices simultaneously, enabling a small dance ensemble to become a collective interface for music control. Each sensor node includes a 6-axis inertial measurement unit (IMU) comprised of three orthogonal gyroscopes and accelerometers in order to capture local dynamics, as well as a capacitive sensor to measure close range node-to-node proximity. The nodes may also be augmented with other digital or analog sensors. This paper describes application goals, presents the prototype hardware design, introduces concepts for feature extraction and interpretation, and discusses early test results.
@inproceedings{Aylward2006, author = {Aylward, Ryan and Paradiso, Joseph A.}, title = {Sensemble: A Wireless, Compact, Multi-User Sensor System for Interactive Dance}, pages = {134--139}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176865}, url = {http://www.nime.org/proceedings/2006/nime2006_134.pdf}, keywords = {Interactive dance, wearable sensor networks, inertial gesture tracking, collective motion analysis, multi-user interface } }
2006. The ZKM Klangdom. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 140–143. http://doi.org/10.5281/zenodo.1176991
BibTeX
Download PDF DOI
@inproceedings{Ramakrishnan2006, author = {}, title = {The ZKM Klangdom}, pages = {140--143}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176991}, url = {http://www.nime.org/proceedings/2006/nime2006_140.pdf}, keywords = {Sound Spatialization, Ambisonics, Vector Based Additive Panning (VBAP), Wave Field Synthesis, Acousmatic Music } }
Mike Wozniewski, Zack Settel, and Jeremy R. Cooperstock. 2006. A Framework for Immersive Spatial Audio Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 144–149. http://doi.org/10.5281/zenodo.1177021
Abstract
Download PDF DOI
Traditional uses of virtual audio environments tend to focus on perceptually accurate acoustic representations. Though spatialization of sound sources is important, it is necessary to leverage control of the sonic representation when considering musical applications. The proposed framework allows for the creation of perceptually immersive scenes that function as musical instruments. Loudspeakers and microphones are modeled within the scene along with the listener/performer, creating a navigable 3D sonic space where sound sources and sinks process audio according to user-defined spatial mappings.
@inproceedings{Wozniewski2006, author = {Wozniewski, Mike and Settel, Zack and Cooperstock, Jeremy R.}, title = {A Framework for Immersive Spatial Audio Performance}, pages = {144--149}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177021}, url = {http://www.nime.org/proceedings/2006/nime2006_144.pdf}, keywords = {Control paradigms, 3D audio, spatialization, immersive audio environments, auditory display, acoustic modeling, spatial interfaces, virtual instrument design } }
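As a rough illustration of the source/sink idea summarized in the abstract above, the sketch below computes a distance-dependent gain for a single source–sink pair. The rolloff parameter is a hypothetical stand-in for one of the user-defined spatial mappings mentioned in the abstract; nothing here is taken from the paper's own implementation.

```python
import math

def sink_gain(source_pos, sink_pos, rolloff=1.0):
    """Distance-based amplitude for one source-sink pair (illustrative only).

    Positions are (x, y, z) tuples in metres; 'rolloff' is a hypothetical
    stand-in for a user-defined spatial mapping.
    """
    d = math.dist(source_pos, sink_pos)   # Euclidean distance between source and sink
    return 1.0 / (1.0 + rolloff * d)      # amplitude falls off smoothly with distance

print(sink_gain((0.0, 0.0, 0.0), (2.0, 1.0, 0.0)))
```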
Alexander R. Francois and Elaine Chew. 2006. An Architectural Framework for Interactive Music Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 150–155. http://doi.org/10.5281/zenodo.1176901
BibTeX
Download PDF DOI
@inproceedings{Francois2006, author = {Francois, Alexander R. and Chew, Elaine}, title = {An Architectural Framework for Interactive Music Systems}, pages = {150--155}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176901}, url = {http://www.nime.org/proceedings/2006/nime2006_150.pdf}, keywords = {Software Architecture, Interactive Systems, Music software } }
Christian Jacquemin and Serge de Laubier. 2006. Transmodal Feedback as a New Perspective for Audio-visual Effects. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 156–161. http://doi.org/10.5281/zenodo.1176929
BibTeX
Download PDF DOI
@inproceedings{Jacquemin2006, author = {Jacquemin, Christian and de Laubier, Serge}, title = {Transmodal Feedback as a New Perspective for Audio-visual Effects}, pages = {156--161}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176929}, url = {http://www.nime.org/proceedings/2006/nime2006_156.pdf}, keywords = {audio-visual composition,feedback,transmodality} }
Thor Magnusson. 2006. Screen-Based Musical Interfaces as Semiotic Machines. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 162–167. http://doi.org/10.5281/zenodo.1176969
Abstract
Download PDF DOI
The ixi software project started in 2000 with the intention to explore new interactive patterns and virtual interfaces in computer music software. The aim of this paper is not to describe these programs, as they have been described elsewhere [14][15], but rather explicate the theoretical background that underlies the design of these screen-based instruments. After an analysis of the similarities and differences in the design of acoustic and screen-based instruments, the paper describes how the creation of an interface is essentially the creation of a semiotic system that affects and influences the musician and the composer. Finally the terminology of this semiotics is explained as an interaction model.
@inproceedings{Magnusson2006, author = {Magnusson, Thor}, title = {Screen-Based Musical Interfaces as Semiotic Machines}, pages = {162--167}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176969}, url = {http://www.nime.org/proceedings/2006/nime2006_162.pdf}, keywords = {Interfaces, interaction design, HCI, semiotics, actors, OSC, mapping, interaction models, creative tools. } }
Mark Zadel and Gary Scavone. 2006. Different Strokes: a Prototype Software System for Laptop Performance and Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 168–171. http://doi.org/10.5281/zenodo.1177025
BibTeX
Download PDF DOI
@inproceedings{Zadel2006, author = {Zadel, Mark and Scavone, Gary}, title = {Different Strokes: a Prototype Software System for Laptop Performance and Improvisation}, pages = {168--171}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177025}, url = {http://www.nime.org/proceedings/2006/nime2006_168.pdf}, keywords = {Software control of computer music, laptop performance, graphical interfaces, freehand input, dynamic simulation } }
Yu Nishibori and Toshio Iwai. 2006. TENORI-ON. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 172–175. http://doi.org/10.5281/zenodo.1176979
Abstract
Download PDF DOI
Development of a musical interface which allows people to play music intuitively and create music visibly.
@inproceedings{Nishibori2006, author = {Nishibori, Yu and Iwai, Toshio}, title = {TENORI-ON}, pages = {172--175}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176979}, url = {http://www.nime.org/proceedings/2006/nime2006_172.pdf} }
Alexander Refsum Jensenius, Tellef Kvifte, and Rolf Inge Godøy. 2006. Towards a Gesture Description Interchange Format. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 176–179. http://doi.org/10.5281/zenodo.1176931
Abstract
Download PDF DOI
This paper presents our need for a Gesture Description Interchange Format (GDIF) for storing, retrieving and sharing information about music-related gestures. Ideally, it should be possible to store all sorts of data from various commercial and custom made controllers, motion capture and computer vision systems, as well as results from different types of gesture analysis, in a coherent and consistent way. This would make it possible to use the information with different software, platforms and devices, and also allow for sharing data between research institutions. We present some of the data types that should be included, and discuss issues which need to be resolved.
@inproceedings{Jensenius2006a, author = {Jensenius, Alexander Refsum and Kvifte, Tellef and Godøy, Rolf Inge}, title = {Towards a Gesture Description Interchange Format}, pages = {176--179}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176931}, url = {http://www.nime.org/proceedings/2006/nime2006_176.pdf}, keywords = {Gesture description, gesture analysis, standards } }
Marcelo M. Wanderley, David Birnbaum, Joseph Malloch, Elliot Sinyor, and Julien Boissinot. 2006. SensorWiki.org: A Collaborative Resource for Researchers and Interface Designers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 180–183. http://doi.org/10.5281/zenodo.1177015
BibTeX
Download PDF DOI
@inproceedings{Wanderley2006, author = {Wanderley, Marcelo M. and Birnbaum, David and Malloch, Joseph and Sinyor, Elliot and Boissinot, Julien}, title = {SensorWiki.org: A Collaborative Resource for Researchers and Interface Designers}, pages = {180--183}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177015}, url = {http://www.nime.org/proceedings/2006/nime2006_180.pdf}, keywords = {sensors, Wiki, collaborative website, open content } }
Smilen Dimitrov and Stefania Serafin. 2006. A Simple Practical Approach to a Wireless Data Acquisition Board. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 184–187. http://doi.org/10.5281/zenodo.1176891
BibTeX
Download PDF DOI
@inproceedings{Dimitrov2006, author = {Dimitrov, Smilen and Serafin, Stefania}, title = {A Simple Practical Approach to a Wireless Data Acquisition Board}, pages = {184--187}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176891}, url = {http://www.nime.org/proceedings/2006/nime2006_184.pdf} }
Kjetil F. Hansen and Roberto Bresin. 2006. Mapping Strategies in DJ Scratching. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 188–191. http://doi.org/10.5281/zenodo.1176921
BibTeX
Download PDF DOI
@inproceedings{Hansen2006, author = {Hansen, Kjetil F. and Bresin, Roberto}, title = {Mapping Strategies in DJ Scratching}, pages = {188--191}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176921}, url = {http://www.nime.org/proceedings/2006/nime2006_188.pdf}, keywords = {controllers,dj,instrument mapping,scratching,virtual} }
Loïc Kessous, Julien Castet, and Daniel Arfib. 2006. ’GXtar’, an Interface Using Guitar Techniques. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 192–195. http://doi.org/10.5281/zenodo.1176857
Abstract
Download PDF DOI
In this paper we describe a new guitar-like musical controller. The ’GXtar’ is an instrument which takes a guitar as a starting point, but its role is to bring different and new musical possibilities while preserving the spirit and techniques of guitar. Therefore, it was conceived and carried out starting from the body of an electric guitar. The fingerboard of this guitar was equipped with two lines of sensors: linear position sensors, and tactile pressure sensors. These two lines of sensors are used as two virtual strings. Their two ends are the bridge and the nut of the guitar. The design of the instrument is made in a way that the position of a finger, on one of these virtual strings, corresponds to the note which would have been played on a real and vibrating string. On the soundboard of the guitar, a controller with 3 degrees of freedom allows one to drive other synthesis parameters. We then describe how this interface is integrated in a musical audio system and serves as a musical instrument.
@inproceedings{Kessous2006, author = {Kessous, Lo\"{\i}c and Castet, Julien and Arfib, Daniel}, title = {'GXtar', an Interface Using Guitar Techniques}, pages = {192--195}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176857}, url = {http://www.nime.org/proceedings/2006/nime2006_192.pdf}, keywords = {Guitar, alternate controller, sensors, synthesizer, multidimensional control. } }
Anne-Marie Burns and Marcelo M. Wanderley. 2006. Visual Methods for the Retrieval of Guitarist Fingering. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 196–199. http://doi.org/10.5281/zenodo.1176850
BibTeX
Download PDF DOI
@inproceedings{Burns2006, author = {Burns, Anne-Marie and Wanderley, Marcelo M.}, title = {Visual Methods for the Retrieval of Guitarist Fingering}, pages = {196--199}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176850}, url = {http://www.nime.org/proceedings/2006/nime2006_196.pdf}, keywords = {finger-tracking,gesture,guitar fingering,hough transform} }
Erwin Schoonderwaldt, Nicolas Rasamimanana, and Frédéric Bevilacqua. 2006. Combining Accelerometer and Video Camera: Reconstruction of Bow Velocity Profiles. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 200–203. http://doi.org/10.5281/zenodo.1177003
Abstract
Download PDF DOI
A cost-effective method was developed for the estimation of the bow velocity in violin playing, using an accelerometer on the bow in combination with point tracking using a standard video camera. The video data are used to detect the moments of bow direction changes. This information is used for piece-wise integration of the accelerometer signal, resulting in a drift-free reconstructed velocity signal with a high temporal resolution. The method was evaluated using a 3D motion capturing system, providing a reliable reference of the actual bow velocity. The method showed good results when the accelerometer and video stream are synchronized. Additional latency and jitter of the camera stream can importantly decrease the performance of the method, depending on the bow stroke type.
@inproceedings{Schoonderwaldt2006, author = {Schoonderwaldt, Erwin and Rasamimanana, Nicolas and Bevilacqua, Fr\'{e}d\'{e}ric}, title = {Combining Accelerometer and Video Camera: Reconstruction of Bow Velocity Profiles}, pages = {200--203}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177003}, url = {http://www.nime.org/proceedings/2006/nime2006_200.pdf}, keywords = {Bowing gestures, bowed string, violin, bow velocity, accelerometer, video tracking. } }
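One way to picture the piece-wise integration described in the abstract above is the following sketch. It assumes an array of bow acceleration samples and a list of direction-change sample indices obtained from the video tracking, and removes integration drift by forcing the velocity back to zero at every bow turn. This is a plausible reading of the method as summarized in the abstract, not the authors' code.

```python
import numpy as np

def reconstruct_bow_velocity(accel, fs, change_idx):
    """Piece-wise integration of a bow accelerometer signal (illustrative sketch).

    accel      : 1-D array of bow acceleration samples (m/s^2)
    fs         : sampling rate in Hz
    change_idx : sample indices of bow direction changes from the video stream,
                 where the bow velocity is assumed to be zero
    """
    dt = 1.0 / fs
    velocity = np.zeros_like(accel, dtype=float)
    # Integrate each segment between two consecutive direction changes separately.
    for start, stop in zip(change_idx[:-1], change_idx[1:]):
        seg = np.cumsum(accel[start:stop]) * dt            # raw integral of the segment
        # Subtract a linear ramp so the velocity returns to zero at the next
        # direction change, removing the accelerometer drift.
        drift = np.linspace(0.0, seg[-1], len(seg))
        velocity[start:stop] = seg - drift
    return velocity
```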
Nicolas Leroy, Emmanuel Fléty, and Frédéric Bevilacqua. 2006. Reflective Optical Pickup For Violin. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 204–207. http://doi.org/10.5281/zenodo.1176859
BibTeX
Download PDF DOI
@inproceedings{Leroy2006, author = {Leroy, Nicolas and Fl\'{e}ty, Emmanuel and Bevilacqua, Fr\'{e}d\'{e}ric}, title = {Reflective Optical Pickup For Violin}, pages = {204--207}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176859}, url = {http://www.nime.org/proceedings/2006/nime2006_204.pdf} }
Sergi Jordà and Marcos Alonso. 2006. Mary Had a Little scoreTable* or the reacTable* Goes Melodic. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 208–211. http://doi.org/10.5281/zenodo.1176855
Abstract
Download PDF DOI
This paper introduces the scoreTable*, a tangible interactive music score editor which started as a simple application for demoing "traditional" approaches to music creation, using the reacTable* technology, and which has evolved into an independent research project on its own. After a brief discussion on the role of pitch in music, we present a brief overview of related tangible music editors, and discuss several paradigms in computer music creation, contrasting synchronous with asynchronous approaches. The final part of the paper describes the current state of the scoreTable* as well as its future lines of research.
@inproceedings{Jorda2006, author = {Jord\`{a}, Sergi and Alonso, Marcos}, title = {Mary Had a Little scoreTable* or the reacTable* Goes Melodic}, pages = {208--211}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176855}, url = {http://www.nime.org/proceedings/2006/nime2006_208.pdf}, keywords = {Musical instrument, Collaborative Music, Computer Supported Collaborative Work, Tangible User Interface, Music Theory. } }
Alain Crevoisier, Cédric Bornand, Arnaud Guichard, Seiichiro Matsumura, and Chuichi Arakawa. 2006. Sound Rose: Creating Music and Images with a Touch Table. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 212–215. http://doi.org/10.5281/zenodo.1176853
BibTeX
Download PDF DOI
@inproceedings{Crevoisier2006, author = {Crevoisier, Alain and Bornand, C\'{e}dric and Guichard, Arnaud and Matsumura, Seiichiro and Arakawa, Chuichi}, title = {Sound Rose: Creating Music and Images with a Touch Table}, pages = {212--215}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176853}, url = {http://www.nime.org/proceedings/2006/nime2006_212.pdf} }
Philip L. Davidson and Jefferson Y. Han. 2006. Synthesis and Control on Large Scale Multi-Touch Sensing Displays. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 216–219. http://doi.org/10.5281/zenodo.1176889
Abstract
Download PDF DOI
In this paper, we describe our experience in musical interface design for a large scale, high-resolution, multi-touch display surface. We provide an overview of historical and present-day context in multi-touch audio interaction, and describe our approach to analysis of tracked multi-finger, multi-hand data for controlling live audio synthesis.
@inproceedings{Davidson2006, author = {Davidson, Philip L. and Han, Jefferson Y.}, title = {Synthesis and Control on Large Scale Multi-Touch Sensing Displays}, pages = {216--219}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176889}, url = {http://www.nime.org/proceedings/2006/nime2006_216.pdf}, keywords = {multi-touch, touch, tactile, bi-manual, multi-user, synthesis, dynamic patching } }
Tellef Kvifte and Alexander Refsum Jensenius. 2006. Towards a Coherent Terminology and Model of Instrument Description and Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 220–225. http://doi.org/10.5281/zenodo.1176951
Abstract
Download PDF DOI
This paper discusses the need for a framework for describing musical instruments and their design, and discusses some possible elements in such a framework. The framework is meant as an aid in the development of a coherent terminology for describing, comparing and discussing different musical instruments and musical instrument designs. Three different perspectives are presented; that of the listener, the performer, and the constructor, and various levels of descriptions are introduced.
@inproceedings{Kvifte2006, author = {Kvifte, Tellef and Jensenius, Alexander Refsum}, title = {Towards a Coherent Terminology and Model of Instrument Description and Design}, pages = {220--225}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176951}, url = {http://www.nime.org/proceedings/2006/nime2006_220.pdf}, keywords = {Musical instrument design, mapping, gestures, organology. } }
Mark T. Marshall and Marcelo M. Wanderley. 2006. Vibrotactile Feedback in Digital Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 226–229. http://doi.org/10.5281/zenodo.1176973
BibTeX
Download PDF DOI
@inproceedings{Marshall2006, author = {Marshall, Mark T. and Wanderley, Marcelo M.}, title = {Vibrotactile Feedback in Digital Musical Instruments}, pages = {226--229}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176973}, url = {http://www.nime.org/proceedings/2006/nime2006_226.pdf}, keywords = {digital musical instruments,tactile feedback,vibro-tactile} }
Rodolphe Koehly, Denis Curtil, and Marcelo M. Wanderley. 2006. Paper FSRs and Latex/Fabric Traction Sensors: Methods for the Development of Home-Made Touch Sensors. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 230–233. http://doi.org/10.5281/zenodo.1176949
Abstract
Download PDF DOI
This paper presents the development of novel "home-made" touch sensors using conductive pigments and various substrate materials. We show that it is possible to build one’s own position, pressure and bend sensors with various electrical characteristics, sizes and shapes, and this for a very competitive price. We give examples and provide results from experimental tests of such developments.
@inproceedings{Koehly2006, author = {Koehly, Rodolphe and Curtil, Denis and Wanderley, Marcelo M.}, title = {Paper FSRs and Latex/Fabric Traction Sensors: Methods for the Development of Home-Made Touch Sensors}, pages = {230--233}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176949}, url = {http://www.nime.org/proceedings/2006/nime2006_230.pdf}, keywords = {Touch sensors, piezoresistive technology, conductive pigments, sensitive materials, interface design } }
John Bowers and Nicolas Villar. 2006. Creating Ad Hoc Instruments with Pin&Play&Perform. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 234–239. http://doi.org/10.5281/zenodo.1176881
BibTeX
Download PDF DOI
@inproceedings{Bowers2006, author = {Bowers, John and Villar, Nicolas}, title = {Creating Ad Hoc Instruments with Pin\&Play\&Perform}, pages = {234--239}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176881}, url = {http://www.nime.org/proceedings/2006/nime2006_234.pdf}, keywords = {Ad hoc instruments, Pin&Play, physical interfaces, music performance, new interfaces for musical expression. } }
Stefania Serafin, Amalia de Götzen, Niels Böttcher, and Steven Gelineck. 2006. Synthesis and Control of Everyday Sounds Reconstructing Russolo’s Intonarumori. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 240–245. http://doi.org/10.5281/zenodo.1177005
Abstract
Download PDF DOI
In this paper we introduce the Croaker, a novel input device inspired by Russolo's Intonarumori. We describe the components of the controller and the sound synthesis engine which allows us to reproduce several everyday sounds.
@inproceedings{Serafin2006, author = {Serafin, Stefania and de G\"{o}tzen, Amalia and B\"{o}ttcher, Niels and Gelineck, Steven}, title = {Synthesis and Control of Everyday Sounds Reconstructing Russolo's Intonarumori}, pages = {240--245}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177005}, url = {http://www.nime.org/proceedings/2006/nime2006_240.pdf}, keywords = {Noise machines, everyday sounds, physical models. } }
Gil Weinberg and Travis Thatcher. 2006. Interactive Sonification of Neural Activity. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 246–249. http://doi.org/10.5281/zenodo.1177019
BibTeX
Download PDF DOI
@inproceedings{Weinberg2006, author = {Weinberg, Gil and Thatcher, Travis}, title = {Interactive Sonification of Neural Activity}, pages = {246--249}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177019}, url = {http://www.nime.org/proceedings/2006/nime2006_246.pdf}, keywords = {sonification, interactive auditory display, neural patterns, biological research} }
Jacques Rémus. 2006. Non Haptic Control of Music by Video Analysis of Hand Movements: 14 Years of Experience with the ‘Caméra Musicale.’ Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 250–253. http://doi.org/10.5281/zenodo.1176989
BibTeX
Download PDF DOI
@inproceedings{Remus2006, author = {R\'{e}mus, Jacques}, title = {Non Haptic Control of Music by Video Analysis of Hand Movements: 14 Years of Experience with the `Cam\'{e}ra Musicale'}, pages = {250--253}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176989}, url = {http://www.nime.org/proceedings/2006/nime2006_250.pdf}, keywords = {camera musicale,interface,jacques r\'{e}mus,machines,musical camera,musical hand,non haptic instrument,s mappings,sculptures and mechanical musical,sound} }
Jan Borchers, Aristotelis Hadjakos, and Max Mühlhäuser. 2006. MICON A Music Stand for Interactive Conducting. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 254–259. http://doi.org/10.5281/zenodo.1176877
Abstract
Download PDF DOI
The MICON is an electronic music stand extending Maestro!, the latest in a series of interactive conducting exhibits that use real orchestral audio and video recordings. The MICON uses OpenGL-based rendering to display and animate score pages with a high degree of realism. It offers three different score display formats to match the user’s level of expertise. A realtime animated visual cueing system helps users with their conducting. The MICON has been evaluated with music students.
@inproceedings{Borchers2006, author = {Borchers, Jan and Hadjakos, Aristotelis and M\"{u}hlh\"{a}user, Max}, title = {MICON A Music Stand for Interactive Conducting}, pages = {254--259}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176877}, url = {http://www.nime.org/proceedings/2006/nime2006_254.pdf}, keywords = {Music stand, score display, exhibit, conducting. } }
Eric Lee, Ingo Grüll, Henning Keil, and Jan Borchers. 2006. conga: A Framework for Adaptive Conducting Gesture Analysis. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 260–265. http://doi.org/10.5281/zenodo.1176957
Abstract
Download PDF DOI
Designing a conducting gesture analysis system for public spaces poses unique challenges. We present conga, a software framework that enables automatic recognition and interpretation of conducting gestures. conga is able to recognize multiple types of gestures with varying levels of difficulty for the user to perform, from a standard four-beat pattern, to simplified up-down conducting movements, to no pattern at all. conga provides an extendable library of feature detectors linked together into a directed acyclic graph; these graphs represent the various conducting patterns as gesture profiles. At run-time, conga searches for the best profile to match a user's gestures in real-time, and uses a beat prediction algorithm to provide results at the sub-beat level, in addition to output values such as tempo, gesture size, and the gesture's geometric center. Unlike some previous approaches, conga does not need to be trained with sample data before use. Our preliminary user tests show that conga has a beat recognition rate of over 90%. conga is deployed as the gesture recognition system for Maestro!, an interactive conducting exhibit that opened in the Betty Brinn Children's Museum in Milwaukee, USA in March 2006.
@inproceedings{Lee2006, author = {Lee, Eric and Gr\"{u}ll, Ingo and Keil, Henning and Borchers, Jan}, title = {conga: A Framework for Adaptive Conducting Gesture Analysis}, pages = {260--265}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176957}, url = {http://www.nime.org/proceedings/2006/nime2006_260.pdf}, keywords = {gesture recognition, conducting, software gesture frameworks } }
Nicolas d’Alessandro, Christophe d’Alessandro, Sylvain Le Beux, and Boris Doval. 2006. Real-time CALM Synthesizer: New Approaches in Hands-Controlled Voice Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 266–271. http://doi.org/10.5281/zenodo.1176863
BibTeX
Download PDF DOI
@inproceedings{dAlessandro2006, author = {d'Alessandro, Nicolas and d'Alessandro, Christophe and Le Beux, Sylvain and Doval, Boris}, title = {Real-time CALM Synthesizer: New Approaches in Hands-Controlled Voice Synthesis}, pages = {266--271}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176863}, url = {http://www.nime.org/proceedings/2006/nime2006_266.pdf}, keywords = {Singing synthesis, voice source, voice quality, spectral model, formant synthesis, instrument, gestural control. } }
Bob Pritchard and Sidney S. Fels. 2006. GRASSP: Gesturally-Realized Audio, Speech and Song Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 272–276. http://doi.org/10.5281/zenodo.1176987
Abstract
Download PDF DOI
We describe the implementation of an environment for Gesturally-Realized Audio, Speech and Song Performance (GRASSP), which includes a glove-based interface, a mapping/training interface, and a collection of Max/MSP/Jitter bpatchers that allow the user to improvise speech, song, sound synthesis, sound processing, sound localization, and video processing. The mapping/training interface provides a framework for performers to specify by example the mapping between gesture and sound or video controls. We demonstrate the effectiveness of the GRASSP environment for gestural control of musical expression by creating a gesture-to-voice system that is currently being used by performers.
@inproceedings{Pritchard2006, author = {Pritchard, Bob and Fels, Sidney S.}, title = {GRASSP: Gesturally-Realized Audio, Speech and Song Performance}, pages = {272--276}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176987}, url = {http://www.nime.org/proceedings/2006/nime2006_272.pdf}, keywords = {Speech synthesis, parallel formant speech synthesizer, gesture control, Max/MSP, Jitter, Cyberglove, Polhemus, sound diffusion, UBC Toolbox, Glove-Talk, } }
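The "mapping by example" idea in the GRASSP abstract above can be sketched roughly as follows (a hypothetical Python illustration, not the Max/MSP/Jitter implementation; the feature vectors and sound parameters are invented): recorded (glove pose, sound control) pairs are interpolated by inverse-distance weighting to map a new, unseen pose.

import math

# (glove feature vector, sound control vector) examples recorded by the performer
examples = [
    ([0.0, 0.0, 0.0], [100.0, 0.1]),  # e.g. (pitch in Hz, vibrato depth)
    ([1.0, 0.0, 0.5], [220.0, 0.4]),
    ([1.0, 1.0, 1.0], [440.0, 0.9]),
]

def map_pose(pose, power=2.0):
    # inverse-distance weighted interpolation over the stored examples
    weights, out = [], [0.0] * len(examples[0][1])
    for features, params in examples:
        w = 1.0 / (math.dist(pose, features) + 1e-9) ** power
        weights.append(w)
        for i, p in enumerate(params):
            out[i] += w * p
    return [v / sum(weights) for v in out]

print(map_pose([0.9, 0.5, 0.7]))  # interpolated pitch and vibrato depth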
Christopher Dobrian and Daniel Koppelman. 2006. The E in NIME: Musical Expression with New Computer Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 277–282. http://doi.org/10.5281/zenodo.1176893
Abstract
Download PDF DOI
Is there a distinction between New Interfaces for Musical Expression and New Interfaces for Controlling Sound? This article begins with a brief overview of expression in musical performance, and examines some of the characteristics of effective "expressive" computer music instruments. It becomes apparent that sophisticated musical expression requires not only a good control interface but also virtuosic mastery of the instrument it controls. By studying effective acoustic instruments, choosing intuitive but complex gesture-sound mappings that take advantage of established instrumental skills, designing intelligent characterizations of performance gestures, and promoting long-term dedicated practice on a new interface, computer music instrument designers can enhance the expressive quality of computer music performance.
@inproceedings{Dobrian2006, author = {Dobrian, Christopher and Koppelman, Daniel}, title = {The E in NIME: Musical Expression with New Computer Interfaces}, pages = {277--282}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176893}, url = {http://www.nime.org/proceedings/2006/nime2006_277.pdf}, keywords = {Expression, instrument design, performance, virtuosity. } }
John Richards. 2006. 32kg: Performance Systems for a Post-Digital Age. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 283–287. http://doi.org/10.5281/zenodo.1176995
Abstract
Download PDF DOI
Why is a seemingly mundane issue such as airline baggage allowance of great significance in regards to the performance practice of electronic music? This paper discusses how a performance practice has evolved that seeks to question the binary and corporate digital world. New ’instruments’ and approaches have emerged that explore ’dirty electronics’ and ’punktronics’: DIY electronic instruments made from junk. These instruments are not instruments in the traditional sense, defined by physical dimensions or by a set number of parameters, but modular systems, constantly evolving, never complete, infinitely variable and designed to be portable. A combination of lo- and hi-fi, analogue and digital, synchronous and asynchronous devices offers new modes of expression. The development of these new interfaces for musical expression runs side-by-side with an emerging post-digital aesthetic.
@inproceedings{Richards2006, author = {Richards, John}, title = {32kg: Performance Systems for a Post-Digital Age}, pages = {283--287}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176995}, url = {http://www.nime.org/proceedings/2006/nime2006_283.pdf}, keywords = {bastardisation,dirty electronics,diy,ebay,live,modular,performance,portability,post-digital,punktronics} }
Serge de Laubier and Vincent Goudard. 2006. Meta-Instrument 3: a Look over 17 Years of Practice. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 288–291. http://doi.org/10.5281/zenodo.1176953
BibTeX
Download PDF DOI
@inproceedings{DeLaubier2006, author = {de Laubier, Serge and Goudard, Vincent}, title = {Meta-Instrument 3: a Look over 17 Years of Practice}, pages = {288--291}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176953}, url = {http://www.nime.org/proceedings/2006/nime2006_288.pdf} }
Suguru Goto. 2006. The Case Study of An Application of The System, ‘BodySuit’ and ‘RoboticMusic’: Its Introduction and Aesthetics. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 292–295. http://doi.org/10.5281/zenodo.1176913
Abstract
Download PDF DOI
This paper is intended to introduce the system, which combines "BodySuit" and "RoboticMusic", as well as its possibilities and its uses in an artistic application. "BodySuit" refers to a gesture controller in a Data Suit type. "RoboticMusic" refers to percussion robots, which are applied to a humanoid robot type. In this paper, I will discuss their aesthetics and the concept, as well as the idea of the "Extended Body".
@inproceedings{Goto2006, author = {Goto, Suguru}, title = {The Case Study of An Application of The System, `BodySuit' and `RoboticMusic': Its Introduction and Aesthetics}, pages = {292--295}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176913}, url = {http://www.nime.org/proceedings/2006/nime2006_292.pdf}, keywords = {Robot, Gesture Controller, Humanoid Robot, Artificial Intelligence, Interaction } }
David Hindman. 2006. Modal Kombat: Competition and Choreography in Synesthetic Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 296–299. http://doi.org/10.5281/zenodo.1176923
BibTeX
Download PDF DOI
@inproceedings{Hindman2006, author = {Hindman, David}, title = {Modal Kombat: Competition and Choreography in Synesthetic Musical Performance}, pages = {296--299}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176923}, url = {http://www.nime.org/proceedings/2006/nime2006_296.pdf} }
Paul D. Lehrman and Eric Singer. 2006. A "Ballet Mécanique" for the 21st Century: Performing George Antheil’s Dadaist Masterpiece with Robots. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 300–303. http://doi.org/10.5281/zenodo.1176961
BibTeX
Download PDF DOI
@inproceedings{Lehrman2006, author = {Lehrman, Paul D. and Singer, Eric}, title = {A "Ballet M\'{e}canique" for the 21{s}t Century: Performing George Antheil's Dadaist Masterpiece with Robots}, pages = {300--303}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176961}, url = {http://www.nime.org/proceedings/2006/nime2006_300.pdf}, keywords = {Robotics, computer control, MIDI, player pianos, mechanical music, percussion, sound effects, Dadaism. } }
Serge Lemouton, Marco Stroppa, and Benny Sluchin. 2006. Using the Augmented Trombone in "I will not kiss your f.ing flag". Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 304–307. http://doi.org/10.5281/zenodo.1176963
Abstract
Download PDF DOI
This paper deals with the first musical usage of an experimental system dedicated to the optical detection of the position of a trombone’s slide.
@inproceedings{Lemouton2006, author = {Lemouton, Serge and Stroppa, Marco and Sluchin, Benny}, title = {Using the Augmented Trombone in "I will not kiss your f.ing flag"}, pages = {304--307}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176963}, url = {http://www.nime.org/proceedings/2006/nime2006_304.pdf}, keywords = {augmented instrument,chamber electronics,computer,interaction,musical motivation,performer,trombone} }
Sébastien Schiesser and Caroline Traube. 2006. On Making and Playing an Electronically-augmented Saxophone. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 308–313. http://doi.org/10.5281/zenodo.1177001
BibTeX
Download PDF DOI
@inproceedings{Schiesser2006, author = {Schiesser, S\'{e}bastien and Traube, Caroline}, title = {On Making and Playing an Electronically-augmented Saxophone}, pages = {308--313}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177001}, url = {http://www.nime.org/proceedings/2006/nime2006_308.pdf}, keywords = {saxophone, augmented instrument, live electronics, performance, gestural control } }
Tamara Smyth. 2006. Handheld Acoustic Filter Bank for Musical Control. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 314–317. http://doi.org/10.5281/zenodo.1177007
BibTeX
Download PDF DOI
@inproceedings{Smyth2006, author = {Smyth, Tamara}, title = {Handheld Acoustic Filter Bank for Musical Control}, pages = {314--317}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177007}, url = {http://www.nime.org/proceedings/2006/nime2006_314.pdf}, keywords = {khaen, sound synthesis control, mapping, musical acoustics } }
Joshua J. Nixdorf and David Gerhard. 2006. Real-time Sound Source Spatialization as Used in Challenging Bodies: Implementation and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 318–321. http://doi.org/10.5281/zenodo.1176981
Abstract
Download PDF DOI
In this paper we will report on the use of real-time sound spatialization in Challenging Bodies, a trans-disciplinary performance project at the University of Regina. Using well-understood spatialization techniques mapped to a custom interface, a computer system was built that allowed live spatial control of ten sound signals from on-stage performers. This spatial control added a unique dynamic element to an already ultramodern performance. The system is described in detail, including the main advantages over existing spatialization systems: simplicity, usability, customization and scalability.
@inproceedings{Nixdorf2006, author = {Nixdorf, Joshua J. and Gerhard, David}, title = {Real-time Sound Source Spatialization as Used in Challenging Bodies: Implementation and Performance}, pages = {318--321}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176981}, url = {http://www.nime.org/proceedings/2006/nime2006_318.pdf}, keywords = {gem, live systems, pd, performance systems, real-time systems, sound architecture, sound localization, sound spatialization, surround sound} }
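For readers unfamiliar with the "well-understood spatialization techniques" the abstract above refers to, here is a minimal sketch of constant-power pairwise panning over a speaker ring (a generic textbook technique, not necessarily the system's exact method; the speaker count and angles are invented):

import math

def speaker_gains(source_angle_deg, n_speakers=8):
    # pan the source between the two nearest speakers with an equal-power crossfade
    spacing = 360.0 / n_speakers
    gains = [0.0] * n_speakers
    base = int(source_angle_deg // spacing) % n_speakers
    nxt = (base + 1) % n_speakers
    frac = (source_angle_deg % spacing) / spacing  # 0..1 position between the pair
    gains[base] = math.cos(frac * math.pi / 2)
    gains[nxt] = math.sin(frac * math.pi / 2)
    return gains

print([round(g, 2) for g in speaker_gains(100.0)])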
Paolo Bottoni, Stefano Faralli, Anna Labella, and Mario Pierro. 2006. Mapping with Planning Agents in the Max/MSP Environment: the GO/Max Language. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 322–325. http://doi.org/10.5281/zenodo.1176879
BibTeX
Download PDF DOI
@inproceedings{Bottoni2006, author = {Bottoni, Paolo and Faralli, Stefano and Labella, Anna and Pierro, Mario}, title = {Mapping with Planning Agents in the Max/MSP Environment: the GO/Max Language}, pages = {322--325}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176879}, url = {http://www.nime.org/proceedings/2006/nime2006_322.pdf}, keywords = {mapping, planning, agent, Max/MSP } }
Alain Bonardi, Isis Truck, and Herman Akdag. 2006. Towards a Virtual Assistant for Performers and Stage Directors. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 326–329. http://doi.org/10.5281/zenodo.1176875
Abstract
Download PDF DOI
In this article, we present the first step of our research work to design a Virtual Assistant for Performers and Stage Directors, able to give feedback from performances. We use a methodology to automatically construct fuzzy rules in a Fuzzy Rule-Based System that detects contextual emotions from an actor’s performance during a show. We collect video data from many performances of the same show, from which it should be possible to visualize all the emotions and intents or, more precisely, "intent graphs". To perform this, the collected data defining low-level descriptors are aggregated and converted into high-level characterizations. Then, depending on the retrieved data and on their distribution on the axis, we partition the universes into classes. The last step is the building of the fuzzy rules that are obtained from the classes and that make it possible to label the detected emotions.
@inproceedings{Bonardi2006, author = {Bonardi, Alain and Truck, Isis and Akdag, Herman}, title = {Towards a Virtual Assistant for Performers and Stage Directors}, pages = {326--329}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176875}, url = {http://www.nime.org/proceedings/2006/nime2006_326.pdf}, keywords = {Virtual Assistant, Intents, Emotion detector, Fuzzy Classes, Stage Director, Performance. } }
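A toy illustration of the fuzzy-rule idea in the abstract above (an editorial sketch; the descriptors, membership functions, thresholds and emotion labels are invented and are not the authors' rule base): low-level descriptors are fuzzified with membership functions and simple rules give a degree of support for each emotion label.

def tri(x, a, b, c):
    # triangular membership function over [a, c] peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def label(speech_rate, loudness):
    fast = tri(speech_rate, 3.0, 6.0, 9.0)    # syllables per second
    slow = tri(speech_rate, 0.0, 2.0, 4.0)
    loud = tri(loudness, 60.0, 80.0, 100.0)   # dB SPL
    quiet = tri(loudness, 30.0, 45.0, 65.0)
    rules = {"agitated": min(fast, loud), "calm": min(slow, quiet)}
    return max(rules, key=rules.get), rules

print(label(7.0, 85.0))  # -> ('agitated', {...})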
Yoichi Nagashima. 2006. Students’ Projects of Interactive Media-installations in SUAC. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 330–333. http://doi.org/10.5281/zenodo.1176977
Abstract
Download PDF DOI
This is a studio report of research and projects at SUAC (Shizuoka University of Art and Culture). SUAC was founded in April 2000 and organized NIME04. SUAC has a Faculty of Design and a Department of Art and Science, and all students study interactive systems and media arts. SUAC has organized the Media Art Festival (MAF) from 2001 to 2005. Domestic and overseas artists participated in SUAC MAF, and SUAC students’ projects also exhibited their works in MAF. I introduce production cases of interactive media-installations by SUAC students’ projects, from the aspect of experiences with novel interfaces in education and entertainment, and report on student projects in the framework of NIME-related courses.
@inproceedings{Nagashima2006, author = {Nagashima, Yoichi}, title = {Students' Projects of Interactive Media-installations in SUAC}, pages = {330--333}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176977}, url = {http://www.nime.org/proceedings/2006/nime2006_330.pdf}, keywords = {Interactive Installation, Sensors, Media Arts, Studio Reports } }
Morten Breinbjerg, Ole Caprani, Rasmus Lunding, and Line Kramhoft. 2006. An Acousmatic Composition Environment. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 334–337. http://doi.org/10.5281/zenodo.1176883
Abstract
Download PDF DOI
In this paper we describe the intentions, the design and functionality of an Acousmatic Composition Environment that allows children or musical novices to educate their auditory curiosity by recording, manipulating and mixing sounds of everyday life. The environment consists of three stands: a stand for sound recording with a soundproof box that ensures good recording facilities in a noisy environment; a stand for sound manipulation with five simple, tangible interfaces; a stand for sound mixing with a graphical computer interface presented on two touch screens.
@inproceedings{Breinbjerg2006, author = {Breinbjerg, Morten and Caprani, Ole and Lunding, Rasmus and Kramhoft, Line}, title = {An Acousmatic Composition Environment}, pages = {334--337}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176883}, url = {http://www.nime.org/proceedings/2006/nime2006_334.pdf}, keywords = {Acousmatic listening, aesthetics, tangible interfaces. } }
Robert Hamilton. 2006. Bioinformatic Feedback: Performer Bio-data as a Driver for Real-time Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 338–341. http://doi.org/10.5281/zenodo.1176919
BibTeX
Download PDF DOI
@inproceedings{Hamilton2006, author = {Hamilton, Robert}, title = {Bioinformatic Feedback: Performer Bio-data as a Driver for Real-time Composition}, pages = {338--341}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176919}, url = {http://www.nime.org/proceedings/2006/nime2006_338.pdf}, keywords = {Bioinformatics, composition, real-time score generation. } }
Jonathan Pak. 2006. The Light Matrix: An Interface for Musical Expression and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 342–345. http://doi.org/10.5281/zenodo.1176983
BibTeX
Download PDF DOI
@inproceedings{Pak2006, author = {Pak, Jonathan}, title = {The Light Matrix: An Interface for Musical Expression and Performance}, pages = {342--345}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176983}, url = {http://www.nime.org/proceedings/2006/nime2006_342.pdf} }
Shigeru Kobayashi, Takanori Endo, Katsuhiko Harada, and Shosei Oishi. 2006. GAINER: A Reconfigurable I/O Module and Software Libraries for Education. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 346–351. http://doi.org/10.5281/zenodo.1176945
BibTeX
Download PDF DOI
@inproceedings{Kobayashi2006, author = {Kobayashi, Shigeru and Endo, Takanori and Harada, Katsuhiko and Oishi, Shosei}, title = {GAINER: A Reconfigurable {I/O} Module and Software Libraries for Education}, pages = {346--351}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176945}, url = {http://www.nime.org/proceedings/2006/nime2006_346.pdf}, keywords = {learning,rapid prototyping,reconfigurable,sensor interface} }
Kirsty Beilharz, Joanne Jakovich, and Sam Ferguson. 2006. Hyper-shaku (Border-crossing): Towards the Multi-modal Gesture-controlled Hyper-Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 352–357. http://doi.org/10.5281/zenodo.1176867
Abstract
Download PDF DOI
Hyper-shaku (Border-Crossing) is an interactive sensor environment that uses motion sensors to trigger immediate responses and generative processes augmenting the Japanese bamboo shakuhachi in both the auditory and visual domain. The latter differentiates this process from many hyper-instruments by building a performance of visual design as well as electronic music on top of the acoustic performance. It utilizes a combination of computer vision and wireless sensing technologies conflated from preceding works. This paper outlines the use of gesture in these preparatory sound and audio-visual performative, installation and sonification works, leading to a description of the Hyper-shaku environment integrating sonification and generative elements.
@inproceedings{Beilharz2006, author = {Beilharz, Kirsty and Jakovich, Joanne and Ferguson, Sam}, title = {Hyper-shaku (Border-crossing): Towards the Multi-modal Gesture-controlled Hyper-Instrument}, pages = {352--357}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176867}, url = {http://www.nime.org/proceedings/2006/nime2006_352.pdf}, keywords = {Gesture-controllers, sonification, hyper-instrument } }
Neal Farwell. 2006. Adapting the Trombone: a Suite of Electro-acoustic Interventions for the Piece. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 358–363. http://doi.org/10.5281/zenodo.1176895
Abstract
Download PDF DOI
Three electro-acoustic systems were devised for a new trombone work, Rouse. This paper presents the technical systems and outlines their musical context and motivation. The uSlide measures trombone slide-extension by a minimal hardware ultrasonic technique. An easy calibration procedure maps linear extension to the slide "positions" of the player. The eMouth is a driver that replaces the mouthpiece, with software emulation of trombone tone and algorithmic musical lines, allowing the trombone to appear to play itself. The eMute is built around a loudspeaker unit, driven so that it affects strongly the player’s embouchure, allowing fine control of complex beat patterns. eMouth and eMute, under control of the uSlide, set up improvisatory worlds that are part of the composed architecture of Rouse.
@inproceedings{Farwell2006, author = {Farwell, Neal}, title = {Adapting the Trombone: a Suite of Electro-acoustic Interventions for the Piece}, pages = {358--363}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176895}, url = {http://www.nime.org/proceedings/2006/nime2006_358.pdf}, keywords = {composition,electro-acoustic adaptation,emulation,illusion,improvisation,mapping,mute,trombone,ultrasonic} }
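The calibration idea behind the uSlide can be pictured with a short sketch (hypothetical values and function names, not the paper's procedure): the raw extension measured once at each of the seven slide positions is stored, and at run time the nearest calibrated position is reported alongside the continuous extension.

import bisect

calibration = [2.0, 9.5, 17.5, 26.0, 35.0, 44.5, 54.5]  # cm at slide positions 1..7 (invented)

def slide_position(extension_cm):
    # find the calibrated position closest to the measured extension
    i = bisect.bisect_left(calibration, extension_cm)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(calibration)]
    nearest = min(candidates, key=lambda j: abs(calibration[j] - extension_cm))
    return nearest + 1, extension_cm

print(slide_position(24.0))  # -> (4, 24.0)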
Teemu Maki-Patola, Perttu Hämäläinen, and Aki Kanerva. 2006. The Augmented Djembe Drum — Sculpting Rhythms. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 364–369. http://doi.org/10.5281/zenodo.1176971
BibTeX
Download PDF DOI
@inproceedings{MakiPatola2006, author = {Maki-Patola, Teemu and H\"{a}m\"{a}l\"{a}inen, Perttu and Kanerva, Aki}, title = {The Augmented Djembe Drum --- Sculpting Rhythms}, pages = {364--369}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176971}, url = {http://www.nime.org/proceedings/2006/nime2006_364.pdf} }
Stuart Favilla and Joanne Cannon. 2006. Children of Grainger: Leather Instruments for Free Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 370–375. http://doi.org/10.5281/zenodo.1176897
BibTeX
Download PDF DOI
@inproceedings{Favilla2006, author = {Favilla, Stuart and Cannon, Joanne}, title = {Children of Grainger: Leather Instruments for Free Music}, pages = {370--375}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176897}, url = {http://www.nime.org/proceedings/2006/nime2006_370.pdf} }
William Hsu. 2006. Managing Gesture and Timbre for Analysis and Instrument Control in an Interactive Environment. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 376–379. http://doi.org/10.5281/zenodo.1176927
Abstract
Download PDF DOI
This paper describes recent enhancements in an interactive system designed to improvise with saxophonist John Butcher [1]. In addition to musical parameters such as pitch and loudness, our system is able to analyze timbral characteristics of the saxophone tone in real-time, and use timbral information to guide the generation of response material. We capture each saxophone gesture on the fly, extract a set of gestural and timbral contours, and store them in a repository. Improvising agents can consult the repository when generating responses. The gestural or timbral progression of a saxophone phrase can be remapped or transformed; this enables a variety of response material that also references audible contours of the original saxophone gestures. A single simple framework is used to manage gestural and timbral information extracted from analysis, and for expressive control of virtual instruments in a free improvisation context.
@inproceedings{Hsu2006, author = {Hsu, William}, title = {Managing Gesture and Timbre for Analysis and Instrument Control in an Interactive Environment}, pages = {376--379}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176927}, url = {http://www.nime.org/proceedings/2006/nime2006_376.pdf}, keywords = {Interactive music systems, timbre analysis, instrument control. } }
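A highly simplified sketch of the contour-repository idea in the abstract above (an editorial illustration only; the real system extracts several gestural and timbral contours and applies its own transformations): captured phrases are reduced to contours, stored, and later replayed transformed by an improvising agent.

import random

repository = []  # list of contours, each a list of normalized values in 0..1

def capture(contour):
    repository.append(contour)

def respond():
    source = random.choice(repository)
    inverted = [1.0 - v for v in source]                 # remap the contour
    stretched = [v for v in inverted for _ in range(2)]  # double its length
    return stretched

capture([0.1, 0.5, 0.9, 0.4])
print(respond())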
Keith Hamel. 2006. Integrated Interactive Music Performance Environment. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 380–383. http://doi.org/10.5281/zenodo.1176917
BibTeX
Download PDF DOI
@inproceedings{Hamel2006, author = {Hamel, Keith}, title = {Integrated Interactive Music Performance Environment}, pages = {380--383}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176917}, url = {http://www.nime.org/proceedings/2006/nime2006_380.pdf} }
Sam Ferguson. 2006. Learning Musical Instrument Skills Through Interactive Sonification. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 384–389. http://doi.org/10.5281/zenodo.1176899
BibTeX
Download PDF DOI
@inproceedings{Ferguson2006, author = {Ferguson, Sam}, title = {Learning Musical Instrument Skills Through Interactive Sonification}, pages = {384--389}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176899}, url = {http://www.nime.org/proceedings/2006/nime2006_384.pdf}, keywords = {interactive sonification,music,sonification,sound visualization} }
Cornelius Poepel and Dan Overholt. 2006. Recent Developments in Violin-related Digital Musical Instruments: Where Are We and Where Are We Going? Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 390–395. http://doi.org/10.5281/zenodo.1176985
Abstract
Download PDF DOI
In this paper, some of the more recent developments in musical instruments related to the violin family are described, and analyzed according to several criteria adapted from other publications. While it is impossible to cover all such developments, we have tried to sample a variety of instruments from the last decade or so, with a greater focus on those published in the computer music literature. Experiences in the field of string players focusing on such developments are presented. Conclusions are drawn in which further research into violin-related digital instruments for string players may benefit from the presented criteria as well as the experiences.
@inproceedings{Poepel2006, author = {Poepel, Cornelius and Overholt, Dan}, title = {Recent Developments in Violin-related Digital Musical Instruments: Where Are We and Where Are We Going?}, pages = {390--395}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176985}, url = {http://www.nime.org/proceedings/2006/nime2006_390.pdf}, keywords = {Violin, viola, cello, bass, digital, electronic, synthesis, controller. } }
Diana Young, Patrick Nunn, and Artem Vassiliev. 2006. Composing for Hyperbow: A Collaboration Between MIT and the Royal Academy of Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 396–401. http://doi.org/10.5281/zenodo.1177023
Abstract
Download PDF DOI
In this paper we present progress of an ongoing collaboration between researchers at the MIT Media Laboratory and the Royal Academy of Music (RAM). The aim of this project is to further explore the expressive musical potential of the Hyperbow, a custom music controller first designed for use in violin performance. Through the creation of new repertoire, we hope to stimulate the evolution of this interface, advancing its usability and refining its capabilities. In preparation for this work, the Hyperbow system has been adapted for cello (acoustic and electric) performance. The structure of our collaboration is described, and two of the pieces currently in progress are presented. Feedback from the performers is also discussed, as well as future plans.
@inproceedings{Young2006, author = {Young, Diana and Nunn, Patrick and Vassiliev, Artem}, title = {Composing for Hyperbow: A Collaboration Between {MIT} and the Royal Academy of Music}, pages = {396--401}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1177023}, url = {http://www.nime.org/proceedings/2006/nime2006_396.pdf}, keywords = {Cello, bow, controller, electroacoustic music, composition. } }
Frédéric Bevilacqua, Nicolas Rasamimanana, Emmanuel Fléty, Serge Lemouton, and Florence Baschet. 2006. The Augmented Violin Project: Research, Composition and Performance Report. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 402–406. http://doi.org/10.5281/zenodo.1176871
BibTeX
Download PDF DOI
@inproceedings{Bevilacqua2006, author = {Bevilacqua, Fr\'{e}d\'{e}ric and Rasamimanana, Nicolas and Fl\'{e}ty, Emmanuel and Lemouton, Serge and Baschet, Florence}, title = {The Augmented Violin Project: Research, Composition and Performance Report}, pages = {402--406}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176871}, url = {http://www.nime.org/proceedings/2006/nime2006_402.pdf} }
Mari Kimura and Jean-Claude Risset. 2006. Auditory Illusion and Violin: Demonstration of a Work by Jean-Claude Risset Written for Mari Kimura. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 407–408. http://doi.org/10.5281/zenodo.1176939
Abstract
Download PDF DOI
This is a description of a demonstration regarding the use of auditory illusions and psycho-acoustic phenomena used in the interactive work of Jean-Claude Risset, written for violinist Mari Kimura.
@inproceedings{Kimura2006, author = {Kimura, Mari and Risset, Jean-Claude}, title = {Auditory Illusion and Violin: Demonstration of a Work by Jean-Claude Risset Written for Mari Kimura}, pages = {407--408}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176939}, url = {http://www.nime.org/proceedings/2006/nime2006_407.pdf}, keywords = {Violin, psycho-acoustic phenomena, auditory illusions, signal processing, subharmonics, Risset, Kimura. } }
Adrian Freed, David Wessel, Michael Zbyszynski, and Frances M. Uitti. 2006. Augmenting the Cello. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 409–413. http://doi.org/10.5281/zenodo.1176905
Abstract
Download PDF DOI
Software and hardware enhancements to an electric 6-string cello are described with a focus on a new mechanical tuning device, a novel rotary sensor for bow interaction and control strategies to leverage a suite of polyphonic sound processing effects.
@inproceedings{Freed2006a, author = {Freed, Adrian and Wessel, David and Zbyszynski, Michael and Uitti, Frances M.}, title = {Augmenting the Cello}, pages = {409--413}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2006}, address = {Paris, France}, issn = {2220-4806}, doi = {10.5281/zenodo.1176905}, url = {http://www.nime.org/proceedings/2006/nime2006_409.pdf}, keywords = {Cello, chordophone, FSR, Rotary Absolute Position Encoder, Double Bowing, triple stops, double stops, convolution. } }
2005
Don Buchla. 2005. A History of Buchla’s Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 1–1. http://doi.org/10.5281/zenodo.1176715
BibTeX
Download PDF DOI
@inproceedings{Buchla2005, author = {Buchla, Don}, title = {A History of Buchla's Musical Instruments}, pages = {1--1}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176715}, url = {http://www.nime.org/proceedings/2005/nime2005_001.pdf} }
Golan Levin. 2005. A Personal Chronology of Audiovisual Systems Research. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 2–3. http://doi.org/10.5281/zenodo.1176770
BibTeX
Download PDF DOI
@inproceedings{Levin2005, author = {Levin, Golan}, title = {A Personal Chronology of Audiovisual Systems Research}, pages = {2--3}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176770}, url = {http://www.nime.org/proceedings/2005/nime2005_002.pdf} }
Bill Buxton. 2005. Causality and Striking the Right Note. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 4–4. http://doi.org/10.5281/zenodo.1176717
BibTeX
Download PDF DOI
@inproceedings{Buxton2005, author = {Buxton, Bill}, title = {Causality and Striking the Right Note}, pages = {4--4}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176717}, url = {http://www.nime.org/proceedings/2005/nime2005_004.pdf} }
John Bowers and Phil Archer. 2005. Not Hyper, Not Meta, Not Cyber but Infra-Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 5–10. http://doi.org/10.5281/zenodo.1176713
Abstract
Download PDF DOI
As a response to a number of notable contemporary aesthetic tendencies, this paper introduces the notion of an infra-instrument as a kind of ‘new interface for musical expression’ worthy of study and systematic design. In contrast to hyper-, meta- and virtual instruments, we propose infra-instruments as devices of restricted interactive potential, with little sensor enhancement, which engender simple musics with scarce opportunity for conventional virtuosity. After presenting numerous examples from our work, we argue that it is precisely such interactionally and sonically challenged designs that leave requisite space for computer-generated augmentations in hybrid, multi-device performance settings.
@inproceedings{Bowers2005, author = {Bowers, John and Archer, Phil}, title = {Not Hyper, Not Meta, Not Cyber but Infra-Instruments}, pages = {5--10}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176713}, url = {http://www.nime.org/proceedings/2005/nime2005_005.pdf}, keywords = {Infra-instruments, hyperinstruments, meta-instruments, virtual instruments, design concepts and principles. } }
Teemu Mäki-patola, Juha Laitinen, Aki Kanerva, and Tapio Takala. 2005. Experiments with Virtual Reality Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 11–16. http://doi.org/10.5281/zenodo.1176780
Abstract
Download PDF DOI
In this paper, we introduce and analyze four gesture-controlled musical instruments. We briefly discuss the test platform designed to allow for rapid experimentation of new interfaces and control mappings. We describe our design experiences and discuss the effects of system features such as latency, resolution and lack of tactile feedback. The instruments use virtual reality hardware and computer vision for user input, and three-dimensional stereo vision as well as simple desktop displays for providing visual feedback. The instrument sounds are synthesized in real-time using physical sound modeling.
@inproceedings{Makipatola2005, author = {M\"{a}ki-patola, Teemu and Laitinen, Juha and Kanerva, Aki and Takala, Tapio}, title = {Experiments with Virtual Reality Instruments}, pages = {11--16}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176780}, url = {http://www.nime.org/proceedings/2005/nime2005_011.pdf}, keywords = {Musical instrument design, virtual instrument, gesture, widgets, physical sound modeling, control mapping.} }
Gil Weinberg and Scott Driscoll. 2005. iltur – Connecting Novices and Experts Through Collaborative Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 17–22. http://doi.org/10.5281/zenodo.1176840
Abstract
Download PDF DOI
The iltur system features a novel method of interaction between expert and novice musicians through a set of musical controllers called Beatbugs. Beatbug players can record live musical input from MIDI and acoustic instruments and respond by transforming the recorded material in real-time, creating motif-and-variation call-and-response routines on the fly. A central computer system analyzes MIDI and audio played by expert players and allows novice Beatbug players to personalize the analyzed material using a variety of transformation algorithms. This paper presents the motivation for developing the iltur system, followed by a brief survey of previous and related work that guided the definition of the project’s goals. We then present the hardware and software approaches that were taken to address these goals, as well as a couple of compositions that were written for the system. The paper ends with a discussion based on observations of players using the iltur system and a number of suggestions for future work.
@inproceedings{Weinberg2005, author = {Weinberg, Gil and Driscoll, Scott}, title = {iltur -- Connecting Novices and Experts Through Collaborative Improvisation}, pages = {17--22}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176840}, url = {http://www.nime.org/proceedings/2005/nime2005_017.pdf}, keywords = {Collaboration, improvisation, gestural handheld controllers, novices, mapping} }
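The motif-and-variation behaviour described above can be pictured with a generic sketch (the transformations shown, transpose and retrograde, are illustrative stand-ins rather than the iltur algorithms): a recorded motif is transformed and returned as a call-and-response reply.

motif = [(60, 0.5), (62, 0.25), (64, 0.25), (67, 1.0)]  # (MIDI pitch, duration in beats)

def transpose(notes, semitones):
    return [(pitch + semitones, dur) for pitch, dur in notes]

def retrograde(notes):
    return list(reversed(notes))

response = retrograde(transpose(motif, 5))  # one simple "variation" reply
print(response)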
Sergi Jordà. 2005. Multi-user Instruments: Models, Examples and Promises. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 23–26. http://doi.org/10.5281/zenodo.1176760
Abstract
Download PDF DOI
In this paper we study the potential and the challenges posed by multi-user instruments, as tools that can facilitate interaction and responsiveness not only between performers and their instrument but also among performers themselves. Several previous studies and taxonomies are mentioned, after which different paradigms are exposed with examples based on traditional mechanical acoustic instruments. In the final part, several existing systems and implementations, now in the digital domain, are described and identified according to the models and paradigms previously introduced.
@inproceedings{Jorda2005, author = {Jord\`{a}, Sergi}, title = {Multi-user Instruments: Models, Examples and Promises}, pages = {23--26}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176760}, url = {http://www.nime.org/proceedings/2005/nime2005_023.pdf}, keywords = {Multi-user instruments, collaborative music, new instruments design guidelines. } }
Tina Blaine. 2005. The Convergence of Alternate Controllers and Musical Interfaces in Interactive Entertainment. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 27–33. http://doi.org/10.5281/zenodo.1176709
Abstract
Download PDF DOI
This paper will investigate a variety of alternate controllers that are making an impact in interactive entertainment, particularly in the video game industry. Since the late 1990’s, the surging popularity of rhythmic and musical performance games in Japanese arcades has led to the development of new interfaces and alternate controllers for the consumer market worldwide. Rhythm action games such as Dance Dance Revolution, Taiko No Tatsujin (Taiko: Drum Master), and Donkey Konga are stimulating collaborative gameplay and exposing consumers to custom controllers designed specifically for musical and physical interaction. We are witnessing the emergence and acceptance of these breakthrough controllers and models for gameplay as an international cultural phenomenon penetrating the video game and toy markets in record numbers. Therefore, it is worth considering the potential benefits to developers of musical interfaces, electronic devices and alternate controllers in light of these new and emerging opportunities, particularly in the realm of video gaming, toy development, arcades, and other interactive entertainment experiences.
@inproceedings{Blaine2005, author = {Blaine, Tina}, title = {The Convergence of Alternate Controllers and Musical Interfaces in Interactive Entertainment}, pages = {27--33}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176709}, url = {http://www.nime.org/proceedings/2005/nime2005_027.pdf}, keywords = {Alternate controllers, musical interaction, interactive entertainment, video game industry, arcades, rhythm action, collaborative gameplay, musical performance games} }
Dan Overholt. 2005. The Overtone Violin. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 34–37. http://doi.org/10.5281/zenodo.1176796
BibTeX
Download PDF DOI
@inproceedings{Overholt2005, author = {Overholt, Dan}, title = {The Overtone Violin}, pages = {34--37}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176796}, url = {http://www.nime.org/proceedings/2005/nime2005_034.pdf} }
Juan Pablo Cáceres, Gautham J. Mysore, and Jeffrey Treviño. 2005. SCUBA: The Self-Contained Unified Bass Augmenter. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 38–41. http://doi.org/10.5281/zenodo.1176719
Abstract
Download PDF DOI
The Self-Contained Unified Bass Augmenter (SCUBA) is a new augmentative OSC (Open Sound Control) [5] controller for the tuba. SCUBA adds new expressive possibilities to the existing tuba interface through onboard sensors. These sensors provide continuous and discrete user-controlled parametric data to be mapped at will to signal processing parameters, virtual instrument control parameters, sound playback, and various other functions. In its current manifestation, control data is mapped to change the processing of the instrument’s natural sound in Pd (Pure Data) [3]. SCUBA preserves the unity of the solo instrument interface by acoustically mixing direct and processed sound in the instrument’s bell via mounted satellite speakers, which are driven by a subwoofer below the performer’s chair. The end result augments the existing interface while preserving its original unity and functionality.
@inproceedings{Caceres2005, author = {C\'{a}ceres, Juan Pablo and Mysore, Gautham J. and Trevi\~{n}o, Jeffrey}, title = {{SC}UBA: The Self-Contained Unified Bass Augmenter}, pages = {38--41}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176719}, url = {http://www.nime.org/proceedings/2005/nime2005_038.pdf}, keywords = {Interactive music, electro-acoustic musical instruments, musical instrument design, human computer interface, signal processing, Open Sound Control (OSC) } }
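A minimal sketch of sending one continuous sensor value to a Pd patch over OSC, of the kind the abstract above describes, assuming the third-party python-osc package; the address "/scuba/breath", the port and the scaling are invented for illustration and are not taken from the paper.

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # a Pd patch listening for OSC on port 9000

def send_breath(raw, raw_max=1023):
    # normalize a 10-bit sensor reading to 0..1 and send it as an OSC message
    client.send_message("/scuba/breath", raw / raw_max)

send_breath(512)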
Elliot Sinyor and Marcelo M. Wanderley. 2005. Gyrotyre : A dynamic hand-held computer-music controller based on a spinning wheel. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 42–45. http://doi.org/10.5281/zenodo.1176820
Abstract
Download PDF DOI
This paper presents a novel controller built to exploit the physical behaviour of a simple dynamical system, namely a spinning wheel. The phenomenon of gyroscopic precession causes the instrument to slowly oscillate when it is spun quickly, providing the performer with proprioceptive feedback. Also, due to the mass of the wheel and tire and the resulting rotational inertia, it maintains a relatively constant angular velocity once it is set in motion. Various sensors were used to measure continuous and discrete quantities such as the angular frequency of the wheel, its spatial orientation, and the performer’s finger pressure. In addition, optical and hall-effect sensors detect the passing of a spoke-mounted photodiode and two magnets. A base software layer was developed in Max/MSP and various patches were written with the goal of mapping the dynamic behavior of the wheel to varied musical processes.
@inproceedings{Sinyor2005, author = {Sinyor, Elliot and Wanderley, Marcelo M.}, title = {Gyrotyre : A dynamic hand-held computer-music controller based on a spinning wheel}, pages = {42--45}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176820}, url = {http://www.nime.org/proceedings/2005/nime2005_042.pdf}, keywords = {HCI, Digital Musical Instruments, Gyroscopic Precession, Rotational Inertia, Open Sound Control } }
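Deriving the wheel's angular frequency from successive sensor pulses, as the abstract above describes, reduces to timing revolutions. A small sketch with invented timestamps, assuming one pulse per revolution (the mapping to BPM is purely illustrative):

pulse_times = [0.00, 0.52, 1.01, 1.49]  # seconds at which the spoke-mounted marker passed the sensor

def angular_frequency_hz(times):
    # average period between pulses -> revolutions per second
    periods = [b - a for a, b in zip(times, times[1:])]
    return 1.0 / (sum(periods) / len(periods))

freq = angular_frequency_hz(pulse_times)
print(round(freq, 2), "rev/s ->", round(freq * 60), "BPM if one beat per revolution")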
Angelo Fraietta. 2005. The Smart Controller Workbench. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 46–49. http://doi.org/10.5281/zenodo.1176745
Abstract
Download PDF DOI
The Smart Controller is a portable hardware device that responds to input control voltage, OSC, and MIDI messages; producing output control voltage, OSC, and MIDI messages (depending upon the loaded custom patch). The Smart Controller is a stand-alone device; a powerful, reliable, and compact instrument capable of reducing the number of electronic modules required in a live performance or installation, particularly the requirement of a laptop computer. More powerful, however, is the Smart Controller Workbench, a complete interactive development environment. In addition to enabling the composer to create and debug their patches, the Smart Controller Workbench accurately simulates the behaviour of the hardware, and functions as an in-circuit debugger that enables the performer to remotely monitor, modify, and tune patches running in an installation without the requirement of stopping or interrupting the live performance.
@inproceedings{Fraietta2005a, author = {Fraietta, Angelo}, title = {The Smart Controller Workbench}, pages = {46--49}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176745}, url = {http://www.nime.org/proceedings/2005/nime2005_046.pdf}, keywords = {Control Voltage, Open Sound Control, Algorithmic Composition, MIDI, Sound Installations, programmable logic control, synthesizers, electronic music, Sensors, Actuators, Interaction. } }
Eric Singer, Jeff Feddersen, and Bil Bowen. 2005. A Large-Scale Networked Robotic Musical Instrument Installation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 50–55. http://doi.org/10.5281/zenodo.1176818
Abstract
Download PDF DOI
This paper describes an installation created by LEMUR (League of Electronic Musical Urban Robots) in January 2005. The installation included over 30 robotic musical instruments and a multi-projector real-time video projection and was controllable and programmable over a MIDI network. The installation was also controllable remotely via the Internet and could be heard and viewed via room mics and a robotic webcam connected to a streaming server.
@inproceedings{Singer2005, author = {Singer, Eric and Feddersen, Jeff and Bowen, Bil}, title = {A Large-Scale Networked Robotic Musical Instrument Installation}, pages = {50--55}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176818}, url = {http://www.nime.org/proceedings/2005/nime2005_050.pdf}, keywords = {Robotics, music, instruments, MIDI, video, interactive, networked, streaming.} }
Jesse T. Allison and Timothy Place. 2005. Teabox: A Sensor Data Interface System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 56–59. http://doi.org/10.5281/zenodo.1176693
Abstract
Download PDF DOI
Artists have long sought after alternative controllers, sensors, and other means for controlling computer-based musical performance in real-time. Traditional techniques for transmitting the data generated by such devices typically employ the use of MIDI as the transport protocol. Recently, several devices have been developed using alternatives to MIDI, including Ethernet-based and USB-based sensor interfaces. We have designed and produced a system that uses S/PDIF as the transport mechanism for a sensor interface. This provides robust performance, together with extremely low latency and high resolution. In our system, data from all sensors is multiplexed onto the digital audio line and demultiplexed in software on the computer using standard techniques. We have written demultiplexer objects and plugins for Max/MSP and Jade, as well as a MIDI conversion program for interapplication use, while others are in the works for PD, SuperCollider, and AudioUnits.
@inproceedings{Allison2005, author = {Allison, Jesse T. and Place, Timothy}, title = {Teabox: A Sensor Data Interface System}, pages = {56--59}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176693}, url = {http://www.nime.org/proceedings/2005/nime2005_056.pdf}, keywords = {Teabox, Electrotap, Sensor Interface, High Speed, High Resolution, Sensors, S/PDIF} }
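The multiplexing idea above can be sketched as follows (an editorial guess at a framing scheme, with one sync value followed by one sample per sensor channel; the actual Teabox framing may differ): the receiver scans the incoming audio samples for the sync value and slices out one frame of sensor readings after each occurrence.

SYNC = -1.0
N_CHANNELS = 4

def demultiplex(stream):
    frames = []
    i = 0
    while i + N_CHANNELS < len(stream):
        if stream[i] == SYNC:
            frames.append(stream[i + 1 : i + 1 + N_CHANNELS])
            i += 1 + N_CHANNELS
        else:
            i += 1  # resynchronize sample by sample
    return frames

audio_line = [0.2, SYNC, 0.10, 0.55, 0.91, 0.33, SYNC, 0.12, 0.54, 0.90, 0.35]
print(demultiplex(audio_line))  # -> two frames of four sensor values each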
Sageev Oore. 2005. Learning Advanced Skills on New Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 60–64. http://doi.org/10.5281/zenodo.1176794
Abstract
Download PDF DOI
When learning a classical instrument, people often either take lessons in which an existing body of “technique” is delivered, evolved over generations of performers, or in some cases people will “teach themselves” by watching people play and listening to existing recordings. What does one do with a complex new digital instrument? In this paper I address this question drawing on my experience in learning several very different types of sophisticated instruments: the Glove Talk II real-time gesture-to-speech interface, the Digital Marionette controller for virtual 3D puppets, and pianos and keyboards. As the primary user of the first two systems, I have spent hundreds of hours with Digital Marionette and Glove-Talk II, and thousands of hours with pianos and keyboards (I continue to work as a professional musician). I will identify some of the underlying principles and approaches that I have observed during my learning and playing experience common to these instruments. While typical accounts of users learning new interfaces generally focus on reporting beginner’s experiences, for various practical reasons, this is fundamentally different by focusing on the expert’s learning experience.
@inproceedings{Oore2005, author = {Oore, Sageev}, title = {Learning Advanced Skills on New Instruments}, pages = {60--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176794}, url = {http://www.nime.org/proceedings/2005/nime2005_060.pdf}, keywords = {performance, learning new instruments } }
Dan Livingstone and Eduardo Miranda. 2005. Orb3 – Adaptive Interface Design for Real time Sound Synthesis & Diffusion within Socially Mediated Spaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 65–69. http://doi.org/10.5281/zenodo.1176774
Abstract
Download PDF DOI
Haptic and Gestural interfaces offer new and novel ways of interacting with and creating new musical forms. Increasingly it is the integration of these interfaces with more complex adaptive systems or dynamically variable social contexts that provide significant opportunities for socially mediated composition through conscious and subconscious interaction. This paper includes a brief comparative survey of related works and articulates the design process and interaction modes or ‘play states’ for the Orb3 interface – 3 wireless mobile globes that collect and share environmental data and user interactions to synthesize and diffuse sound material in real time, a ‘social’ group of composer and listener objects. The physical interfaces are integrated into a portable 8 channel auditory sphere for collaborative interaction but can also be integrated with large-scale social environments, such as atria and other public spaces with embedded sound systems.
@inproceedings{Livingstone2005, author = {Livingstone, Dan and Miranda, Eduardo}, title = {Orb3 -- Adaptive Interface Design for Real time Sound Synthesis \& Diffusion within Socially Mediated Spaces}, pages = {65--69}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176774}, url = {http://www.nime.org/proceedings/2005/nime2005_065.pdf}, keywords = {Adaptive System, Sound Installation, Smart Interfaces, Music Robots, Spatial Music, Conscious Subconscious Interaction.} }
Georg Essl and Sile O’Modhrain. 2005. Scrubber: An Interface for Friction-induced Sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 70–75. http://doi.org/10.5281/zenodo.1176737
Abstract
Download PDF DOI
The Scrubber is a general controller for friction-induced sound. Allowing the user to engage in familiar gestures and feeling actual friction, the synthesized sound gains an evocative nature for the performer and a meaningful relationship between gesture and sound for the audience. It can control a variety of sound synthesis algorithms of which we demonstrate examples based on granular synthesis, wave-table synthesis and physically informed modeling.
@inproceedings{Essl2005, author = {Essl, Georg and O'Modhrain, Sile}, title = {Scrubber: An Interface for Friction-induced Sounds}, pages = {70--75}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176737}, url = {http://www.nime.org/proceedings/2005/nime2005_070.pdf} }
David Topper and Peter V. Swendsen. 2005. Wireless Dance Control : PAIR and WISEAR. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 76–79. http://doi.org/10.5281/zenodo.1176830
Abstract
Download PDF DOI
WISEAR (Wireless Sensor Array) is a Linux-based Embedded x86 TS-5600 SBC (Single Board Computer) specifically configured for use with music, dance and video performance technologies. The device offers a general purpose solution to many sensor and gestural controller problems. Much like the general purpose CPU, which resolved many issues of its predecessor (i.e., the special purpose DSP chip), the WISEAR box attempts to move beyond custom made BASIC stamp projects that are often created on a per-performance basis and rely heavily on MIDI. WISEAR is both lightweight and wireless. Unlike several commercial alternatives, it is also a completely open source project. PAIR (Partnering Analysis in Real Time) exploits the power of WISEAR and revisits the potential of hardware-based systems for real-time measurement of bodily movement. Our goal was to create a robust yet adaptable system that could attend to both general and precise aspects of performer interaction. Though certain commonalities with existing hardware systems exist, our PAIR system takes a fundamentally different approach by focusing specifically on the interaction of two or more dancers.
@inproceedings{Topper2005, author = {Topper, David and Swendsen, Peter V.}, title = {Wireless Dance Control : PAIR and WISEAR}, pages = {76--79}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176830}, url = {http://www.nime.org/proceedings/2005/nime2005_076.pdf} }
Roger B. Dannenberg, Ben Brown, Garth Zeglin, and Ron Lupish. 2005. McBlare: A Robotic Bagpipe Player. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 80–84. http://doi.org/10.5281/zenodo.1176729
Abstract
Download PDF DOI
McBlare is a robotic bagpipe player developed by the Robotics Institute at Carnegie Mellon University. McBlare plays a standard set of bagpipes, using a custom air compressor to supply air and electromechanical “fingers” to control the chanter. McBlare is MIDI controlled, allowing for simple interfacing to a keyboard, computer, or hardware sequencer. The control mechanism exceeds the measured speed of expert human performers. On the other hand, human performers surpass McBlare in their ability to compensate for limitations and imperfections in reeds, and we discuss future enhancements to address these problems. McBlare has been used to perform traditional bagpipe music as well as experimental computer generated music.
@inproceedings{Dannenberg2005, author = {Dannenberg, Roger B. and Brown, Ben and Zeglin, Garth and Lupish, Ron}, title = {McBlare: A Robotic Bagpipe Player}, pages = {80--84}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176729}, url = {http://www.nime.org/proceedings/2005/nime2005_080.pdf}, keywords = {bagpipes, robot, music, instrument, MIDI } }
Frédéric Bevilacqua, Rémy Müller, and Norbert Schnell. 2005. MnM: a Max/MSP mapping toolbox. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 85–88. http://doi.org/10.5281/zenodo.1176703
Abstract
Download PDF DOI
In this report, we describe our development on the Max/MSP toolbox MnM dedicated to mapping between gesture and sound, and more generally to statistical and machine learning methods. This library is built on top of the FTM library, which enables the efficient use of matrices and other data structures in Max/MSP. Mapping examples are described based on various matrix manipulations such as Singular Value Decomposition. The FTM and MnM libraries are freely available.
@inproceedings{Bevilacqua2005, author = {Bevilacqua, Fr\'{e}d\'{e}ric and M\"{u}ller, R\'{e}my and Schnell, Norbert}, title = {MnM: a Max/MSP mapping toolbox}, pages = {85--88}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176703}, url = {http://www.nime.org/proceedings/2005/nime2005_085.pdf}, keywords = {Mapping, interface design, matrix, Max/MSP. } }
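The abstract above describes matrix-based gesture-to-sound mapping. As a rough illustration of that idea outside Max/MSP, the following minimal Python sketch fits a linear map from example gesture vectors to synthesis parameters with a least-squares solve (computed internally via the singular value decomposition); all data and names are invented for the example and are not part of the MnM or FTM libraries.

    import numpy as np

    # Example training data (illustrative values only); each row is one observation.
    gestures = np.array([[0.1, 0.2, 0.0],      # e.g. x, y, pressure
                         [0.8, 0.1, 0.5],
                         [0.3, 0.9, 0.7],
                         [0.6, 0.4, 0.2]])
    sound_params = np.array([[200.0, 0.1],     # e.g. cutoff (Hz), gain
                             [800.0, 0.6],
                             [450.0, 0.9],
                             [600.0, 0.3]])

    # Fit a linear map W (gesture -> sound parameters) by least squares,
    # which NumPy solves via the singular value decomposition.
    W, residuals, rank, sv = np.linalg.lstsq(gestures, sound_params, rcond=None)

    def map_gesture(g):
        """Project a new gesture vector onto the learned sound parameters."""
        return np.asarray(g) @ W

    print(map_gesture([0.5, 0.5, 0.5]))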
Jean-Marc Pelletier. 2005. A Graphical Interface for Real-Time Signal Routing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 89–92. http://doi.org/10.5281/zenodo.1176800
Abstract
Download PDF DOI
This paper describes DspMap, a graphical user interface (GUI) designed to assist the dynamic routing of signal generators and modifiers currently being developed at the International Academy of Media Arts & Sciences. Instead of relying on traditional box-and-line approaches, DspMap proposes a design paradigm where connections are determined by the relative positions of the various elements in a single virtual space.
@inproceedings{Pelletier2005, author = {Pelletier, Jean-Marc}, title = {A Graphical Interface for Real-Time Signal Routing}, pages = {89--92}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176800}, url = {http://www.nime.org/proceedings/2005/nime2005_089.pdf}, keywords = {Graphical user interface, real-time performance, map, dynamic routing } }
Gary Scavone and Andrey R. Silva. 2005. Frequency Content of Breath Pressure and Implications for Use in Control. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 93–96. http://doi.org/10.5281/zenodo.1176810
Abstract
Download PDF DOI
The breath pressure signal applied to wind music instruments is generally considered to be a slowly varying function of time. In a context of music control, this assumption implies that a relatively low digital sample rate (100-200 Hz) is sufficient to capture and/or reproduce this signal. We tested this assumption by evaluating the frequency content in breath pressure, particularly during the use of extended performance techniques such as growling, humming, and flutter tonguing. Our results indicate frequency content in a breath pressure signal up to about 10 kHz, with especially significant energy within the first 1000 Hz. We further investigated the frequency response of several commercially available pressure sensors to assess their responsiveness to higher frequency breath signals. Though results were mixed, some devices were found capable of sensing frequencies up to at least 1.5 kHz. Finally, similar measurements were conducted with Yamaha WX11 and WX5 wind controllers and results suggest that their breath pressure outputs are sampled at about 320 Hz and 280 Hz, respectively.
@inproceedings{Scavone2005, author = {Scavone, Gary and Silva, Andrey R.}, title = {Frequency Content of Breath Pressure and Implications for Use in Control}, pages = {93--96}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176810}, url = {http://www.nime.org/proceedings/2005/nime2005_093.pdf}, keywords = {Breath Control, Wind Controller, Breath Sensors } }
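The bandwidth argument in the abstract above can be illustrated with a short spectral-analysis sketch in Python. The signal here is synthetic stand-in data rather than a measurement from the paper; the point is only how one might estimate the bandwidth of a breath-pressure recording and the control rate needed to capture it.

    import numpy as np

    fs = 44100                          # analysis sample rate of the recording (Hz)
    t = np.arange(0, 1.0, 1 / fs)
    # Stand-in for a recorded breath-pressure signal: a slow envelope plus
    # a 30 Hz flutter-tongue component (purely synthetic example data).
    pressure = 0.8 * np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.sin(2 * np.pi * 30 * t)

    spectrum = np.abs(np.fft.rfft(pressure * np.hanning(len(pressure))))
    freqs = np.fft.rfftfreq(len(pressure), 1 / fs)

    # Smallest bandwidth containing 99% of the spectral energy.
    energy = np.cumsum(spectrum ** 2)
    bandwidth = freqs[np.searchsorted(energy, 0.99 * energy[-1])]
    print(f"99% of energy below ~{bandwidth:.1f} Hz; "
          f"a control rate of at least {2 * bandwidth:.0f} Hz would be needed")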
Alain Crevoisier and Pietro Polotti. 2005. Tangible Acoustic Interfaces and their Applications for the Design of New Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 97–100. http://doi.org/10.5281/zenodo.1176727
Abstract
Download PDF DOI
Tangible Acoustic Interfaces (TAI) rely on various acoustic sensing technologies, such as sound source location and acoustic imaging, to detect the position of contact of users interacting with the surface of solid materials. With their ability to transform almost any physical object, flat or curved surfaces and walls into interactive interfaces, acoustic sensing technologies show a promising way to bring the sense of touch into the realm of computer interaction. Because music making has been closely related to this sense for centuries, an application of particular interest is the use of TAIs for the design of new musical instruments that match the physicality and expressiveness of classical instruments. This paper gives an overview of the various acoustic-sensing technologies involved in the realisation of TAIs and develops on the motivation underlying their use for the design of new musical instruments.
@inproceedings{Crevoisier2005, author = {Crevoisier, Alain and Polotti, Pietro}, title = {Tangible Acoustic Interfaces and their Applications for the Design of New Musical Instruments}, pages = {97--100}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176727}, url = {http://www.nime.org/proceedings/2005/nime2005_097.pdf}, keywords = {Tangible interfaces, new musical instruments design. } }
Ross Bencina. 2005. The Metasurface – Applying Natural Neighbour Interpolation to Two-to-Many Mapping. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 101–104. http://doi.org/10.5281/zenodo.1176701
Abstract
Download PDF DOI
This report describes The Metasurface – a mapping interface supporting interactive design of two-to-many mappings through the placement and interpolation of parameter snapshots on a plane. The Metasurface employs natural neighbour interpolation, a local interpolation method based on Voronoi tessellation, to interpolate between parameter snapshots. Compared to global field based methods, natural neighbour interpolation offers increased predictability and the ability to represent multi-scale surfaces. An implementation of the Metasurface in the AudioMulch software environment is presented and key architectural features of AudioMulch which facilitate this implementation are discussed.
@inproceedings{Bencina2005, author = {Bencina, Ross}, title = {The Metasurface -- Applying Natural Neighbour Interpolation to Two-to-Many Mapping}, pages = {101--104}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176701}, url = {http://www.nime.org/proceedings/2005/nime2005_101.pdf}, keywords = {computational geometry, design, design support, high-level control, interpolation, mapping, user interface} }
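The two-to-many mapping idea above, placing parameter snapshots on a plane and interpolating between them, can be sketched with a simpler scheme. The Python example below uses inverse-distance weighting rather than the paper's natural neighbour interpolation (which requires a Voronoi tessellation), and all positions and parameter values are illustrative.

    import numpy as np

    # Parameter snapshots placed on a 2-D control plane (illustrative values).
    positions = np.array([[0.1, 0.2], [0.9, 0.3], [0.5, 0.8]])
    snapshots = np.array([[440.0, 0.2, 0.1],    # e.g. freq, gain, reverb per preset
                          [220.0, 0.8, 0.5],
                          [330.0, 0.5, 0.9]])

    def interpolate(xy, power=2.0, eps=1e-9):
        """Blend all snapshots by inverse-distance weighting around point xy."""
        d = np.linalg.norm(positions - np.asarray(xy), axis=1)
        w = 1.0 / (d ** power + eps)
        w /= w.sum()
        return w @ snapshots

    print(interpolate([0.4, 0.4]))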
Andrey R. Silva, Marcelo M. Wanderley, and Gary Scavone. 2005. On the Use of Flute Air Jet as A Musical Control Variable. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 105–108. http://doi.org/10.5281/zenodo.1176814
Abstract
Download PDF DOI
This paper aims to present some perspectives on mapping embouchure gestures of flute players and their use as control variables. For this purpose, we have analyzed several types of sensors, in terms of sensitivity, dimension, accuracy and price, which can be used to implement a system capable of mapping embouchure parameters such as air jet velocity and air jet direction. Finally, we describe the implementation of a sensor system used to map embouchure gestures of a classical Boehm flute.
@inproceedings{Silva2005, author = {Silva, Andrey R. and Wanderley, Marcelo M. and Scavone, Gary}, title = {On the Use of Flute Air Jet as A Musical Control Variable}, pages = {105--108}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176814}, url = {http://www.nime.org/proceedings/2005/nime2005_105.pdf}, keywords = {Embouchure, air pressure sensors, hot wires, mapping, augmented flute. } }
Xavier Rodet, Jean-Philippe Lambert, Roland Cahen, et al. 2005. Study of haptic and visual interaction for sound and music control in the Phase project. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 109–114. http://doi.org/10.5281/zenodo.1176804
Abstract
Download PDF DOI
The PHASE project is a research project devoted to the study and the realization of systems of multi-modal interaction for generation, handling and control of sound and music. Supported by the network RIAM (Recherche et Innovation en Audiovisuel et Multimédia), it was carried out by the CEA-LIST for haptic research, Haption for the realization of the haptic device, Ondim for integration and visual realization and Ircam for research and realization about sound, music and the metaphors for interaction. The integration of the three modalities offers completely innovative capacities for interaction. The objectives are scientific, cultural and educational. Finally, an additional objective was to test such a prototype system, including its haptic arm, in real conditions for general public and over a long duration in order to measure its solidity, its reliability and its interest for users. Thus, during the last three months of the project, a demonstrator was presented and evaluated in a museum in Paris, in the form of an interactive installation offering the public a musical game. Different from a video game, the aim is not to animate the pixels on the screen but to play music and to incite musical awareness.
@inproceedings{Rodet2005, author = {Rodet, Xavier and Lambert, Jean-Philippe and Cahen, Roland and Gaudy, Thomas and Guedy, Fabrice and Gosselin, Florian and Mobuchon, Pascal}, title = {Study of haptic and visual interaction for sound and music control in the Phase project}, pages = {109--114}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176804}, url = {http://www.nime.org/proceedings/2005/nime2005_109.pdf}, keywords = {Haptic, interaction, sound, music, control, installation. } }
Golan Levin and Zachary Lieberman. 2005. Sounds from Shapes: Audiovisual Performance with Hand Silhouette Contours in The Manual Input Sessions. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 115–120. http://doi.org/10.5281/zenodo.1176772
Abstract
Download PDF DOI
We report on The Manual Input Sessions, a series of audiovisual vignettes which probe the expressive possibilities of free-form hand gestures. Performed on a hybrid projection system which combines a traditional analog overhead projector and a digital PC video projector, our vision-based software instruments generate dynamic sounds and graphics solely in response to the forms and movements of the silhouette contours of the user’s hands. Interactions and audiovisual mappings which make use of both positive (exterior) and negative (interior) contours are discussed.
@inproceedings{Levin2005a, author = {Levin, Golan and Lieberman, Zachary}, title = {Sounds from Shapes: Audiovisual Performance with Hand Silhouette Contours in The Manual Input Sessions}, pages = {115--120}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176772}, url = {http://www.nime.org/proceedings/2005/nime2005_115.pdf}, keywords = {Audiovisual performance, hand silhouettes, computer vision, contour analysis, sound-image relationships, augmented reality. } }
Tomoko Yonezawa, Takahiko Suzuki, Kenji Mase, and Kiyoshi Kogure. 2005. HandySinger : Expressive Singing Voice Morphing using Personified Hand-puppet Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 121–126. http://doi.org/10.5281/zenodo.1176844
Abstract
Download PDF DOI
The HandySinger system is a personified tool developed to naturally express a singing voice controlled by the gestures of a hand puppet. Assuming that a singing voice is a kind of musical expression, natural expressions of the singing voice are important for personification. We adopt a singing voice morphing algorithm that effectively smoothes out the strength of expressions delivered with a singing voice. The system’s hand puppet consists of a glove with seven bend sensors and two pressure sensors. It sensitively captures the user’s motion as a personified puppet’s gesture. To synthesize the different expressional strengths of a singing voice, the “normal” (without expression) voice of a particular singer is used as the base of morphing, and three different expressions, “dark,” “whisper” and “wet,” are used as the target. This configuration provides musically expressed controls that are intuitive to users. In the experiment, we evaluate whether 1) the morphing algorithm interpolates expressional strength in a perceptual sense, 2) the handpuppet interface provides gesture data at sufficient resolution, and 3) the gestural mapping of the current system works as planned.
@inproceedings{Yonezawa2005, author = {Yonezawa, Tomoko and Suzuki, Takahiko and Mase, Kenji and Kogure, Kiyoshi}, title = {HandySinger : Expressive Singing Voice Morphing using Personified Hand-puppet Interface}, pages = {121--126}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176844}, url = {http://www.nime.org/proceedings/2005/nime2005_121.pdf}, keywords = {Personified Expression, Singing Voice Morphing, Voice Expressivity, Hand-puppet Interface } }
Mathias Funk, Kazuhiro Kuwabara, and Michael J. Lyons. 2005. Sonification of Facial Actions for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 127–131. http://doi.org/10.5281/zenodo.1176750
Abstract
Download PDF DOI
The central role of the face in social interaction and non-verbal communication suggests that we explore facial action as a means of musical expression. This paper presents the design, implementation, and preliminary studies of a novel system utilizing face detection and optic flow algorithms to associate facial movements with sound synthesis in a topographically specific fashion. We report on our experience with various gesture-to-sound mappings and applications, and describe our preliminary experiments at musical performance using the system.
@inproceedings{Funk2005, author = {Funk, Mathias and Kuwabara, Kazuhiro and Lyons, Michael J.}, title = {Sonification of Facial Actions for Musical Expression}, pages = {127--131}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176750}, url = {http://www.nime.org/proceedings/2005/nime2005_127.pdf}, keywords = {Video-based musical interface; gesture-based interaction; facial expression; facial therapy interface. } }
Jordi Janer. 2005. Voice-controlled plucked bass guitar through two synthesis techniques. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 132–135. http://doi.org/10.5281/zenodo.1176758
Abstract
Download PDF DOI
In this paper we present an example of the use of the singing voice as a controller for digital music synthesis. The analysis of the voice with spectral processing techniques, derived from the Short-Time Fourier Transform, provides ways of determining a performer’s vocal intentions. We demonstrate a prototype, in which the extracted vocal features drive the synthesis of a plucked bass guitar. The sound synthesis stage includes two different synthesis techniques, Physical Models and Spectral Morph.
@inproceedings{Janer2005, author = {Janer, Jordi}, title = {Voice-controlled plucked bass guitar through two synthesis techniques}, pages = {132--135}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176758}, url = {http://www.nime.org/proceedings/2005/nime2005_132.pdf}, keywords = {Singing voice, musical controller, sound synthesis, spectral processing. } }
Paul D. Lehrman and Todd M. Ryan. 2005. Bridging the Gap Between Art and Science Education Through Teaching Electronic Musical Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 136–139. http://doi.org/10.5281/zenodo.1176768
Abstract
Download PDF DOI
Electronic Musical Instrument Design is an excellent vehicle for bringing students from multiple disciplines together to work on projects, and help bridge the perennial gap between the arts and the sciences. This paper describes how at Tufts University, a school with no music technology program, students from the engineering (electrical, mechanical, and computer), music, performing arts, and visual arts areas use their complementary skills, and teach each other, to develop new devices and systems for music performance and control.
@inproceedings{Lehrman2005, author = {Lehrman, Paul D. and Ryan, Todd M.}, title = {Bridging the Gap Between Art and Science Education Through Teaching Electronic Musical Instrument Design}, pages = {136--139}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176768}, url = {http://www.nime.org/proceedings/2005/nime2005_136.pdf}, keywords = {Science education, music education, engineering, electronic music, gesture controllers, MIDI. } }
Hans-christoph Steiner. 2005. [hid] toolkit: a Unified Framework for Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 140–143. http://doi.org/10.5281/zenodo.1176824
Abstract
Download PDF DOI
The [hid] toolkit is a set of software objects for designing computer-based gestural instruments. All too frequently, computer-based performers are tied to the keyboard-mouse-monitor model, narrowly constraining the range of possible gestures. A multitude of gestural input devices are readily available, making it easy to utilize a broader range of gestures. Human Interface Devices (HIDs) such as joysticks, tablets, and gamepads are cheap and can be good musical controllers. Some even provide haptic feedback. The [hid] toolkit provides a unified, consistent framework for getting gestural data from these devices, controlling the feedback, and mapping this data to the desired output. The [hid] toolkit is built in Pd, which provides an ideal platform for this work, combining the ability to synthesize and control audio and video. The addition of easy access to gestural data allows for rapid prototypes. A usable environment also makes computer music instrument design accessible to novices.
@inproceedings{Steiner2005, author = {Steiner, Hans-christoph}, title = {[hid] toolkit: a Unified Framework for Instrument Design}, pages = {140--143}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176824}, url = {http://www.nime.org/proceedings/2005/nime2005_140.pdf}, keywords = {Instrument design, haptic feedback, gestural control, HID } }
Teemu Maki-patola. 2005. User Interface Comparison for Virtual Drums. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 144–147. http://doi.org/10.5281/zenodo.1176784
Abstract
Download PDF DOI
An experimental study comparing different user interfaces for a virtual drum is reported. Virtual here means that the drum is not a physical object. 16 subjects played the drum on five different interfaces and two metronome patterns trying to match their hits to the metronome clicks. Temporal accuracy of the playing was evaluated. The subjects also rated the interfaces subjectively. The results show that hitting the drum alternately from both sides with motion going through the drum plate was less accurate than the traditional one sided hitting. A physical stick was more accurate than a virtual computer graphic stick. Visual feedback of the drum slightly increased accuracy compared to receiving only auditory feedback. Most subjects evaluated the physical stick to offer a better feeling and to be more pleasant than the virtual stick.
@inproceedings{Makipatola2005b, author = {Maki-patola, Teemu}, title = {User Interface Comparison for Virtual Drums}, pages = {144--147}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176784}, url = {http://www.nime.org/proceedings/2005/nime2005_144.pdf}, keywords = {Virtual drum, user interface, feedback, musical instrument design, virtual reality, sound control, percussion instrument. } }
Jürg Gutknecht, Art Clay, and Thomas Frey. 2005. GoingPublik: Using Realtime Global Score Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 148–151. http://doi.org/10.5281/zenodo.1176754
Abstract
Download PDF DOI
This paper takes the reader through various elements of the GoingPublik sound artwork for distributive ensemble and introduces the Realtime Score Synthesis tool (RSS) used as a controller in the work. The collaboration between artists and scientists, details concerning the experimental hardware and software, and new theories of sound art are briefly explained and illustrated. The scope of this project is too broad to be fully covered in this paper, therefore the selection of topics made attempts to draw attention to the work itself and balance theory with practice.
@inproceedings{Gutknecht2005, author = {Gutknecht, J{\"u}rg and Clay, Art and Frey, Thomas}, title = {GoingPublik: Using Realtime Global Score Synthesis}, pages = {148--151}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176754}, url = {http://www.nime.org/proceedings/2005/nime2005_148.pdf}, keywords = {Mobile Multimedia, Wearable Computers, Score Synthesis, Sound Art, System Research, HCIs } }
Lars Pellarin, Niels Böttcher, Jakob M. Olsen, Ole Gregersen, Stefania Serafin, and Michel Guglielmi. 2005. Connecting Strangers at a Train Station. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 152–155. http://doi.org/10.5281/zenodo.1176798
Abstract
Download PDF DOI
In this paper we describe a virtual instrument or a performance space, placed at Høje Tåstrup train station in Denmark, which is meant to establish communicative connections between strangers, by letting users of the system create soundscapes together across the rails. We discuss mapping strategies and complexity and suggest a possible solution for a final instance of our interactive musical performance system.
@inproceedings{Pellarin2005, author = {Pellarin, Lars and B\"{o}ttcher, Niels and Olsen, Jakob M. and Gregersen, Ole and Serafin, Stefania and Guglielmi, Michel}, title = {Connecting Strangers at a Train Station}, pages = {152--155}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176798}, url = {http://www.nime.org/proceedings/2005/nime2005_152.pdf}, keywords = {Motion tracking, mapping strategies, public installation, multiple participants music interfaces. } }
Greg Schiemer and Mark Havryliv. 2005. Pocket Gamelan: a Pure Data interface for mobile phones. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 156–159. http://doi.org/10.5281/zenodo.1176812
Abstract
Download PDF DOI
This paper describes software tools used to create java applications for performing music using mobile phones. The tools provide a means for composers working in the Pure Data composition environment to design and audition performances using ensembles of mobile phones. These tools were developed as part of a larger project motivated by the desire to allow large groups of non-expert players to perform music based on just intonation using ubiquitous technology. The paper discusses the process that replicates a Pure Data patch so that it will operate within the hardware and software constraints of the Java 2 Micro Edition. It also describes development of objects that will enable mobile phone performances to be simulated accurately in PD and to audition microtonal tuning implemented using MIDI in the j2me environment. These tools eliminate the need for composers to compose for mobile phones by writing java code. In a single desktop application, they offer the composer the flexibility to write music for multiple phones.
@inproceedings{Schiemer2005, author = {Schiemer, Greg and Havryliv, Mark}, title = {Pocket Gamelan: a Pure Data interface for mobile phones}, pages = {156--159}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176812}, url = {http://www.nime.org/proceedings/2005/nime2005_156.pdf}, keywords = {Java 2 Micro Edition; j2me; Pure Data; PD; Real-Time Media Performance; Just Intonation. } }
David Birchfield, David Lorig, and Kelly Phillips. 2005. Sustainable: a dynamic, robotic, sound installation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 160–163. http://doi.org/10.5281/zenodo.1176705
Abstract
Download PDF DOI
This paper details the motivations, design, and realization of Sustainable, a dynamic, robotic sound installation that employs a generative algorithm for music and sound creation. The piece is comprised of seven autonomous water gong nodes that are networked together by water tubes to distribute water throughout the system. A water resource allocation algorithm guides this distribution process and produces an ever-evolving sonic and visual texture. A simple set of behaviors govern the individual gongs, and the system as a whole exhibits emergent properties that yield local and large scale forms in sound and light.
@inproceedings{Birchfield2005, author = {Birchfield, David and Lorig, David and Phillips, Kelly}, title = {Sustainable: a dynamic, robotic, sound installation}, pages = {160--163}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176705}, url = {http://www.nime.org/proceedings/2005/nime2005_160.pdf}, keywords = {computing,dynamic systems,evolutionary,generative arts,installation art,music,robotics,sculpture,sound} }
Paulo Maria Rodrigues, Luis Miguel Girão, and Rolf Gehlhaar. 2005. CyberSong. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 164–167. http://doi.org/10.5281/zenodo.1176808
Abstract
Download PDF DOI
We present our work in the development of an interface for an actor/singer and its use in performing. Our work combines aspects of theatrical music with technology. Our interface has allowed the development of a new vocabulary for musical and theatrical expression and the possibility for merging classical and experimental music. It gave rise to a strong, strange, unpredictable, yet coherent, "character" and opens up the possibility for a full performance that will explore aspects of voice, theatrical music and, in the future, image projection.
@inproceedings{Rodrigues2005, author = {Rodrigues, Paulo Maria and Gir\~{a}o, Luis Miguel and Gehlhaar, Rolf}, title = {CyberSong}, pages = {164--167}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176808}, url = {http://www.nime.org/proceedings/2005/nime2005_164.pdf}, keywords = {Theatrical music, computer interaction, voice, gestural control. } }
Jamie Allen. 2005. boomBox. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 168–171. http://doi.org/10.5281/zenodo.1176691
Abstract
Download PDF DOI
This paper describes the development, function and performance contexts of a digital musical instrument called "boomBox". The instrument is a wireless, orientation-aware, low-frequency, high-amplitude human motion controller for live and sampled sound. The instrument has been used in performance and sound installation contexts. I describe some of what I have learned from the project herein.
@inproceedings{Allen2005, author = {Allen, Jamie}, title = {boomBox}, pages = {168--171}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176691}, url = {http://www.nime.org/proceedings/2005/nime2005_168.pdf}, keywords = {Visceral control, sample manipulation, Bluetooth®, metaphor, remutualizing instrument, Human Computer Interaction.} }
Alex Loscos and Thomas Aussenac. 2005. The wahwactor: a voice controlled wah-wah pedal. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 172–175. http://doi.org/10.5281/zenodo.1176776
Abstract
Download PDF DOI
Using a wah-wah guitar pedal is something guitar players have to learn. Recently, more intuitive ways to control such an effect have been proposed. In this direction, the Wahwactor system controls a wah-wah transformation in real-time using the guitar player’s voice, more precisely, using the performer’s [wa-wa] utterances. To come up with this system, different vocal features derived from spectral analysis have been studied as candidates for being used as control parameters. This paper details the results of the study and presents the implementation of the whole system.
@inproceedings{Loscos2005, author = {Loscos, Alex and Aussenac, Thomas}, title = {The wahwactor: a voice controlled wah-wah pedal}, pages = {172--175}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176776}, url = {http://www.nime.org/proceedings/2005/nime2005_172.pdf} }
William Carter and Leslie S. Liu. 2005. Location33: A Mobile Musical. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 176–179. http://doi.org/10.5281/zenodo.1176723
Abstract
Download PDF DOI
In this paper, we describe a course of research investigating the potential for new types of music made possible by location tracking and wireless technologies. Listeners walk around downtown Culver City, California and explore a new type of musical album by mixing together songs and stories based on their movement. By using mobile devices as an interface, we can create new types of musical experiences that allow listeners to take a more interactive approach to an album.
@inproceedings{Carter2005, author = {Carter, William and Liu, Leslie S.}, title = {Location33: A Mobile Musical}, pages = {176--179}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176723}, url = {http://www.nime.org/proceedings/2005/nime2005_176.pdf}, keywords = {Mobile Music, Digital Soundscape, Location-Based Entertainment, Mobility, Interactive Music, Augmented Reality } }
Laszlo Bardos, Stefan Korinek, Eric Lee, and Jan Borchers. 2005. Bangarama: Creating Music With Headbanging. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 180–183. http://doi.org/10.5281/zenodo.1176699
Abstract
Download PDF DOI
Bangarama is a music controller using headbanging as the primary interaction metaphor. It consists of a head-mounted tilt sensor and a guitar-shaped controller that does not require complex finger positions. We discuss the specific challenges of designing and building this controller to create a simple, yet responsive and playable instrument, and show how ordinary materials such as plywood, tinfoil, and copper wire can be turned into a device that enables a fun, collaborative music-making experience.
@inproceedings{Bardos2005, author = {Bardos, Laszlo and Korinek, Stefan and Lee, Eric and Borchers, Jan}, title = {Bangarama: Creating Music With Headbanging}, pages = {180--183}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176699}, url = {http://www.nime.org/proceedings/2005/nime2005_180.pdf}, keywords = {head movements, music controllers, interface design, input devices } }
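A headbanging trigger of the kind described above could, in outline, be derived from tilt-sensor readings with simple threshold logic. The following Python sketch is a hypothetical illustration; the thresholds and the normalisation of the tilt values are invented, not taken from the Bangarama implementation.

    def detect_bangs(tilt_samples, threshold=0.6):
        """Return indices where the head tilt crosses the threshold downwards.

        tilt_samples: sequence of normalized tilt values (0 = upright,
        1 = fully bowed). Illustrative trigger logic only.
        """
        bangs = []
        armed = True
        for i, tilt in enumerate(tilt_samples):
            if armed and tilt >= threshold:
                bangs.append(i)        # trigger the next chord here
                armed = False
            elif tilt < 0.3:           # head back up: re-arm the trigger
                armed = True
        return bangs

    print(detect_bangs([0.1, 0.4, 0.7, 0.8, 0.2, 0.1, 0.65]))   # -> [2, 6]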
Alvaro Barbosa, Jorge Cardoso, and Günter Geiger. 2005. Network Latency Adaptive Tempo in the Public Sound Objects System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 184–187. http://doi.org/10.5281/zenodo.1176697
Abstract
Download PDF DOI
In recent years Computer Network-Music has increasingly captured the attention of the Computer Music Community. With the advent of Internet communication, geographical displacement amongst the participants of a computer mediated music performance achieved world wide extension. However, when established over long distance networks, this form of musical communication has a fundamental problem: network latency (or net-delay) is an impediment for real-time collaboration. From a recent study, carried out by the authors, a relation between network latency tolerance and Music Tempo was established. This result emerged from an experiment, in which simulated network latency conditions were applied to the performance of different musicians playing jazz standard tunes. The Public Sound Objects (PSOs) project is a web-based shared musical space, which has been an experimental framework to implement and test different approaches for on-line music communication. This paper describes features implemented in the latest version of the PSOs system, including the notion of a network-music instrument incorporating latency as a software function, by dynamically adapting its tempo to the communication delay measured in real-time.
@inproceedings{Barbosa2005, author = {Barbosa, Alvaro and Cardoso, Jorge and Geiger, G\"{u}nter}, title = {Network Latency Adaptive Tempo in the Public Sound Objects System}, pages = {184--187}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176697}, url = {http://www.nime.org/proceedings/2005/nime2005_184.pdf}, keywords = {Network Music Instruments; Latency in Real-Time Performance; Interface-Decoupled Electronic Musical Instruments; Behavioral Driven Interfaces; Collaborative Remote Music Performance; } }
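The notion above of incorporating latency as a software function, slowing the tempo as the measured delay grows, can be sketched as follows in Python. The rule and the safety factor are placeholders chosen for illustration; the actual tolerance relation established in the authors' study is not reproduced here.

    def adapted_tempo(preferred_bpm, one_way_latency_ms, safety=1.5):
        """Lower the tempo until a beat period comfortably exceeds the delay.

        Placeholder rule for illustration only: the beat period is kept at
        least `safety` times the measured one-way latency.
        """
        beat_period_ms = 60000.0 / preferred_bpm
        min_period_ms = safety * one_way_latency_ms
        return min(preferred_bpm, 60000.0 / max(beat_period_ms, min_period_ms))

    # A 120 BPM session over a 400 ms link would be slowed to 100 BPM here.
    print(adapted_tempo(120, 400))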
Nicolas Villar, Adam T. Lindsay, and Hans Gellersen. 2005. Pin & Play & Perform: A rearrangeable interface for musical composition and performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 188–191. http://doi.org/10.5281/zenodo.1176834
Abstract
Download PDF DOI
We present the Pin&Play&Perform system: an interface in the form of a tablet on which a number of physical controls can be added, removed and arranged on the fly. These controls can easily be mapped to existing music software using the MIDI protocol. The interface provides a mechanism for direct manipulation of application parameters and events through a set of familiar controls, while also encouraging a high degree of customisation through the ability to arrange, rearrange and annotate the spatial layout of the interface components on the surface of the tablet. The paper describes how we have realized this concept using the Pin&Play technology. As an application example, we describe our experiences in using our interface in conjunction with Propellerheads’ Reason, a popular piece of music synthesis software.
@inproceedings{Villar2005, author = {Villar, Nicolas and Lindsay, Adam T. and Gellersen, Hans}, title = {Pin \& Play \& Perform: A rearrangeable interface for musical composition and performance}, pages = {188--191}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176834}, url = {http://www.nime.org/proceedings/2005/nime2005_188.pdf}, keywords = {tangible interface, rearrangeable interface, midi controllers } }
David Birnbaum, Rebecca Fiebrink, Joseph Malloch, and Marcelo M. Wanderley. 2005. Towards a Dimension Space for Musical Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 192–195. http://doi.org/10.5281/zenodo.1176707
Abstract
Download PDF DOI
While several researchers have grappled with the problem of comparing musical devices across performance, installation, and related contexts, no methodology yet exists for producing holistic, informative visualizations for these devices. Drawing on existing research in performance interaction, human-computer interaction, and design space analysis, the authors propose a dimension space representation that can be adapted for visually displaying musical devices. This paper illustrates one possible application of the dimension space to existing performance and interaction systems, revealing its usefulness both in exposing patterns across existing musical devices and aiding in the design of new ones.
@inproceedings{Birnbaum2005, author = {Birnbaum, David and Fiebrink, Rebecca and Malloch, Joseph and Wanderley, Marcelo M.}, title = {Towards a Dimension Space for Musical Devices}, pages = {192--195}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176707}, url = {http://www.nime.org/proceedings/2005/nime2005_192.pdf}, keywords = {design space analysis,human-computer interaction,interfaces for musical expression,new} }
Ge Wang and Perry R. Cook. 2005. Yeah, ChucK It! => Dynamic, Controllable Interface Mapping. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 196–199. http://doi.org/10.5281/zenodo.1176838
Abstract
Download PDF DOI
ChucK is a programming language for real-time sound synthesis. It provides generalized audio abstractions and precise control over timing and concurrency — combining the rapid-prototyping advantages of high-level programming tools, such as Pure Data, with the flexibility and controllability of lower-level, text-based languages like C/C++. In this paper, we present a new time-based paradigm for programming controllers with ChucK. In addition to real-time control over sound synthesis, we show how features such as dynamic patching, on-the-fly controller mapping, multiple control rates, and precisely-timed recording and playback of sensors can be employed under the ChucK programming model. Using this framework, composers, programmers, and performers can quickly write (and read/debug) complex controller/synthesis programs, and experiment with controller mapping on-the-fly.
@inproceedings{Wang2005a, author = {Wang, Ge and Cook, Perry R.}, title = {Yeah, ChucK It! => Dynamic, Controllable Interface Mapping}, pages = {196--199}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176838}, url = {http://www.nime.org/proceedings/2005/nime2005_196.pdf}, keywords = {Controller mapping, programming language, on-the-fly programming, real-time interaction, concurrency. } }
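On-the-fly controller mapping, as described above, amounts to swapping the mapping function while a timed control loop keeps running. The sketch below is plain Python rather than ChucK code, with a stub sensor and printouts standing in for a synthesis engine, purely to illustrate the idea.

    import math, time

    def read_sensor():
        """Stand-in for a controller input in 0..1 (slow sine for the demo)."""
        return 0.5 + 0.5 * math.sin(time.time())

    def linear_map(v):
        return 200.0 + v * 800.0          # Hz

    def exp_map(v):
        return 200.0 * 2.0 ** (2.0 * v)   # Hz, up to two octaves above 200

    mapping = linear_map                  # can be reassigned while the loop runs

    for step in range(200):               # 2 seconds at a 100 Hz control rate
        freq = mapping(read_sensor())
        # A synth voice would be updated here; we just print occasionally.
        if step % 50 == 0:
            print(f"step {step}: {freq:.1f} Hz")
        if step == 100:
            mapping = exp_map             # on-the-fly remapping, mid-performance
        time.sleep(0.01)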
Adam R. Tindale, Ajay Kapur, George Tzanetakis, Peter Driessen, and Andrew Schloss. 2005. A Comparison of Sensor Strategies for Capturing Percussive Gestures. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 200–203. http://doi.org/10.5281/zenodo.1176828
Abstract
Download PDF DOI
Drum controllers designed by researchers and commercial companies use a variety of techniques for capturing percussive gestures. It is challenging to obtain both quick response times and low-level data (such as position) that contain expressive information. This research is a comprehensive study of current methods to evaluate the available strategies and technologies. This study aims to demonstrate the benefits and detriments of the current state of percussion controllers as well as yield tools for those who would wish to conduct this type of study in the future.
@inproceedings{Tindale2005, author = {Tindale, Adam R. and Kapur, Ajay and Tzanetakis, George and Driessen, Peter and Schloss, Andrew}, title = {A Comparison of Sensor Strategies for Capturing Percussive Gestures}, pages = {200--203}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176828}, url = {http://www.nime.org/proceedings/2005/nime2005_200.pdf}, keywords = {Percussion Controllers, Timbre-recognition based instruments, Electronic Percussion, Sensors for Interface Design } }
Eric Lee and Jan Borchers. 2005. The Role of Time in Engineering Computer Music Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 204–207. http://doi.org/10.5281/zenodo.1176766
Abstract
Download PDF DOI
Discussion of time in interactive computer music systems engineering has been largely limited to data acquisition rates and latency. Since music is an inherently time-based medium, we believe that time plays a more important role in both the usability and implementation of these systems. In this paper, we present a time design space, which we use to expose some of the challenges of developing computer music systems with time-based interaction. We describe and analyze the time-related issues we encountered whilst designing and building a series of interactive music exhibits that fall into this design space. These issues often occur because of the varying and sometimes conflicting conceptual models of time in the three domains of user, application (music), and engineering. We present some of our latest work in conducting gesture interpretation and frameworks for digital audio, which attempt to analyze and address these conflicts in temporal conceptual models.
@inproceedings{Lee2005, author = {Lee, Eric and Borchers, Jan}, title = {The Role of Time in Engineering Computer Music Systems}, pages = {204--207}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176766}, url = {http://www.nime.org/proceedings/2005/nime2005_204.pdf}, keywords = {time design, conceptual models of time, design spaces, interactive music exhibits, engineering music systems} }
Shigeru Kobayashi and Masayuki Akamatsu. 2005. Spinner: A Simple Approach to Reconfigurable User Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 208–211. http://doi.org/10.5281/zenodo.1176764
Abstract
Download PDF DOI
This paper reports our recent development of a reconfigurable user interface. We created a system that consists of a dial-type controller, ‘Spinner’, and GUI (Graphical User Interface) objects for the Max/MSP environment[1]. One physical controller corresponds to one GUI controller on a PC’s display device, and a user can freely change the connection on the fly (i.e. associate the physical controller with another GUI controller). Since the user interface on the PC side runs in the highly flexible Max/MSP environment, a user can freely reconfigure the layout of GUI controllers. A single ‘Spinner’ control device consists of a rotary encoder with a push button to count rotations and a photo IC that detects identifying patterns from the GUI objects. Since ‘Spinner’ features a simple identification method, it can be used with ordinary display devices such as an LCD (Liquid Crystal Display) or a CRT (Cathode Ray Tube). A user can access multiple ‘Spinner’ devices simultaneously. By using this system, a user can build a reconfigurable user interface.
@inproceedings{Kobayashi2005, author = {Kobayashi, Shigeru and Akamatsu, Masayuki}, title = {Spinner: A Simple Approach to Reconfigurable User Interfaces}, pages = {208--211}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176764}, url = {http://www.nime.org/proceedings/2005/nime2005_208.pdf}, keywords = {Reconfigurable, Sensors, Computer Music } }
Thor Magnusson. 2005. ixi software: The Interface as Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 212–215. http://doi.org/10.5281/zenodo.1176782
Abstract
Download PDF DOI
This paper describes the audio human computer interface experiments of ixi in the past and outlines the current platform for future research. ixi software [5] was founded by Thor Magnusson and Enrike Hurtado Mendieta in year 2000 and since then we’ve been working on building prototypes in the form of screen-based graphical user interfaces for musical performance, researching human computer interaction in the field of music and creating environments which other people can use to do similar work and for us to use in our workshops. Our initial starting point was that computer music software and the way their interfaces are built need not necessarily be limited to copying the acoustic musical instruments and studio technology that we already have, but additionally we can create unique languages and work processes for the virtual world. The computer is a vast creative space with specific qualities that can and should be explored.
@inproceedings{Magnusson2005, author = {Magnusson, Thor}, title = {ixi software: The Interface as Instrument}, pages = {212--215}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176782}, url = {http://www.nime.org/proceedings/2005/nime2005_212.pdf}, keywords = {Graphical user interfaces, abstract graphical interfaces, hypercontrol, intelligent instruments, live performance, machine learning, catalyst software, OSC, interfacing code, open source, Pure Data, SuperCollider. } }
Eduardo Miranda and Andrew Brouse. 2005. Toward Direct Brain-Computer Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 216–219. http://doi.org/10.5281/zenodo.1176792
Abstract
Download PDF DOI
Musicians and composers have been using brainwaves as generative sources in music for at least 40 years and the possibility of a brain-computer interface for direct communication and control was first seriously investigated in the early 1970s. Work has been done by many artists and technologists in the intervening years to attempt to control music systems with brainwaves and — indeed — many other biological signals. Despite the richness of EEG, fMRI and other data which can be read from the human brain, there has up to now been only limited success in translating the complex encephalographic data into satisfactory musical results. We are currently pursuing research which we believe will lead to the possibility of direct brain-computer interfaces for rich and expressive musical control. This report will outline the directions of our current research and results.
@inproceedings{Miranda2005, author = {Miranda, Eduardo and Brouse, Andrew}, title = {Toward Direct Brain-Computer Musical Interfaces}, pages = {216--219}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176792}, url = {http://www.nime.org/proceedings/2005/nime2005_216.pdf}, keywords = {Brain-Computer Interface, BCI, Electroencephalogram, EEG, brainwaves, music and the brain, interactive music systems.} }
Robyn Taylor, Daniel Torres, and Pierre Boulanger. 2005. Using Music to Interact with a Virtual Character. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 220–223. http://doi.org/10.5281/zenodo.1176826
Abstract
Download PDF DOI
We present a real-time system which allows musicians to interact with synthetic virtual characters as they perform. Using Max/MSP to parameterize keyboard and vocal input, meaningful features (pitch, amplitude, chord information, and vocal timbre) are extracted from live performance in real-time. These extracted musical features are then mapped to character behaviour in such a way that the musician’s performance elicits a response from the virtual character. The system uses the ANIMUS framework to generate believable character expressions. Experimental results are presented for simple characters.
@inproceedings{Taylor2005, author = {Taylor, Robyn and Torres, Daniel and Boulanger, Pierre}, title = {Using Music to Interact with a Virtual Character}, pages = {220--223}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176826}, url = {http://www.nime.org/proceedings/2005/nime2005_220.pdf}, keywords = {Music, synthetic characters, advanced man-machine interfaces, virtual reality, behavioural systems, interaction techniques, visualization, immersive entertainment, artistic installations } }
Elaine Chew, Alexander R. Francois, Jie Liu, and Aaron Yang. 2005. ESP: A Driving Interface for Expression Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 224–227. http://doi.org/10.5281/zenodo.1176725
Abstract
Download PDF DOI
In the Expression Synthesis Project (ESP), we propose a driving interface for expression synthesis. ESP aims to provide a compelling metaphor for expressive performance so as to make high-level expressive decisions accessible to non-experts. In ESP, the user drives a car on a virtual road that represents the music with its twists and turns, and makes decisions on how to traverse each part of the road. The driver’s decisions affect in real-time the rendering of the piece. The pedals and wheel provide a tactile interface for controlling the car dynamics and musical expression, while the display portrays a first-person view of the road and dashboard from the driver’s seat. This game-like interface allows non-experts to create expressive renderings of existing music without having to master an instrument, and allows expert musicians to experiment with expressive choice without having to first master the notes of the piece. The prototype system has been tested and refined in numerous demonstrations. This paper presents the concepts underlying the ESP system and the architectural design and implementation of a prototype.
@inproceedings{Chew2005, author = {Chew, Elaine and Francois, Alexander R. and Liu, Jie and Yang, Aaron}, title = {ESP: A Driving Interface for Expression Synthesis}, pages = {224--227}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176725}, url = {http://www.nime.org/proceedings/2005/nime2005_224.pdf}, keywords = {Music expression synthesis system, driving interface. } }
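A driving-to-expression mapping of the kind ESP describes could look roughly like the following Python sketch, in which pedal position and road curvature are turned into a tempo and a MIDI-style velocity. The formula and constants are invented for illustration and are not taken from the ESP system.

    def expressive_rendering(pedal, curvature, base_bpm=100):
        """Map driving input to playback parameters (illustrative rule only).

        pedal: accelerator position 0..1; curvature: current road bend 0..1.
        More pedal raises the tempo; tighter curves pull the tempo back and
        soften the dynamics, loosely mimicking slowing into a phrase turn.
        """
        tempo = base_bpm * (0.8 + 0.6 * pedal) * (1.0 - 0.3 * curvature)
        velocity = min(127, int(40 + 80 * pedal * (1.0 - 0.5 * curvature)))
        return tempo, velocity

    print(expressive_rendering(pedal=0.7, curvature=0.2))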
Cornelius Poepel. 2005. On Interface Expressivity: A Player-Based Study. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 228–231. http://doi.org/10.5281/zenodo.1176802
Abstract
Download PDF DOI
While many new interfaces for musical expression have been presented in the past, methods to evaluate these interfaces are rare. This paper presents a method and a study comparing the potential for musical expression of different string-instrument based musical interfaces. Cues for musical expression are defined based on results of research in musical expression and on methods for musical education in instrumental pedagogy. Interfaces are evaluated according to how well they are estimated to allow players to make use of their existing technique for the creation of expressive music.
@inproceedings{Poepel2005, author = {Poepel, Cornelius}, title = {On Interface Expressivity: A Player-Based Study}, pages = {228--231}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176802}, url = {http://www.nime.org/proceedings/2005/nime2005_228.pdf}, keywords = {Musical Expression, electronic bowed string instrument, evaluation of musical input devices, audio signal driven sound synthesis } }
Johnny Wingstedt, Mats Liljedahl, Stefan Lindberg, and Jan Berg. 2005. REMUPP – An Interactive Tool for Investigating Musical Properties and Relations. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 232–235. http://doi.org/10.5281/zenodo.1176842
Abstract
Download PDF DOI
A typical experiment design within the field of music psychology is playing music to a test subject who listens and reacts – most often by verbal means. One limitation of this kind of test is the inherent difficulty of measuring an emotional reaction in a laboratory setting. This paper describes the design, functions and possible uses of the software tool REMUPP (Relations between musical parameters and perceived properties), designed for investigating various aspects of musical experience. REMUPP allows for non-verbal examination of selected musical parameters (such as tonality, tempo, timbre, articulation, volume, register etc.) in a musical context. The musical control is put into the hands of the subject, introducing an element of creativity and enhancing the sense of immersion. Information acquired with REMUPP can be output as numerical data for statistical analysis, but the tool is also suited for use with more qualitatively oriented methods.
@inproceedings{Wingstedt2005, author = {Wingstedt, Johnny and Liljedahl, Mats and Lindberg, Stefan and Berg, Jan}, title = {REMUPP -- An Interactive Tool for Investigating Musical Properties and Relations}, pages = {232--235}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176842}, url = {http://www.nime.org/proceedings/2005/nime2005_232.pdf}, keywords = {Musical experience, non-verbal test techniques, musical parameters.} }
Perry R. Cook. 2005. Real-Time Performance Controllers for Synthesized Singing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 236–237. http://doi.org/10.5281/zenodo.1176846
Abstract
Download PDF DOI
A wide variety of singing synthesis models and methods exist, but there are remarkably few real-time controllers for these models. This paper describes a variety of devices developed over the last few years for controlling singing synthesis models implemented in the Synthesis Toolkit in C++ (STK), Max/MSP, and ChucK. All of the controllers share some common features, such as air-pressure sensing for breathing and/or loudness control, means to control pitch, and methods for selecting and blending phonemes, diphones, and words. However, the form factors, sensors, mappings, and algorithms vary greatly between the different controllers.
@inproceedings{Cook2005, author = {Cook, Perry R.}, title = {Real-Time Performance Controllers for Synthesized Singing}, pages = {236--237}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176846}, url = {http://www.nime.org/proceedings/2005/nime2005_236.pdf}, keywords = {Singing synthesis, real-time singing synthesis control. } }
David Kim-Boyle. 2005. Musical Score Generation in Valses and Etudes. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 238–239. http://doi.org/10.5281/zenodo.1176762
Abstract
Download PDF DOI
The author describes a recent composition for piano and computer in which the score performed by the pianist, read from a computer monitor, is generated in real-time from a vocabulary of predetermined scanned score excerpts. The author outlines the algorithm used to choose and display a particular excerpt and describes some of the musical difficulties faced by the pianist in a performance of the work.
@inproceedings{KimBoyle2005, author = {Kim-Boyle, David}, title = {Musical Score Generation in Valses and Etudes}, pages = {238--239}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176762}, url = {http://www.nime.org/proceedings/2005/nime2005_238.pdf}, keywords = {Score generation, Jitter. } }
Kevin C. Baird. 2005. Real-Time Generation of Music Notation via Audience Interaction Using Python and GNU Lilypond. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 240–241. http://doi.org/10.5281/zenodo.1176695
Abstract
Download PDF DOI
No Clergy is an interactive music performance/installation in which the audience is able to shape the ongoing music. In it, members of a small acoustic ensemble read music notation from computer screens. As each page refreshes, the notation is altered and shaped by both stochastic transformations of earlier music within the same performance and audience feedback, collected via standard CGI forms.
@inproceedings{Baird2005, author = {Baird, Kevin C.}, title = {Real-Time Generation of Music Notation via Audience Interaction Using Python and {GNU} Lilypond}, pages = {240--241}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176695}, url = {http://www.nime.org/proceedings/2005/nime2005_240.pdf}, keywords = {notation, stochastic, interactive, audience, Python, Lilypond } }
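The No Clergy pipeline pairs Python with GNU LilyPond to regenerate notation as the piece runs. As a loose sketch of that pipeline only (the mutation rule, pitch set, and file name are invented; compiling the output would require a `lilypond` binary), the following Python code stochastically varies a phrase and writes a minimal LilyPond source file.
```python
import random

# Hypothetical Python-to-LilyPond sketch: stochastically vary a pitch sequence
# and emit a LilyPond source file that could be compiled to notation
# (e.g. with `lilypond page.ly`). Not the No Clergy implementation.

PITCHES = ["c'", "d'", "e'", "f'", "g'", "a'", "b'"]

def mutate(phrase, rate=0.3):
    """Randomly replace some notes of the phrase with neighbouring pitches."""
    out = []
    for note in phrase:
        if random.random() < rate:
            idx = PITCHES.index(note) + random.choice([-1, 1])
            idx = max(0, min(len(PITCHES) - 1, idx))
            out.append(PITCHES[idx])
        else:
            out.append(note)
    return out

def to_lilypond(phrase):
    """Wrap a list of LilyPond note names in a minimal score."""
    return "\\version \"2.24.0\"\n{ " + " ".join(n + "4" for n in phrase) + " }\n"

if __name__ == "__main__":
    phrase = mutate(["c'", "e'", "g'", "e'"])
    with open("page.ly", "w") as f:
        f.write(to_lilypond(phrase))
    print("wrote page.ly:", phrase)
```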
Jesse Fox and Jennifer Carlile. 2005. SoniMime: Movement Sonification for Real-Time Timbre Shaping. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 242–243. http://doi.org/10.5281/zenodo.1176741
Abstract
Download PDF DOI
This paper describes the design of SoniMime, a system for the sonification of hand movement for real-time timbre shaping. We explore the application of the tristimulus timbre model for the sonification of gestural data, working toward the goals of musical expressivity and physical responsiveness. SoniMime uses two 3-D accelerometers connected to an Atmel microprocessor which outputs OSC control messages. Data filtering, parameter mapping, and sound synthesis take place in Pd running on a Linux computer.
@inproceedings{Fox2005, author = {Fox, Jesse and Carlile, Jennifer}, title = {SoniMime: Movement Sonification for Real-Time Timbre Shaping}, pages = {242--243}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176741}, url = {http://www.nime.org/proceedings/2005/nime2005_242.pdf}, keywords = {Sonification, Musical Controller, Human Computer Interaction } }
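The SoniMime abstract above outlines a chain of accelerometer data filtering and parameter mapping ahead of synthesis in Pd. A minimal, standard-library-only Python sketch of that stage is given below; the smoothing constant and the tristimulus-style mapping are invented for illustration, since the actual filtering and mapping run in Pd.
```python
# Hypothetical sketch of the filtering and mapping stage described above;
# the real SoniMime does this in Pd, so the numbers are illustrative only.

def smooth(prev, sample, alpha=0.2):
    """One-pole low-pass filter applied per accelerometer axis."""
    return tuple(p + alpha * (s - p) for p, s in zip(prev, sample))

def to_tristimulus(accel):
    """Map filtered 3-axis acceleration to three weights that sum to 1."""
    mags = [abs(a) + 1e-6 for a in accel]      # avoid division by zero
    total = sum(mags)
    return tuple(m / total for m in mags)

if __name__ == "__main__":
    state = (0.0, 0.0, 0.0)
    stream = [(0.1, 0.9, 0.0), (0.2, 0.8, 0.1), (0.9, 0.1, 0.0)]
    for sample in stream:
        state = smooth(state, sample)
        print([round(w, 3) for w in to_tristimulus(state)])
```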
Robert Huott. 2005. Precise Control on Compound Curves. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 244–245. http://doi.org/10.5281/zenodo.1176848
Abstract
Download PDF DOI
This paper presents the ‘Bean’, a novel controller employing a multi-touch sensate surface in a compound curve shape. The design goals, construction, and mapping system are discussed, along with a retrospective from a previous, similar design.
@inproceedings{Huott2005, author = {Huott, Robert}, title = {Precise Control on Compound Curves}, pages = {244--245}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176848}, url = {http://www.nime.org/proceedings/2005/nime2005_244.pdf}, keywords = {Musical controller, sensate surface, mapping system } }
Robert Lugo and Jack Damondrick. 2005. Beat Boxing : Expressive Control for Electronic Music Performance and Musical Applications. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 246–247. http://doi.org/10.5281/zenodo.1176778
Abstract
Download PDF DOI
This paper describes the design and implementation of BeatBoxing, a percussive gestural interface for the live performance of electronic music and control of computer-based games and musical activities.
@inproceedings{Lugo2005, author = {Lugo, Robert and Damondrick, Jack}, title = {Beat Boxing : Expressive Control for Electronic Music Performance and Musical Applications}, pages = {246--247}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176778}, url = {http://www.nime.org/proceedings/2005/nime2005_246.pdf}, keywords = {Performance, Gestural Mapping, Music Controller, Human-Computer Interaction, PureData (Pd), OSC } }
Ivan Franco. 2005. The Airstick: A Free-Gesture Controller Using Infrared Sensing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 248–249. http://doi.org/10.5281/zenodo.1176747
Abstract
Download PDF DOI
This paper describes the development of AirStick, an interface for musical expression. AirStick is played in the air, in a Theremin style. It is composed of an array of infrared proximity sensors, which allow the mapping of the position of any interfering obstacle inside a bi-dimensional zone. This controller sends both x and y control data to various real-time synthesis algorithms.
@inproceedings{Franco2005, author = {Franco, Ivan}, title = {The Airstick: A Free-Gesture Controller Using Infrared Sensing}, pages = {248--249}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176747}, url = {http://www.nime.org/proceedings/2005/nime2005_248.pdf}, keywords = {Music Controller, Infrared Sensing, Computer Music. } }
Jennifer Carlile and Björn Hartmann. 2005. OROBORO: A Collaborative Controller with Interpersonal Haptic Feedback. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 250–251. http://doi.org/10.5281/zenodo.1176721
Abstract
Download PDF DOI
OROBORO is a novel collaborative controller which focuses on musical performance as social experience by exploring synchronized actions of two musicians operating a single instrument. Each performer uses two paddle mechanisms – one for hand orientation sensing and one for servo-motor actuated feedback. We introduce a haptic mirror in which the movement of one performer’s sensed hand is used to induce movement of the partner’s actuated hand and vice versa. We describe theoretical motivation, and hardware/software implementation.
@inproceedings{Carlile2005, author = {Carlile, Jennifer and Hartmann, Bj{\"{o}}rn}, title = {{OR}OBORO: A Collaborative Controller with Interpersonal Haptic Feedback}, pages = {250--251}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176721}, url = {http://www.nime.org/proceedings/2005/nime2005_250.pdf}, keywords = {Musical Controller, Collaborative Control, Haptic Interfaces } }
David Rodríguez and Iván Rodríguez. 2005. VIFE _alpha v.01 Real-time Visual Sound Installation performed by Glove-Gesture. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 252–253. http://doi.org/10.5281/zenodo.1176806
Abstract
Download PDF DOI
We present VIFE _alpha v.01 (Virtual Interface to Feel Emotions). The work investigates the idea of synaesthesia and its enormous possibilities for creating new realities, sensations and zones where the user can find new points of interaction. This interface allows the user to create sound and visual compositions in real time. Six three-dimensional sound forms are modified according to the movements of the user; these forms represent sound objects that respond to the user by means of sensory stimuli. Multiple combinations of colors and sound effects are superposed on one another to give rise to a unique experience.
@inproceedings{Rodriguez2005, author = {Rodr\'{\i}guez, David and Rodr\'{\i}guez, Iv\'{a}n}, title = {VIFE \_alpha v.01 Real-time Visual Sound Installation performed by Glove-Gesture}, pages = {252--253}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176806}, url = {http://www.nime.org/proceedings/2005/nime2005_252.pdf}, keywords = {Synaesthesia, 3D render, new reality, virtual interface, creative interaction, sensors. } }
David Hindman and Spencer Kiser. 2005. Sonictroller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 254–255. http://doi.org/10.5281/zenodo.1176756
Abstract
Download PDF DOI
The Sonictroller was originally conceived as a means of introducing competition into an improvisatory musical performance. By reverse-engineering a popular video game console, we were able to map sound information (volume, pitch, and pitch sequences) to any continuous or momentary action of a video game sprite.
@inproceedings{Hindman2005, author = {Hindman, David and Kiser, Spencer}, title = {Sonictroller}, pages = {254--255}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176756}, url = {http://www.nime.org/proceedings/2005/nime2005_254.pdf}, keywords = {video game, Nintendo, music, sound, controller, Mortal Kombat, trumpet, guitar, voice } }
William Verplank. 2005. Haptic Music Exercises. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 256–257. http://doi.org/10.5281/zenodo.1176832
Abstract
Download PDF DOI
Pluck, ring, rub, bang, strike, and squeeze are all simple gestures used in controlling music. A single motor/encoder plus a force-sensor has proved to be a useful platform for experimenting with haptic feedback in controlling computer music. The surprise is that the “best” haptics (precise, stable) may not be the most “musical”.
@inproceedings{Verplank2005, author = {Verplank, William}, title = {Haptic Music Exercises}, pages = {256--257}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176832}, url = {http://www.nime.org/proceedings/2005/nime2005_256.pdf}, keywords = {Music control, haptic feedback, physical interaction design, Input/output devices, interactive systems, haptic I/O} }
John Eaton and Robert Moog. 2005. Multiple-Touch-Sensitive Keyboard. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 258–259. http://doi.org/10.5281/zenodo.1176735
Abstract
Download PDF DOI
In this presentation, we discuss and demonstrate a multiple touch sensitive (MTS) keyboard developed by Robert Moog for John Eaton. Each key of the keyboard is equipped with sensors that detect the three-dimensional position of the performer’s finger. The presentation includes some of Eaton’s performances for certain earlier prototypes as well as this keyboard.
@inproceedings{Eaton2005, author = {Eaton, John and Moog, Robert}, title = {Multiple-Touch-Sensitive Keyboard}, pages = {258--259}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176735}, url = {http://www.nime.org/proceedings/2005/nime2005_258.pdf}, keywords = {Multiple touch sensitive, MTS, keyboard, key sensor design, upgrading to present-day computers } }
Angelo Fraietta. 2005. Smart Controller / Bell Garden Demo. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 260–261. http://doi.org/10.5281/zenodo.1176743
Abstract
Download PDF DOI
This paper will demonstrate the use of the Smart Controller workbench in the Interactive Bell Garden.
@inproceedings{Fraietta2005, author = {Fraietta, Angelo}, title = {Smart Controller / Bell Garden Demo}, pages = {260--261}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176743}, url = {http://www.nime.org/proceedings/2005/nime2005_260.pdf}, keywords = {Control Voltage, Open Sound Control, Algorithmic Composition, MIDI, Sound Installations, Programmable Logic Control, Synthesizers. } }
Mauricio Melo and Doria Fan. 2005. Swayway — Midi Chimes. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 262–263. http://doi.org/10.5281/zenodo.1176790
Abstract
Download PDF DOI
The Swayway is an audio/MIDI device inspired by the simple concept of the wind chime. This interactive sculpture translates its swaying motion, triggered by the user, into sound and light. Additionally, the motion of the reeds contributes to the visual aspect of the piece, converting the whole into a sensory and engaging experience.
@inproceedings{Melo2005, author = {Melo, Mauricio and Fan, Doria}, title = {Swayway --- Midi Chimes}, pages = {262--263}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176790}, url = {http://www.nime.org/proceedings/2005/nime2005_262.pdf}, keywords = {Interactive sound sculpture, flex sensors, midi chimes, LEDs, sound installation. } }
Derek Wang. 2005. Bubbaboard and Mommaspeaker: Creating Digital Tonal Sounds from an Acoustic Percussive Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 264–265. http://doi.org/10.5281/zenodo.1176836
Abstract
Download PDF DOI
This paper describes the transformation of an everyday object into a digital musical instrument. By tracking hand movements and tilt on one of two axes, the Bubbaboard, a transformed handheld washboard, allows a user to play scales at different octaves while simultaneously offering the ability to use its inherent acoustic percussive qualities. Processed sound is fed to the Mommaspeaker, which creates physically generated vibrato at a speed determined by tilting the Bubbaboard on its second axis.
@inproceedings{Wang2005, author = {Wang, Derek}, title = {Bubbaboard and Mommaspeaker: Creating Digital Tonal Sounds from an Acoustic Percussive Instrument}, pages = {264--265}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176836}, url = {http://www.nime.org/proceedings/2005/nime2005_264.pdf}, keywords = {Gesture based controllers, Musical Performance, MIDI, Accelerometer, Microcontroller, Contact Microphone } }
Emmanuel Fléty. 2005. The WiSe Box: a Multi-performer Wireless Sensor Interface using WiFi and OSC. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 266–267. http://doi.org/10.5281/zenodo.1176739
Abstract
Download PDF DOI
The WiSe Box is a new wireless digitizing interface for sensors and controllers. An increasing demand for this kind of hardware, especially in the field of dance and computer performance, led us to design a wireless digitizer that allows for multiple users, with high bandwidth and accuracy. The interface design was initiated in early 2004 and briefly described in reference [1]. Our recent effort was directed at making this device available to the community in the form of a manufactured product, similarly to our previous interfaces such as AtoMIC Pro, Eobody or Ethersense [1][2][3]. We describe here the principles we used for the design of the device as well as its technical specifications. The demo will show several devices running at once and used in real-time with a varied set of sensors.
@inproceedings{Flety2005, author = {Fl\'{e}ty, Emmanuel}, title = {The WiSe Box: a Multi-performer Wireless Sensor Interface using {WiFi} and OSC}, pages = {266--267}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176739}, url = {http://www.nime.org/proceedings/2005/nime2005_266.pdf}, keywords = {Gesture, Sensors, WiFi, 802.11, OpenSoundControl. } }
Adam Bowen. 2005. Soundstone: A 3-D Wireless Music Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 268–269. http://doi.org/10.5281/zenodo.1176711
Abstract
Download PDF DOI
Soundstone is a small wireless music controller that tracks movement and gestures, and maps these signals to characteristics of various synthesized and sampled sounds. It is intended to become a general-purpose platform for exploring the sonification of movement, with an emphasis on tactile (haptic) feedback.
@inproceedings{Bowen2005, author = {Bowen, Adam}, title = {Soundstone: A {3-D} Wireless Music Controller}, pages = {268--269}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176711}, url = {http://www.nime.org/proceedings/2005/nime2005_268.pdf}, keywords = {Gesture recognition, haptics, human factors, force, acceleration, tactile feedback, general purpose controller, wireless. } }
Alain C. Guisan. 2005. Interactive Sound Installation: INTRIUM. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 270–270. http://doi.org/10.5281/zenodo.1176752
Abstract
Download PDF DOI
INTRIUM is an interactive sound installation exploring the inside vibration of the atrium. A certain number of architectural elements are fitted with acoustic sensors in order to capture the vibration they produce when they are manipulated or touched by hands. This raw sound is further processed in real-time, allowing the participants to create a sonic landscape in the atrium, as the result of a collaborative and collective work between them.
@inproceedings{Guisan2005, author = {Guisan, Alain C.}, title = {Interactive Sound Installation: INTRIUM}, pages = {270--270}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176752}, url = {http://www.nime.org/proceedings/2005/nime2005_270.pdf}, keywords = {Interactive sound installation, collaborative work, sound processing, acoustic source localization.} }
Eric Socolofsky. 2005. Contemplace. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 271–271. http://doi.org/10.5281/zenodo.1176822
Abstract
Download PDF DOI
Contemplace is a spatial personality that redesigns itself dynamically according to its conversations with its visitors. Sometimes welcoming, sometimes shy, and sometimes hostile, Contemplace's mood is apparent through a display of projected graphics, spatial sound, and physical motion. Contemplace is an environment in which inhabitation becomes a two-way dialogue.
@inproceedings{Socolofsky2005, author = {Socolofsky, Eric}, title = {Contemplace}, pages = {271--271}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176822}, url = {http://www.nime.org/proceedings/2005/nime2005_271.pdf}, keywords = {Interactive space, spatial installation, graphic and aural display, motion tracking, Processing, Flosc } }
Maia Marinelli, Jared Lamenzo, and Liubo Borissov. 2005. Mocean. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 272–272. http://doi.org/10.5281/zenodo.1176786
Abstract
Download PDF DOI
Mocean is an immersive environment that creates sensory relationships between natural media, particularly exploring the potential of water as an emotive interface.
@inproceedings{Marinelli2005, author = {Marinelli, Maia and Lamenzo, Jared and Borissov, Liubo}, title = {Mocean}, pages = {272--272}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176786}, url = {http://www.nime.org/proceedings/2005/nime2005_272.pdf}, keywords = {New interface, water, pipe organ, natural media, PIC microcontroller, wind instrument, human computer interface. } }
Seiichiro Matsumura and Chuichi Arakawa. 2005. Hop Step Junk: Sonic Visualization using Footsteps. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 273–273. http://doi.org/10.5281/zenodo.1176788
Abstract
Download PDF DOI
’Hop Step Junk’ is an interactive sound installation that creates audio and visual representations of the audience’s footsteps. The sound of a footstep is very expressive. Depending on one’s weight, clothing and gait, a footstep can sound quite different. The period between steps defines one’s personal rhythm. The sound output of ’Hop Step Junk’ is wholly derived from the audience’s footsteps. ’Hop Step Junk’ creates a multi-generational playground, an instrument that an audience can easily play.
@inproceedings{Matsumura2005, author = {Matsumura, Seiichiro and Arakawa, Chuichi}, title = {Hop Step Junk: Sonic Visualization using Footsteps}, pages = {273--273}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176788}, url = {http://www.nime.org/proceedings/2005/nime2005_273.pdf}, keywords = {Footsteps, body action, interactive, visualization, simple and reliable interface, contact microphone, sound playground} }
Meghan Deutscher, Sidney S. Fels, Reynald Hoskinson, and Sachiyo Takahashi. 2005. Echology. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 274–274. http://doi.org/10.5281/zenodo.1176733
BibTeX
Download PDF DOI
@inproceedings{Deutscher2005, author = {Deutscher, Meghan and Fels, Sidney S. and Hoskinson, Reynald and Takahashi, Sachiyo}, title = {Echology}, pages = {274--274}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2005}, address = {Vancouver, BC, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176733}, url = {http://www.nime.org/proceedings/2005/nime2005_274.pdf}, keywords = {Mediascape, sound spatialization, interactive art, Beluga whale} }
2004
Sidney S. Fels, Linda Kaastra, Sachiyo Takahashi, and Graeme Mccaig. 2004. Evolving Tooka: from Experiment to Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 1–6. http://doi.org/10.5281/zenodo.1176595
Abstract
Download PDF DOI
The Tooka was created as an exploration of two-person instruments. We have worked with two Tooka performers to enhance the original experimental device to make a musical instrument played and enjoyed by them. The main additions to the device include: an additional button that behaves as a music capture button, a bend sensor, an additional thumb-actuated pressure sensor for vibrato, additional musical mapping strategies, and new interfacing hardware. These developments arose through experiences and recommendations from the musicians playing it. In addition to the changes to the Tooka, this paper describes the learning process and experiences of the musicians performing with the Tooka.
@inproceedings{Fels2004, author = {Fels, Sidney S. and Kaastra, Linda and Takahashi, Sachiyo and Mccaig, Graeme}, title = {Evolving Tooka: from Experiment to Instrument}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176595}, url = {http://www.nime.org/proceedings/2004/nime2004_001.pdf}, keywords = {Musician-centred design, two-person musical instrument.} }
Ajay Kapur, Ariel J. Lazier, Philip L. Davidson, Scott Wilson, and Perry R. Cook. 2004. The Electronic Sitar Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 7–12. http://doi.org/10.5281/zenodo.1176623
Abstract
Download PDF DOI
This paper describes the design of an Electronic Sitar controller, a digitally modified version of Saraswati's (the Hindu Goddess of Music) 19-stringed, pumpkin shelled, traditional North Indian instrument. The ESitar uses sensor technology to extract gestural information from a performer, deducing music information such as pitch, pluck timing, thumb pressure, and 3 axes of head tilt to trigger real-time sounds and graphics. It allows for a variety of traditional sitar technique as well as new performance methods. Graphical feedback allows for artistic display and pedagogical feedback. The ESitar uses a programmable Atmel microprocessor which outputs control messages via a standard MIDI jack.
@inproceedings{Kapur2004, author = {Kapur, Ajay and Lazier, Ariel J. and Davidson, Philip L. and Wilson, Scott and Cook, Perry R.}, title = {The Electronic Sitar Controller}, pages = {7--12}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176623}, url = {http://www.nime.org/proceedings/2004/nime2004_007.pdf}, keywords = {atmel microcontroller,controller,electronic sitar,esitar,human computer interface,indian string controller,instrument graphical feedback,midi,veldt} }
Masami Takahata, Kensuke Shiraki, Yutaka Sakane, and Yoichi Takebayashi. 2004. Sound Feedback for Powerful Karate Training. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 13–18. http://doi.org/10.5281/zenodo.1176673
Abstract
Download PDF DOI
We have developed a new sound feedback system for powerful and enjoyable karate training, which makes it possible to extract the player's movements, understand the player's activities, and render them as sounds. We have designed a karate training environment which consists of a multimodal room with cameras, microphones, video displays and loudspeakers, and wearable devices with a sensor and a sound generator. Experiments have been conducted on ten karate beginners for ten months to examine the effectiveness of learning appropriate body action and sharpness in the basic punch called TSUKI. The experimental results suggest that the proposed sound feedback and training environment enable beginners to enjoy learning karate.
@inproceedings{Takahata2004, author = {Takahata, Masami and Shiraki, Kensuke and Sakane, Yutaka and Takebayashi, Yoichi}, title = {Sound Feedback for Powerful Karate Training}, pages = {13--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176673}, url = {http://www.nime.org/proceedings/2004/nime2004_013.pdf}, keywords = {Sound feedback, Karate, Learning environment, Wearable device} }
Martin Kaltenbrunner, Günter Geiger, and Sergi Jordà. 2004. Dynamic Patches for Live Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 19–22. http://doi.org/10.5281/zenodo.1176621
Abstract
Download PDF DOI
This article reflects the current state of the reacTable* project, an electronic music instrument with a tangible table-based interface, which is currently under development at the Audiovisual Institute at the Universitat Pompeu Fabra. In this paper we are focussing on the issue of Dynamic Patching, which is a particular and unique aspect of the sound synthesis and control paradigms of the reacTable*. Unlike common visual programming languages for sound synthesis, which conceptually separate the patch building process from the actual musical performance, the reacTable* combines the construction and playing of the instrument in a unique way. The tangible interface allows direct manipulation control over any of the used building blocks, which physically represent the whole synthesizer function.
@inproceedings{Kaltenbrunner2004, author = {Kaltenbrunner, Martin and Geiger, G\"{u}nter and Jord\`{a}, Sergi}, title = {Dynamic Patches for Live Musical Performance}, pages = {19--22}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176621}, url = {http://www.nime.org/proceedings/2004/nime2004_019.pdf}, keywords = {dynamic patching,musical instrument,sound synthesis,tangible interfaces,visual programming} }
Diana Young and Ichiro Fujinaga. 2004. AoBachi: A New Interface for Japanese Drumming. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 23–26. http://doi.org/10.5281/zenodo.1176687
Abstract
Download PDF DOI
We present a prototype of a new musical interface for Japanese drumming techniques and styles. Our design, used in the AoBachi drumming sticks, provides 5 gesture parameters (3 axes of acceleration, and 2 axes of angular velocity) for each of the two sticks and transmits this data wirelessly using Bluetooth® technology. This system utilizes minimal hardware embedded in the two drumming sticks, allowing for gesture tracking of drum strokes by an interface of traditional form, appearance, and feel. AoBachi is portable, versatile, and robust, and may be used for a variety of musical applications, as well as analytical studies.
@inproceedings{Young2004, author = {Young, Diana and Fujinaga, Ichiro}, title = {AoBachi: A New Interface for {Japan}ese Drumming}, pages = {23--26}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176687}, url = {http://www.nime.org/proceedings/2004/nime2004_023.pdf}, keywords = {bluetooth,drum stick,japanese drum,taiko,wireless} }
Nick Bryan-Kinns and Patrick G. Healey. 2004. Daisyphone: Support for Remote Music Collaboration. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 27–30. http://doi.org/10.5281/zenodo.1176583
Abstract
Download PDF DOI
We have seen many new and exciting developments in new interfaces for musical expression. In this paper we present the design of an interface for remote group music improvisation and composition - Daisyphone. The approach relies on players creating and editing short shared loops of music which are semi-synchronously updated. The interface emphasizes the looping nature of the music and is designed to be engaging and deployable on a wide range of interaction devices. Observations of the use of the tool with different levels of persistence of contribution are reported and discussed. Future developments centre around ways to string loops together into larger pieces (composition) and investigating suitable rates of decay to encourage more group improvisation.
@inproceedings{BryanKinns2004, author = {Bryan-Kinns, Nick and Healey, Patrick G.}, title = {Daisyphone: Support for Remote Music Collaboration}, pages = {27--30}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176583}, url = {http://www.nime.org/proceedings/2004/nime2004_027.pdf}, keywords = {collaboration,composition,improvisation,music} }
Christophe Havel and Myriam Desainte-Catherine. 2004. Modeling an Air Percussion for Composition and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 31–34. http://doi.org/10.5281/zenodo.1176609
Abstract
Download PDF DOI
This paper presents a project involving a percussionist playing on a virtual percussion instrument. Both artistic and technical aspects of the project are developed. In particular, a method for strike recognition using the Flock of Birds is presented, as well as its use for artistic purposes.
@inproceedings{Havel2004, author = {Havel, Christophe and Desainte-Catherine, Myriam}, title = {Modeling an Air Percussion for Composition and Performance}, pages = {31--34}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176609}, url = {http://www.nime.org/proceedings/2004/nime2004_031.pdf}, keywords = {Gesture analysis, virtual percussion, strike recognition.} }
Mark Nelson and Belinda Thom. 2004. A Survey of Real-Time MIDI Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 35–38. http://doi.org/10.5281/zenodo.1176643
Abstract
Download PDF DOI
Although MIDI is often used for computer-based interactive music applications, its real-time performance is rarely quantified, despite concerns about whether it is capable of adequate performance in realistic settings. We extend existing proposals for MIDI performance benchmarking so they are useful in realistic interactive scenarios, including those with heavy MIDI traffic and CPU load. We have produced a cross-platform freely-available testing suite that is easy to use, and have used it to survey the interactive performance of several commonly-used computer/MIDI setups. We describe the suite, summarize the results of our performance survey, and detail the benefits of this testing methodology.
@inproceedings{Nelson2004, author = {Nelson, Mark and Thom, Belinda}, title = {A Survey of Real-Time {MIDI} Performance}, pages = {35--38}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176643}, url = {http://www.nime.org/proceedings/2004/nime2004_035.pdf} }
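The benchmarking approach described in the abstract above rests on timestamping MIDI events at send and receive time and summarizing the resulting latencies. The sketch below shows only that statistics-gathering logic in Python; the round trip is simulated with a random delay, whereas a real test would route messages through an actual MIDI loopback (for example via a hardware cable or a MIDI library), which is not shown here.
```python
import random
import statistics
import time

# Sketch of timestamp-based latency measurement; the loopback below is
# simulated, so the printed numbers only demonstrate the statistics a real
# MIDI round-trip test would collect.

def simulated_midi_roundtrip():
    """Stand-in for sending a MIDI message and waiting for its echo."""
    time.sleep(random.uniform(0.001, 0.004))

def measure(n_messages=200):
    latencies = []
    for _ in range(n_messages):
        t0 = time.perf_counter()
        simulated_midi_roundtrip()
        latencies.append((time.perf_counter() - t0) * 1000.0)  # milliseconds
    return {
        "mean_ms": statistics.mean(latencies),
        "stdev_ms": statistics.stdev(latencies),
        "max_ms": max(latencies),
    }

if __name__ == "__main__":
    for key, value in measure().items():
        print(f"{key}: {value:.2f}")
```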
Arshia Cont, Thierry Coduys, and Cyrille Henry. 2004. Real-time Gesture Mapping in Pd Environment using Neural Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 39–42. http://doi.org/10.5281/zenodo.1176589
Abstract
Download PDF DOI
In this paper, we describe an adaptive approach to gesture mapping for musical applications which serves as a mapping system for music instrument design. A neural network approach is chosen for this goal and all the required interfaces and abstractions are developed and demonstrated in the Pure Data environment. In this paper, we will focus on neural network representation and implementation in a real-time musical environment. This adaptive mapping is evaluated in different static and dynamic situations by a network of sensors sampled at a rate of 200Hz in real-time. Finally, some remarks are given on the network design and future works.
@inproceedings{Cont2004, author = {Cont, Arshia and Coduys, Thierry and Henry, Cyrille}, title = {Real-time Gesture Mapping in Pd Environment using Neural Networks}, pages = {39--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176589}, url = {http://www.nime.org/proceedings/2004/nime2004_039.pdf}, keywords = {Real-time gesture control, adaptive interfaces, Sensor and actuator technologies for musical applications, Musical mapping algorithms and intelligent controllers, Pure Data.} }
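The adaptive mapping described above trains a neural network to turn sensor frames into synthesis parameters inside Pd. As a rough, numpy-based sketch of the general idea only (one hidden layer trained by plain gradient descent; the layer sizes, learning rate, and example data are invented, and this is not the authors' Pd implementation):
```python
import numpy as np

# Toy one-hidden-layer network mapping a sensor frame to synthesis parameters,
# trained on invented example pairs. A sketch of the adaptive-mapping idea.

rng = np.random.default_rng(0)
n_sensors, n_hidden, n_params = 6, 8, 3

W1 = rng.normal(scale=0.5, size=(n_sensors, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, n_params))

X = rng.uniform(size=(32, n_sensors))      # example gesture frames
Y = rng.uniform(size=(32, n_params))       # desired synthesis parameters

def forward(x):
    h = np.tanh(x @ W1)
    return h, h @ W2

for step in range(500):                    # simple batch gradient descent
    h, y_hat = forward(X)
    err = y_hat - Y
    grad_W2 = h.T @ err / len(X)
    grad_W1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    W2 -= 0.1 * grad_W2
    W1 -= 0.1 * grad_W1

_, y_hat = forward(X)
print("final training error:", float(np.mean((y_hat - Y) ** 2)))
```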
Assaf K. Talmudi. 2004. The Decentralized Pianola: Evolving Mechanical Music Instruments using a Genetic Algorithm. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 43–46. http://doi.org/10.5281/zenodo.1176675
Abstract
Download PDF DOI
This paper presents computer experiments concerning the decentralized pianola, a hypothetical mechanical music instrument, whose large-scale musical behavior is the result of local physical interactions between simple elements. Traditional mechanical music instruments like the pianola and the music box rely for their operation on the separation between a sequential memory unit and an execution unit. In a decentralized mechanical instrument, musical memory is an emergent global property of the system, undistinguishable from the execution process. Such a machine is both a score and an instrument. The paper starts by discussing the difference between sequential memory systems and systems exhibiting emergent decentralized musical behavior. Next, the use of particle system simulation for exploring virtual decentralized instruments is demonstrated, and the architecture for a simple decentralized instrument is outlined. The paper continues by describing the use of a genetic algorithm for evolving decentralized instruments that reproduce a given musical behavior.
@inproceedings{Talmudi2004, author = {Talmudi, Assaf K.}, title = {The Decentralized Pianola: Evolving Mechanical Music Instruments using a Genetic Algorithm}, pages = {43--46}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176675}, url = {http://www.nime.org/proceedings/2004/nime2004_043.pdf} }
James Mandelis and Phil Husbands. 2004. Don’t Just Play it, Grow it! : Breeding Sound Synthesis and Performance Mappings. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 47–50. http://doi.org/10.5281/zenodo.1176635
Abstract
Download PDF DOI
This paper describes the use of evolutionary and artificial life techniques in sound design and the development of performance mapping to facilitate the real-time manipulation of such sounds through some input device controlled by the performer. A concrete example of such a system is described which allows musicians without detailed knowledge and experience of sound synthesis techniques to interactively develop new sounds and performance manipulation mappings according to their own aesthetic judgements. Experiences with the system are discussed.
@inproceedings{Mandelis2004, author = {Mandelis, James and Husbands, Phil}, title = {Don't Just Play it, Grow it! : Breeding Sound Synthesis and Performance Mappings}, pages = {47--50}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176635}, url = {http://www.nime.org/proceedings/2004/nime2004_047.pdf}, keywords = {musical interaction,performance mapping,sound synthesis} }
Judith Shatin and David Topper. 2004. Tree Music: Composing with GAIA. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 51–54. http://doi.org/10.5281/zenodo.1176663
Abstract
Download PDF DOI
In this report, we discuss Tree Music, an interactive computer music installation created using GAIA (Graphical Audio Interface Application), a new open-source interface for controlling the RTcmix synthesis and effects processing engine. Tree Music, commissioned by the University of Virginia Art Museum, used a wireless camera with a wide-angle lens to capture motion and occlusion data from exhibit visitors. We show how GAIA was used to structure and navigate the compositional space, and how this program supports both graphical and text-based programming in the same application. GAIA provides a GUI which combines two open-source applications: RTcmix and Perl.
@inproceedings{Shatin2004, author = {Shatin, Judith and Topper, David}, title = {Tree Music: Composing with GAIA}, pages = {51--54}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176663}, url = {http://www.nime.org/proceedings/2004/nime2004_051.pdf}, keywords = {Composition, new interfaces, interactive systems, open source, Real time audio, GUI controllers, video tracking} }
Gideon D’Arcangelo. 2004. Recycling Music, Answering Back: Toward an Oral Tradition of Electronic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 55–58. http://doi.org/10.5281/zenodo.1176591
Abstract
Download PDF DOI
This essay outlines a framework for understanding new musical compositions and performances that utilize pre-existing sound recordings. In attempting to articulate why musicians are increasingly using sound recordings in their creative work, the author calls for new performance tools that enable the dynamic use of pre-recorded music.
@inproceedings{DArcangelo2004, author = {D'Arcangelo, Gideon}, title = {Recycling Music, Answering Back: Toward an Oral Tradition of Electronic Music}, pages = {55--58}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176591}, url = {http://www.nime.org/proceedings/2004/nime2004_055.pdf}, keywords = {Call and response, turntablism, DJ tools, oral culture} }
Sergi Jordà. 2004. Digital Instruments and Players: Part I – Efficiency and Apprenticeship. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 59–63. http://doi.org/10.5281/zenodo.1176619
Abstract
Download PDF DOI
When envisaging new digital instruments, designers do not have to limit themselves to their sonic capabilities (which can be absolutely any), nor even to their algorithmic power; they must also be especially careful about the instruments' conceptual capabilities, to the ways instruments impose or suggest to their players new ways of thinking, new ways of establishing relations, new ways of interacting, new ways of organizing time and textures; new ways, in short, of playing new musics. This article explores the dynamic relation that builds between the player and the instrument, introducing concepts such as efficiency, apprenticeship and learning curve. It aims at constructing a framework in which the possibilities and the diversity of music instruments as well as the possibilities and the expressive freedom of human music performers could start being evaluated.
@inproceedings{Jorda2004, author = {Jord\`{a}, Sergi}, title = {Digital Instruments and Players: Part I -- Efficiency and Apprenticeship}, pages = {59--63}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176619}, url = {http://www.nime.org/proceedings/2004/nime2004_059.pdf}, keywords = {Musical instruments design, learning curve, apprenticeship, musical efficiency.} }
Nikita Pashenkov. 2004. A New Mix of Forgotten Technology: Sound Generation, Sequencing and Performance Using an Optical Turntable. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 64–67. http://doi.org/10.5281/zenodo.1176651
Abstract
Download PDF DOI
This report presents a novel interface for musical performance which utilizes a record-player turntable augmented with a computation engine and a high-density optical sensing array. The turntable functions as a standalone step sequencer for MIDI events transmitted to a computer or another device, and it is programmed in real-time using visual disks. The program instructions are represented on printed paper disks directly as characters of the English alphabet that can be read by a human as effectively as they are picked up by the machine's optical cartridge. The result is a tangible interface that allows the user to manipulate pre-arranged musical material by hand, by adding together instrumental tracks to form a dynamic mix. A functional implementation of this interface is discussed in view of historical background and other examples of electronic instruments for music creation and performance incorporating an optical turntable as a central element.
@inproceedings{Pashenkov2004, author = {Pashenkov, Nikita}, title = {A New Mix of Forgotten Technology: Sound Generation, Sequencing and Performance Using an Optical Turntable}, pages = {64--67}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176651}, url = {http://www.nime.org/proceedings/2004/nime2004_064.pdf}, keywords = {Interaction, visualization, tangible interface, controllers, optical turntable, performance.} }
Eric Lee, Teresa M. Nakra, and Jan Borchers. 2004. You’re The Conductor: A Realistic Interactive Conducting System for Children. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 68–73. http://doi.org/10.5281/zenodo.1176629
Abstract
Download PDF DOI
This paper describes the first system designed to allow children to conduct an audio and video recording of an orchestra. No prior music experience is required to control the orchestra, and the system uses an advanced algorithm to time stretch the audio in real-time at high quality and without altering the pitch. We will discuss the requirements and challenges of designing an interface to target our particular user group (children), followed by some system implementation details. An overview of the algorithm used for audio time stretching will also be presented. We are currently using this technology to study and compare professional and non-professional conducting behavior, and its implications when designing new interfaces for multimedia. You’re the Conductor is currently a successful exhibit at the Children’s Museum in Boston, USA.
@inproceedings{Lee2004, author = {Lee, Eric and Nakra, Teresa M. and Borchers, Jan}, title = {You're The Conductor: A Realistic Interactive Conducting System for Children}, pages = {68--73}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176629}, url = {http://www.nime.org/proceedings/2004/nime2004_068.pdf}, keywords = {conducting systems,design patterns,gesture recogni-,interactive exhibits,real-time audio stretching,tion} }
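The core technical claim of the exhibit described above is real-time, high-quality audio time stretching that follows the conductor without altering pitch. As a much simpler illustration of the underlying idea only, here is a naive overlap-add stretch in Python/numpy; the real system uses a more advanced algorithm, and the frame size and hop here are invented defaults.
```python
import numpy as np

def ola_time_stretch(x, stretch, frame_len=2048, synthesis_hop=512):
    """Naive overlap-add time stretch: read frames at one hop, write at another."""
    window = np.hanning(frame_len)
    analysis_hop = max(1, int(round(synthesis_hop / stretch)))
    n_frames = max(1, 1 + (len(x) - frame_len) // analysis_hop)
    out_len = (n_frames - 1) * synthesis_hop + frame_len
    out = np.zeros(out_len)
    norm = np.zeros(out_len)
    for i in range(n_frames):
        a, s = i * analysis_hop, i * synthesis_hop
        frame = x[a:a + frame_len]
        if len(frame) < frame_len:                      # pad the final frame
            frame = np.pad(frame, (0, frame_len - len(frame)))
        out[s:s + frame_len] += frame * window
        norm[s:s + frame_len] += window
    norm[norm < 1e-8] = 1.0                             # avoid division by zero
    return out / norm

if __name__ == "__main__":
    sr = 44100
    tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # one second of A4
    stretched = ola_time_stretch(tone, stretch=1.5)
    print(len(tone), "->", len(stretched), "samples")
```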
Sile O’Modhrain and Georg Essl. 2004. PebbleBox and CrumbleBag: Tactile Interfaces for Granular Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 74–79. http://doi.org/10.5281/zenodo.1176647
Abstract
Download PDF DOI
The PebbleBox and the CrumbleBag are examples of a granular interaction paradigm, in which the manipulation of physical grains of arbitrary material becomes the basis for interacting with granular sound synthesis models. The sounds made by the grains as they are manipulated are analysed, and parameters such as grain rate, grain amplitude and grain density are extracted. These parameters are then used to control the granulation of arbitrary sound samples in real time. In this way, a direct link is made between the haptic sensation of interacting with grains and the control of granular sounds.
@inproceedings{OModhrain2004, author = {O'Modhrain, Sile and Essl, Georg}, title = {PebbleBox and CrumbleBag: Tactile Interfaces for Granular Synthesis}, pages = {74--79}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176647}, url = {http://www.nime.org/proceedings/2004/nime2004_074.pdf}, keywords = {Musical instrument, granular synthesis, haptic} }
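The PebbleBox interaction depends on detecting grain events in the microphone signal and extracting parameters such as grain rate and amplitude to drive granulation. A standard-library-only Python sketch of that analysis step is shown below; the threshold detector and the synthetic envelope are invented for illustration and are not the authors' analysis code.
```python
# Hypothetical grain-event analysis: detect threshold crossings in an amplitude
# envelope and report grain rate and mean grain amplitude, the kind of
# parameters used to drive a granular synthesizer.

def detect_grains(envelope, sample_rate, threshold=0.2):
    """Return (grain_rate_hz, mean_amplitude) from an amplitude envelope."""
    onsets, amplitudes = [], []
    above = False
    for value in envelope:
        if not above and value >= threshold:
            above = True
            onsets.append(value)
            amplitudes.append(value)
        elif above and value < threshold:
            above = False
    duration = len(envelope) / sample_rate
    rate = len(onsets) / duration if duration > 0 else 0.0
    mean_amp = sum(amplitudes) / len(amplitudes) if amplitudes else 0.0
    return rate, mean_amp

if __name__ == "__main__":
    # A toy envelope with three bumps standing in for three pebble collisions.
    env = [0.0] * 100
    for start, peak in [(10, 0.8), (40, 0.5), (70, 0.9)]:
        for k in range(5):
            env[start + k] = peak
    print(detect_grains(env, sample_rate=100))
```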
Garth Paine. 2004. Gesture and Musical Interaction : Interactive Engagement Through Dynamic Morphology. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 80–86. http://doi.org/10.5281/zenodo.1176649
Abstract
Download PDF DOI
New Interfaces for Musical Expression must speak to the nature of ’instrument’, that is, it must always be understood that the interface binds to a complex musical phenomenon. This paper explores the nature of engagement, the point of performance that occurs when a human being engages with a computer based instrument. It asks questions about the nature of the instrument in computer music and offers some conceptual models for the mapping of gesture to sonic outcomes.
@inproceedings{Paine2004, author = {Paine, Garth}, title = {Gesture and Musical Interaction : Interactive Engagement Through Dynamic Morphology}, pages = {80--86}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176649}, url = {http://www.nime.org/proceedings/2004/nime2004_080.pdf}, keywords = {dynamic,dynamic morphology,gesture,interaction,mapping,mind,music,orchestration,spectral morphology} }
Doug Van Nort, Marcelo M. Wanderley, and Philippe Depalle. 2004. On the Choice of Mappings Based on Geometric Properties. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 87–91. http://doi.org/10.5281/zenodo.1176681
Abstract
Download PDF DOI
The choice of mapping strategies to effectively map controller variables to sound synthesis algorithms is examined. Specifically, we look at continuous mappings that have a geometric representation. Drawing from underlying mathematical theory, this paper presents a way to compare mapping strategies, with the goal of achieving an appropriate match between mapping and musical performance context. This method of comparison is applied to existing techniques, while a suggestion is offered on how to integrate and extend this work through a new implementation.
@inproceedings{VanNort2004, author = {Van Nort, Doug and Wanderley, Marcelo M. and Depalle, Philippe}, title = {On the Choice of Mappings Based on Geometric Properties}, pages = {87--91}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176681}, url = {http://www.nime.org/proceedings/2004/nime2004_087.pdf}, keywords = {Mapping, Interface Design, Interpolation, Computational Geometry} }
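One common member of the family of geometric, continuous mappings discussed above interpolates between stored synthesis presets according to the controller's position relative to anchor points. As a hedged illustration of that class of mapping only (inverse-distance weighting over invented presets; this is not a reconstruction of the paper's comparison):
```python
import math

# Illustrative inverse-distance-weighted interpolation between synthesis
# presets placed at points in a 2-D controller space; presets and positions
# are invented for the example.

PRESETS = {
    (0.0, 0.0): {"cutoff": 200.0, "grain_size": 0.10},
    (1.0, 0.0): {"cutoff": 2000.0, "grain_size": 0.05},
    (0.5, 1.0): {"cutoff": 800.0, "grain_size": 0.30},
}

def interpolate(position, presets=PRESETS, power=2.0):
    """Blend preset parameters by inverse distance to the controller position."""
    weights = {}
    for anchor in presets:
        d = math.dist(position, anchor)
        if d == 0.0:
            return dict(presets[anchor])       # exactly on an anchor point
        weights[anchor] = 1.0 / d ** power
    total = sum(weights.values())
    blended = {}
    for anchor, preset in presets.items():
        for name, value in preset.items():
            blended[name] = blended.get(name, 0.0) + value * weights[anchor] / total
    return blended

if __name__ == "__main__":
    print(interpolate((0.4, 0.3)))
```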
Brian Sheehan. 2004. The Squiggle: A Digital Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 92–95. http://doi.org/10.5281/zenodo.1176665
Abstract
Download PDF DOI
This paper discusses some of the issues pertaining to the design of digital musical instruments that are to effectively fill the role of traditional instruments (i.e. those based on physical sound production mechanisms). The design and implementation of a musical instrument that addresses some of these issues, using scanned synthesis coupled to a "smart" physical system, is described.
@inproceedings{Sheehan2004, author = {Sheehan, Brian}, title = {The Squiggle: A Digital Musical Instrument}, pages = {92--95}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176665}, url = {http://www.nime.org/proceedings/2004/nime2004_092.pdf}, keywords = {Digital musical instruments, real-time performance, scanned synthesis, pd, tactile interfaces, sensors, Shapetape, mapping.} }
David Gerhard, Daryl Hepting, and Matthew Mckague. 2004. Exploration of the Correspondence between Visual and Acoustic Parameter Spaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 96–99. http://doi.org/10.5281/zenodo.1176603
Abstract
Download PDF DOI
This paper describes an approach to match visual and acoustic parameters to produce an animated musical expression. Music may be generated to correspond to animation, as described here; imagery may be created to correspond to music; or both may be developed simultaneously. This approach is intended to provide new tools to facilitate both collaboration between visual artists and musicians and examination of perceptual issues between visual and acoustic media. As a proof-of-concept, a complete example is developed with linear fractals as a basis for the animation, and arranged rhythmic loops for the music. Since both visual and acoustic elements in the example are generated from concise specifications, the potential of this approach to create new works through parameter space exploration is accentuated; however, there are opportunities for application to a wide variety of source material. These additional applications are also discussed, along with issues encountered in development of the example.
@inproceedings{Gerhard2004, author = {Gerhard, David and Hepting, Daryl and Mckague, Matthew}, title = {Exploration of the Correspondence between Visual and Acoustic Parameter Spaces}, pages = {96--99}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176603}, url = {http://www.nime.org/proceedings/2004/nime2004_096.pdf}, keywords = {Multimedia creation and interaction, parameter space, visualization, sonification.} }
Chandrasekhar Ramakrishnan, Jason Freeman, and Kristjan Varnik. 2004. The Architecture of Auracle: a Real-Time, Distributed, Collaborative Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 100–103. http://doi.org/10.5281/zenodo.1176657
Abstract
Download PDF DOI
Auracle is a "group instrument," controlled by the voice, for real-time, interactive, distributed music making over the Internet. It is implemented in the Java™ programming language using a combination of publicly available libraries (JSyn and TransJam) and custom-built components. This paper describes how the various pieces — the voice analysis, network communication, and sound synthesis — are individually built and how they are combined to form Auracle.
@inproceedings{Ramakrishnan2004, author = {Ramakrishnan, Chandrasekhar and Freeman, Jason and Varnik, Kristjan}, title = {The Architecture of Auracle: a Real-Time, Distributed, Collaborative Instrument}, pages = {100--103}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176657}, url = {http://www.nime.org/proceedings/2004/nime2004_100.pdf}, keywords = {Interactive Music Systems, Networking and Control, Voice and Speech Analysis, Auracle, JSyn, TransJam, Linear Prediction, Neural Networks, Voice Interface, Open Sound Control} }
Homei Miyashita and Kazushi Nishimoto. 2004. Thermoscore: A New-type Musical Score with Temperature Sensation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 104–107. http://doi.org/10.5281/zenodo.1176637
Abstract
Download PDF DOI
In this paper, we propose Thermoscore, a musical score form that dynamically alters the temperature of the instrument/player interface. We developed the first version of the Thermoscore display by lining Peltier devices on piano keys. The system is controlled by MIDI note-on messages from a MIDI sequencer, so that a composer can design songs that are sequences of temperature for each piano key. We also discuss methodologies for composing with this system, and suggest two approaches. The first is to make desirable keys (or other keys) hot. The second uses the chroma-profile, that is, a radar-chart representation of the frequency of pitch notations in the piece. By making keys of the same chroma hot in inverse proportion to the value of the chroma-profile, it is possible to constrain the performer's improvisation and to bring the tonality space close to that of a certain piece.
@inproceedings{Miyashita2004, author = {Miyashita, Homei and Nishimoto, Kazushi}, title = {Thermoscore: A New-type Musical Score with Temperature Sensation}, pages = {104--107}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176637}, url = {http://www.nime.org/proceedings/2004/nime2004_104.pdf}, keywords = {musical score, improvisation, peltier device, chroma profile} }
Stefania Serafin and Diana Young. 2004. Toward a Generalized Friction Controller: from the Bowed String to Unusual Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 108–111. http://doi.org/10.5281/zenodo.1176659
Abstract
Download PDF DOI
We present case studies of unusual instruments that share the same excitation mechanism as that of the bowed string. The musical saw, Tibetan singing bowl, glass harmonica, and bowed cymbal all produce sound by rubbing a hard object on the surface of the instrument. For each, we discuss the design of its physical model and present a means for expressively controlling it. Finally, we propose a new kind of generalized friction controller to be used in all these examples.
@inproceedings{Serafin2004, author = {Serafin, Stefania and Young, Diana}, title = {Toward a Generalized Friction Controller: from the Bowed String to Unusual Musical Instruments}, pages = {108--111}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176659}, url = {http://www.nime.org/proceedings/2004/nime2004_108.pdf} }
Philippe S. Zaborowski. 2004. ThumbTec: A New Handheld Input Device. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 112–115. http://doi.org/10.5281/zenodo.1176689
Abstract
Download PDF DOI
This paper describes ThumbTEC, a novel general purpose input device for the thumb or finger that is useful in a wide variety of applications from music to text entry. The device is made up of three switches in a row and one miniature joystick on top of the middle switch. The combination of joystick direction and switch(es) controls what note or alphanumeric character is selected by the finger. Several applications are detailed.
@inproceedings{Zaborowski2004, author = {Zaborowski, Philippe S.}, title = {ThumbTec: A New Handheld Input Device}, pages = {112--115}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176689}, url = {http://www.nime.org/proceedings/2004/nime2004_112.pdf}, keywords = {One-Thumb Input Device, HCI, Isometric Joystick, Mobile Computing, Handheld Devices, Musical Instrument.} }
Ryan H. Torchia and Cort Lippe. 2004. Techniques for Multi-Channel Real-Time Spatial Distribution Using Frequency-Domain Processing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 116–119. http://doi.org/10.5281/zenodo.1176679
Abstract
Download PDF DOI
The authors have developed several methods for spatially distributing spectral material in real-time using frequency-domain processing. Applying spectral spatialization techniques to more than two channels introduces a few obstacles, particularly with controllers, visualization and the manipulation of large amounts of control data. Various interfaces are presented which address these issues. We also discuss 3D “cube” controllers and visualizations, which go a long way in aiding usability. A range of implementations were realized, each with its own interface, automation, and output characteristics. We also explore a number of novel techniques. For example, a sound’s spectral components can be mapped in space based on its own components’ energy, or the energy of another signal’s components (a kind of spatial cross-synthesis). Finally, we address aesthetic concerns, such as perceptual and sonic coherency, which arise when sounds have been spectrally dissected and scattered across a multi-channel spatial field in 64, 128 or more spectral bands.
@inproceedings{Torchia2004, author = {Torchia, Ryan H. and Lippe, Cort}, title = {Techniques for Multi-Channel Real-Time Spatial Distribution Using Frequency-Domain Processing}, pages = {116--119}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176679}, url = {http://www.nime.org/proceedings/2004/nime2004_116.pdf} }
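To make the general notion of spectral spatialization concrete (a rough sketch only, with an invented routing rule and assumed names, not the authors' implementation), one frame of audio can be split into FFT bins and each bin routed to a loudspeaker channel based on its own energy:

import numpy as np

def spectral_spatialize(frame, n_channels=4):
    """Scatter the spectral components of one audio frame across output channels."""
    spectrum = np.fft.rfft(frame)
    mag = np.abs(spectrum)
    # Illustrative routing rule: louder bins are sent to higher channel indices.
    norm = mag / (mag.max() + 1e-12)
    channel_of_bin = np.minimum((norm * n_channels).astype(int), n_channels - 1)
    outputs = []
    for ch in range(n_channels):
        masked = np.where(channel_of_bin == ch, spectrum, 0.0)
        outputs.append(np.fft.irfft(masked, n=len(frame)))
    return np.stack(outputs)   # shape: (n_channels, frame_length)

# Example: a 1024-sample noise frame distributed over four channels.
print(spectral_spatialize(np.random.randn(1024)).shape)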
Rumi Hiraga, Roberto Bresin, Keiji Hirata, and Haruhiro Katayose. 2004. Rencon 2004: Turing Test for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 120–123. http://doi.org/10.5281/zenodo.1176611
Abstract
Download PDF DOI
Rencon is an annual international event that started in 2002. It has the roles of (1) pursuing evaluation methods for systems whose output includes subjective issues, and (2) providing a forum for researchers from several fields related to musical expression. In the past, Rencon was held as a workshop associated with a musical contest that provided a forum for presenting and discussing the latest research in automatic performance rendering. This year we introduce new evaluation methods of performance expression to Rencon: a Turing Test and a Gnirut Test, which is a reverse Turing Test, for performance expression. We have opened a section of the contests to any instrument and genre of music, including synthesized human voices.
@inproceedings{Hiraga2004, author = {Hiraga, Rumi and Bresin, Roberto and Hirata, Keiji and Katayose, Haruhiro}, title = {Rencon 2004: Turing Test for Musical Expression}, pages = {120--123}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176611}, url = {http://www.nime.org/proceedings/2004/nime2004_120.pdf}, keywords = {Rencon, Turing Test, Musical Expression, Performance Rendering} }
Haruhiro Katayose and Keita Okudaira. 2004. Using an Expressive Performance Template in a Music Conducting Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 124–129. http://doi.org/10.5281/zenodo.1176625
Abstract
Download PDF DOI
This paper describes an approach for playing expressive music, referring to a pianist's expressiveness, with a tapping-style interface. MIDI-formatted expressive performances played by pianists were first analyzed and transformed into performance templates, in which the deviations from a canonical description were separately described for each event. Using one of the templates as a skill complement, a player can play music expressively over and under the beat level. This paper presents a scheduler that allows a player to mix her/his own intention with the expressiveness in the performance template. The results of a forty-subject user study suggest that using the expression template contributes to the subjects' joy of playing music with the tapping-style performance interface. This result is also supported by a brain activation study that was done using near-infrared spectroscopy (NIRS).
@inproceedings{Katayose2004, author = {Katayose, Haruhiro and Okudaira, Keita}, title = {Using an Expressive Performance Template in a Music Conducting Interface}, pages = {124--129}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176625}, url = {http://www.nime.org/proceedings/2004/nime2004_124.pdf}, keywords = {Rencon, interfaces for musical expression, visualization} }
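The notion of a performance template, per-event deviations from a canonical score, can be sketched very simply. The snippet below uses a hypothetical data layout and invented deviation values; it is not the authors' scheduler, which additionally mixes in the player's own tapped timing.

# Canonical score: (beat position, MIDI pitch, nominal velocity).
score = [(0.0, 60, 80), (1.0, 64, 80), (2.0, 67, 80), (3.0, 72, 80)]

# Template: per-event deviations extracted from a pianist's MIDI performance
# (onset offset in beats, velocity offset). The values here are invented.
template = [(0.00, +5), (-0.03, -10), (+0.02, +12), (-0.01, -4)]

def render(score, template, tempo_bpm=90):
    """Combine the canonical score with per-event deviations into note events."""
    sec_per_beat = 60.0 / tempo_bpm
    events = []
    for (beat, pitch, vel), (dt, dv) in zip(score, template):
        onset = (beat + dt) * sec_per_beat      # expressive onset time in seconds
        velocity = max(1, min(127, vel + dv))   # expressive MIDI velocity
        events.append((onset, pitch, velocity))
    return events

for event in render(score, template):
    print(event)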
Hideki Kawahara, Hideki Banno, and Masanori Morise. 2004. Acappella Synthesis Demonstrations using RWC Music Database. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 130–131. http://doi.org/10.5281/zenodo.1176627
Abstract
Download PDF DOI
A series of demonstrations of synthesized acappella songs based on auditory morphing using STRAIGHT [5] will be presented. Singing voice data for morphing were extracted from the RWC music database of musical instrument sound. Discussions on a new extension of the morphing procedure to deal with vibrato will be introduced based on the statistical analysis of the database, and its effect on synthesized acappella will also be demonstrated.
@inproceedings{Kawahara2004, author = {Kawahara, Hideki and Banno, Hideki and Morise, Masanori}, title = {Acappella Synthesis Demonstrations using RWC Music Database}, pages = {130--131}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176627}, url = {http://www.nime.org/proceedings/2004/nime2004_130.pdf}, keywords = {Rencon, Acappella, RWCdatabase, STRAIGHT, morphing} }
Roger B. Dannenberg. 2004. Aura II: Making Real-Time Systems Safe for Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 132–137. http://doi.org/10.5281/zenodo.1176593
Abstract
Download PDF DOI
Real-time interactive software can be difficult to construct and debug. Aura is a software platform to facilitate highly interactive systems that combine audio signal processing, sophisticated control, sensors, computer animation, video processing, and graphical user interfaces. Moreover, Aura is open-ended, allowing diverse software components to be interconnected in a real-time framework. A recent assessment of Aura has motivated a redesign of the communication system to support remote procedure call. In addition, the audio signal processing framework has been altered to reduce programming errors. The motivation behind these changes is discussed, and measurements of run-time performance offer some general insights for system designers.
@inproceedings{Dannenberg2004, author = {Dannenberg, Roger B.}, title = {Aura II: Making Real-Time Systems Safe for Music}, pages = {132--137}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176593}, url = {http://www.nime.org/proceedings/2004/nime2004_132.pdf} }
Ge Wang and Perry R. Cook. 2004. On-the-fly Programming: Using Code as an Expressive Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 138–143. http://doi.org/10.5281/zenodo.1176683
Abstract
Download PDF DOI
On-the-fly programming is a style of programming in which the programmer/performer/composer augments and modifies the program while it is running, without stopping or restarting, in order to assert expressive, programmable control at runtime. Because of the fundamental powers of programming languages, we believe the technical and aesthetic aspects of on-the-fly programming are worth exploring. In this paper, we present a formalized framework for on-the-fly programming, based on the ChucK synthesis language, which supports a truly concurrent audio programming model with sample-synchronous timing, and a highly on-the-fly style of programming. We first provide a well-defined notion of on-the-fly programming. We then address four fundamental issues that confront the on-the-fly programmer: timing, modularity, conciseness, and flexibility. Using the features and properties of ChucK, we show how it solves many of these issues. In this new model, we show that (1) concurrency provides natural modularity for on-the-fly programming, (2) the timing mechanism in ChucK guarantees on-the-fly precision and consistency, (3) the ChucK syntax improves conciseness, and (4) the overall system is a useful framework for exploring on-the-fly programming. Finally, we discuss the aesthetics of on-the-fly performance.
@inproceedings{Wang2004, author = {Wang, Ge and Cook, Perry R.}, title = {On-the-fly Programming: Using Code as an Expressive Musical Instrument}, pages = {138--143}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176683}, url = {http://www.nime.org/proceedings/2004/nime2004_138.pdf}, keywords = {code as interface,compiler,concurrency,concurrent audio programming,on-the-fly programming,real-,synchronization,synthesis,time,timing,virtual machine} }
Michael Lew. 2004. Live Cinema: Designing an Instrument for Cinema Editing as a Live Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 144–149. http://doi.org/10.5281/zenodo.1176631
Abstract
Download PDF DOI
This paper describes the design of an expressive tangible interface for cinema editing as a live performance. A short survey of live video practices is provided. The Live Cinema instrument is a cross between a musical instrument and a film editing tool, tailored for improvisational control as well as performance presence. Design specifications for the instrument evolved based on several types of observations including: our own performances in which we used a prototype based on available tools; an analysis of performative aspects of contemporary DJ equipment; and an evaluation of organizational aspects of several generations of film editing tools. Our instrument presents the performer with a large canvas where projected images can be grabbed and moved around with both hands simultaneously; the performer also has access to two video drums featuring haptic display to manipulate the shots and cut between streams. The paper ends with a discussion of issues related to the tensions between narrative structure and hands-on control, live and recorded arts and the scoring of improvised films.
@inproceedings{Lew2004, author = {Lew, Michael}, title = {Live Cinema: Designing an Instrument for Cinema Editing as a Live Performance}, pages = {144--149}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176631}, url = {http://www.nime.org/proceedings/2004/nime2004_144.pdf}, keywords = {live cinema, video controller, visual music, DJ, VJ, film editing, tactile interface, two-hand interaction, improvisation, performance, narrative structure.} }
Cornelius Poepel. 2004. Synthesized Strings for String Players. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 150–153. http://doi.org/10.5281/zenodo.1176655
Abstract
Download PDF DOI
A system is introduced that allows a string player to control a synthesis engine with the gestural skills he is used to. The implemented system is based on an electric viola and a synthesis engine that is directly controlled by the unanalysed audio signal of the instrument and indirectly by control parameters mapped to the synthesis engine. This method offers a highly string-specific playability, as it is sensitive to the kinds of musical articulation produced by traditional playing techniques. Nuances of sound variation applied by the player will be present in the output signal even if those nuances are beyond traditionally measurable parameters like pitch, amplitude or brightness. The relatively minimal hardware requirements make the instrument accessible with little expenditure.
@inproceedings{Poepel2004, author = {Poepel, Cornelius}, title = {Synthesized Strings for String Players}, pages = {150--153}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176655}, url = {http://www.nime.org/proceedings/2004/nime2004_150.pdf}, keywords = {Electronic bowed string instrument, playability, musical instrument design, human computer interface, oscillation controlled sound synthesis} }
Atau Tanaka. 2004. Mobile Music Making. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 154–156. http://doi.org/10.5281/zenodo.1176677
Abstract
Download PDF DOI
We present a system for collaborative musical creation on mobile wireless networks. The work extends simple peer-to-peer file-sharing systems towards ad-hoc mobility and streaming, and extends music listening from a passive act to a proactive, participative activity. The system consists of a network-based interactive music engine and a portable rendering player. It serves as a platform for experiments studying the sense of agency in the collaborative creative process, and the requirements for fostering musical satisfaction in remote collaboration.
@inproceedings{Tanaka2004, author = {Tanaka, Atau}, title = {Mobile Music Making}, pages = {154--156}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176677}, url = {http://www.nime.org/proceedings/2004/nime2004_154.pdf}, keywords = {mobile music,peer-to-peer,wireless ad-hoc networks} }
Emmanuel Fléty, Nicolas Leroy, Jean-Christophe Ravarini, and Frédéric Bevilacqua. 2004. Versatile Sensor Acquisition System Utilizing Network Technology. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 157–160. http://doi.org/10.5281/zenodo.1176597
Abstract
Download PDF DOI
This paper reports our recent developments on sensor acquisition systems that take advantage of computer network technology. We present a versatile hardware system which can be connected to wireless modules and analog-to-digital converters, and enables Ethernet communication. We plan to make the design of this architecture freely available. We also describe several approaches we tested for wireless communication. Such technology developments are currently used in our newly formed Performance Arts Technology Group.
@inproceedings{Flety2004, author = {Fl\'{e}ty, Emmanuel and Leroy, Nicolas and Ravarini, Jean-Christophe and Bevilacqua, Fr\'{e}d\'{e}ric}, title = {Versatile Sensor Acquisition System Utilizing Network Technology}, pages = {157--160}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176597}, url = {http://www.nime.org/proceedings/2004/nime2004_157.pdf}, keywords = {Gesture, Sensors, Ethernet, 802.11, Computer Music.} }
Lalya Gaye and Lars E. Holmquist. 2004. In Duet with Everyday Urban Settings: A User Study of Sonic City. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 161–164. http://doi.org/10.5281/zenodo.1176601
Abstract
Download PDF DOI
Sonic City is a wearable system enabling the use of the urban environment as an interface for real-time electronic music making, when walking through and interacting with a city. The device senses everyday interactions and surrounding contexts, and maps this information in real time to the sound processing of urban sounds. We conducted a short-term study with various participants using our prototype in everyday settings. This paper describes the course of the study and preliminary results in terms of how the participants used and experienced the system. These results showed that the city was perceived as the main performer but that the user improvised different tactics and ad hoc interventions to actively influence and participate in how the music was created.
@inproceedings{Gaye2004, author = {Gaye, Lalya and Holmquist, Lars E.}, title = {In Duet with Everyday Urban Settings: A User Study of Sonic City}, pages = {161--164}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176601}, url = {http://www.nime.org/proceedings/2004/nime2004_161.pdf}, keywords = {User study, new interface for musical expression, interactive music, wearable computing, mobility, context-awareness.} }
Enrique Franco, Niall J. Griffith, and Mikael Fernström. 2004. Issues for Designing a Flexible Expressive Audiovisual System for Real-time Performance & Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 165–168. http://doi.org/10.5281/zenodo.1176599
Abstract
Download PDF DOI
This paper begins by evaluating various systems in terms of factors for building interactive audiovisual environments. The main issues for flexibility and expressiveness in the generation of dynamic sounds and images are then isolated. The design and development of an audiovisual system prototype is described at the end.
@inproceedings{Franco2004, author = {Franco, Enrique and Griffith, Niall J. and Fernstr\''{o}m, Mikael}, title = {Issues for Designing a Flexible Expressive Audiovisual System for Real-time Performance \& Composition}, pages = {165--168}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176599}, url = {http://www.nime.org/proceedings/2004/nime2004_165.pdf}, keywords = {Audiovisual, composition, performance, gesture, image, representation, mapping, expressiveness.} }
Gamhewage C. de Silva, Tamara Smyth, and Michael J. Lyons. 2004. A Novel Face-tracking Mouth Controller and its Application to Interacting with Bioacoustic Models. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 169–172. http://doi.org/10.5281/zenodo.1176667
Abstract
Download PDF DOI
We describe a simple, computationally light, real-time system for tracking the lower face and extracting information about the shape of the open mouth from a video sequence. The system allows unencumbered control of audio synthesis modules by action of the mouth. We report work in progress to use the mouth controller to interact with a physical model of sound production by the avian syrinx.
@inproceedings{Silva2004, author = {de Silva, Gamhewage C. and Smyth, Tamara and Lyons, Michael J.}, title = {A Novel Face-tracking Mouth Controller and its Application to Interacting with Bioacoustic Models}, pages = {169--172}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176667}, url = {http://www.nime.org/proceedings/2004/nime2004_169.pdf}, keywords = {Mouth Controller, Face Tracking, Bioacoustics} }
Yoichi Nagashima. 2004. Measurement of Latency in Interactive Multimedia Art. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 173–176. http://doi.org/10.5281/zenodo.1176641
Abstract
Download PDF DOI
In this paper, I would like to introduce my experimental study of multimedia psychology. My initial focus of investigation is the interaction between perceptions of auditory and visual beats. When the musical and graphical beats are completely synchronized with each other, as in a music video for promotional purposes, the audience feels that they are natural and comforting. My initial experiment has proved that the actual tempos of music and images are a little different. If a slight time lag exists between the musical and pictorial beats, the audience tries to keep them in synchronization by unconsciously changing the interpretation of the time-based beat points. As the lag increases over time, the audience seems to perceive that the beat synchronization has changed from being more downbeat to more upbeat, and continues enjoying it. I have developed an experiment system that can generate and control out-of-phase visual and auditory beats in real time, and have tested many subjects with it. This paper describes the measurement of time lags generated in the experiment system, as part of my psychological experiment.
@inproceedings{Nagashima2004, author = {Nagashima, Yoichi}, title = {Measurement of Latency in Interactive Multimedia Art}, pages = {173--176}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176641}, url = {http://www.nime.org/proceedings/2004/nime2004_173.pdf} }
Katsuhisa Ishida, Tetsuro Kitahara, and Masayuki Takeda. 2004. ism: Improvisation Supporting System based on Melody Correction. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 177–180. http://doi.org/10.5281/zenodo.1176617
Abstract
Download PDF DOI
In this paper, we describe a novel improvisation supporting system based on correcting musically unnatural melodies. Since improvisation is the musical performance style that involves creating melodies while playing, it is not easy even for the people who can play musical instruments. However, previous studies have not dealt with improvisation support for the people who can play musical instruments but cannot improvise. In this study, to support such players’ improvisation, we propose a novel improvisation supporting system called ism, which corrects musically unnatural melodies automatically. The main issue in realizing this system is how to detect notes to be corrected (i.e., musically unnatural or inappropriate). We propose a method for detecting notes to be corrected based on the N-gram model. This method first calculates N-gram probabilities of played notes, and then judges notes with low N-gram probabilities to be corrected. Experimental results show that the N-gram-based melody correction and the proposed system are useful for supporting improvisation.
@inproceedings{Ishida2004, author = {Ishida, Katsuhisa and Kitahara, Tetsuro and Takeda, Masayuki}, title = {ism: Improvisation Supporting System based on Melody Correction}, pages = {177--180}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176617}, url = {http://www.nime.org/proceedings/2004/nime2004_177.pdf}, keywords = {Improvisation support, jam session, melody correction, N-gram model, melody modeling, musical instrument} }
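The detection step described in the abstract, flagging notes whose N-gram probability is low, can be illustrated with a toy bigram model (hypothetical corpus and threshold; the paper's actual model and the subsequent correction step are more elaborate):

from collections import Counter, defaultdict

def train_bigram(melodies):
    """Estimate P(next_pitch | previous_pitch) from a corpus of pitch sequences."""
    counts = defaultdict(Counter)
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return {p: {n: c / sum(cnt.values()) for n, c in cnt.items()}
            for p, cnt in counts.items()}

def flag_unnatural(melody, model, threshold=0.05):
    """Return indices of notes whose bigram probability falls below the threshold."""
    flagged = []
    for i, (prev, nxt) in enumerate(zip(melody, melody[1:]), start=1):
        if model.get(prev, {}).get(nxt, 0.0) < threshold:
            flagged.append(i)
    return flagged

# Toy corpus of diatonic phrases, then a played phrase containing an odd note.
corpus = [[60, 62, 64, 65, 67], [67, 65, 64, 62, 60], [60, 64, 67, 72]]
model = train_bigram(corpus)
print(flag_unnatural([60, 62, 61, 65, 67], model))   # -> [2, 3]: notes around the odd 61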
Eric Singer, Jeff Feddersen, Chad Redmon, and Bil Bowen. 2004. LEMUR’s Musical Robots. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 181–184. http://doi.org/10.5281/zenodo.1176669
Abstract
Download PDF DOI
This paper describes new work and creations of LEMUR, a group of artists and technologists creating robotic musical instruments.
@inproceedings{Singer2004, author = {Singer, Eric and Feddersen, Jeff and Redmon, Chad and Bowen, Bil}, title = {LEMUR's Musical Robots}, pages = {181--184}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176669}, url = {http://www.nime.org/proceedings/2004/nime2004_181.pdf}, keywords = {additional computer or special,commands allows,familiar tools with no,improvisations,the musician or composer,to control the instrument,use of standard midi,using} }
Anthony J. Hornof and Linda Sato. 2004. EyeMusic: Making Music with the Eyes. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 185–188. http://doi.org/10.5281/zenodo.1176613
Abstract
Download PDF DOI
Though musical performers routinely use eye movements to communicate with each other during musical performances, very few performers or composers have used eye tracking devices to direct musical compositions and performances. EyeMusic is a system that uses eye movements as an input to electronic music compositions. The eye movements can directly control the music, or the music can respond to the eyes moving around a visual scene. EyeMusic is implemented so that any composer using established composition software can incorporate prerecorded eye movement data into their musical compositions.
@inproceedings{Hornof2004, author = {Hornof, Anthony J. and Sato, Linda}, title = {EyeMusic: Making Music with the Eyes}, pages = {185--188}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176613}, url = {http://www.nime.org/proceedings/2004/nime2004_185.pdf}, keywords = {Electronic music composition, eye movements, eye tracking, human-computer interaction, Max/MSP.} }
Mark Argo. 2004. The Slidepipe: A Timeline-Based Controller for Real-Time Sample Manipulation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 189–192. http://doi.org/10.5281/zenodo.1176581
Abstract
Download PDF DOI
When working with sample-based media, a performer is managing timelines, loop points, sample parameters and effects parameters. The Slidepipe is a performance controller that gives the artist a visually simple way to work with their material. Its design is modular and lightweight, so it can be easily transported and quickly assembled. Also, its large stature magnifies the gestures associated with its play, providing a more convincing performance. In this paper, I will describe what the controller is, how this new controller interface has affected my live performance, and how it can be used in different performance scenarios.
@inproceedings{Argo2004, author = {Argo, Mark}, title = {The Slidepipe: A Timeline-Based Controller for Real-Time Sample Manipulation}, pages = {189--192}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176581}, url = {http://www.nime.org/proceedings/2004/nime2004_189.pdf}, keywords = {Controller, Sample Manipulation, Live Performance, Open Sound Control, Human Computer Interaction} }
Matthew Burtner. 2004. A Theory of Modulated Objects for New Shamanic Controller Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 193–196. http://doi.org/10.5281/zenodo.1176585
Abstract
Download PDF DOI
This paper describes a theory of modulated objects based on observations of recent musical interface design trends. The theory implies extensions to an object-based approach to controller design. Combining NIME research with an ethnographic study of shamanic traditions, the author discusses the creation of new controllers based on the shamanic use of ritual objects.
@inproceedings{Burtner2004, author = {Burtner, Matthew}, title = {A Theory of Modulated Objects for New Shamanic Controller Design}, pages = {193--196}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176585}, url = {http://www.nime.org/proceedings/2004/nime2004_193.pdf}, keywords = {Music and Video Controllers, New Interface Design, Music Composition, Multimedia, Mythology, Shamanism, Ecoacoustics} }
Jean-Marc Pelletier. 2004. A Shape-Based Approach to Computer Vision Musical Performance Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 197–198. http://doi.org/10.5281/zenodo.1176653
Abstract
Download PDF DOI
In this paper, I will describe a computer vision-based musical performance system that uses morphological assessments to provide control data. Using shape analysis allows the system to provide qualitative descriptors of the scene being captured while ensuring its use in a wide variety of different settings. This system was implemented under Max/MSP/Jitter, augmented with a number of external objects. (1)
@inproceedings{Pelletier2004, author = {Pelletier, Jean-Marc}, title = {A Shape-Based Approach to Computer Vision Musical Performance Systems}, pages = {197--198}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176653}, url = {http://www.nime.org/proceedings/2004/nime2004_197.pdf}, keywords = {computer vision,image analysis,maxmsp,morphology,musical} }
Stephen Hughes, Cormac Cannon, and Sile O’Modhrain. 2004. Epipe : A Novel Electronic Woodwind Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 199–200. http://doi.org/10.5281/zenodo.1176615
Abstract
Download PDF DOI
The Epipe is a novel electronic woodwind controller with continuous tonehole coverage sensing, an initial design for which was introduced at NIME ’03. Since then, we have successfully completed two fully operational prototypes. This short paper describes some of the issues encountered during the design and construction of this controller. It also details our own early experiences and impressions of the interface as well as its technical specifications.
@inproceedings{Hughes2004, author = {Hughes, Stephen and Cannon, Cormac and O'Modhrain, Sile}, title = {Epipe : A Novel Electronic Woodwind Controller}, pages = {199--200}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176615}, url = {http://www.nime.org/proceedings/2004/nime2004_199.pdf}, keywords = {woodwind controller, variable tonehole control, MIDI, capacitive sensing} }
Geoffrey C. Morris, Sasha Leitman, and Marina Kassianidou. 2004. SillyTone Squish Factory. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 201–202. http://doi.org/10.5281/zenodo.1176639
Abstract
Download PDF DOI
This paper describes the SillyTone Squish Factory, a haptically engaging musical interface. It contains the motivation behind the device’s development, a description of the interface, various mappings of the interface to musical applications, details of its construction, and the requirements to demo the interface.
@inproceedings{Morris2004, author = {Morris, Geoffrey C. and Leitman, Sasha and Kassianidou, Marina}, title = {SillyTone Squish Factory}, pages = {201--202}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176639}, url = {http://www.nime.org/proceedings/2004/nime2004_201.pdf} }
Hans-Christoph Steiner. 2004. StickMusic: Using Haptic Feedback with a Phase Vocoder. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 203–204. http://doi.org/10.5281/zenodo.1176671
Abstract
Download PDF DOI
StickMusic is an instrument comprised of two haptic devices, a joystick and a mouse, which control a phase vocoder in real time. The purpose is to experiment with ideas of how to apply haptic feedback when controlling synthesis algorithms that have no direct analogy to methods of generating sound in the physical world.
@inproceedings{Steiner2004, author = {Steiner, Hans-Christoph}, title = {StickMusic: Using Haptic Feedback with a Phase Vocoder}, pages = {203--204}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176671}, url = {http://www.nime.org/proceedings/2004/nime2004_203.pdf}, keywords = {haptic feedback, gestural control, performance, joystick, mouse} }
Thierry Coduys, Cyrille Henry, and Arshia Cont. 2004. TOASTER and KROONDE: High-Resolution and High- Speed Real-time Sensor Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 205–206. http://doi.org/10.5281/zenodo.1176587
Abstract
Download PDF DOI
The capacity of transmission lines (Ethernet in particular) is much higher than that imposed by MIDI today, so it is possible, thanks to the OSC protocol, to use high-speed, high-resolution capture interfaces for musical synthesis (either in real time or non real-time). These new interfaces offer many advantages, not only in the area of musical composition with sensors but also in live and interactive performances. In this manner, the processes of calibration and signal processing are delocalized to a personal computer, which augments the possibilities for processing. In this demo, we present two hardware interfaces developed at La kitchen, with corresponding processing, to achieve high-resolution, high-speed sensor processing for musical applications.
@inproceedings{Coduys2004, author = {Coduys, Thierry and Henry, Cyrille and Cont, Arshia}, title = {TOASTER and KROONDE: High-Resolution and High- Speed Real-time Sensor Interfaces}, pages = {205--206}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176587}, url = {http://www.nime.org/proceedings/2004/nime2004_205.pdf}, keywords = {Interface, Sensors, Calibration, Precision, OSC, Pure Data, Max/MSP.} }
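Since these interfaces deliver sensor data as OSC over Ethernet, any OSC-capable environment can receive them. Purely as a generic illustration (the address pattern and port below are assumptions, and this uses the third-party python-osc package rather than the vendors' own tools):

# pip install python-osc
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_sensor(address, *values):
    """Handle one incoming OSC sensor message; here we simply print it."""
    print(address, values)

dispatcher = Dispatcher()
dispatcher.map("/sensor/*", on_sensor)   # hypothetical address pattern

# Listen on all network interfaces; port 8000 is an assumption.
server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()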
Suguru Goto and Takahiko Suzuki. 2004. The Case Study of Application of Advanced Gesture Interface and Mapping Interface, Virtual Musical Instrument "Le SuperPolm" and Gesture Controller "BodySuit". Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 207–208. http://doi.org/10.5281/zenodo.1176605
Abstract
Download PDF DOI
We discuss a case study of applying a virtual musical instrument to sound synthesis. In this application, the main subject is an advanced mapping interface to connect the two. For this experiment, our discussion also refers to neural networks, and we give a brief introduction to the Virtual Musical Instrument "Le SuperPolm" and the Gesture Controller "BodySuit".
@inproceedings{Goto2004, author = {Goto, Suguru and Suzuki, Takahiko}, title = {The Case Study of Application of Advanced Gesture Interface and Mapping Interface, Virtual Musical Instrument "Le SuperPolm" and Gesture Controller "BodySuit"}, pages = {207--208}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176605}, url = {http://www.nime.org/proceedings/2004/nime2004_207.pdf}, keywords = {Virtual Musical Instrument, Gesture Controller, Mapping Interface} }
Sook Y. Won, Humane Chan, and Jeremy Liu. 2004. Light Pipes: A Light Controlled MIDI Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 209–210. http://doi.org/10.5281/zenodo.1176685
Abstract
Download PDF DOI
In this paper, we describe a new MIDI controller, the Light Pipes. The Light Pipes are a series of pipes that respond to incident light. The paper will discuss the design of the instrument, and the prototype we built. A piece was composed for the instrument using algorithms designed in Pure Data.
@inproceedings{Won2004, author = {Won, Sook Y. and Chan, Humane and Liu, Jeremy}, title = {Light Pipes: A Light Controlled {MIDI} Instrument}, pages = {209--210}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176685}, url = {http://www.nime.org/proceedings/2004/nime2004_209.pdf}, keywords = {Controllers, MIDI, light sensors, Pure Data.} }
Takuro M. Lippit. 2004. Realtime Sampling System for the Turntablist, Version 2: 16padjoystickcontroller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 211–212. http://doi.org/10.5281/zenodo.1176633
Abstract
Download PDF DOI
In this paper, I describe a realtime sampling system for the turntablist, and the hardware and software design of the second prototype, 16padjoystickcontroller.
@inproceedings{Lippit2004, author = {Lippit, Takuro M.}, title = {Realtime Sampling System for the Turntablist, Version 2: 16padjoystickcontroller}, pages = {211--212}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176633}, url = {http://www.nime.org/proceedings/2004/nime2004_211.pdf}, keywords = {DJ, Turntablism, Realtime Sampling, MAX/MSP, Microchip PIC microcontroller, MIDI} }
Michael E. Sharon. 2004. The Stranglophone: Enhancing Expressiveness In Live Electronic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 213–214. http://doi.org/10.5281/zenodo.1176661
Abstract
Download PDF DOI
This paper describes the design and on-going development of an expressive gestural MIDI interface and how this could enhance live performance of electronic music.
@inproceedings{Sharon2004, author = {Sharon, Michael E.}, title = {The Stranglophone: Enhancing Expressiveness In Live Electronic Music}, pages = {213--214}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176661}, url = {http://www.nime.org/proceedings/2004/nime2004_213.pdf}, keywords = {gestural control, mapping, Pure Data (pd), accelerometers, MIDI, microcontrollers, synthesis, musical instruments} }
Tomoko Hashida, Yasuaki Kakehi, and Takeshi Naemura. 2004. Ensemble System with i-trace. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 215–216. http://doi.org/10.5281/zenodo.1176607
Abstract
Download PDF DOI
This paper proposes an interface for improvisational ensemble plays which synthesizes musical sounds and graphical images on the floor from people’s act of "walking". The aim of this paper is to develop such a system that enables nonprofessional people in our public spaces to play good contrapuntal music without any knowledge of music theory. The people are just walking. This system is based on the i-trace system [1] which can capture the people’s behavior and give some visual feedback.
@inproceedings{Hashida2004, author = {Hashida, Tomoko and Kakehi, Yasuaki and Naemura, Takeshi}, title = {Ensemble System with i-trace}, pages = {215--216}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2004}, address = {Hamamatsu, Japan}, issn = {2220-4806}, doi = {10.5281/zenodo.1176607}, url = {http://www.nime.org/proceedings/2004/nime2004_215.pdf}, keywords = {Improvisational Ensemble Play, Contrapuntal Music, Human Tracking, Traces, Spatially Augmented Reality} }
2003
Cormac Cannon, Stephen Hughes, and Sile O’Modhrain. 2003. EpipE: Exploration of the Uilleann Pipes as a Potential Controller for Computer-based Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 3–8. http://doi.org/10.5281/zenodo.1176497
Abstract
Download PDF DOI
In this paper we present a design for the EpipE, a new expressive electronic music controller based on the Irish Uilleann Pipes, a 7-note polyphonic reeded woodwind. The core of this proposed controller design is a continuous electronic tonehole-sensing arrangement, equally applicable to other woodwind interfaces like those of the flute, recorder or Japanese shakuhachi. The controller will initially be used to drive a physically-based synthesis model, with the eventual goal being the development of a mapping layer allowing the EpipE interface to operate as a MIDI-like controller of arbitrary synthesis models.
@inproceedings{Cannon2003, author = {Cannon, Cormac and Hughes, Stephen and O'Modhrain, Sile}, title = {EpipE: Exploration of the Uilleann Pipes as a Potential Controller for Computer-based Music}, pages = {3--8}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176497}, url = {http://www.nime.org/proceedings/2003/nime2003_003.pdf}, keywords = {Controllers, continuous woodwind tonehole sensor, uilleann pipes, Irish bagpipe, physical modelling, double reed, conical bore, tonehole. } }
Diana Young and Georg Essl. 2003. HyperPuja: A Tibetan Singing Bowl Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 9–14. http://doi.org/10.5281/zenodo.1176577
Abstract
Download PDF DOI
HyperPuja is a novel controller that closely mimics the behavior of a Tibetan singing bowl rubbed with a "puja" stick. Our design hides the electronics from the performer to maintain the original look and feel of the instrument and the performance. This is achieved by using wireless technology to keep the stick untethered, as well as burying the electronics inside the core of the stick. The measured parameters closely resemble the input parameters of a related physical synthesis model, allowing for convenient mapping of sensor parameters to synthesis input. The new controller allows for flexible choice of sound synthesis while fully maintaining the characteristics of the physical interaction of the original instrument.
@inproceedings{Young2003, author = {Young, Diana and Essl, Georg}, title = {HyperPuja: A Tibetan Singing Bowl Controller}, pages = {9--14}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176577}, url = {http://www.nime.org/proceedings/2003/nime2003_009.pdf} }
Gary Scavone. 2003. THE PIPE: Explorations with Breath Control. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 15–18. http://doi.org/10.5281/zenodo.1176557
Abstract
Download PDF DOI
The Pipe is an experimental, general purpose music input device designed and built in the form of a compact MIDI wind controller. The development of this device was motivated in part by an interest in exploring breath pressure as a control input. The Pipe provides a variety of common sensor types, including force sensing resistors, momentary switches, accelerometers, potentiometers, and an air pressure transducer, which allow maximum flexibility in the design of a sensor mapping scheme. The Pipe uses a programmable BASIC Stamp 2sx microprocessor which outputs control messages via a standard MIDI jack.
@inproceedings{Scavone2003, author = {Scavone, Gary}, title = {THE PIPE: Explorations with Breath Control}, pages = {15--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176557}, url = {http://www.nime.org/proceedings/2003/nime2003_015.pdf}, keywords = {MIDI Controller, Wind Controller, Breath Control, Human Computer Interaction. } }
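On the host side, breath-pressure data of this kind is typically turned into MIDI continuous-controller messages. The sketch below is only an illustration of that mapping under assumptions (a hypothetical read_pressure() source and the third-party mido library); it is not the Pipe's BASIC Stamp firmware.

import mido  # pip install mido (plus a backend such as python-rtmidi)

def pressure_to_cc(pressure, p_min=0.0, p_max=1.0):
    """Scale a normalised breath-pressure reading to a 7-bit MIDI CC value."""
    x = (pressure - p_min) / (p_max - p_min)
    return max(0, min(127, int(round(x * 127))))

def run(read_pressure, port_name, cc_number=2):
    """Poll a pressure source and send it as MIDI CC 2 (breath controller)."""
    with mido.open_output(port_name) as port:
        smoothed = 0.0
        while True:
            # read_pressure() is a stand-in for whatever delivers sensor values.
            smoothed = 0.9 * smoothed + 0.1 * read_pressure()  # simple smoothing
            port.send(mido.Message('control_change',
                                   control=cc_number,
                                   value=pressure_to_cc(smoothed)))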
Marije A. Baalman. 2003. The STRIMIDILATOR: a String Controlled MIDI-Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 19–23. http://doi.org/10.5281/zenodo.1176486
Abstract
Download PDF DOI
The STRIMIDILATOR is an instrument that uses the deviation and the vibration of strings as MIDI controllers. This method of control gives the user direct tactile force feedback and allows for subtle control. The development of the instrument and its different functions are described.
@inproceedings{Baalman2003, author = {Baalman, Marije A.}, title = {The {STRIMIDILATOR}: a String Controlled {MIDI}-Instrument}, pages = {19--23}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176486}, url = {http://www.nime.org/proceedings/2003/nime2003_019.pdf}, keywords = {MIDI controllers, tactile force feedback, strings} }
Scott Wilson, Michael Gurevich, Bill Verplank, and Pascal Stang. 2003. Microcontrollers in Music HCI Instruction: Reflections on our Switch to the Atmel AVR Platform. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 24–29. http://doi.org/10.5281/zenodo.1176571
Abstract
Download PDF DOI
Over the past year the instructors of the Human Computer Interaction courses at CCRMA have undertaken a technology shift to a much more powerful teaching platform. We describe the technical features of the new Atmel AVR based platform, contrasting it with the Parallax BASIC Stamp platform used in the past. The successes and failures of the new platform are considered, and some student project success stories described.
@inproceedings{Wilson2003, author = {Wilson, Scott and Gurevich, Michael and Verplank, Bill and Stang, Pascal}, title = {Microcontrollers in Music HCI Instruction: Reflections on our Switch to the Atmel AVR Platform}, pages = {24--29}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176571}, url = {http://www.nime.org/proceedings/2003/nime2003_024.pdf}, keywords = {Microcontrollers, Music Controllers, Pedagogy, Atmel AVR, BASIC Stamp.} }
Tue H. Andersen. 2003. Mixxx : Towards Novel DJ Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 30–35. http://doi.org/10.5281/zenodo.1176484
Abstract
Download PDF DOI
The Disc Jockey (DJ) software system Mixxx is presented. Mixxx makes it possible to conduct studies of new interaction techniques in connection with the DJ situation, through its open design, easy integration of new software modules, and MIDI connection to external controllers. To gain a better understanding of working practices, and to aid the design process of new interfaces, interviews with two contemporary musicians and DJs are presented. In contact with these musicians, several novel prototypes for DJ interaction have been developed. Finally, implementation details of Mixxx are described.
@inproceedings{Andersen2003, author = {Andersen, Tue H.}, title = {Mixxx : Towards Novel DJ Interfaces}, pages = {30--35}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176484}, url = {http://www.nime.org/proceedings/2003/nime2003_030.pdf}, keywords = {DJ, software, interaction, visualization, controllers, augmented reality.} }
Nicola Orio, Serge Lemouton, and Diemo Schwarz. 2003. Score Following: State of the Art and New Developments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 36–41. http://doi.org/10.5281/zenodo.1176547
Abstract
Download PDF DOI
Score following is the synchronisation of a computer with a performer playing a known musical score. It now has a history of about twenty years as a research and musical topic, and is an ongoing project at Ircam. We present an overview of existing and historical score following systems, followed by fundamental definitions and terminology, and considerations about score formats, evaluation of score followers, and training. The score follower that we developed at Ircam is based on a Hidden Markov Model and on the modeling of the expected signal received from the performer. The model has been implemented in an audio and a Midi version, and is now being used in production. We report here our first experiences and our first steps towards a complete evaluation of system performances. Finally, we indicate directions how score following can go beyond the artistic applications known today.
@inproceedings{Orio2003, author = {Orio, Nicola and Lemouton, Serge and Schwarz, Diemo}, title = {Score Following: State of the Art and New Developments}, pages = {36--41}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176547}, url = {http://www.nime.org/proceedings/2003/nime2003_036.pdf}, keywords = {Score following, score recognition, real time audio alignment, virtual accompaniment.} }
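To give a flavour of HMM-based alignment (a toy sketch only: real score followers such as Ircam's model the incoming audio signal itself, handle errors, and run in real time), the snippet below Viterbi-aligns a stream of detected pitches to a left-to-right chain of score notes:

import numpy as np

def follow(score_pitches, detected_pitches, p_match=0.9, p_advance=0.6):
    """Toy left-to-right Viterbi alignment of detected pitches to score positions."""
    n = len(score_pitches)

    def emit(state, pitch):
        # Emission log-probability: high when the detected pitch matches the note.
        return np.log(p_match) if score_pitches[state] == pitch else np.log(1 - p_match)

    delta = np.full(n, -np.inf)
    delta[0] = emit(0, detected_pitches[0])
    back = []
    for pitch in detected_pitches[1:]:
        prev = delta.copy()
        ptr = np.zeros(n, dtype=int)
        for s in range(n):
            stay = prev[s] + np.log(1 - p_advance)
            advance = prev[s - 1] + np.log(p_advance) if s > 0 else -np.inf
            ptr[s] = s if stay >= advance else s - 1
            delta[s] = max(stay, advance) + emit(s, pitch)
        back.append(ptr)
    path = [int(np.argmax(delta))]          # most likely final score position
    for ptr in reversed(back):              # trace the best path backwards
        path.append(int(ptr[path[-1]]))
    return list(reversed(path))

score = [60, 62, 64, 65, 67]
detected = [60, 60, 62, 64, 64, 65, 67]
print(follow(score, detected))   # -> [0, 0, 1, 2, 2, 3, 4]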
Caroline Traube, Philippe Depalle, and Marcelo M. Wanderley. 2003. Indirect Acquisition of Instrumental Gesture Based on Signal , Physical and Perceptual Information. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 42–47. http://doi.org/10.5281/zenodo.1176567
Abstract
Download PDF DOI
In this paper, we describe a multi-level approach for the extraction of instrumental gesture parameters taken from the characteristics of the signal captured by a microphone and based on the knowledge of physical mechanisms taking place on the instrument. We also explore the relationships between some features of timbre and gesture parameters, taking as a starting point for the exploration the timbre descriptors commonly used by professional musicians when they verbally describe the sounds they produce with their instrument. Finally, we present how this multi-level approach can be applied to the study of the timbre space of the classical guitar.
@inproceedings{Traube2003, author = {Traube, Caroline and Depalle, Philippe and Wanderley, Marcelo M.}, title = {Indirect Acquisition of Instrumental Gesture Based on Signal , Physical and Perceptual Information}, pages = {42--47}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176567}, url = {http://www.nime.org/proceedings/2003/nime2003_042.pdf}, keywords = {Signal analysis, indirect acquisition of instrumental gesture, guitar} }
Yoichi Nagashima. 2003. Bio-Sensing Systems and Bio-Feedback Systems for Interactive Media Arts. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 48–53. http://doi.org/10.5281/zenodo.1176539
Abstract
Download PDF DOI
This is a report of research and some experimental applications of human-computer interaction in multi-media performing arts. The human performer and the computer systems perform computer graphic and computer music interactively in real-time. In general, many sensors are used for the interactive communication as interfaces, and the performer receives the output of the system via graphics, sounds and physical reactions of interfaces like musical instruments. I have produced many types of interfaces, not only with physical/electrical sensors but also with biological/physiological sensors. This paper is intended as an investigation of some special approaches: (1) 16-channel electromyogram sensor called “MiniBioMuse-III” and its application work called “BioCosmicStorm-II” performed in Paris, Kassel and Hamburg in 2001, (2) sensing/reacting with “breathing” in performing arts, (3) 8-channel electric-feedback system and its experiments of “body-hearing sounds” and “body-listening to music”.
@inproceedings{Nagashima2003, author = {Nagashima, Yoichi}, title = {Bio-Sensing Systems and Bio-Feedback Systems for Interactive Media Arts}, pages = {48--53}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176539}, url = {http://www.nime.org/proceedings/2003/nime2003_048.pdf} }
Ali Momeni and David Wessel. 2003. Characterizing and Controlling Musical Material Intuitively with Geometric Models. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 54–62. http://doi.org/10.5281/zenodo.1176535
Abstract
Download PDF DOI
In this paper, we examine the use of spatial layouts of musical material for live performance control. Emphasis is given to software tools that provide for the simple and intuitive geometric organization of sound material, sound processing parameters, and higher-level musical structures.
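As a purely illustrative sketch of one common way to realize such spatial layouts (not the authors' implementation; all locations and parameter values are made up), the snippet below pins parameter presets at 2-D points and blends them by inverse-distance weighting as a cursor moves.

import numpy as np

points  = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9]])          # preset locations
presets = np.array([[200.0, 0.1], [1200.0, 0.7], [600.0, 0.4]])   # e.g. cutoff, resonance

def blend(cursor, power=2.0, eps=1e-9):
    # Inverse-distance weights: nearby presets dominate the mix.
    d = np.linalg.norm(points - np.asarray(cursor), axis=1)
    w = 1.0 / (d ** power + eps)
    return (w[:, None] * presets).sum(axis=0) / w.sum()

print(blend([0.4, 0.5]))    # interpolated parameters at the cursor position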
@inproceedings{Momeni2003, author = {Momeni, Ali and Wessel, David}, title = {Characterizing and Controlling Musical Material Intuitively with Geometric Models}, pages = {54--62}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176535}, url = {http://www.nime.org/proceedings/2003/nime2003_054.pdf}, keywords = {Perceptual Spaces, Graphical Models, Real-time Instruments, Dimensionality Reduction, Multidimensional Scaling, Live Performance, Gestural Controllers, Live Interaction, High-level Control.} }
Matthew Burtner. 2003. Composing for the (dis)Embodied Ensemble: Notational Systems in (dis)Appearances. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 63–69. http://doi.org/10.5281/zenodo.1176492
Abstract
Download PDF DOI
This paper explores compositional and notational approaches for working with controllers. The notational systems devised for the composition (dis)Appearances are discussed in depth in an attempt to formulate a new approach to composition using ensembles that navigates a performative space between reality and virtuality.
@inproceedings{Burtner2003, author = {Burtner, Matthew}, title = {Composing for the (dis)Embodied Ensemble : Notational Systems in (dis)Appearances}, pages = {63--69}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176492}, url = {http://www.nime.org/proceedings/2003/nime2003_063.pdf}, keywords = {Composition, notation systems, virtual reality, controllers, physical modeling, string, violin.} }
Sergi Jordà. 2003. Sonigraphical Instruments: From FMOL to the reacTable*. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 70–76. http://doi.org/10.5281/zenodo.1176519
Abstract
Download PDF DOI
This paper first introduces two previous software-based music instruments designed by the author, and analyses the crucial importance of the visual feedback introduced by their interfaces. A quick taxonomy and analysis of the visual components in current trends of interactive music software is then proposed, before introducing the reacTable*, a new project that is currently under development. The reacTable* is a collaborative music instrument, aimed both at novices and advanced musicians, which employs computer vision and tangible interfaces technologies, and pushes further the visual feedback interface ideas and techniques aforementioned.
@inproceedings{Jorda2003, author = {Jord\`{a}, Sergi}, title = {Sonigraphical Instruments: From {FM}OL to the reacTable*}, pages = {70--76}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176519}, url = {http://www.nime.org/proceedings/2003/nime2003_070.pdf}, keywords = {Interactive music instruments, audio visualization, visual interfaces, visual feedback, tangible interfaces, computer vision, augmented reality, music instruments for novices, collaborative music.} }
Motohide Hatanaka. 2003. Ergonomic Design of A Portable Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 77–82. http://doi.org/10.5281/zenodo.1176509
Abstract
Download PDF DOI
A handheld electronic musical instrument, named the BentoBox, was developed. The motivation was to develop an instrument which one can easily carry around and play in moments of free time, for example when riding public transportation or during short breaks at work. The device was designed to enable quick learning by having various scales programmed for different styles of music, and also to be expressive by having hand-controlled timbral effects which can be manipulated while playing. Design analysis and iteration led to a compact and ergonomic device. This paper focuses on the ergonomic design process of the hardware.
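The "various scales programmed for different styles" idea can be illustrated with a short sketch of my own (scale contents, root note, and function names are assumptions, not details from the paper): each button index is mapped through the selected scale to a MIDI note number.

SCALES = {"major": [0, 2, 4, 5, 7, 9, 11],
          "minor_pentatonic": [0, 3, 5, 7, 10]}

def note_for_button(button, scale="minor_pentatonic", root=60):
    # Wrap the button index around the scale, adding an octave per wrap.
    degrees = SCALES[scale]
    octave, degree = divmod(button, len(degrees))
    return root + 12 * octave + degrees[degree]

print([note_for_button(i) for i in range(8)])   # an ascending pentatonic run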
@inproceedings{Hatanaka2003, author = {Hatanaka, Motohide}, title = {Ergonomic Design of A Portable Musical Instrument}, pages = {77--82}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176509}, url = {http://www.nime.org/proceedings/2003/nime2003_077.pdf}, keywords = {MIDI controller, electronic musical instrument, musical instrument design, ergonomics, playability, human computer interface. } }
Hiroko Shiraiwa, Rodrigo Segnini, and Vivian Woo. 2003. Sound Kitchen: Designing a Chemically Controlled Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 83–86. http://doi.org/10.5281/zenodo.1176561
Abstract
Download PDF DOI
This paper presents a novel use of a chemical experiments' framework as a control layer and sound source in a concert situation. Signal fluctuations from electrolytic batteries made out of household chemicals, and acoustic samples obtained from an acid/base reaction are used for musical purposes beyond the standard data sonification role. The batteries are controlled in handy ways such as warming, stirring and pouring that are also visually engaging. Audio mappings include synthetic and sampled sounds completing a recipe that concocts a live performance of computer music.
@inproceedings{Shiraiwa2003, author = {Shiraiwa, Hiroko and Segnini, Rodrigo and Woo, Vivian}, title = {Sound Kitchen: Designing a Chemically Controlled Musical Performance}, pages = {83--86}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176561}, url = {http://www.nime.org/proceedings/2003/nime2003_083.pdf}, keywords = {Chemical music, Applied chemistry, Battery Controller.} }
Joel Ryan and Christopher L. Salter. 2003. TGarden: Wearable Instruments and Augmented Physicality. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 87–90. http://doi.org/10.5281/zenodo.1176555
Abstract
Download PDF DOI
This report details work on the interdisciplinary media project TGarden. The authors discuss the challenges encountered while developing a responsive musical environment for the general public involving wearable, sensor-integrated clothing as the central interface and input device. The project's dramaturgical and technical/implementation background are detailed to provide a framework for the creation of a responsive hardware and software system that reinforces a tangible relationship between the participant's improvised movement and musical response. Finally, the authors take into consideration testing scenarios gathered from public prototypes in two European locales in 2001 to evaluate user experience of the system.
@inproceedings{Ryan2003, author = {Ryan, Joel and Salter, Christopher L.}, title = {TGarden: Wearable Instruments and Augmented Physicality}, pages = {87--90}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176555}, url = {http://www.nime.org/proceedings/2003/nime2003_087.pdf}, keywords = {Gesture, interaction, embodied action, enaction, physical model, responsive environment, interactive musical systems, affordance, interface, phenomenology, energy, kinetics, time constant, induced ballistics, wearable computing, accelerometer, audience participation, dynamical system, dynamic compliance, effort, wearable instrument, augmented physicality. } }
David Ventura and Kenji Mase. 2003. Duet Musical Companion: Improvisational Interfaces for Children. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 91–94. http://doi.org/10.5281/zenodo.1176569
Abstract
Download PDF DOI
We present a sensor-doll interface as a musical outlet for personal expression. A doll serves the dual role of being both an expressive agent and a playmate by allowing solo and accompanied performance. An internal computer and sensor system allow the doll to receive input from the user and its surroundings, and then respond accordingly with musical feedback. Sets of musical timbres and melodies may be changed by presenting the doll with a series of themed cloth hats, each suggesting a different style of play. The doll may perform by itself and play a number of melodies, or it may collaborate with the user when its limbs are squeezed or bent. Shared play is further encouraged by a basic set of aural tones mimicking conversation.
@inproceedings{Ventura2003, author = {Ventura, David and Mase, Kenji}, title = {Duet Musical Companion: Improvisational Interfaces for Children}, pages = {91--94}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176569}, url = {http://www.nime.org/proceedings/2003/nime2003_091.pdf}, keywords = {Musical improvisation, toy interface agent, sensor doll, context awareness. } }
David M. Howard, Stuart Rimell, and Andy D. Hunt. 2003. Force Feedback Gesture Controlled Physical Modelling Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 95–98. http://doi.org/10.5281/zenodo.1176515
Abstract
Download PDF DOI
A physical modelling music synthesis system known as ‘Cymatic’ is described that enables ‘virtual instruments’ to be controlled in real-time via a force-feedback joystick and a force-feedback mouse. These serve to provide the user with gestural controllers whilst in addition giving tactile feedback to the user. Cymatic virtual instruments are set up via a graphical user interface in a manner that is highly intuitive. Users design and play these virtual instruments by interacting directly with their physical shape and structure in terms of the physical properties of basic objects such as strings, membranes and solids which can be interconnected to form complex structures. The virtual instrument can be excited at any point mass by the following: bowing, plucking, striking, sine/square/sawtooth/random waveform, or an external sound source. Virtual microphones can be placed at any point masses to deliver the acoustic output. This paper describes the underlying structure and principles upon which Cymatic is based, and illustrates its acoustic output.
@inproceedings{Howard2003, author = {Howard, David M. and Rimell, Stuart and Hunt, Andy D.}, title = {Force Feedback Gesture Controlled Physical Modelling Synthesis}, pages = {95--98}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176515}, url = {http://www.nime.org/proceedings/2003/nime2003_095.pdf}, keywords = {Physical modeling, haptic controllers, gesture control, force feedback.} }
Reynald Hoskinson, Kees van den Doel, and Sidney S. Fels. 2003. Real-time Adaptive Control of Modal Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 99–103. http://doi.org/10.5281/zenodo.1176513
Abstract
Download PDF DOI
We describe the design and implementation of an adaptive system to map control parameters to modal audio synthesis parameters in real-time. The modal parameters describe the linear response of a virtual vibrating solid, which is played as a musical instrument by a separate interface. The system uses a three layer feedforward backpropagation neural network which is trained by a discrete set of input-output examples. After training, the network extends the training set, which functions as the specification by example of the controller, to a continuous mapping allowing the real-time morphing of synthetic sound models. We have implemented a prototype application using a controller which collects data from a hand-drawn digital picture. The virtual instrument consists of a bank of modal resonators whose frequencies, dampings, and gains are the parameters we control. We train the system by providing pictorial representations of physical objects such as a bell or a lamp, and associate high quality modal models obtained from measurements on real objects with these inputs. After training, the user can draw pictures interactively and “play” modal models which provide interesting (though unrealistic) interpolations of the models from the training set in real-time.
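A minimal sketch of the modal resonator bank itself may help here (not the authors' code, and omitting the neural-network mapping): each mode is an exponentially decaying sinusoid defined by a frequency, damping, and gain, which are exactly the parameters the described network interpolates. All numbers are illustrative.

import numpy as np

def modal_impulse(freqs, dampings, gains, sr=44100, dur=1.0):
    # Impulse response of a bank of modes: a sum of damped sinusoids.
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, d, g in zip(freqs, dampings, gains):
        out += g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out

# Three modes loosely resembling a small metal object.
y = modal_impulse(freqs=[523.0, 1413.0, 2100.0],
                  dampings=[3.0, 5.0, 8.0],
                  gains=[1.0, 0.5, 0.25])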
@inproceedings{Hoskinson2003, author = {Hoskinson, Reynald and van den Doel, Kees and Fels, Sidney S.}, title = {Real-time Adaptive Control of Modal Synthesis}, pages = {99--103}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176513}, url = {http://www.nime.org/proceedings/2003/nime2003_099.pdf} }
Diana Young and Stefania Serafin. 2003. Playability Evaluation of a Virtual Bowed String Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 104–108. http://doi.org/10.5281/zenodo.1176579
Abstract
Download PDF DOI
Driving a bowed string physical model using a bow controller, we explore the potentials of using the real gestures of a violinist to simulate violin sound using a virtual instrument. After a description of the software and hardware developed, preliminary results and future work are discussed.
@inproceedings{Young2003a, author = {Young, Diana and Serafin, Stefania}, title = {Playability Evaluation of a Virtual Bowed String Instrument}, pages = {104--108}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176579}, url = {http://www.nime.org/proceedings/2003/nime2003_104.pdf} }
Lalya Gaye, Ramia Mazé, and Lars E. Holmquist. 2003. Sonic City: The Urban Environment as a Musical Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 109–115. http://doi.org/10.5281/zenodo.1176507
Abstract
Download PDF DOI
In the project Sonic City, we have developed a system that enables users to create electronic music in real time by walking through and interacting with the urban environment. We explore the use of public space and everyday behaviours for creative purposes, in particular the city as an interface and mobility as an interaction model for electronic music making. A multi-disciplinary design process resulted in the implementation of a wearable, context-aware prototype. The system produces music by retrieving information about context and user action and mapping it to real-time processing of urban sounds. Potentials, constraints, and implications of this type of music creation are discussed.
@inproceedings{Gaye2003, author = {Gaye, Lalya and Maz\'{e}, Ramia and Holmquist, Lars E.}, title = {Sonic City: The Urban Environment as a Musical Interface}, pages = {109--115}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176507}, url = {http://www.nime.org/proceedings/2003/nime2003_109.pdf}, keywords = {Interactive music, interaction design, urban environment, wearable computing, context-awareness, mobility} }
Michael J. Lyons, Michael Haehnel, and Nobuji Tetsutani. 2003. Designing, Playing, and Performing with a Vision-based Mouth Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 116–121. http://doi.org/10.5281/zenodo.1176529
Abstract
Download PDF DOI
The role of the face and mouth in speech production as well as non-verbal communication suggests the use of facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a headworn miniature camera and computer vision algorithm to extract shape parameters from the mouth opening and output these as MIDI control changes. We report our experience with various gesture-to-sound mappings and musical applications, and describe a live performance which used the Mouthesizer interface.
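As an illustrative sketch only (the mido library, the controller number, and the scaling are my own assumptions, not details from the paper), a normalized mouth-opening parameter can be turned into a MIDI control change like this:

import mido

def mouth_to_cc(opening, control=74):
    # Clamp the normalized opening (0.0-1.0) into the 7-bit MIDI range.
    value = max(0, min(127, round(opening * 127)))
    return mido.Message("control_change", control=control, value=value)

with mido.open_output() as port:        # default MIDI output port
    port.send(mouth_to_cc(0.42))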
@inproceedings{Lyons2003, author = {Lyons, Michael J. and Haehnel, Michael and Tetsutani, Nobuji}, title = {Designing, Playing, and Performing with a Vision-based Mouth Interface}, pages = {116--121}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176529}, url = {http://www.nime.org/proceedings/2003/nime2003_116.pdf}, keywords = {Video-based interface; mouth controller; alternative input devices. } }
Donna Hewitt and Ian Stevenson. 2003. E-mic: Extended Mic-stand Interface Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 122–128. http://doi.org/10.5281/zenodo.1176511
Abstract
Download PDF DOI
This paper describes work in progress for the development of a gestural controller interface for contemporary vocal performance and electronic processing. The paper includes a preliminary investigation of the gestures and movements of vocalists who use microphones and microphone stands. This repertoire of gestures forms the foundation of a well-practiced ‘language’ and social code for communication between performers and audiences and serves as a basis for alternate controller design principles. A prototype design, based on a modified microphone stand, is presented along with a discussion of possible controller mapping strategies and identification of directions for future research.
@inproceedings{Hewitt2003, author = {Hewitt, Donna and Stevenson, Ian}, title = {E-mic: Extended Mic-stand Interface Controller}, pages = {122--128}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176511}, url = {http://www.nime.org/proceedings/2003/nime2003_122.pdf}, keywords = {Alternate controller, gesture, microphone technique, vocal performance, performance interface, electronic music. } }
Tina Blaine and Sidney S. Fels. 2003. Contexts of Collaborative Musical Experiences. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 129–134. http://doi.org/10.5281/zenodo.1176490
Abstract
Download PDF DOI
We explore a variety of design criteria applicable to the creation of collaborative interfaces for musical experience. The main factor common to the design of most collaborative interfaces for novices is that musical control is highly restricted, which makes it possible to easily learn and participate in the collective experience. Balancing this tradeoff is a key concern for designers, as this happens at the expense of providing an upward path to virtuosity with the interface. We attempt to identify design considerations exemplified by a sampling of recent collaborative devices primarily oriented toward novice interplay. It is our intention to provide a non-technical overview of design issues inherent in configuring multiplayer experiences, particularly for entry-level players.
@inproceedings{Blaine2003, author = {Blaine, Tina and Fels, Sidney S.}, title = {Contexts of Collaborative Musical Experiences}, pages = {129--134}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176490}, url = {http://www.nime.org/proceedings/2003/nime2003_129.pdf}, keywords = {Design, collaborative interface, musical experience, multiplayer, novice, musical control. } }
Andy D. Hunt and Ross Kirk. 2003. MidiGrid: Past, Present and Future. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 135–139. http://doi.org/10.5281/zenodo.1176517
Abstract
Download PDF DOI
MidiGrid is a computer-based musical instrument, primarily controlled with the computer mouse, which allows live performance of MIDI-based musical material by mapping 2-dimensional position onto musical events. Since its invention in 1987, it has gained a small, but enthusiastic, band of users, and has become the primary instrument for several people with physical disabilities. This paper reviews its development, uses and user interface issues, and highlights the work currently in progress for its transformation into MediaGrid.
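A sketch of the core mapping (my own illustration, not the MidiGrid source; grid contents and window size are invented) shows how a 2-dimensional mouse position selects a cell holding a musical event:

GRID = [[60, 62, 64, 65],      # each cell holds a MIDI note number
        [67, 69, 71, 72],
        [48, 50, 52, 53],
        [55, 57, 59, 60]]

def note_at(x, y, width=400, height=400):
    # Quantize the pointer position to a grid cell and return its note.
    col = min(int(x / width * len(GRID[0])), len(GRID[0]) - 1)
    row = min(int(y / height * len(GRID)), len(GRID) - 1)
    return GRID[row][col]

print(note_at(130, 250))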
@inproceedings{Hunt2003, author = {Hunt, Andy D. and Kirk, Ross}, title = {MidiGrid: Past, Present and Future}, pages = {135--139}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176517}, url = {http://www.nime.org/proceedings/2003/nime2003_135.pdf}, keywords = {Live performance, Computer-based musical instruments, Human Computer Interaction for Music} }
Loïc Kessous and Daniel Arfib. 2003. Bimanuality in Alternate Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 140–145. http://doi.org/10.5281/zenodo.1176523
Abstract
Download PDF DOI
This paper presents a study of bimanual control applied to sound synthesis. This study deals with coordination, cooperation, and abilities of our hands in musical context. We describe examples of instruments made using subtractive synthesis, scanned synthesis in Max/MSP and commercial stand-alone software synthesizers via the MIDI communication protocol. These instruments have been designed according to a multi-layer-mapping model, which provides modular design. They have been used in concerts and performance considerations are discussed too.
@inproceedings{Kessous2003, author = {Kessous, Lo\"{\i}c and Arfib, Daniel}, title = {Bimanuality in Alternate Musical Instruments}, pages = {140--145}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176523}, url = {http://www.nime.org/proceedings/2003/nime2003_140.pdf}, keywords = {Gesture control, mapping, alternate controllers, musical instruments. } }
Paul Modler, Tony Myatt, and Michael Saup. 2003. An Experimental Set of Hand Gestures for Expressive Control of Musical Parameters in Realtime. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 146–150. http://doi.org/10.5281/zenodo.1176533
Abstract
Download PDF DOI
This paper describes the implementation of Time Delay Neural Networks (TDNN) to recognize gestures from video images. Video sources are used because they are non-invasive and do not inhibit the performer's physical movement or require specialist devices to be attached to the performer, which experience has shown to be a significant problem that impacts musicians' performance and can focus musical rehearsals and performances upon technical rather than musical concerns (Myatt 2003). We describe a set of hand gestures learned by an artificial neural network to control musical parameters expressively in real time. The set is made up of different types of gestures in order to investigate aspects of the recognition process, expressive musical control, schemes of parameter mapping, and generalization issues for an extended set for musical control. The learning procedure of the neural network is described, which is based on variations by affine transformations of image sequences of the hand gestures. The whole application, including the gesture capturing, is implemented in jMax to achieve real-time conditions and easy integration into a musical environment to realize different mappings and routings of the control stream. The system represents practice-based research using actual music models like compositions and processes of composition, which will follow the work described in the paper.
@inproceedings{Modler2003, author = {Modler, Paul and Myatt, Tony and Saup, Michael}, title = {An Experimental Set of Hand Gestures for Expressive Control of Musical Parameters in Realtime}, pages = {146--150}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176533}, url = {http://www.nime.org/proceedings/2003/nime2003_146.pdf}, keywords = {Gesture Recognition, Artificial Neural Network, Expressive Control, Real-time Interaction } }
Teresa M. Nakra. 2003. Immersion Music: a Progress Report. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 151–152. http://doi.org/10.5281/zenodo.1176541
Abstract
Download PDF DOI
This paper describes the artistic projects undertaken at Immersion Music, Inc. (www.immersionmusic.org) during its three-year existence. We detail work in interactive performance systems, computer-based training systems, and concert production.
@inproceedings{Nakra2003, author = {Nakra, Teresa M.}, title = {Immersion Music: a Progress Report}, pages = {151--152}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176541}, url = {http://www.nime.org/proceedings/2003/nime2003_151.pdf}, keywords = {Interactive computer music systems, gestural interaction, Conductor's Jacket, Digital Baton } }
Matthew Wright, Adrian Freed, and Ali Momeni. 2003. OpenSound Control: State of the Art 2003. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 153–159. http://doi.org/10.5281/zenodo.1176575
Abstract
Download PDF DOI
OpenSound Control (“OSC”) is a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology. OSC has achieved wide use in the field of computer-based new interfaces for musical expression for wide-area and local-area networked distributed music systems, inter-process communication, and even within a single application.
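For readers unfamiliar with the wire format, here is a minimal sketch (my own, with an invented address and port) of encoding and sending a single OSC message with one float argument over UDP, using only the Python standard library:

import socket, struct

def pad(b):
    # Null-terminate and pad to a multiple of 4 bytes, as OSC requires.
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, value):
    # Address pattern, type tag string (one float), then the argument.
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/synth/freq", 440.0), ("127.0.0.1", 57120))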
@inproceedings{Wright2003, author = {Wright, Matthew and Freed, Adrian and Momeni, Ali}, title = {OpenSound Control: State of the Art 2003}, pages = {153--159}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176575}, url = {http://www.nime.org/proceedings/2003/nime2003_153.pdf}, keywords = {OpenSound Control, Networking, client/server communication} }
Christopher Dobrian and Frédéric Bevilacqua. 2003. Gestural Control of Music Using the Vicon 8 Motion Capture System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 161–163. http://doi.org/10.5281/zenodo.1176503
Abstract
Download PDF DOI
This article reports on a project that uses unfettered gestural motion for expressive musical purposes. The project involves the development of, and experimentation with, software to receive data from a Vicon motion capture system, and to translate and map that data into data for the control of music and other media such as lighting. In addition to the commercially standard MIDI, which allows direct control of external synthesizers, processors, and other devices, other mappings are used for direct software control of digital audio and video. This report describes the design and implementation of the software, discusses specific experiments performed with it, and evaluates its application in terms of aesthetic pros and cons.
@inproceedings{Dobrian2003, author = {Dobrian, Christopher and Bevilacqua, Fr\'{e}d\'{e}ric}, title = {Gestural Control of Music Using the Vicon 8 Motion Capture System}, pages = {161--163}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176503}, url = {http://www.nime.org/proceedings/2003/nime2003_161.pdf}, keywords = {Motion capture, gestural control, mapping. } }
Kazushi Nishimoto, Chika Oshima, and Yohei Miyagawa. 2003. Why Always Versatile? Dynamically Customizable Musical Instruments Facilitate Expressive Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 164–169. http://doi.org/10.5281/zenodo.1176545
Abstract
Download PDF DOI
In this paper, we discuss a design principle for the musical instruments that are useful for both novices and professional musicians and that facilitate musically rich expression. We believe that the versatility of conventional musical instruments causes difficulty in performance. By dynamically specializing a musical instrument for performing a specific (genre of) piece, the musical instrument could become more useful for performing the piece and facilitates expressive performance. Based on this idea, we developed two new types of musical instruments, i.e., a "given-melody-based musical instrument" and a "harmonic-function-based musical instrument". From the experimental results using two prototypes, we demonstrate the efficiency of the design principle.
@inproceedings{Nishimoto2003, author = {Nishimoto, Kazushi and Oshima, Chika and Miyagawa, Yohei}, title = {Why Always Versatile? Dynamically Customizable Musical Instruments Facilitate Expressive Performances}, pages = {164--169}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176545}, url = {http://www.nime.org/proceedings/2003/nime2003_164.pdf}, keywords = {Musical instruments, expression, design principle, degree of freedom, dynamic specialization} }
Henry Newton-Dunn, Hiroaki Nakano, and James Gibson. 2003. Block Jam: A Tangible Interface for Interactive Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 170–177. http://doi.org/10.5281/zenodo.1176543
Abstract
Download PDF DOI
In this paper, we introduce Block Jam, a Tangible User Interface that controls a dynamic polyrhythmic sequencer using 26 physical artifacts. These physical artifacts, that we call blocks, are a new type of input device for manipulating an interactive music system. The blocks' functional and topological statuses are tightly coupled to an ad hoc sequencer, interpreting the user's arrangement of the blocks as meaningful musical phrases and structures. We demonstrate that we have created both a tangible and visual language that enables both the novice and musically trained users by taking advantage of both their explorative and intuitive abilities. The tangible nature of the blocks and the intuitive interface promotes face-to-face collaboration and social interaction within a single system. The principle of collaboration is further extended by linking two Block Jam systems together to create a network. We discuss our project vision, design rationale, related works, and the implementation of Block Jam prototypes.
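The "polyrhythmic" aspect can be illustrated by a small sketch of my own (patterns and names invented, not taken from the paper): loops of different lengths advance on a shared clock, so their accents drift against one another.

loops = {"kick":  [1, 0, 0, 0],           # 4-step loop
         "clave": [1, 0, 0, 1, 0, 1, 0],  # 7-step loop
         "hat":   [1, 0, 1]}              # 3-step loop

def events_at(tick):
    # Each loop wraps around at its own length.
    return [name for name, pattern in loops.items() if pattern[tick % len(pattern)]]

for tick in range(8):
    print(tick, events_at(tick))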
@inproceedings{NewtonDunn2003, author = {Newton-Dunn, Henry and Nakano, Hiroaki and Gibson, James}, title = {Block Jam: A Tangible Interface for Interactive Music}, pages = {170--177}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176543}, url = {http://www.nime.org/proceedings/2003/nime2003_170.pdf}, keywords = {Tangible interface, modular system, polyrhythmic sequencer.} }
Sukandar Kartadinata. 2003. The Gluiph: a Nucleus for Integrated Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 180–183. http://doi.org/10.5281/zenodo.1176521
Abstract
Download PDF DOI
In this paper I present the gluiph, a single-board computer that was conceived as a platform for integrated electronic musical instruments. It aims to provide new instruments as well as existing ones with a stronger identity by untethering them from the often lab-like stage setups built around general purpose computers. The key additions to its core are a flexible sensor subsystem and multi-channel audio I/O. In contrast to other stand-alone approaches it retains a higher degree of flexibility by supporting popular music programming languages, with Miller Puckette's pd [1] being the current focus.
@inproceedings{Kartadinata2003, author = {Kartadinata, Sukandar}, title = {The Gluiph: a Nucleus for Integrated Instruments}, pages = {180--183}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176521}, url = {http://www.nime.org/proceedings/2003/nime2003_180.pdf}, keywords = {Musical instrument, integration, single-board computer (SBC), embedded system, stand-alone system, pd, DSP, sensor, latency, flexibility, coherency.} }
Jean-Michel Couturier and Daniel Arfib. 2003. Pointing Fingers: Using Multiple Direct Interactions with Visual Objects to Perform Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 184–187. http://doi.org/10.5281/zenodo.1176501
Abstract
Download PDF DOI
In this paper, we describe a new interface for musical performance, using the interaction with a graphical user interface in a powerful manner: the user directly touches a screen where graphical objects are displayed and can use several fingers simultaneously to interact with the objects. The concept of this interface is based on the superposition of the gesture spatial place and the visual feedback spatial place; it gives the impression that the graphical objects are real. This concept enables a huge freedom in designing interfaces. The gesture device we have created gives the position of four fingertips using 3D sensors, and the data is processed in the Max/MSP environment. We have realized two practical examples of musical use of such a device, using Photosonic Synthesis and Scanned Synthesis.
@inproceedings{Couturier2003, author = {Couturier, Jean-Michel and Arfib, Daniel}, title = {Pointing Fingers: Using Multiple Direct Interactions with Visual Objects to Perform Music}, pages = {184--187}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176501}, url = {http://www.nime.org/proceedings/2003/nime2003_184.pdf}, keywords = {HCI, touch screen, multimodality, mapping, direct interaction, gesture devices, bimanual interaction, two-handed, Max/MSP. } }
Eric Singer, Kevin Larke, and David Bianciardi. 2003. LEMUR GuitarBot: MIDI Robotic String Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 188–191. http://doi.org/10.5281/zenodo.1176565
Abstract
Download PDF DOI
This paper describes the LEMUR GuitarBot, a robotic musical instrument composed of four independent MIDI controllable single-stringed movable bridge units. Design methodology, development and fabrication process, control specification and results are discussed.
@inproceedings{Singer2003a, author = {Singer, Eric and Larke, Kevin and Bianciardi, David}, title = {{LEMUR} GuitarBot: {MIDI} Robotic String Instrument}, pages = {188--191}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176565}, url = {http://www.nime.org/proceedings/2003/nime2003_188.pdf}, keywords = {Robotics, interactive, performance, MIDI, string instrument.} }
Chad Peiper, David Warden, and Guy Garnett. 2003. An Interface for Real-time Classification of Articulations Produced by Violin Bowing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 192–196. http://doi.org/10.5281/zenodo.1176553
Abstract
Download PDF DOI
We introduce a software system for real-time classification of violin bow strokes (articulations). The system uses an electromagnetic motion tracking system to capture raw gesture data. The data is analyzed to extract stroke features. These features are provided to a decision tree for training and classification. Feedback from feature and classification data is presented visually in an immersive graphic environment.
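In the same spirit (though with invented features, labels, and data, and using scikit-learn rather than the authors' own tooling), a decision-tree classifier over simple stroke features looks like this:

from sklearn.tree import DecisionTreeClassifier

# Each row: [stroke duration (s), mean bow speed, peak acceleration]
X = [[0.12, 0.9, 4.1],
     [0.45, 0.3, 1.2],
     [0.10, 1.1, 5.0],
     [0.50, 0.25, 0.9]]
y = ["spiccato", "detache", "spiccato", "detache"]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[0.11, 1.0, 4.5]]))    # -> likely 'spiccato'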
@inproceedings{Peiper2003, author = {Peiper, Chad and Warden, David and Garnett, Guy}, title = {An Interface for Real-time Classification of Articulations Produced by Violin Bowing}, pages = {192--196}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176553}, url = {http://www.nime.org/proceedings/2003/nime2003_192.pdf} }
Zack Settel and Cort Lippe. 2003. Convolution Brother’s Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 197–200. http://doi.org/10.5281/zenodo.1176559
Abstract
Download PDF DOI
The subject of instrument design is quite broad. Much work has been done at Ircam, MIT, CNMAT, Stanford and elsewhere in the area. In this paper we will present our own developed approach to designing and using instruments in composition and performance for the authors’ “Convolution Brothers” pieces. The presentation of this paper is accompanied by a live Convolution Brothers demonstration.
@inproceedings{Settel2003, author = {Settel, Zack and Lippe, Cort}, title = {Convolution Brother's Instrument Design}, pages = {197--200}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176559}, url = {http://www.nime.org/proceedings/2003/nime2003_197.pdf} }
Insook Choi. 2003. A Component Model of Gestural Primitive Throughput. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 201–204. http://doi.org/10.5281/zenodo.1176499
Abstract
Download PDF DOI
This paper suggests that there is a need for formalizing a component model of gestural primitive throughput in music instrument design. The purpose of this model is to construct a coherent and meaningful interaction between performer and instrument. Such a model has been implicit in previous research for interactive performance systems. The model presented here distinguishes gestural primitives from units of measure of gestures. The throughput model identifies symmetry between performance gestures and musical gestures, and indicates a role for gestural primitives when a performer navigates regions of stable oscillations in a musical instrument. The use of a high-dimensional interface tool is proposed for instrument design, for fine-tuning the mapping between movement sensor data and sound synthesis control data.
@inproceedings{Choi2003, author = {Choi, Insook}, title = {A Component Model of Gestural Primitive Throughput}, pages = {201--204}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176499}, url = {http://www.nime.org/proceedings/2003/nime2003_201.pdf}, keywords = {Performance gestures, musical gestures, instrument design, mapping, tuning, affordances, stability. } }
Cléo Palacio-Quintin. 2003. The Hyper-Flute. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 206–207. http://doi.org/10.5281/zenodo.1176549
Abstract
Download PDF DOI
The Hyper-Flute is a standard Boehm flute (the model used is a Powell 2100, made in Boston) extended via electronic sensors that link it to a computer, enabling control of digital sound processing parameters while performing. The instrument’s electronic extensions are described in some detail, and performance applications are briefly discussed.
@inproceedings{PalacioQuintin2003, author = {Palacio-Quintin, Cl\'{e}o}, title = {The Hyper-Flute}, pages = {206--207}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176549}, url = {http://www.nime.org/proceedings/2003/nime2003_206.pdf}, keywords = {Digital sound processing, flute, hyper-instrument, interactive music, live electronics, performance, sensors.} }
Jesse T. Allison and Timothy Place. 2003. SensorBox: Practical Audio Interface for Gestural Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 208–210. http://doi.org/10.5281/zenodo.1176482
Abstract
Download PDF DOI
SensorBox is a low cost, low latency, high-resolution interface for obtaining gestural data from sensors for use in realtime with a computer-based interactive system. We discuss its implementation, benefits, current limitations, and compare it with several popular interfaces for gestural data acquisition.
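A sketch under my own assumptions (synthetic signal, arbitrary control rate, not the SensorBox firmware or drivers) of the general idea of acquiring sensor data through an audio interface: the audio-rate sensor channel is block-averaged down to a slow control stream.

import numpy as np

sr = 44100
t = np.arange(sr) / sr
sensor = 0.5 + 0.3 * np.sin(2 * np.pi * 2 * t)   # fake 2 Hz gesture signal

def control_stream(signal, sr, rate=100):
    # Average fixed-size blocks to get one control value per frame.
    hop = sr // rate
    frames = signal[:len(signal) // hop * hop].reshape(-1, hop)
    return frames.mean(axis=1)

print(control_stream(sensor, sr)[:5])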
@inproceedings{Allison2003, author = {Allison, Jesse T. and Place, Timothy}, title = {SensorBox: Practical Audio Interface for Gestural Performance}, pages = {208--210}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176482}, url = {http://www.nime.org/proceedings/2003/nime2003_208.pdf}, keywords = {Sensors, gestural acquisition, audio interface, interactive music, SensorBox. } }
Kevin C. Baird. 2003. Multi-Conductor: An Onscreen Polymetrical Conducting and Notation Display System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 211–212. http://doi.org/10.5281/zenodo.1176488
Abstract
Download PDF DOI
This software tool, developed in Max/MSP, presents performers with image files consisting of traditional notation as well as conducting in the form of video playback. The impetus for this work was the desire to allow the musical material for each performer of a given piece to differ with regard to content and tempo.
@inproceedings{Baird2003, author = {Baird, Kevin C.}, title = {Multi-Conductor: An Onscreen Polymetrical Conducting and Notation Display System}, pages = {211--212}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176488}, url = {http://www.nime.org/proceedings/2003/nime2003_211.pdf}, keywords = {Open form, notation, polymeter, polytempi, Max/MSP. } }
William Kleinsasser. 2003. Dsp.rack: Laptop-based Modular, Programmable Digital Signal Processing and Mixing for Live Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 213–215. http://doi.org/10.5281/zenodo.1176525
Abstract
Download PDF DOI
This document describes modular software supporting live signal processing and sound file playback within the Max/MSP environment. Dsp.rack integrates signal processing, memory buffer recording, and pre-recorded multi-channel file playback using an interconnected, programmable signal flow matrix, and an eight-channel i/o format.
@inproceedings{Kleinsasser2003, author = {Kleinsasser, William}, title = {Dsp.rack: Laptop-based Modular, Programmable Digital Signal Processing and Mixing for Live Performance}, pages = {213--215}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176525}, url = {http://www.nime.org/proceedings/2003/nime2003_213.pdf}, keywords = {Digital signal processing, Max/MSP, computer music performance, matrix routing, live performance processing. } }
Mat Laibowitz. 2003. BASIS: A Genesis in Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 216–217. http://doi.org/10.5281/zenodo.1176527
Abstract
Download PDF DOI
This paper is a demo proposal for a new musical interface based on a DNA-like double-helix and concepts in character generation. It contains a description of the interface, motivations behind developing such an interface, various mappings of the interface to musical applications, and the requirements to demo the interface.
@inproceedings{Laibowitz2003, author = {Laibowitz, Mat}, title = {BASIS: A Genesis in Musical Interfaces}, pages = {216--217}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176527}, url = {http://www.nime.org/proceedings/2003/nime2003_216.pdf}, keywords = {Performance, Design, Experimentation, DNA, Big Five. } }
David Merrill. 2003. Head-Tracking for Gestural and Continuous Control of Parameterized Audio Effects. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 218–219. http://doi.org/10.5281/zenodo.1176531
Abstract
Download PDF DOI
This paper describes a system which uses the output from head-tracking and gesture recognition software to drive a parameterized guitar effects synthesizer in real-time.
@inproceedings{Merrill2003, author = {Merrill, David}, title = {Head-Tracking for Gestural and Continuous Control of Parameterized Audio Effects}, pages = {218--219}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176531}, url = {http://www.nime.org/proceedings/2003/nime2003_218.pdf}, keywords = {Head-tracking, gestural control, continuous control, parameterized effects processor. } }
Eric Singer. 2003. Sonic Banana: A Novel Bend-Sensor-Based MIDI Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 220–221. http://doi.org/10.5281/zenodo.1176563
Abstract
Download PDF DOI
This paper describes the Sonic Banana, a bend-sensor based alternative MIDI controller.
@inproceedings{Singer2003, author = {Singer, Eric}, title = {Sonic Banana: A Novel Bend-Sensor-Based {MIDI} Controller}, pages = {220--221}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176563}, url = {http://www.nime.org/proceedings/2003/nime2003_220.pdf}, keywords = {Interactive, controller, bend, sensors, performance, MIDI.} }
David Muth and Ed Burton. 2003. Sodaconductor. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 222–224. http://doi.org/10.5281/zenodo.1176537
Abstract
Download PDF DOI
Sodaconductor is a musical interface for generating OSC control data based on the dynamic physical simulation tool Sodaconstructor as it can be seen and heard on http://www.sodaplay.com.
@inproceedings{Muth2003, author = {Muth, David and Burton, Ed}, title = {Sodaconductor}, pages = {222--224}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176537}, url = {http://www.nime.org/proceedings/2003/nime2003_222.pdf}, keywords = {Sodaconstructor, Soda, Open Sound Control, Networked Performance, Physical Simulation, Generative Composition, Java Application, Non-Linear Sequencing.} }
Emmanuel Fléty and Marc Sirguy. 2003. EoBody: a Follow-up to AtoMIC Pro’s Technology. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 225–226. http://doi.org/10.5281/zenodo.1176505
Abstract
Download PDF DOI
Ircam has been deeply involved in gesture analysis and sensing for about four years now, as several artistic projects demonstrate. Ircam has often been solicited for sharing software and hardware tools for gesture sensing, especially devices for the acquisition and conversion of sensor data, such as the AtoMIC Pro [1][2]. This demo-paper describes the recent design of a new sensor-to-MIDI interface called EoBody.
@inproceedings{Flety2003, author = {Fl\'{e}ty, Emmanuel and Sirguy, Marc}, title = {EoBody : a Follow-up to AtoMIC Pro's Technology}, pages = {225--226}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176505}, url = {http://www.nime.org/proceedings/2003/nime2003_225.pdf}, keywords = {Gestural controller, Sensor, MIDI, Computer Music. } }
Joseph A. Paradiso. 2003. Dual-Use Technologies for Electronic Music Controllers: A Personal Perspective. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 228–234. http://doi.org/10.5281/zenodo.1176551
Abstract
Download PDF DOI
Several well-known alternative musical controllers were inspired by sensor systems developed in other fields, often coming to their musical application via surprising routes. Correspondingly, work on electronic music controllers has relevance to other applications and broader research themes. In this article, I give a tour though several controller systems that I have been involved with over the past decade and outline their connections with other areas of inquiry.
@inproceedings{Paradiso2003, author = {Paradiso, Joseph A.}, title = {Dual-Use Technologies for Electronic Music Controllers: A Personal Perspective}, pages = {228--234}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176551}, url = {http://www.nime.org/proceedings/2003/nime2003_228.pdf} }
Claude Cadoz, Annie Luciani, Jean-Loup Florens, and Nicolas Castagné. 2003. ACROE — ICA Artistic Creation and Computer Interactive Multisensory Simulation Force Feedback Gesture Transducers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 235–246. http://doi.org/10.5281/zenodo.1176494
BibTeX
Download PDF DOI
@inproceedings{Cadoz2003, author = {Cadoz, Claude and Luciani, Annie and Florens, Jean-Loup and Castagn\'{e}, Nicolas}, title = {{AC}ROE --- {ICA} Artistic Creation and Computer Interactive Multisensory Simulation Force Feedback Gesture Transducers}, pages = {235--246}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2003}, date = {22-24 May, 2003}, address = {Montreal, Canada}, issn = {2220-4806}, doi = {10.5281/zenodo.1176494}, url = {http://www.nime.org/proceedings/2003/nime2003_235.pdf} }
2002
Daniel Arfib and Jacques Dudon. 2002. A Digital Emulator of the Photosonic Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 1–4. http://doi.org/10.5281/zenodo.1176388
Abstract
Download PDF DOI
In this paper we describe the digital emulation of an optical photosonic instrument. First we briefly describe the optical instrument which is the basis of this emulation. Then we give a musical description of the instrument implementation and its musical use, and we conclude with the "duo" possibility of such an emulation.
@inproceedings{Arfib2002, author = {Arfib, Daniel and Dudon, Jacques}, title = {A Digital Emulator of the Photosonic Instrument}, pages = {1--4}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176388}, url = {http://www.nime.org/proceedings/2002/nime2002_001.pdf}, keywords = {Photosonic synthesis, digital emulation, Max-Msp, gestural devices.} }
Alain Baumann and Rosa Sánchez. 2002. Interdisciplinary Applications of New Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 5–9. http://doi.org/10.5281/zenodo.1176390
Abstract
Download PDF DOI
In this paper we will have a short overview of some of the systems we have been developing as an independent company over the last years. We will focus especially on our latest experiments in developing wireless gestural systems using the camera as an interactive tool to generate 2D and 3D visuals and music.
@inproceedings{Baumann2002, author = {Baumann, Alain and S\'{a}nchez, Rosa}, title = {Interdisciplinary Applications of New Instruments}, pages = {5--9}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176390}, url = {http://www.nime.org/proceedings/2002/nime2002_005.pdf}, keywords = {interdisciplinary applications of new instruments, mixed media instruments} }
David Bernard. 2002. Experimental Controllers for Live Electronic Music Performance (vs. Copyright). Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 10–11. http://doi.org/10.5281/zenodo.1176392
Abstract
Download PDF DOI
This paper describes the design and development of several musical instruments and MIDI controllers built by David Bernard (as part of The Sound Surgery project: www.thesoundsurgery.co.uk) and used in club performances around Glasgow during 1995-2002. It argues that changing technologies and copyright are shifting our understanding of music from "live art" to "recorded medium" whilst blurring the boundaries between sound and visual production.
@inproceedings{Bernard2002, author = {Bernard, David}, title = {Experimental Controllers for Live Electronic Music Performance (vs. Copyright).}, pages = {10--11}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176392}, url = {http://www.nime.org/proceedings/2002/nime2002_010.pdf}, keywords = {Live electronic music, experimental instruments, MIDI controllers, audio-visual synchronisation, copyright, SKINS digital hand drum.} }
Tina Blaine and Clifton Forlines. 2002. JAM-O-WORLD: Evolution of the Jam-O-Drum Multi-player Musical Controller into the Jam-O-Whirl Gaming Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 12–17. http://doi.org/10.5281/zenodo.1176394
Abstract
Download PDF DOI
This paper discusses the Jam-O-Drum multi-player musical controller and its adaptation into a gaming controller interface known as the Jam-O-Whirl. The Jam-O-World project positioned these two controller devices in a dedicated projection environment that enabled novice players to participate in immersive musical gaming experiences. Players’ actions, detected via embedded sensors in an integrated tabletop surface, control game play, real-time computer graphics and musical interaction. Jam-O-World requires physical and social interaction as well as collaboration among players.
@inproceedings{Blaine2002, author = {Blaine, Tina and Forlines, Clifton}, title = {JAM-O-WORLD: Evolution of the Jam-O-Drum Multi-player Musical Controller into the Jam-O-Whirl Gaming Interface}, pages = {12--17}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176394}, url = {http://www.nime.org/proceedings/2002/nime2002_012.pdf}, keywords = {Collaboration, computer graphics, embedded sensors, gaming controller, immersive musical gaming experiences, musical controller, multi-player, novice, social interaction.} }
Bert Bongers and Yolande Harris. 2002. A Structured Instrument Design Approach: The Video-Organ. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 18–23. http://doi.org/10.5281/zenodo.1176396
Abstract
Download PDF DOI
The Video-Organ is an instrument for the live performance of audio-visual material. To design an interface we apply a modular approach, in an attempt to split up the complex task of finding physical interfaces and mappings to control sound and video as generated by the computer. Generally, most modules, or instrumentlets as they are called, consist of a human interface element mapped to a certain effect. To describe the instrumentlets a design space is used consisting of the parameters degrees of freedom, range and precision. This paper addresses the notion that traditional approaches to composition are challenged and changed in this situation, where the material is both audio and visual, and where the design and development of an instrument becomes involved in the process of performing and composing.
@inproceedings{Bongers2002, author = {Bongers, Bert and Harris, Yolande}, title = {A Structured Instrument Design Approach: The Video-Organ}, pages = {18--23}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176396}, url = {http://www.nime.org/proceedings/2002/nime2002_018.pdf} }
Matthew Burtner. 2002. Noisegate 67 for Metasaxophone: Composition and Performance Considerations of a New Computer Music Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 24–29. http://doi.org/10.5281/zenodo.1176398
Abstract
Download PDF DOI
Noisegate 67 was the first fully interactive composition written for the Computer Metasaxophone, a new computer controller interface for electroacoustic music. The Metasaxophone is an acoustic tenor saxophone retrofitted with an onboard computer microprocessor and an array of sensors that convert performance data into MIDI control messages. While maintaining full acoustic functionality the Metasaxophone is a versatile MIDI controller. This paper discusses the compositionally driven technical and aesthetic concerns that went into building the Metasaxophone, and the resulting aesthetic implementations in Noisegate 67. By juxtaposing the compositional approach to the saxophone before and after the electronic enhancements an attempt is made to expose working paradigms of composition for metainstruments.
@inproceedings{Burtner2002, author = {Burtner, Matthew}, title = {Noisegate 67 for Metasaxophone: Composition and Performance Considerations of a New Computer Music Controller}, pages = {24--29}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176398}, url = {http://www.nime.org/proceedings/2002/nime2002_024.pdf} }
Antonio Camurri, Riccardo Trocca, and Gualtiero Volpe. 2002. Interactive Systems Design: A KANSEI-based Approach. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 30–37. http://doi.org/10.5281/zenodo.1176400
Abstract
Download PDF DOI
This paper presents some of our recent research on computational models and algorithms for real-time analysis of full-body human movement. The focus here is on techniques to extract in real-time expressive cues relevant to KANSEI and emotional content in human expressive gesture, e.g., in dance and music performances. Expressive gesture can contribute to new perspectives for the design of interactive systems. The EyesWeb open software platform is a main concrete result from our research work. EyesWeb is used in interactive applications, including music and other artistic productions, museum interactive exhibits, therapy and rehabilitation, based on the paradigm of expressive gesture. EyesWeb is freely available from www.eyesweb.org.
@inproceedings{Camurri2002, author = {Camurri, Antonio and Trocca, Riccardo and Volpe, Gualtiero}, title = {Interactive Systems Design: A KANSEI-based Approach}, pages = {30--37}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176400}, url = {http://www.nime.org/proceedings/2002/nime2002_030.pdf} }
Joel Chadabe. 2002. The Limitations of Mapping as a Structural Descriptive in Electronic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 38–42. http://doi.org/10.5281/zenodo.1176402
Abstract
Download PDF DOI
Mapping, which describes the way a performer’s controls are connected to sound variables, is a useful concept when applied to the structure of electronic instruments modelled after traditional acoustic instruments. But mapping is a less useful concept when applied to the structure of complex and interactive instruments in which algorithms generate control information. This paper relates the functioning and benefits of different types of electronic instruments to the structural principles on which they are based. Structural models of various instruments will be discussed and musical examples played.
@inproceedings{Chadabe2002, author = {Chadabe, Joel}, title = {The Limitations of Mapping as a Structural Descriptive in Electronic Instruments}, pages = {38--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176402}, url = {http://www.nime.org/proceedings/2002/nime2002_038.pdf}, keywords = {mapping fly-by-wire algorithmic network interactivity instrument deterministic indeterministic} }
Jean-Michel Couturier. 2002. A Scanned Synthesis Virtual Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 43–45. http://doi.org/10.5281/zenodo.1176404
Abstract
Download PDF DOI
This paper describes a virtual musical instrument based on the scanned synthesis technique and implemented in Max-Msp. The device is composed of a computer and three gesture sensors. The timbre of the produced sound is rich and changing. The instrument proposes an intuitive and expressive control of the sound thanks to a complex mapping between gesture and sound.
@inproceedings{Couturier2002, author = {Couturier, Jean-Michel}, title = {A Scanned Synthesis Virtual Instrument}, pages = {43--45}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176404}, url = {http://www.nime.org/proceedings/2002/nime2002_043.pdf}, keywords = {graphics tablet, meta-parameters, multi-touch tactile surface, scanned synthesis} }
Gideon D’Arcangelo. 2002. Creating a Context for Musical Innovation: A NIME Curriculum. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 46–49. http://doi.org/10.5281/zenodo.1176406
Abstract
Download PDF DOI
This paper presents the approaches and expectations of a recently launched course at New York University (NYU) in the design and development of musical controllers. The framework for the course, which is also entitled "New Interfaces for Musical Expression," is largely based on the proceedings of the first NIME workshop held in Seattle, WA in April 2001.
@inproceedings{DArcangelo2002, author = {D'Arcangelo, Gideon}, title = {Creating a Context for Musical Innovation: A NIME Curriculum}, pages = {46--49}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176406}, url = {http://www.nime.org/proceedings/2002/nime2002_046.pdf}, keywords = {creative expression, input devices, musical controllers} }
Sidney S. Fels and Florian Vogt. 2002. Tooka: Explorations of Two Person Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 50–55. http://doi.org/10.5281/zenodo.1176408
Abstract
Download PDF DOI
In this paper we describe three new music controllers, each designed to be played by two players. As the intimacy between two people increases so does their ability to anticipate and predict the other’s actions. We hypothesize that this intimacy between two people can be used as a basis for new controllers for musical expression. Looking at ways people communicate non-verbally, we are developing three new instruments based on different communication channels. The Tooka is a hollow tube with a pressure sensor and buttons for each player. Players place opposite ends in their mouths and modulate the pressure in the tube with their tongues and lungs, controlling sound. Coordinated button presses control the music as well. The Pushka, yet to be built, is a semirigid rod with strain gauges and position sensors to track the rod’s position. Each player holds opposite ends of the rod and manipulates it together. Bend, end point position, velocity, acceleration, and torque are mapped to musical parameters. The Pullka, yet to be built, is simply a string attached at both ends with two bridges. Tension is measured with strain gauges. Players manipulate the string tension at each end together to modulate sound. We are looking at different musical mappings appropriate for two players.
@inproceedings{Fels2002, author = {Fels, Sidney S. and Vogt, Florian}, title = {Tooka: Explorations of Two Person Instruments}, pages = {50--55}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176408}, url = {http://www.nime.org/proceedings/2002/nime2002_050.pdf}, keywords = {Two person musical instruments, intimacy, human-human communication, cooperative music, passive haptic interface} }
Kieran Ferris and Liam Bannon. 2002. The Musical Box Garden. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 56–58. http://doi.org/10.5281/zenodo.1176410
Abstract
Download PDF DOI
The Cardboard Box Garden (CBG) originated from a dissatisfaction with current computer technology as it is presented to children. This paper shall briefly review the process involved in the creation of this installation, from motivation through to design and subsequent implementation and user experience with the CBG. Through the augmentation of an everyday artefact, namely the standard cardboard box, a simple yet powerful interactive environment was created that has achieved its goal of stirring children's imagination, judging from the experience of our users.
@inproceedings{Ferris2002, author = {Ferris, Kieran and Bannon, Liam}, title = {The Musical Box Garden}, pages = {56--58}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176410}, url = {http://www.nime.org/proceedings/2002/nime2002_056.pdf}, keywords = {Education, play, augmented reality, pervasive computing, disappearing computer, assembly, cardboard box} }
Emmanuel Fléty. 2002. AtoMIC Pro: a Multiple Sensor Acquisition Device. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 59–64. http://doi.org/10.5281/zenodo.1176412
Abstract
Download PDF DOI
Research and musical creation with gestural-oriented interfaces have recently seen a renewal of interest and activity at Ircam [1][2]. In the course of several musical projects, undertaken by young composers attending the one-year Course in Composition and Computer Music or by guest artists, Ircam Education and Creation departments have proposed various solutions for gesture-controlled sound synthesis and processing. In this article, we describe the technical aspects of AtoMIC Pro, an Analog to MIDI converter proposed as a re-usable solution for digitizing several sensors in different contexts such as interactive sound installation or virtual instruments. The main direction of our research, and of this one in particular, is to create tools that can be fully integrated into an artistic project as a real part of the composition and performance processes.
@inproceedings{Flety2002, author = {Fl\'{e}ty, Emmanuel}, title = {AtoMIC Pro: a Multiple Sensor Acquisition Device}, pages = {59--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176412}, url = {http://www.nime.org/proceedings/2002/nime2002_059.pdf}, keywords = {Gestural controller, Sensor, MIDI, Music. Solution for Multi-sensor Acquisition} }
Ashley Gadd and Sidney S. Fels. 2002. MetaMuse: Metaphors for Expressive Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 65–70. http://doi.org/10.5281/zenodo.1176414
Abstract
Download PDF DOI
We explore the role that metaphor plays in developing expressive devices by examining the MetaMuse system. MetaMuse is a prop-based system that uses the metaphor of rainfall to make the process of granular synthesis understandable. We discuss MetaMuse within a framework we call "transparency" that can be used as a predictor of the expressivity of musical devices. Metaphor depends on a literature, or cultural basis, which forms the basis for making transparent device mappings. In this context we evaluate the effect of metaphor in the MetaMuse system.
@inproceedings{Gadd2002, author = {Gadd, Ashley and Fels, Sidney S.}, title = {MetaMuse: Metaphors for Expressive Instruments}, pages = {65--70}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176414}, url = {http://www.nime.org/proceedings/2002/nime2002_065.pdf}, keywords = {Expressive interface, transparency, metaphor, prop-based controller, granular synthesis.} }
Niall J. Griffith, Sean O’Leary, Donagh O’Shea, Ed Hammond, and Sile O’Modhrain. 2002. Circles and Seeds: Adapting Kpelle Ideas about Music Performance for Collaborative Digital Music performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 71–72. http://doi.org/10.5281/zenodo.1176416
Abstract
Download PDF DOI
The use of free gesture in making music has usually been confined to instruments that use direct mappings between movement and sound space. Here we demonstrate the use of categories of gesture as the basis of musical learning and performance collaboration. These are used in a system that reinterprets the approach to learning through performance that is found in many musical cultures and discussed here through the example of Kpelle music.
@inproceedings{Griffith2002, author = {Griffith, Niall J. and O'Leary, Sean and O'Shea, Donagh and Hammond, Ed and O'Modhrain, Sile}, title = {Circles and Seeds: Adapting Kpelle Ideas about Music Performance for Collaborative Digital Music performance}, pages = {71--72}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176416}, url = {http://www.nime.org/proceedings/2002/nime2002_071.pdf}, keywords = {Collaboration, Performance, Metaphor, Gesture} }
Eric Gunther, Glorianna Davenport, and Sile O’Modhrain. 2002. Cutaneous Grooves: Composing for the Sense of Touch. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 73–79. http://doi.org/10.5281/zenodo.1176418
Abstract
Download PDF DOI
This paper presents a novel coupling of haptics technology and music, introducing the notion of tactile composition or aesthetic composition for the sense of touch. A system that facilitates the composition and perception of intricate, musically structured spatio-temporal patterns of vibration on the surface of the body is described. An initial test of the system in a performance context is discussed. The fundamental building blocks of a compositional language for touch are considered.
@inproceedings{Gunther2002, author = {Gunther, Eric and Davenport, Glorianna and O'Modhrain, Sile}, title = {Cutaneous Grooves: Composing for the Sense of Touch}, pages = {73--79}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176418}, url = {http://www.nime.org/proceedings/2002/nime2002_073.pdf}, keywords = {multi-modal,music,tactile composition,vibrotactile} }
Tim Hankins, David Merrill, and Jocelyn Robert. 2002. Circular Optical Object Locator. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 80–81. http://doi.org/10.5281/zenodo.1176420
Abstract
Download PDF DOI
The Circular Optical Object Locator is a collaborative and cooperative music-making device. It uses an inexpensive digital video camera to observe a rotating platter. Opaque objects placed on the platter are detected by the camera during rotation. The locations of the objects passing under the camera are used to generate music.
@inproceedings{Hankins2002, author = {Hankins, Tim and Merrill, David and Robert, Jocelyn}, title = {Circular Optical Object Locator}, pages = {80--81}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176420}, url = {http://www.nime.org/proceedings/2002/nime2002_080.pdf}, keywords = {Input devices, music controllers, collaborative, real-time score manipulation.} }
Leila Hasan, Nicholas Yu, and Joseph A. Paradiso. 2002. The Termenova: A Hybrid Free-Gesture Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 82–87. http://doi.org/10.5281/zenodo.1176422
Abstract
Download PDF DOI
We have created a new electronic musical instrument, referred to as the Termenova (Russian for "daughter of Theremin") that combines a free-gesture capacitive sensing device with an optical sensing system that detects the reflection of a hand when it intersects a beam of an array of red lasers. The laser beams, which are made visible by a thin layer of theatrical mist, provide visual feedback and guidance to the performer to alleviate the difficulties of using a non-contact interface as well as adding an interesting component for the audience to observe. The system uses capacitive sensing to detect the proximity of the player’s hands; this distance is mapped to pitch, volume, or other continuous effect. The laser guide positions are calibrated before play with position controlled servo motors interfaced to a main controller board; the location of each beam corresponds to the position where the performer should move his or her hand to achieve a pre-specified pitch and/or effect. The optical system senses the distance of the player’s hands from the source of each laser beam, providing an additional dimension of musical control.
@inproceedings{Hasan2002, author = {Hasan, Leila and Yu, Nicholas and Paradiso, Joseph A.}, title = {The Termenova : A Hybrid Free-Gesture Interface}, pages = {82--87}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176422}, url = {http://www.nime.org/proceedings/2002/nime2002_082.pdf}, keywords = {Theremin, gesture interface, capacitive sensing, laser harp, optical proximity sensing, servo control, musical controller} }
Andy D. Hunt, Marcelo M. Wanderley, and Matthew Paradis. 2002. The importance of Parameter Mapping in Electronic Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 88–93. http://doi.org/10.5281/zenodo.1176424
Abstract
Download PDF DOI
In this paper we challenge the assumption that an electronic instrument consists solely of an interface and a sound generator. We emphasise the importance of the mapping between input parameters and system parameters, and claim that this can define the very essence of an instrument.
@inproceedings{Hunt2002, author = {Hunt, Andy D. and Wanderley, Marcelo M. and Paradis, Matthew}, title = {The importance of Parameter Mapping in Electronic Instrument Design}, pages = {88--93}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176424}, url = {http://www.nime.org/proceedings/2002/nime2002_088.pdf}, keywords = {electronic musical instruments,human-computer interaction,mapping strategies} }
Robert Huott. 2002. An Interface for Precise Musical Control. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 94–98. http://doi.org/10.5281/zenodo.1176428
Abstract
Download PDF DOI
This paper is a design report on a prototype musical controller based on fiberoptic sensing pads from Tactex Controls [8]. It will discuss elements of form factor, technical design, and tuning/sound generation systems tested while building the device I have dubbed ‘the Ski’. The goal is the creation of a fine musical instrument with which a skilled performer can play music from standard repertoire as well as break sonic ground in modern forms.
@inproceedings{Huott2002, author = {Huott, Robert}, title = {An Interface for Precise Musical Control}, pages = {94--98}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176428}, url = {http://www.nime.org/proceedings/2002/nime2002_094.pdf}, keywords = {musical controller, Tactex, tactile interface, tuning systems} }
Thor Magnusson. 2002. IXI software. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 101–101. http://doi.org/10.5281/zenodo.1176384
Abstract
Download PDF DOI
We are interested in exhibiting our programs at your demo section at the conference. We believe that the subject of your conference is precisely what we are experimenting with in our musical software.
@inproceedings{Magnusson2002, author = {Magnusson, Thor}, title = {IXI software}, pages = {101--101}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176384}, url = {http://www.nime.org/proceedings/2002/nime2002_101.pdf}, keywords = {Further info on our website http//www.ixi-software.net.} }
Sergi Jordà. 2002. Afasia: the Ultimate Homeric One-man-multimedia-band. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 102–107. http://doi.org/10.5281/zenodo.1176432
Abstract
Download PDF DOI
In this paper we present Afasia, an interactive multimedia performance based on Homer’s Odyssey [2]. Afasia is a one-man digital theater play in which a lone performer fitted with a sensor-suit conducts, like Homer, the whole show by himself, controlling 2D animations, DVD video and conducting the music mechanically performed by a robot quartet. After contextualizing the piece, all of its technical elements, starting with the hardware input and output components, are described. A special emphasis is given to the interactivity strategies and the subsequent software design. Since its first version premiered in Barcelona in 1998, Afasia has been performed in many European and American countries and has received several international awards.
@inproceedings{Jorda2002, author = {Jord\`{a}, Sergi}, title = {Afasia: the Ultimate Homeric One-man-multimedia-band}, pages = {102--107}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176432}, url = {http://www.nime.org/proceedings/2002/nime2002_102.pdf}, keywords = {Multimedia interaction, musical robots, real-time musical systems.} }
Ajay Kapur, Georg Essl, Philip L. Davidson, and Perry R. Cook. 2002. The Electronic Tabla Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 108–112. http://doi.org/10.5281/zenodo.1176434
Abstract
Download PDF DOI
This paper describes the design of an electronic Tabla controller. The E-Tabla controls both sound and graphics simultaneously. It allows for a variety of traditional Tabla strokes and new performance techniques. Graphical feedback allows for artistic display and pedagogical feedback.
@inproceedings{Kapur2002, author = {Kapur, Ajay and Essl, Georg and Davidson, Philip L. and Cook, Perry R.}, title = {The Electronic Tabla Controller}, pages = {108--112}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176434}, url = {http://www.nime.org/proceedings/2002/nime2002_108.pdf}, keywords = {Electronic Tabla, Indian Drum Controller, Physical Models, Graphical Feedback} }
Loïc Kessous. 2002. Bi-manual Mapping Experimentation, with Angular Fundamental Frequency Control and Sound Color Navigation. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 113–114. http://doi.org/10.5281/zenodo.1176436
Abstract
Download PDF DOI
In this paper, we describe a computer-based solo musical instrument for live performance. We have adapted a Wacom graphic tablet equipped with a stylus transducer and a game joystick to use them as a solo expressive instrument. We have used a formant-synthesis model that can produce a vowel-like singing voice. This instrument allows multidimensional expressive fundamental frequency control and vowel articulation. The fundamental frequency angular control used here allows different mapping adjustments that correspond to different melodic styles.
@inproceedings{Kessous2002, author = {Kessous, Lo\"{\i}c}, title = {Bi-manual Mapping Experimentation, with Angular Fundamental Frequency Control and Sound Color Navigation}, pages = {113--114}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176436}, url = {http://www.nime.org/proceedings/2002/nime2002_113.pdf}, keywords = {Bi-manual, off-the-shelf input devices, fundamental frequency control, sound color navigation, mapping.} }
Tod Machover. 2002. Instruments, Interactivity, and Inevitability. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 115–115. http://doi.org/10.5281/zenodo.1176438
Abstract
Download PDF DOI
It is astonishing to think that a mere twenty years ago, real-time music production and performance was not only in a fledgling state with only primitive (such as the IRCAM 4X machine) or limited (like the Synclavier) capabilities, but was also the subject of very heated debate. At IRCAM in the early 1980’s, for instance, some (such as Luciano Berio) questioned whether any digital technology could ever be truly "instrumental", while others (such as Jean-Claude Risset) doubted whether real-time activity of any sort would ever acquire the richness and introspection of composition.
@inproceedings{Machover2002, author = {Machover, Tod}, title = {Instruments, Interactivity, and Inevitability}, pages = {115--115}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176438}, url = {http://www.nime.org/proceedings/2002/nime2002_115.pdf} }
James Mandelis. 2002. Adaptive Hyperinstruments: Applying Evolutionary Techniques to Sound Synthesis and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 116–117. http://doi.org/10.5281/zenodo.1176440
Abstract
Download PDF DOI
This paper describes the Genophone [2], a hyperinstrument developed for Sound-Performance-Design using the evolutionary paradigm of selective breeding as the driving process. Sound design, and control assignments (performance mappings), on most current systems rely heavily on an intimate knowledge of the Sound Synthesis Techniques (SSTs) employed by the sound generator (hardware or software based). This intimate knowledge can only be achieved by investing long periods of time playing around with sounds and experimenting with how parameters change the nature of the sounds produced. This experience is also needed when control mappings are defined for performance purposes, so external stimuli can effect changes in SST parameters. Often such experience can be gained after years of interaction with one particular SST. The system presented here attempts to aid the user in designing performance sounds and mappings without the necessity for deep knowledge of the SSTs involved. This is achieved by a selective breeding process on populations of individual sounds and their mapping. The initial populations are made up of individuals of existing hand-coded sounds and their mapping. Initial populations never have randomly derived individuals (this is not an issue as man’s best friend was also not selectively bred from protozoa). The user previews the population then expresses how much individuals are liked by their relative repositioning on the screen (fitness). Some individuals are selected as parents to create a new population of offspring, through variable mutation and genetic recombination. These operators use the fitness as a bias for their function, and they were also successfully used in MutaSynth [1]. The offspring are then evaluated (as their parents were) and selected for breeding. This cycle continues until satisfactory sounds and their mapping are reached. Individuals can also be saved to disk for future "strain" development. The aim of the system is to encourage the creation of novel performance mappings and sounds with emphasis on exploration, rather than designs that satisfy specific a priori criteria.
@inproceedings{Mandelis2002, author = {Mandelis, James}, title = {Adaptive Hyperinstruments: Applying Evolutionary Techniques to Sound Synthesis and Performance}, pages = {116--117}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176440}, url = {http://www.nime.org/proceedings/2002/nime2002_116.pdf}, keywords = {adaptive interfaces, artificial life,expressivity, hyperinstruments, live performance, motion-to-sound mapping, selective breeding, sound meta-synthesis} }
Mark T. Marshall, Matthias Rath, and Breege Moynihan. 2002. The Virtual Bodhran – The Vodhran. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 118–119. http://doi.org/10.5281/zenodo.1176442
Abstract
Download PDF DOI
This paper introduces a subtle interface, which evolved from the design of an alternative gestural controller in the development of a performance interface. The conceptual idea used is based on that of the traditional Bodhran instrument, an Irish frame drum. The design process was user-centered and involved professional Bodhran players; through prototyping and user testing the resulting Vodhran emerged.
@inproceedings{Marshall2002, author = {Marshall, Mark T. and Rath, Matthias and Moynihan, Breege}, title = {The Virtual Bodhran -- The Vodhran}, pages = {118--119}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176442}, url = {http://www.nime.org/proceedings/2002/nime2002_118.pdf}, keywords = {Virtual instrument, sound modeling, gesture, user-centered design} }
Graeme Mccaig and Sidney S. Fels. 2002. Playing on Heart-Strings: Experiences with the 2Hearts System. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 120–125. http://doi.org/10.5281/zenodo.1176444
Abstract
Download PDF DOI
Here we present 2Hearts, a music system controlled by the heartbeats of two people. As the players speak and touch, 2Hearts extracts meaningful variables from their heartbeat signals. These variables are mapped to musical parameters, conveying the changing patterns of tension and relaxation in the players’ relationship. We describe the motivation for creating 2Hearts, observations from the prototypes that have been built, and principles learnt in the ongoing development process.
@inproceedings{Mccaig2002, author = {Mccaig, Graeme and Fels, Sidney S.}, title = {Playing on Heart-Strings: Experiences with the 2{H}earts System}, pages = {120--125}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176444}, url = {http://www.nime.org/proceedings/2002/nime2002_120.pdf}, keywords = {Heart Rate, Biosensor, Interactive Music, Non-Verbal Communication, Affective Computing, Ambient Display} }
Lisa McElligott, Edward Dixon, and Michelle Dillon. 2002. ‘PegLegs in Music’ Processing the Effort Generated by Levels of Expressive Gesturing in Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 126–130. http://doi.org/10.5281/zenodo.1176446
Abstract
Download PDF DOI
In this paper we discuss the possibility of augmenting existing musical performance by using a novel sensing device termed 'PegLeg'. This device interprets the movements and motions of a musician during play by allowing the musician to manipulate a sensor in three dimensions. A force sensitive surface allows us to detect, interpret and interface the subtle but integral element of physical "effort" in music playing. This device is designed to extend the musician's control over any given instrument, granting an additional means of 'playing' that would previously have been impossible - granting an additional limb to extend their playing potential - a PegLeg...
@inproceedings{McElligott2002, author = {McElligott, Lisa and Dixon, Edward and Dillon, Michelle}, title = {`PegLegs in Music' Processing the Effort Generated by Levels of Expressive Gesturing in Music}, pages = {126--130}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176446}, url = {http://www.nime.org/proceedings/2002/nime2002_126.pdf}, keywords = {Gesture, weight distribution, effort, expression, intent, movement, 3D sensing pressure, force, sensor, resolution, control device, sound, music, input.} }
Kia Ng. 2002. Interactive Gesture Music Performance Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 131–132. http://doi.org/10.5281/zenodo.1176448
Abstract
Download PDF DOI
This paper briefly describes a number of performance interfaces under the broad theme of Interactive Gesture Music (IGM). With a short introduction, this paper discusses the main components of a Trans-Domain Mapping (TDM) framework, and presents various prototypes developed under this framework, to translate meaningful activities from one creative domain onto another, to provide real-time control of musical events with physical movements.
@inproceedings{Ng2002, author = {Ng, Kia}, title = {Interactive Gesture Music Performance Interface}, pages = {131--132}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176448}, url = {http://www.nime.org/proceedings/2002/nime2002_131.pdf}, keywords = {Gesture, Motion, Interactive, Performance, Music.} }
Charles Nichols. 2002. The vBow: Development of a Virtual Violin Bow Haptic Human-Computer Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 133–136. http://doi.org/10.5281/zenodo.1176450
Abstract
Download PDF DOI
This paper describes the development of a virtual violin bow haptic human-computer interface, which senses bow position with encoders, to drive bowed-string physical model synthesis, while engaging servomotors, to simulate the haptic feedback of a violin bow on a string. Construction of the hardware and programming of the software are discussed, as well as the motivation for building the instrument, and its planned uses.
@inproceedings{Nichols2002, author = {Nichols, Charles}, title = {The vBow: Development of a Virtual Violin Bow Haptic Human-Computer Interface}, pages = {133--136}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176450}, url = {http://www.nime.org/proceedings/2002/nime2002_133.pdf}, keywords = {bow, controller, haptic, hci, interface, violin} }
Roberto Oboe and Giovanni De Poli. 2002. Multi-instrument Virtual Keyboard – The MIKEY Project. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 137–142. http://doi.org/10.5281/zenodo.1176452
Abstract
Download PDF DOI
The design of a virtual keyboard, capable of reproducing the tactile feedback of several musical instruments is reported. The key is driven by a direct drive motor, which allows friction free operations. The force to be generated by the motor is calculated in real time by a dynamic simulator, which contains the model of mechanisms’ components and constraints. Each model is tuned on the basis of measurements performed on the real system. So far, grand piano action, harpsichord and Hammond organ have been implemented successfully on the system presented here.
@inproceedings{Oboe2002, author = {Oboe, Roberto and De Poli, Giovanni}, title = {Multi-instrument Virtual Keyboard -- The MIKEY Project}, pages = {137--142}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176452}, url = {http://www.nime.org/proceedings/2002/nime2002_137.pdf}, keywords = {Virtual mechanisms, dynamic simulation} }
Garth Paine. 2002. GESTATION. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 143–144. http://doi.org/10.5281/zenodo.1176454
Abstract
Download PDF DOI
Interactivity has become a major consideration in the development of a contemporary art practice that engages with the proliferation of computer based technologies.
@inproceedings{Paine2002, author = {Paine, Garth}, title = {GESTATION}, pages = {143--144}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176454}, url = {http://www.nime.org/proceedings/2002/nime2002_143.pdf}, keywords = {are your choice.} }
Laurel S. Pardue and Joseph A. Paradiso. 2002. Musical Navigatrics: New Musical Interactions with Passive Magnetic Tags. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 145–147. http://doi.org/10.5281/zenodo.1176456
Abstract
Download PDF DOI
Passive RF Tagging can provide an attractive medium for development of free-gesture musical interfaces. This was initially explored in our Musical Trinkets installation, which used magnetically-coupled resonant LC circuits to identify and track the position of multiple objects in real-time. Manipulation of these objects in free space over a read coil triggered simple musical interactions. Musical Navigatrics builds upon this success with new, more sensitive and stable sensing, multi-dimensional response, and vastly more intricate musical mappings that enable full musical exploration of free space through the dynamic use and control of arpeggiation and effects. The addition of basic sequencing abilities also allows for the building of complex, layered musical interactions in a uniquely easy and intuitive manner.
@inproceedings{Pardue2002, author = {Pardue, Laurel S. and Paradiso, Joseph A.}, title = {Musical Navigatrics: New Musical Interactions with Passive Magnetic Tags}, pages = {145--147}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176456}, url = {http://www.nime.org/proceedings/2002/nime2002_145.pdf}, keywords = {passive tag, position tracking, music sequencer interface} }
James Patten, Ben Recht, and Hiroshi Ishii. 2002. Audiopad: A Tag-based Interface for Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 148–153. http://doi.org/10.5281/zenodo.1176458
Abstract
Download PDF DOI
We present Audiopad, an interface for musical performance that aims to combine the modularity of knob based controllers with the expressive character of multidimensional tracking interfaces. The performer’s manipulations of physical pucks on a tabletop control a real-time synthesis process. The pucks are embedded with LC tags that the system tracks in two dimensions with a series of specially shaped antennae. The system projects graphical information on and around the pucks to give the performer sophisticated control over the synthesis process.
@inproceedings{Patten2002, author = {Patten, James and Recht, Ben and Ishii, Hiroshi}, title = {Audiopad: A Tag-based Interface for Musical Performance}, pages = {148--153}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176458}, url = {http://www.nime.org/proceedings/2002/nime2002_148.pdf}, keywords = {RF tagging, MIDI, tangible interfaces, musical controllers, object tracking} }
Jordan Wynnychuk, Richard Porcher, Lucas Brajovic, Marko Brajovic, and Nacho Platas. 2002. sutoolz 1.0 alpha: 3D Software Music Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 154–155. http://doi.org/10.5281/zenodo.1176478
Abstract
Download PDF DOI
The demo sutoolz 1.0 alpha is a 3D software interface for music performance. By navigating through a 3D virtual architecture the musician uses a set of 3D tools to interact with the virtual environment: gameplay zones, speaker volumes, speaker volume membranes, speaker navigation volumes and 3D multi-band FFT visualization systems.
@inproceedings{Wynnychuk2002, author = {Wynnychuk, Jordan and Porcher, Richard and Brajovic, Lucas and Brajovic, Marko and Platas, Nacho}, title = {sutoolz 1.0 alpha : {3D} Software Music Interface}, pages = {154--155}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176478}, url = {http://www.nime.org/proceedings/2002/nime2002_154.pdf}, keywords = {3D music interface, 3D sound, analogue input controllers, audio localization, audio visualization, digital architecture, hybrid environments, video game navigation} }
Norbert Schnell and Marc Battier. 2002. Introducing Composed Instruments, Technical and Musicological Implications. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 156–160. http://doi.org/10.5281/zenodo.1176460
Abstract
Download PDF DOI
In this paper, we develop the concept of "composed instruments". We will look at this idea from two perspectives: the design of computer systems in the context of live performed music, and musicological considerations. A historical context is developed. Examples will be drawn from recent compositions. Finally, basic concepts from computer science will be examined for their relationship to this concept.
@inproceedings{Schnell2002, author = {Schnell, Norbert and Battier, Marc}, title = {Introducing Composed Instruments, Technical and Musicological Implications}, pages = {156--160}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176460}, url = {http://www.nime.org/proceedings/2002/nime2002_156.pdf}, keywords = {Instruments, musicology, composed instrument, Theremin, Martenot, interaction, streams, MAX.} }
Tamara Smyth and Julius O. Smith. 2002. Creating Sustained Tones with the Cicada’s Rapid Sequential Buckling Mechanism. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 24–27. http://doi.org/10.5281/zenodo.1176462
Abstract
Download PDF DOI
The cicada uses a rapid sequence of buckling ribs to initiate and sustain vibrations in its tymbal plate (the primary mechanical resonator in the cicada’s sound production system). The tymbalimba, a music controller based on this same mechanism, has a row of 4 convex aluminum ribs (as on the cicada’s tymbal) arranged much like the keys on a calimba. Each rib is spring loaded and capable of snapping down into a V-shape (a motion referred to as buckling), under the downward force of the user’s finger. The energy generated by the buckling motion is measured by an accelerometer located under each rib and used as the input to a physical model.
@inproceedings{Smyth2002, author = {Smyth, Tamara and Smith, Julius O.}, title = {Creating Sustained Tones with the Cicada's Rapid Sequential Buckling Mechanism}, pages = {24--27}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176462}, url = {http://www.nime.org/proceedings/2002/nime2002_161.pdf}, keywords = {Bioacoustics, Physical Modeling, Controllers, Cicada, Buckling mechanism.} }
Stanza. 2002. Amorphoscapes & Soundtoys. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 165–166. http://doi.org/10.5281/zenodo.1176386
Abstract
Download PDF DOI
Amorphoscapes by Stanza are interactive, generative, audio visual, digital paintings and drawings created specifically for the internet. This is interactive art on the Internet, incorporating generative sounds and 3D imaging.
@inproceedings{Stanza2002, author = {Stanza}, title = {Amorphoscapes \& Soundtoys}, pages = {165--166}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176386}, url = {http://www.nime.org/proceedings/2002/nime2002_165.pdf} }
Johannes Taelman. 2002. A Low-cost Sonar for Unobtrusive Man-machine Interfacing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 167–170. http://doi.org/10.5281/zenodo.1176430
Abstract
Download PDF DOI
This paper describes the hardware and the software of a computer-based doppler-sonar system for movement detection. The design is focused on simplicity and low-cost do-it-yourself construction.
@inproceedings{Johannes2002, author = {Johannes, Taelman}, title = {A Low-cost Sonar for Unobtrusive Man-machine Interfacing}, pages = {167--170}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176430}, url = {http://www.nime.org/proceedings/2002/nime2002_167.pdf}, keywords = {sonar} }
Atau Tanaka and Benjamin Knapp. 2002. Multimodal Interaction in Music Using the Electromyogram and Relative Position Sensing. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 171–176. http://doi.org/10.5281/zenodo.1176464
Abstract
Download PDF DOI
This paper describes a technique of multimodal, multichannel control of electronic musical devices using two control methodologies, the Electromyogram (EMG) and relative position sensing. Requirements for the application of multimodal interaction theory in the musical domain are discussed. We introduce the concept of bidirectional complementarity to characterize the relationship between the component sensing technologies. Each control can be used independently, but together they are mutually complementary. This reveals a fundamental difference from orthogonal systems. The creation of a concert piece based on this system is given as an example.
@inproceedings{Tanaka2002, author = {Tanaka, Atau and Knapp, Benjamin}, title = {Multimodal Interaction in Music Using the Electromyogram and Relative Position Sensing}, pages = {171--176}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176464}, url = {http://www.nime.org/proceedings/2002/nime2002_171.pdf}, keywords = {Human Computer Interaction, Musical Controllers, Electromyogram, Position Sensing, Sensor Instruments} }
Bill Verplank, Michael Gurevich, and Max Mathews. 2002. THE PLANK: Designing a Simple Haptic Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 177–180. http://doi.org/10.5281/zenodo.1176466
Abstract
Download PDF DOI
Active force-feedback holds the potential for precise and rapid controls. A high performance device can be built from a surplus disk drive and controlled from an inexpensive microcontroller. Our new design, The Plank, has only one axis of force-feedback with limited range of motion. It is being used to explore methods of feeling and directly manipulating sound waves and spectra suitable for live performance of computer music.
@inproceedings{Verplank2002, author = {Verplank, Bill and Gurevich, Michael and Mathews, Max}, title = {THE PLANK: Designing a Simple Haptic Controller.}, pages = {177--180}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176466}, url = {http://www.nime.org/proceedings/2002/nime2002_177.pdf}, keywords = {Haptics, music controllers, scanned synthesis.} }
Florian Vogt, Graeme Mccaig, Mir A. Ali, and Sidney S. Fels. 2002. Tongue ‘n’ Groove: An Ultrasound based Music Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 181–185. http://doi.org/10.5281/zenodo.1176468
Abstract
Download PDF DOI
Here we propose a novel musical controller which acquires imaging data of the tongue with a two-dimensional medical ultrasound scanner. A computer vision algorithm extracts from the image a discrete tongue shape to control, in realtime, a musical synthesizer and musical effects. We evaluate the mapping space between tongue shape and controller parameters and its expressive characteristics.
@inproceedings{Vogt2002, author = {Vogt, Florian and Mccaig, Graeme and Ali, Mir A. and Fels, Sidney S.}, title = {Tongue `n' Groove: An Ultrasound based Music Controller}, pages = {181--185}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176468}, url = {http://www.nime.org/proceedings/2002/nime2002_181.pdf}, keywords = {Tongue model, ultrasound, real-time, music synthesis, speech interface} }
Gil Weinberg, Roberto Aimi, and Kevin Jennings. 2002. The Beatbug Network – A Rhythmic System for Interdependent Group Collaboration. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 186–191. http://doi.org/10.5281/zenodo.1176470
Abstract
Download PDF DOI
The Beatbugs are hand-held percussive instruments that allow the creation, manipulation, and sharing of rhythmic motifs through a simple interface. When multiple Beatbugs are connected in a network, players can form large-scale collaborative compositions by interdependently sharing and developing each other’s motifs. Each Beatbug player can enter a motif that is then sent through a stochastic computerized "Nerve Center" to other players in the network. Receiving players can decide whether to develop the motif further (by continuously manipulating pitch, timbre, and rhythmic elements using two bend sensor antennae) or to keep it in their personal instrument (by entering and sending their own new motifs to the group). The tension between the system’s stochastic routing scheme and the players’ improvised real-time decisions leads to an interdependent, dynamic, and constantly evolving musical experience. A musical composition entitled "Nerve" was written for the system by author Gil Weinberg. It was premiered in February 2002 as part of Tod Machover’s Toy Symphony [1] in a concert with the Deutsches Symphonie Orchester Berlin, conducted by Kent Nagano. The paper concludes with a short evaluative discussion of the concert and the weeklong workshops that led to it.
@inproceedings{Weinberg2002, author = {Weinberg, Gil and Aimi, Roberto and Jennings, Kevin}, title = {The Beatbug Network -A Rhythmic System for Interdependent Group Collaboration}, pages = {186--191}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176470}, url = {http://www.nime.org/proceedings/2002/nime2002_186.pdf}, keywords = {Interdependent Musical Networks, group playing, percussive controllers.} }
David Wessel, Matthew Wright, and John Schott. 2002. Intimate Musical Control of Computers with a Variety of Controllers and Gesture Mapping Metaphors. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 192–194. http://doi.org/10.5281/zenodo.1176472
Abstract
Download PDF DOI
In this demonstration we will show a variety of computer-based musical instruments designed for live performance. Our design criteria include initial ease of use coupled with a long term potential for virtuosity, minimal and low variance latency, and clear and simple strategies for programming the relationship between gesture and musical result. We present custom controllers and unique adaptations of standard gestural interfaces, a programmable connectivity processor, a communications protocol called Open Sound Control (OSC), and a variety of metaphors for musical control.
@inproceedings{Wessel2002, author = {Wessel, David and Wright, Matthew and Schott, John}, title = {Intimate Musical Control of Computers with a Variety of Controllers and Gesture Mapping Metaphors}, pages = {192--194}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176472}, url = {http://www.nime.org/proceedings/2002/nime2002_192.pdf}, keywords = {Expressive control, mapping gestures to acoustic results, metaphors for musical control, Tactex, Buchla Thunder, digitizing tablets.} }
Carr Wilkerson, Stefania Serafin, and Carmen Ng. 2002. The Mutha Rubboard Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 195–198. http://doi.org/10.5281/zenodo.1176474
Abstract
Download PDF DOI
The Mutha Rubboard is a musical controller based on the rubboard, washboard or frottoir metaphor commonly used in the Zydeco music genre of South Louisiana. It is not only a metamorphosis of a traditional instrument, but a modern bridge of exploration into a rich musical heritage. It uses capacitive and piezo sensing technology to output MIDI and raw audio data. This new controller reads the key placement in two parallel planes by using radio capacitive sensing circuitry expanding greatly on the standard corrugated metal playing surface. The percussive output normally associated with the rubboard is captured through piezo contact sensors mounted directly on the keys (the playing implements). Additionally, mode functionality is controlled by discrete switching on the keys. This new instrument is meant to be easily played by both experienced players and those new to the rubboard. It lends itself to an expressive freedom by placing the control surface on the chest and allowing the hands to move uninhibited about it or by playing it in the usual way, preserving its musical heritage.
@inproceedings{Wilkerson2002, author = {Wilkerson, Carr and Serafin, Stefania and Ng, Carmen}, title = {The Mutha Rubboard Controller}, pages = {195--198}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176474}, url = {http://www.nime.org/proceedings/2002/nime2002_195.pdf}, keywords = {MIDI controllers, computer music, Zydeco music, interactive music, electronic musical instrument, human computer interface, Louisiana heritage, physical modeling, bowl resonators.} }
Todd Winkler. 2002. Fusing Movement, Sound, and Video in Falling Up, an Interactive Dance/Theatre Production. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 199–200. http://doi.org/10.5281/zenodo.1176476
Abstract
Download PDF DOI
Falling Up is an evening-length performance incorporating dance and theatre with movement-controlled audio/video playback and processing. The solo show is a collaboration between Cindy Cummings (performance) and Todd Winkler (sound, video), first performed at the Dublin Fringe Festival, 2001. Each thematic section of the work shows a different type of interactive relationship between movement, video and sound. This demonstration explains the various technical configurations and aesthetic thinking behind aspects of the work.
@inproceedings{Winkler2002, author = {Winkler, Todd}, title = {Fusing Movement, Sound, and Video in Falling Up, an Interactive Dance/Theatre Production}, pages = {199--200}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176476}, url = {http://www.nime.org/proceedings/2002/nime2002_199.pdf}, keywords = {Dance, Video processing, Movement sensor, VNS, Very Nervous System} }
Diana Young. 2002. The Hyperbow Controller: Real-Time Dynamics Measurement of Violin Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 201–206. http://doi.org/10.5281/zenodo.1176480
Abstract
Download PDF DOI
In this paper, the design and construction of a new violin interface, the Hyperbow, is discussed. The motivation driving the research of this instrument was the desire to create a violin bow capable of measuring the most intricate aspects of violin technique: the subtle elements of physical gesture that immediately and directly impact the sound of the instrument while playing. In order to provide this insight into the subtleties of bow articulation, a sensing system has been integrated into a commercial carbon fiber bow to measure changes in position, acceleration, and the downward and lateral strains of the bow stick. The sensors were fashioned using an electromagnetic field sensing technique, commercial MEMS accelerometers, and foil strain gauges. The measurement techniques used in this work were found to be quite sensitive and yielded sensors that were easily controllable by a player using traditional right hand bowing technique.
@inproceedings{Young2002, author = {Young, Diana}, title = {The Hyperbow Controller: Real-Time Dynamics Measurement of Violin Performance}, pages = {201--206}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2002}, date = {24-26 May, 2002}, address = {Dublin, Ireland}, issn = {2220-4806}, doi = {10.5281/zenodo.1176480}, url = {http://www.nime.org/proceedings/2002/nime2002_201.pdf}, keywords = {Hyperbow, Hyperviolin, Hyperinstrument, violin, bow, position sensor, accelerometer, strain sensor} }
2001
Perry R. Cook. 2001. Principles for Designing Computer Music Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 3–6. http://doi.org/10.5281/zenodo.1176358
Abstract
Download PDF DOI
This paper will present observations on the design, artistic, and human factors of creating digital music controllers. Specific projects will be presented, and a set of design principles will be supported from those examples.
@inproceedings{Cook2001, author = {Cook, Perry R.}, title = {Principles for Designing Computer Music Controllers}, pages = {3--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176358}, url = {http://www.nime.org/proceedings/2001/nime2001_003.pdf}, keywords = {Musical control, artistic interfaces.} }
Bill Verplank, Craig Sapp, and Max Mathews. 2001. A Course on Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 7–10. http://doi.org/10.5281/zenodo.1176380
Abstract
Download PDF DOI
Over the last four years, we have developed a series of lectures, labs and project assignments aimed at introducing enough technology so that students from a mix of disciplines can design and build innovative interface devices.
@inproceedings{Verplank2001, author = {Verplank, Bill and Sapp, Craig and Mathews, Max}, title = {A Course on Controllers}, pages = {7--10}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176380}, url = {http://www.nime.org/proceedings/2001/nime2001_007.pdf}, keywords = {Input devices, music controllers, CHI technology, courses.} }
David Wessel and Matthew Wright. 2001. Problems and Prospects for Intimate Musical Control of Computers. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 11–14. http://doi.org/10.5281/zenodo.1176382
Abstract
Download PDF DOI
In this paper we describe our efforts towards the development of live performance computer-based musical instrumentation. Our design criteria include initial ease of use coupled with a long term potential for virtuosity, minimal and low variance latency, and clear and simple strategies for programming the relationship between gesture and musical result. We present custom controllers and unique adaptations of standard gestural interfaces, a programmable connectivity processor, a communications protocol called Open Sound Control (OSC), and a variety of metaphors for musical control. We further describe applications of our technology to a variety of real musical performances and directions for future research.
@inproceedings{Wessel2001, author = {Wessel, David and Wright, Matthew}, title = {Problems and Prospects for Intimate Musical Control of Computers}, pages = {11--14}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176382}, url = {http://www.nime.org/proceedings/2001/nime2001_011.pdf}, keywords = {communications protocols,gestural controllers,latency,musical,reactive computing,signal processing} }
Nicola Orio, Norbert Schnell, and Marcelo M. Wanderley. 2001. Input Devices for Musical Expression : Borrowing Tools from HCI. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 15–18. http://doi.org/10.5281/zenodo.1176370
Abstract
Download PDF DOI
This paper reviews the existing literature on input device evaluation and design in human-computer interaction (HCI) and discusses possible applications of this knowledge to the design and evaluation of new interfaces for musical expression. Specifically, a set of musical tasks is suggested to allow the evaluation of different existing controllers.
@inproceedings{Orio2001, author = {Orio, Nicola and Schnell, Norbert and Wanderley, Marcelo M.}, title = {Input Devices for Musical Expression : Borrowing Tools from HCI}, pages = {15--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176370}, url = {http://www.nime.org/proceedings/2001/nime2001_015.pdf}, keywords = {Input device design, gestural control, interactive systems} }
Curtis Bahn and Dan Trueman. 2001. interface : Electronic Chamber Ensemble. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 19–23. http://doi.org/10.5281/zenodo.1176356
Abstract
Download PDF DOI
This paper presents the interface developments and music of the duo "interface," formed by Curtis Bahn and Dan Trueman. We describe gestural instrument design, interactive performance interfaces for improvisational music, spherical speakers (multi-channel, outward-radiating geodesic speaker arrays) and Sensor-Speaker-Arrays (SenSAs: combinations of various sensor devices with spherical speaker arrays). We discuss the concept, design and construction of these systems, and give examples from several newly published CDs of work by Bahn and Trueman.
@inproceedings{Bahn2001, author = {Bahn, Curtis and Trueman, Dan}, title = {interface : Electronic Chamber Ensemble}, pages = {19--23}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176356}, url = {http://www.nime.org/proceedings/2001/nime2001_019.pdf} }
Camille Goudeseune, Guy Garnett, and Timothy Johnson. 2001. Resonant Processing of Instrumental Sound Controlled by Spatial Position. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 24–26. http://doi.org/10.5281/zenodo.1176362
Abstract
Download PDF DOI
We present an acoustic musical instrument played through a resonance model of another sound. The resonance model is controlled in real time as part of the composite instrument. Our implementation uses an electric violin, whose spatial position modifies filter parameters of the resonance model. Simplicial interpolation defines the mapping from spatial position to filter parameters. With some effort, pitch tracking can also control the filter parameters. The individual technologies – motion tracking, pitch tracking, resonance models – are easily adapted to other instruments.
@inproceedings{Goudeseune2001, author = {Goudeseune, Camille and Garnett, Guy and Johnson, Timothy}, title = {Resonant Processing of Instrumental Sound Controlled by Spatial Position}, pages = {24--26}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176362}, url = {http://www.nime.org/proceedings/2001/nime2001_024.pdf}, keywords = {multidimensionality, control, resonance, pitch tracking} }
Michael Gurevich and Stephan von Muehlen. 2001. The Accordiatron : A MIDI Controller For Interactive Music. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 27–29. http://doi.org/10.5281/zenodo.1176364
Abstract
Download PDF DOI
The Accordiatron is a new MIDI controller for real-time performance based on the paradigm of a conventional squeeze box or concertina. It translates the gestures of a performer to the standard communication protocol of MIDI, allowing for flexible mappings of performance data to sonic parameters. When used in conjunction with a real-time signal processing environment, the Accordiatron becomes an expressive, versatile musical instrument. A combination of sensory outputs providing both discrete and continuous data gives the subtle expressiveness and control necessary for interactive music.
@inproceedings{Gurevich2001, author = {Gurevich, Michael and von Muehlen, Stephan}, title = {The Accordiatron : A {MIDI} Controller For Interactive Music}, pages = {27--29}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176364}, url = {http://www.nime.org/proceedings/2001/nime2001_027.pdf}, keywords = {MIDI controllers, computer music, interactive music, electronic musical instruments, musical instrument design, human computer interface} }
Joseph A. Paradiso, Kai-yuh Hsiao, and Ari Benbasat. 2001. Tangible Music Interfaces Using Passive Magnetic Tags. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 30–33. http://doi.org/10.5281/zenodo.1176374
Abstract
Download PDF DOI
The technologies behind passive resonant magnetically coupled tags are introduced and their application as a musical controller is illustrated for solo or group performances, interactive installations, and music toys.
@inproceedings{Paradiso2001, author = {Paradiso, Joseph A. and Hsiao, Kai-yuh and Benbasat, Ari}, title = {Tangible Music Interfaces Using Passive Magnetic Tags}, pages = {30--33}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176374}, url = {http://www.nime.org/proceedings/2001/nime2001_030.pdf}, keywords = {RFID, resonant tags, EAS tags, musical controller, tangible interface} }
Kenji Mase and Tomoko Yonezawa. 2001. Body , Clothes , Water and Toys : Media Towards Natural Music Expressions with Digital Sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 34–37. http://doi.org/10.5281/zenodo.1176368
Abstract
Download PDF DOI
In this paper, we introduce our research challenges for creating new musical instruments using everyday-life media with intimate interfaces, such as the self-body, clothes, water and stuffed toys. Various sensor technologies including image processing and general touch sensitive devices are employed to exploit these interaction media. The focus of our effort is to provide user-friendly and enjoyable experiences for new music and sound performances. Multimodality of musical instruments is explored in each attempt. The degree of controllability in the performance and the richness of expressions are also discussed for each installation.
@inproceedings{Mase2001, author = {Mase, Kenji and Yonezawa, Tomoko}, title = {Body , Clothes , Water and Toys : Media Towards Natural Music Expressions with Digital Sounds}, pages = {34--37}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176368}, url = {http://www.nime.org/proceedings/2001/nime2001_034.pdf}, keywords = {New interface, music controller, dance, image processing, water interface, stuffed toy} }
Dan Overholt. 2001. The MATRIX : A Novel Controller for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 38–41. http://doi.org/10.5281/zenodo.1176372
Abstract
Download PDF DOI
The MATRIX (Multipurpose Array of Tactile Rods for Interactive eXpression) is a new musical interface for amateurs and professionals alike. It gives users a 3-dimensional tangible interface to control music using their hands, and can be used in conjunction with a traditional musical instrument and a microphone, or as a stand-alone gestural input device. The surface of the MATRIX acts as a real-time interface that can manipulate the parameters of a synthesis engine or effect algorithm in response to a performer’s expressive gestures. One example is to have the rods of the MATRIX control the individual grains of a granular synthesizer, thereby "sonically sculpting" the microstructure of a sound. In this way, the MATRIX provides an intuitive method of manipulating sound with a very high level of real-time control.
@inproceedings{Overholt2001, author = {Overholt, Dan}, title = {The MATRIX : A Novel Controller for Musical Expression}, pages = {38--41}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176372}, url = {http://www.nime.org/proceedings/2001/nime2001_038.pdf}, keywords = {Musical controller, tangible interface, real-time expression, audio synthesis, effects algorithms, signal processing, 3-D interface, sculptable surface} }
Gideon D’Arcangelo. 2001. Creating Contexts of Creativity : Musical Composition with Modular Components. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 42–45. http://doi.org/10.5281/zenodo.1176360
Abstract
Download PDF DOI
This paper describes a series of projects that explore the possibilities of musical expression through the combination of pre-composed, interlocking, modular components. In particular, this paper presents a modular soundtrack recently composed by the author for “Currents of Creativity,” a permanent interactive video wall installation at the Pope John Paul II Cultural Center which is slated to open Easter 2001 in Washington, DC.
@inproceedings{DArcangelo2001, author = {D'Arcangelo, Gideon}, title = {Creating Contexts of Creativity : Musical Composition with Modular Components}, pages = {42--45}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176360}, url = {http://www.nime.org/proceedings/2001/nime2001_042.pdf} }
Sergi Jordà. 2001. New Musical Interfaces and New Music-making Paradigms. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 46–50. http://doi.org/10.5281/zenodo.1176366
Abstract
Download PDF DOI
The conception and design of new musical interfaces is a multidisciplinary area that tightly relates technology and artistic creation. In this paper, the author first presents some of the questions he has posed himself during more than a decade of experience as a performer, composer, interface and software designer, and educator. Finally, he illustrates these topics with some examples of his work.
@inproceedings{Jorda2001, author = {Jord\`{a}, Sergi}, title = {New Musical Interfaces and New Music-making Paradigms}, pages = {46--50}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176366}, url = {http://www.nime.org/proceedings/2001/nime2001_046.pdf} }
Dominic Robson. 2001. PLAY! : Sound Toys For the Non Musical. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 51–53. http://doi.org/10.5281/zenodo.1176376
Abstract
Download PDF DOI
This paper reviews a number of projects that explore building electronic musical things, interfaces and objects designed to be used and enjoyed by anybody, but in particular by those who do not see themselves as naturally musical. Reflecting on the strengths of these projects, the paper considers interesting directions for similar work in the future.
@inproceedings{Robson2001, author = {Robson, Dominic}, title = {PLAY! : Sound Toys For the Non Musical}, pages = {51--53}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176376}, url = {http://www.nime.org/proceedings/2001/nime2001_051.pdf} }
Ryan Ulyate and David Bianciardi. 2001. The Interactive Dance Club : Avoiding Chaos In A Multi Participant Environment. Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 54–56. http://doi.org/10.5281/zenodo.1176378
Abstract
Download PDF DOI
In 1998 we designed enabling technology and a venue concept that allowed several participants to influence a shared musical and visual experience. Our primary goal was to deliver musically coherent and visually satisfying results from several participants’ input. The result, the Interactive Dance Club, ran for four nights at the ACM SIGGRAPH 98 convention in Orlando, Florida. In this paper we will briefly describe the Interactive Dance Club, our "10 Commandments of Interactivity", and what we learned from its premiere at SIGGRAPH 98.
@inproceedings{Ulyate2001, author = {Ulyate, Ryan and Bianciardi, David}, title = {The Interactive Dance Club : Avoiding Chaos In A Multi Participant Environment}, pages = {54--56}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2001}, date = {1-2 April, 2001}, address = {Seattle, WA}, issn = {2220-4806}, doi = {10.5281/zenodo.1176378}, url = {http://www.nime.org/proceedings/2001/nime2001_054.pdf} }
© NIME 2021