- Doug Van Nort, Distributed Listening in Electroacoustic Improvisation, Leonardo Music Journal, 2016.
This article considers the distributed role that listening plays for both performer and audience in discovering musical meaning in electroacoustic improvisation, through an examination of particular emergent practices.
- Doug Van Nort, Marcelo Wanderley and Philippe Depalle, Mapping Control Structures for Sound Synthesis: Functional and Topological Perspectives, Computer Music Journal, 38(3), 6-22, 2014.
This paper contributes a holistic conceptual framework for the notion of "mapping" that extends the classical view of mapping as parameter association. In presenting this holistic approach to mapping techniques, we apply the framework to existing works from the literature as well as to new implementations that consider this approach in their construction. As any mapping control structure for a given digital instrument is determined by the musical context in which it is used, we present musical examples that relate the relatively abstract realm of mapping design to the physically and perceptually grounded notions of control and sonic gesture. In making this connection, mapping can then be more clearly seen as a linkage between physical action and sonic result. In this sense, the purpose of this work is to translate the discussion on mapping so that it links an abstract and formalized approach - intended for representation and conceptualization - with a viewpoint that considers mapping in its role as a perceived correspondence between physical materials (i.e., those that act on controllers and transducers) and sonic events. This correspondence is, at its heart, driven by our cognitive and embodied understanding of the acoustic world.
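As a toy illustration of the functional perspective on mapping described above (the parameter names and scalings here are invented for illustration, not taken from the paper), a many-to-many mapping can be expressed as the composition of two function layers passing through an intermediate abstract parameter layer:

```python
# A hypothetical sketch (not the paper's implementation): mapping as a
# composition of functions through an intermediate abstract layer,
# rather than a direct one-to-one parameter association.

def controller_to_abstract(pressure, position):
    """First layer: map raw control inputs to abstract, perceptually
    motivated parameters ("energy" and "brightness" are invented here)."""
    energy = pressure ** 2                     # nonlinear scaling of effort
    brightness = 0.5 * (position + pressure)   # many-to-one coupling
    return energy, brightness

def abstract_to_synthesis(energy, brightness):
    """Second layer: map abstract parameters to synthesis parameters."""
    amplitude = min(1.0, energy)
    cutoff_hz = 200.0 + 8000.0 * brightness
    mod_index = energy * brightness            # one-to-many fan-out
    return amplitude, cutoff_hz, mod_index

def mapping(pressure, position):
    """The full mapping is the composition of the two layers."""
    return abstract_to_synthesis(*controller_to_abstract(pressure, position))
```

Because each layer is an ordinary function, the intermediate layer can be inspected, swapped or perceptually tuned on its own, which is one way to read the paper's move beyond simple parameter association.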
- Doug Van Nort, Pauline Oliveros and Jonas Braasch, Electro/Acoustic Improvisation and Deeply Listening Machines, Journal of New Music Research, 42(4), pp. 303-324, December 2013.
In this paper we discuss our approach to designing improvising music systems whose intelligence is centered around careful listening, particularly to qualities related to timbre and texture. Our interest lies in systems that can make contextual decisions based on the overall character of the sound field, as well as the specific shape and contour created by each player. We describe the history and paradigm of "expanded instrument" systems, which has led to one instrumental system (GREIS) focused on manual sculpting of sound with machine assistance, and one improvising system (FILTER) which introduces the ability to listen, recognize and transform a performer's sound in a contextually relevant fashion. We describe the different modules of these improvising performance systems, as well as specific musical performances as examples of their use. We also describe our free improvisation trio, in order to describe the musical context that situates and informs our research.
- Doug Van Nort. A Collaborative Approach to Teaching Sound Sculpting, Embodied Listening and the Materiality of Sound. Organised Sound, 18(2), August 2013.
This paper presents recent work in engaging both students and working professionals from a variety of disciplines and backgrounds with the practice of collective and site-specific electroacoustic music creation. The emphasis is placed on embodied, deep listening in tandem with a manual approach to sonic art creation that bridges an understanding of the interplay between digital sound manipulation, larger composed structures, and the physical presentation of a work in a given space. Through a practice-oriented approach, participants gain insights into areas such as the abstract world of digital sound recording and representation, the extreme influence on this content enacted by a given sound delivery system and a given space, and the subjective experience of listening to sounds from a variety of orientations and postures, and with varying levels of understanding of the original source recordings. Finally, through a group approach to composing larger structures, participants begin to understand the often mysterious and unsaid processes involved in the normally solitary act of composing electroacoustic music.
- Doug Van Nort, Jonas Braasch and Pauline Oliveros. Sound Texture Recognition through Dynamical Systems Modeling of Empirical Mode Decomposition. Journal of the Acoustical Society of America, 132(4), pp. 2734-2744, October 2012.
This paper describes a system for modeling, recognizing, and classifying sound textures. The described system translates contemporary approaches from video texture analysis, creating a unique approach in the realm of audio and music. The signal is first represented as a set of mode functions by way of the Empirical Mode Decomposition technique for time/frequency analysis, before expressing the dynamics of these modes as a linear dynamical system (LDS). Both linear and nonlinear techniques are utilized in order to learn the system dynamics, which leads to a successful distinction between unique classes of textures. Five classes of sounds comprised a data set, consisting of crackling fire, typewriter action, rainstorms, carbonated beverages, and crowd applause, drawing on a variety of source recordings. Based on this data set the system achieved a classification accuracy of 90%, which outperformed both a Mel-Frequency Cepstral Coefficient based LDS-modeling approach from the literature, as well as one based on a standard Gaussian Mixture Model classifier.
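The EMD-then-LDS pipeline can be sketched in a highly simplified form (this is not the paper's implementation: the sifting below uses linear envelopes instead of cubic splines, the LDS is reduced to a single least-squares transition matrix, and classification is a nearest-model match; all function names are invented):

```python
import numpy as np

def emd_imfs(x, n_imfs=3, n_sift=8):
    """Toy Empirical Mode Decomposition: sift out intrinsic mode
    functions using linear (np.interp) envelopes of the local extrema."""
    imfs, residual = [], x.astype(float).copy()
    t = np.arange(len(x))
    for _ in range(n_imfs):
        h = residual.copy()
        for _ in range(n_sift):
            maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
            minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
            if len(maxima) < 2 or len(minima) < 2:
                break
            upper = np.interp(t, maxima, h[maxima])
            lower = np.interp(t, minima, h[minima])
            h = h - (upper + lower) / 2.0      # subtract the envelope mean
        imfs.append(h)
        residual = residual - h
    return np.array(imfs)                      # shape (n_imfs, len(x))

def fit_lds(modes):
    """Fit the transition matrix A of a linear dynamical system
    y[t+1] = A @ y[t] to the stacked mode trajectories, via least squares."""
    Y0, Y1 = modes[:, :-1], modes[:, 1:]
    X, *_ = np.linalg.lstsq(Y0.T, Y1.T, rcond=None)
    return X.T

def classify(x, models):
    """Assign x to the class whose learned transition matrix is closest
    (Frobenius norm) to the one fitted on x."""
    A = fit_lds(emd_imfs(x))
    return min(models, key=lambda k: np.linalg.norm(A - models[k]))
```

A real texture recognizer would compare LDS models with a subspace or Martin distance rather than a raw matrix norm, but the sketch shows the overall shape of the approach: decompose into modes, model their joint dynamics, compare dynamics across classes.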
- Doug Van Nort. Human:Machine:Human: Gesture, Sound and Embodiment. Kybernetes, vol. 40, issue 7/8, 2011.
Purpose - The purpose of this paper is to present an embodied view on human/machine co-creation in general, and musical improvisation in particular.
Design/methodology/approach - Questions and propositions are formed by examining personal work in intelligent, interactive system design.
Findings - Proper consideration of gestural representation and intentionality leads to enhanced potential for collective expression in human/machine interaction.
Originality/value - This approach extends ideas of conversation theory to improvisational contexts based on spontaneous, collective expression.
Keywords: Music, Improvisation, Creativity, Gesture, Embodied cognition, Cybernetics. Paper type: Conceptual paper.
- Doug Van Nort. Multidimensional scratching, sound shaping and Triple Point. Leonardo Music Journal, vol. 20, December 2010.
The author discusses performance utilizing his GREIS software system, which is built around the principle of a "scrubbing" interaction with roots in the recording industry and the paradigm of scrubbing tape across a magnetic head.
- Doug Van Nort. Instrumental Listening: sonic gesture as design principle. Organised Sound 14(2):177-187, August 2009.
In the majority of discussions surrounding the design of digital instruments and real-time performance systems, notions such as control and mapping are seen from a classical systems point of view: the former is often seen as a variable from an input device or perhaps some driving signal, while the latter is considered as the liaison between input and output parameters. At the same time there is a large body of research regarding gesture in performance that is concerned with the expressive and communicative nature of musical performance. While these views are certainly central to a conceptual understanding of 'instrument', it can be limiting to consider them a priori as the only proper model, and to mediate one's conception of digital instrument design by fixed notions of control, mapping and gesture. As an example of an alternative way to view instrumental response, control structuring and mapping design, this paper discusses the concept of gesture from the point of view of the perception of human intentionality in sound and how one might consider this in interaction design.
- Doug Van Nort. Noise/music and representation systems. Organised Sound 11(2): 173-178, August 2006.
The word 'noise' has taken on various meanings throughout the course of twentieth-century music. Technology has had direct influence on the presence of noise, as phenomenon and as concept, both through its newfound ubiquity in modernity and through its use directly in music production - in electroacoustics. The creative use of technologies has led to new representation systems for music, and noise - considered as that outside of a given representation - was brought into meaning. This paper examines several moments in which a change in representation brought noise into musical consideration - leading to a 'noise music' for its time before simply becoming understood as music.
- H. McDonough, B. Madore, C. Miller, A. Rogalski, D. Van Nort, J. Wood. Structure Theory for Finitely Generated Carry Groups. Pi Mu Epsilon Journal, vol. 12 no. 1, Fall 2004.
Carry Groups are composed of finite and infinite cyclic groups with a natural operation that involves a carry. They are simple to understand but their structure is not immediately obvious. The authors present a structure theorem for a class of these finitely generated groups, showing their equivalence to certain direct sums of the integers with finite cyclic groups.
- D. Van Nort and P. Depalle. Adaptive Musical Control of Time-Frequency Representation. In Springer Handbook of Systematic Musicology. Springer Verlag, forthcoming February 2018.
- D. Van Nort. Listen to the Inner Complexities. In Jensenius, A. R. & Lyons, M. (Eds.) A NIME Reader: Fifteen Years of New Interfaces for Musical Expression. New York: Springer, 2017.
- J. Braasch, S. Bringsjord, N. Deshpande, P. Oliveros, D. Van Nort. An Intelligent Music System to Perform Different “Shapes of Jazz—To Come”. In Studies in Musical Acoustics and Psychoacoustics. Springer International Publishing, pp. 375-403, 2017.
In this chapter, we describe an intelligent music system approach that utilizes a joint bottom-up/top-down structure. The bottom-up structure is purely signal driven and calculates pitch, loudness, and information rate among other parameters using auditory models that simulate the functions of different parts of the brain. The top-down structure builds on a logic-based reasoning system and an ontology that was developed to reflect rules in jazz practice. Two instances of the agent have been developed to perform traditional and free jazz, and it is shown that the same general structure can be used to improvise different styles of jazz.
- J. Braasch, N. Peters, D. Van Nort, P. Oliveros, C. Chafe. A Spatial Auditory Display for Telematic Music Performances, in Principles and Applications of Spatial Hearing - Proceedings of the First International Workshop on IWPASH, (Y. Suzuki, D. Brungart, Y. Iwaya, K. Iida, D. Cabrera, H. Kato (eds.)) World Scientific Pub Co Inc, ISBN: 9814313874, 436-451, 2011.
- Doug Van Nort. 2 entries: "Mapping" and "Mapping, in Digital Musical Instruments", in A Luciani and C Cadoz (ed.) Enaction and Enactive Interfaces: a Handbook of Terms, Enactive Systems Books, Grenoble, 2007.
- Doug Van Nort, Noise to Signal: Deep Listening and the Windowed Line, in Deep Listening: A Composer's Sound Practice by Pauline Oliveros, iUniverse / Deep Listening Publications 2005.
note: (A) denotes published abstract or extended abstract. Otherwise the listing was published as a full paper.
- Van Nort, D., I. Jarvis and M. Palumbo. Towards a Mappable Database of Emergent Gestural Meaning, in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), 2016.
This paper presents our work towards a database of performance activity that is grounded in an embodied view on meaning creation that crosses sense modalities. Our system design is informed by the philosophical and aesthetic intentions of the laboratory context within which it is designed, focused on distribution of performance activity across temporal and spatial dimensions, and expanded notions of the instrumental system as environmental performative agent. We focus here on design decisions that result from this overarching worldview on digitally-mediated performance.
- Van Nort, D. [radical] signals from life: from muscle sensing to embodied machine listening/learning within a large-scale performance piece, in Proceedings of the 2nd International Workshop on Movement and Computing (MOCO), ACM, 2015.
This paper describes an approach to designing responsive, intelligent sonic interaction in a choreographed dance/media context in which biophysical signals from five dancers are mapped across multiple sense modalities in an immersive black box context. The sound of muscle activity was used as the sole driving signal to generate a musical composition, whose structure was determined by an intelligent agent, designed through an embodied cognitive view on machine listening and learning. This work was undertaken in the context of the [radical] signs of life piece, a collaborative project that involved the author as composer and interactive sound designer, in collaboration with an international team of artists and technologists.
- Van Nort, D. , Distributed Sonic Gesturality and Networks of Improvisation (A), Tuning Speculation II: Auralneirics and imaginary networked futures, Toronto, ON, November 2014.
The negotiation that is collective free improvisation - call it music, noise, sound art, etc. - is bound up in a process of distributing intention and gesturality through a practice of shared listening/sound. This process is marked by perpetual differencing, mimicry and relational modulations of sonic form. The collective attention of a given ensemble is conditioned (among other things) by the felt presence of a given space, the chosen instrumentation and perhaps by shared metaphors or rules of engagement. In our contemporary, post-deep listening context it is uncontroversial to conceive of structuring principles for this listening/sounding dialogue as a form of human algorithm. It likewise is uncontroversial in our post-electroacoustic improv context to regard the electronic system-as-instrument as a living, material agent in a nonlinear network of gestural engagement with other sounding bodies - whether this manifests in the instrumental sonic character of a no-input mixer or circuit-bent casio, or in the network of shared sonic gestural inflections in collectives such as the Evan Parker Electro/Acoustic ensemble, or MEV. In either case, there is a giving over of oneself, a folding into the sonic fabric wherein one's own sense of distinct voice can become lost in a state of productive confusion. In so doing, real, felt immersion is achieved through the collectivity of the engagement as it unfolds and resonates a given acoustic space, perhaps through a shared modulation of a mediating layer of electro/acoustic blending. The act of responding, diverging, augmenting within this fabric is certainly affective and unconscious, but also may be volitional - with actions informed by the trace of one's sonic memory. But what of the collapsing of the directed algorithm and the system-as-instrument that we find when interjecting digital instrumentation into this equation? 
How can the 'perfect' memory of the digital system represent and re-inject these sonic gestural inflections in a manner that is more like palimpsest than total recall, and how might such systems productively engage with the fabric of performance in a way that might extend collective intentionality, in-situ or tele-presently? Conversely, how do the limitations of real-time systems, of spectral representations and of the neatly-defined worlds of psychoacoustics and signal modeling act as barriers between the algorithm's 'lived' episodic memory and the performer's own phenomenology of time consciousness? This paper speculates on these questions through the lens of the author's research and performance practice in distributed agency and electro/acoustic improvisation, following its trajectory through a historicization of instrumental systems, machine improvisation and 'computer network music'.
- Van Nort, D. On the modeling of behavior in machine improvisation (A), in Cognitively Informed Music Information Retrieval Symposium (COGMIR), Toronto, ON, October 2014.
In the FILTER project, I have focused on developing a system that can learn "style" in the fundamental sense of coherent spectro-temporal patterning, and "sonic gesture" in the sense of phrases having an element of directionality. The listening begins with extraction of perceptually salient auditory features and proceeds to learn a variable state-based model - a temporally encoded graph - that captures the flow of musical improvisation. The approach is informed by a decade of practice as an electroacoustic improviser, and as such is focused on capturing the formation of coherent musical streams that are as likely to arise from "noise" and "texture" as from perceived rules of tonality. While the system has seen quantitative and qualitative success, the higher-level behavior element is based on an intuitive interpretation of emergent structure, and uses evolutionary algorithms to this end. This presentation invites discussion on the question of how to develop cognitively-grounded "behavior modules" within such a project, and reports on two current parallel approaches: the modeling of drives and goals through the CLARION cognitive architecture (in collaboration with cognitive scientists at RPI) and temporal encoding of information as inspired by the work of Cariani.
- Navab, N., Van Nort, D., Sha X.W. A Material Computation Perspective on Audio Mosaicing and Gestural Conditioning, Proc. of the International Conference on New Interfaces for Musical Expression 2014 (NIME 14), London, UK. July 2014.
This paper discusses an approach to instrument conception that is based on a careful consideration of the coupling of tactile and sonic gestural action across the layers of physical and computational material in coordinated dynamical variation. To this end we propose a design approach that not only considers the materiality of the instrument, but leverages it as a central part of the conception of the sonic quality, the control structure, and what generally falls under the umbrella of "mapping". This extended computational matter perspective scaffolds a holistic approach to understanding an "instrument" as gestural engagement through physical material, sonic variation, and somatic activity. We present some concrete musical and installation performances that have benefited from this approach to instrument design.
- Van Nort, D. Approaches to Distributed Agency and Shared Musical Meaning in Electroacoustic Improvisation (A), in Electroacoustic Music Studies 2014 (EMS-14) International Conference, Berlin, DE, June 2014.
The world of electroacoustic music presents interesting challenges in regards to its reception by an audience: effectively expressing a given set of musical codes, the importance of spatial representation of musical forms, the choice of venue (e.g. proscenium vs. black box, immersion vs. localized sound sources). Many such challenges are shared by the world of free improvisation, whose "non-idiomatic" nature requires a similar active engagement by the listener in the construction of musical meaning. There is further a "spatial" or distributed nature to this genre in the way that musical gestures, as they are spontaneously constructed, are passed between performers and their meanings reinforced. In the meeting place between these two musical worlds, electroacoustic improvisation (EAI), we find very interesting strategies that have emerged and continue to emerge in regards to the spontaneous evolution of musical codes, in ways that are inclusive of both performer and audience. While the approach of digital instrument-focused musics and NIMEs seeks to introduce intimacy through visual expressions of gesture – a focus on source and the birth of sonic forms – many interesting strands in the world of EAI have been concerned instead with the process of sonic structures forming through the sharing of musical expressions between performers. In this paper I will present examples that illustrate this phenomenon in two distinct categories: those which assume a distributed approach to the construction of compositional structures, and those which share sonic gestural actions as they propagate through shared signals in the moment of performance.
As examples, I will draw upon my own series of genetic orchestra pieces and work in several electroacoustic ensembles (Triple Point, Composers Inside Electronics, telematic performances with the FILTER system), as well as clear examples from the literature: The Hub, AMM, the Evan Parker Electro-Acoustic Ensemble and contemporary "laptop orchestra" practices. Through this exposition my intention is to articulate a set of practices that have arisen uniquely in the domain of EAI and which have proven effective in developing shared, emergent musical structures. In this process, I further hope to collectively speculate on the ways in which audiences are or are not invited into this shared construction of meaning as active participants.
- Braasch, J., Van Nort, D., Oliveros, P., Krueger, T. Telehaptic interfaces for interpersonal communication within a music ensemble, J. Acoust. Soc. Am. 133, 3256, 2013.
Visual communication is an important aspect of music performance, for example, to pick up temporal cues and find the right entries. Visual cues can also be instrumental to negotiate the solo order in improvised music or enable social exchange, for example, by signaling someone that her solo was well received. The problem with visual communication is that one has to catch someone else's attention, and visual cues outside someone's visual field cannot be detected, even more so if the addressee is busy reading a music score or closing his eyes in a Free Music session. Acoustic communication does not encounter these challenges, but of course someone does not want to disturb the music with other acoustic signals. The haptic modality has the advantage that it does not necessarily interfere with the acoustic signal and does not require attention. However, it only allows interpersonal communication if both parties are within close proximity. Using telematic interfaces solves the problem of proximity by allowing participants to communicate over any physical distance. In the project presented here, haptic interfaces were explored in connection with an intelligent music system, CAIRA, to examine both the effect of human/machine and inter-human communication. [Work supported by the National Science Foundation, No. 1002851.]
- Jonas Braasch, Doug Van Nort, Pauline Oliveros, Selmer Bringsjord, Naveen Sundar Govindarajulu, Colin Kuebler, Anthony Parks. A creative artificially-intuitive and reasoning agent in the context of live music improvisation. Creativity at the Intersection of Music and Computation: Music, Minds and Invention Workshop. Ewing, NJ, 2012.
This paper reports on the architecture and performance of a creative artificially-intuitive and reasoning agent (CAIRA) as an improviser and conductor for improvised avant-garde music. The agent's listening skills are based on a music recognition system that simulates the human auditory periphery to perform an Auditory Scene Analysis (ASA). Its simulation of cognitive processes includes a cognitive calculus for reasoning and decision-making using logic-based reasoning. The agent is evaluated in live sessions with music ensembles.
- Doug Van Nort, Jonas Braasch and Pauline Oliveros. Mapping to Musical Action in the FILTER System. Proc. of the International Conference on New Interfaces for Musical Expression 2012 (NIME 12), Ann Arbor, MI, May 2012.
In this paper we discuss aspects of our work in developing performance systems that are geared towards human-machine co-performance with a particular emphasis on improvisation. We present one particular system, FILTER, which was created by the first author and tested by the three authors in the context of their electro-acoustic performance trio. We discuss how this timbrally rich and highly non-idiomatic musical context has challenged the design of the system, with particular emphasis on the mapping of machine listening parameters to higher level behaviors of the system in such a way that spontaneity and creativity are encouraged while maintaining a sense of novel dialogue.
- Egloff, D., Braasch, J., Robinson, P., Van Nort, D., Krueger, T. A vibrotactile music system based on sensory substitution (A), J. Acoust. Soc. Am. 129, 2582, 2011.
The idea of the project reported here was to design a system that builds on touch to enable people with severe hearing impairments to "listen" to music through a process called sensory substitution. The goal was to transform the auditory parameter space into one that is adequate for haptic perception. The approach reported here builds on (i) the design of a haptic display, a tabletop device with 8-24 actuators that can be driven individually, (ii) machine learning algorithms, and (iii) a psychophysical study to determine which music cues can be perceived through touch. The latter was necessary because vibrotactile perception is not yet well understood in the context of music perception. The double-blind study analyzes how vibrotactile stimuli contribute to the perception, cognition, and distinction of sounds in human participants who have been trained versus those who have not. In order to ensure that normal-hearing participants could not hear sounds radiated from the haptic display, sound-isolating headphones were used to play back pink noise during the experiment.
- S. Bringsjord, C. Kuebler, J. Taylor, G. Milsap, S. Austin, J. Braasch, P. Oliveros, D. Van Nort, A. Rosenkrantz, K. Hayden. Creativity and Conducting: Handle in the CAIRA Project, 8th ACM Conference on Creativity and Cognition (C&C 2011).
After providing some context via (i) earlier work on literary creativity carried out by Bringsjord et al., and (ii) an account of creativity espoused by Cope, which stands in rather direct opposition to Bringsjord's account, we summarize our nascent attempt to engineer an artificial conductor: Handle. Handle is a microcosmic version of part of a larger, much more ambitious system: CAIRA.
- J. Braasch, D. Van Nort, S. Bringsjord, P. Oliveros, A. Parks, C. Kuebler. CAIRA - a Creative Artificially-Intuitive and Reasoning Agent as conductor of telematic music improvisations, Proc. 131st AES Convention 2011, Oct. 20-23, New York, USA.
This paper reports on the architecture and performance of the Creative Artificially-Intuitive and Reasoning Agent Caira as a conductor for improvised avant-garde music. Caira's listening skills are based on a music recognition system that simulates the human auditory periphery to perform an Auditory Scene Analysis (ASA). Its simulation of cognitive processes includes a comprehensive cognitive calculus for reasoning and decision-making using logic-based reasoning. Caira is used as conductor for live music performances with distributed ensembles, where the musicians are connected via the internet. Caira uses a visual score and directs the ensemble members based on tension arc estimations for the individual music performers.
- Doug Van Nort, Jonas Braasch and Pauline Oliveros, Sound Texture Analysis based on a Dynamical Systems Model and Empirical Mode Decomposition, Proceedings of the 129th Convention of the Audio Engineering Society, San Francisco, CA, November 2010.
This paper describes a system for separating a musical stream into sections having different textural qualities. This system translates several contemporary approaches to video texture analysis, creating a novel approach in the realm of audio and music. We first represent the signal as a set of mode functions by way of the Empirical Mode Decomposition (EMD) technique for time/frequency analysis, before expressing the dynamics of these modes as a linear dynamical system (LDS). We utilize both linear and nonlinear techniques in order to learn the system dynamics, which leads to a successful separation of the audio in time and frequency.
- Doug Van Nort, Extending the acoustic ensemble through spectral and temporal transformations in real-time (A), Journal of the Acoustical Society of America (JASA), Spring 2010.
The paradigm of live performance mixing acoustics and electronics has predominantly focused on simple background "tape music", human players performing highly structured sample-based music (e.g., using the ABLETON LIVE software), or reactive systems that respond to player qualities such as timing, pitch, and so on. In this talk I will present my approach to improvised "laptop performance" that focuses on the transformation of acoustic players in real-time. Rather than simply altering the acoustic content in the manner of an effect processor, the goal is to capture notes and phrases in short-term memory and to re-articulate the material so that it presents a new gestural inflection and timbral content that can be completely novel or suggestive of other players' sound. The system presented utilizes a hybrid approach combining spectral analysis and feature extraction with block-based temporal processing and a feedback delay network. The interaction paradigm of "scrubbing" the intermediate time-frequency representation is used to generate the final output. The result in an ensemble context is an extended palette that can "keep up" with the musical dialog while eliciting the subtle textural qualities of acoustic players.
- Jonas Braasch and Doug Van Nort, Instrumental analysis of extended saxophone techniques for live electronics (A), Journal of the Acoustical Society of America (JASA), Spring 2010.
The development of automated music transcription systems focuses predominantly on polyphonic musical instruments. At the same time, the analysis of a monophonic instrument is usually much simpler, since pitch, loudness, and duration of individual notes may be tracked robustly. When using extended techniques, however, many more parameters than the aforementioned three can be meaningful for the performed music. This paper explores the challenges that extended techniques pose for music recognition systems using the example of the saxophone. The goal is to correctly identify extended techniques over the whole range of the instrument, including subtones, multiphonics, growl, and other voice-enhanced tones, as well as tones where the reed is supported by the lower teeth. The feature analysis is based on cepstrum, spectral moments, pitch, and roughness, among other features. A hidden Markov model is used to recognize the trajectory of the various extended techniques based on the given feature space. Finally, it is demonstrated how the recognizer can be integrated into an intelligent live electronics system to control its parameters. For example, the characteristics of a virtual acoustic enclosure (room size, reverberation time, etc.) can be adapted this way.
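The hidden Markov model recognition step described above can be sketched with a toy Viterbi decoder over quantized feature symbols (the states, symbols and probabilities below are invented for illustration; the paper's actual feature space and models are richer):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state path for a discrete-emission HMM, in the log
    domain. obs: observation symbol indices; pi: initial state probs;
    A[i, j]: transition prob i->j; B[state, symbol]: emission probs."""
    pi, A, B = map(np.asarray, (pi, A, B))
    n_states, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])     # initial log scores
    back = np.zeros((T, n_states), dtype=int)    # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)       # scores[i, j]: i -> j
        back[t] = np.argmax(scores, axis=0)      # best predecessor of j
        logd = scores[back[t], np.arange(n_states)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd))]                # backtrack from the end
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

In a recognizer along these lines, states 0, 1, 2 might stand for technique classes (say, subtone, multiphonic, growl), with observations obtained by vector-quantizing frames of the cepstral/roughness feature space; sticky self-transitions then smooth the decoded technique trajectory over time.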
- Doug Van Nort, Pauline Oliveros and Jonas Braasch, Developing Systems for Improvisation based on Listening, Proc. of the 2010 International Computer Music Conference (ICMC 2010), New York, NY, June 1-5, 2010. Winner: ICMA "Best Paper Award" for the 2010 ICMC Conference.
In this paper we discuss our approach to designing improvising music systems whose intelligence is centered around careful listening, particularly to qualities related to timbre and texture. Our interest lies in systems that can make contextual decisions based on the overall character of the entire sound field, as well as the specific shape and contour created by each player. We describe the paradigm of "expanded instrument" systems that we in turn build upon, endowing these with the ability to listen, recognize and transform a performer's sound in a contextually relevant fashion. This musical context is defined by our free improvisation trio, which we briefly describe. The different modules of our current system are described, including a new tool for real-time sound analysis and a method for sonic texture recognition.
- Jonas Braasch, Chris Chafe, Pauline Oliveros and Doug Van Nort, Mixing Console Design Considerations for Telematic Music Applications, Proceedings of the 127th Convention of the Audio Engineering Society, New York, NY, October 9-12, 2009.
This paper describes the architecture for a new mixing console that was especially designed for telematic live-music collaborations. The prototype mixer is software-based and programmed in Max/MSP. It has many traditional features but also a number of extra modules that are important for telematic projects: latency meter, remote data link, auralization unit, remote sound level calibration unit, remote monitoring, and a synchronized remote audio-recording unit.
- Doug Van Nort, Jonas Braasch and Pauline Oliveros, A System for Musical Improvisation Combining Sonic Gesture Recognition and Genetic Algorithms, Proc. of the 2009 International Conference of Sound and Music Computing (SMC 09), Porto, Portugal, July 23-25, 2009.
This paper describes a novel system that combines machine listening with evolutionary algorithms. The focus is on free improvisation, wherein the interaction between player, sound recognition and the evolutionary process provides an overall framework that guides the improvisation. The project is also distinguished by the close attention paid to the nature of the sound features, and the influence of their dynamics on the resultant sound output. The particular features for sound analysis were chosen in order to focus on timbral and textural sound elements, while the notion of "sonic gesture" is used as a framework for the note-level recognition of a performer's sound output, using a hidden Markov model-based approach. The paper discusses the design of the system and the underlying musical philosophy that led to its construction, as well as the boundary between system and composition, citing a recent composition as an example application.
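The evolutionary side of such a system can be illustrated with a minimal genetic algorithm that evolves synthesis-parameter vectors toward a target feature vector. This is a generic sketch of the technique, not the paper's implementation; the fitness function, parameter ranges and all numbers here are hypothetical.

```python
import random

random.seed(0)
TARGET = [0.2, 0.8, 0.5]  # hypothetical desired timbral feature values

def fitness(genome):
    # negative squared distance to the target: higher is better
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.3, scale=0.1):
    # perturb some genes, clipping to the [0, 1] parameter range
    return [min(1.0, max(0.0, g + random.uniform(-scale, scale)))
            if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    point = random.randrange(1, len(a))  # single-point crossover
    return a[:point] + b[point:]

def evolve(pop_size=20, generations=60):
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]   # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(v, 2) for v in best])  # converges toward the target vector
```

In the musical setting described above, the target would itself be moving, derived from the recognized sonic gestures of the live performer rather than fixed in advance.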
- Doug Van Nort, Creating Systems for Collaborative, Network-Based Digital Music Performance (A), J. Acoust. Soc. Am., 124(4):2489, 2008.
The internet has proven to be an important catalyst in bringing together musicians for remote collaboration and performance. Existing technologies for network audio streaming possess varying degrees of technological transparency with regard to allowable bandwidth, latency, and software interface constraints, among other factors. In another realm of digital audio, the performance of "laptop music" presents a set of challenges in regards to human-computer and inter-performer interaction - particularly in the context of improvisation. This paper discusses the limitations as well as the newfound freedoms that can arise in the construction of musical performance systems that merge the paradigms of laptop music and network music. Several varied creative solutions are presented from personal work created in the past several years, which consider the meaning of digital music collaboration, the experience of sound-making in remote physical spaces and the challenge of improvising across time and space with limited visual feedback. These examples include shared audio processing over high-speed networks, shared control of locally-generated sound synthesis, working with artifacts in low-bandwidth audio chat clients and the use of evolutionary algorithms to guide group improvisations.
- Doug Van Nort, David Gauthier, Sha Xin Wei and Marcelo Wanderley, Extraction of Gestural Meaning from a Fabric-Based Controller, in Proc. of the International Computer Music Conference 2007 (ICMC-07), Copenhagen, Denmark, August 2007.
This paper presents an approach to the analysis of gestural data and extraction of related features from a cloth-based instrument. Issues surrounding the meaning of gesture and intentionality in such a performance environment are discussed, and we present a solution to analyzing and extracting information in a way that leverages the inherent quality of the cloth-as-controller. Other factors are considered in the system design, including the performance context in which the goal is to elicit improvised play from participants who do not possess an a priori model of interaction or vocabulary of acceptable gestural input.
- Doug Van Nort, Texture Perception: Signal Modeling and Compositional Approaches (A), in Proc. of the 2007 Conference of the Society for Music Perception and Cognition (SMPC-07), Montreal, QC, August 2007.
There has been a considerable amount of research in the area of timbre perception, and specifically attempts at finding acoustic correlates that generate, influence or otherwise contribute to the various related perceptual attributes. Much attention has been paid in the literature to certain acoustic properties that have proven to be perceptually relevant, as has been shown through a combination of user studies, signal analysis/synthesis and various mathematical techniques - most notably multidimensional scaling. Contrary to this, the subjective notion of "texture" has seen little attention in the area of sound perception. A primary difficulty in dealing with sound texture is that it is not immediately clear how one defines it and further develops classification schemes for comparison and grouping. While timbre may not be ordinal, it has a clear perceptual structure that can lead to groupings and associations such as instrumental families and acoustical source properties. In a musical setting, I would argue, texture relates more to one's phenomenological experience of a sound result. In considering sonic qualities, texture can be seen to differ from timbre in terms of degree of separability: timbre being perceived as a unified sound object and texture perception relating to micro-variations in spectro-temporal sound properties over larger time scales. Coming from a perceptual signal processing point of view, I will present different models for texture analysis and synthesis that have been proposed in the literature. Rather than focusing on the deeper mathematics of such models, I will present the underlying assumptions and concepts dealing with questions of determinism vs. stochasticity, spectral vs. temporal models, time scales, etc. and how they relate to implicit models of perception. 
I will further discuss the manner in which texture can be related to both timbre and melodic intervals (differing in regards to degree of temporal separation), and will conclude with suggestions for how texture might be considered from the perspective of electroacoustic composition in constructing a sense of movement, density and tension. Existing examples from the EA literature will be employed to emphasize this use of texture in composition, and an analysis will be given that links with the associated discussion of perceptual/signal modeling.
- Doug Van Nort and Marcelo Wanderley, Control Strategies for Navigation of Complex Sonic Spaces, in Proc. of the International Conference on New Interfaces for Musical Expression 2007 (NIME-07), New York, NY, June 2007.
This paper describes musical experiments aimed at designing control structures for navigating complex and continuous sonic spaces. The focus is on sound processing techniques which contain a high number of control parameters, and which exhibit subtle and interesting micro-variations and textural qualities when controlled properly. The examples all use a simple low-dimensional controller - a standard graphics tablet - and the task of intimate and subtle textural manipulation is left to the design of proper mappings, created using a custom toolbox of mapping functions. This work further acts to contextualize past theoretical results through the given musical presentations, and arrives at some conclusions about the interplay between musical intention, control strategies and the process of their design.
- Doug Van Nort and Marcelo Wanderley, The LoM Mapping Toolbox for Max/MSP/Jitter, Proc. of the 2006 International Computer Music Conference (ICMC 06), New Orleans, LA, November 2006.
This paper presents the Library of Maps toolbox to aid in the mapping of control parameters to sound synthesis parameters via strategies that result from a geometric representation of control. A set of objects have been created for Max/MSP and Jitter that allow the user to map arbitrary high-dimensional data from control to sound parameter space, and to visualize this through the use of Jitter and OpenGL. The mapping implementations are discussed and related to existing work.
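One family of geometric mapping strategies that such a toolbox can support is inverse-distance-weighted interpolation between preset points: a low-dimensional control position is mapped to a weighted blend of high-dimensional synthesis parameter sets. The sketch below is a generic illustration of that idea with hypothetical presets; it is not the Library of Maps code itself.

```python
import math

PRESETS = {
    # control-space anchor (x, y) -> synthesis parameter vector
    (0.0, 0.0): [100.0, 0.1, 0.0],   # e.g. [freq, mod index, noise amount]
    (1.0, 0.0): [440.0, 0.5, 0.2],
    (0.0, 1.0): [220.0, 0.9, 0.8],
    (1.0, 1.0): [880.0, 0.3, 1.0],
}

def map_control(pos, power=2.0):
    """Blend preset parameter vectors by inverse distance from `pos` to each anchor."""
    weighted, blended = [], [0.0] * 3
    for anchor, params in PRESETS.items():
        d = math.dist(pos, anchor)
        if d == 0.0:
            return list(params)  # exactly on an anchor: return that preset
        weighted.append((1.0 / d ** power, params))
    total = sum(w for w, _ in weighted)
    for w, params in weighted:
        for i, p in enumerate(params):
            blended[i] += (w / total) * p
    return blended

print(map_control((0.0, 0.0)))  # on an anchor: returns that preset exactly
print(map_control((0.5, 0.5)))  # equidistant: equal-weight average of all presets
```

The `power` exponent controls how sharply the blend favors the nearest preset, which is one place where the geometric representation directly shapes the feel of the resulting instrument.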
- Doug Van Nort, Le Mappings Geometrique et Trajectoires Musicale (A), in L'interdisciplinarite dans les sciences et technologies de la musique colloquium, part of La Reunion 2006 de l'Association Francophone pour le Savoir (ACFAS), Montreal, QC, May 17, 2006.
It has been shown that the association of a controller's output data with sound parameters - in other words, the "mapping" - has an important effect on the feedback produced by an electronic instrument. This presentation examines the role of mapping as a determining factor in the musical expression achievable with such instruments. Several aspects of the mapping between real-time control parameters and sound synthesis parameters are addressed, notably the notions of interpolation and functional representation. More specifically, the geometric interpretation of real-time control is studied. Several implementations are presented, along with an analysis of the benefits and drawbacks of each case. Finally, I present the results of a study comparing different interpolation-based mappings, the criteria of comparison being their effect on musical expression, the ability to navigate the sound space, reproducibility and their potential for use with visual feedback.
- Doug Van Nort and Philippe Depalle, A Stochastic State-Space Phase Vocoder for Synthesis of Roughness, Proc. of the 2006 International Conference on Digital Audio Effects (DAFx 06), Montreal, QC, September 2006.
This paper presents an implementation of the phase vocoder within a Gaussian state-space framework. Rather than formulate the problem as a deterministic evolution of frequencies centered around a given bin, this evolution is treated stochastically by introducing noise into the dynamics matrix of the recursive state equation. This produces effects on the roughness of the input sound, which vary depending on the position within the matrix where the noise is added, how it is propagated throughout the matrix and further by the variance of the noise input.
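The core idea can be sketched for a single bin: a phase vocoder bin's evolution is modeled as a two-dimensional rotation in state space, x[n+1] = A x[n], and perturbing the entries of the dynamics matrix A with noise at each step makes the sinusoid's trajectory stochastic, which is heard as roughness. The parameter values and the unit-norm renormalization below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
sr, freq, n_samples = 8000, 440.0, 1000
omega = 2 * np.pi * freq / sr  # per-sample rotation angle for this bin

def synthesize(noise_var=0.0):
    """One sinusoidal partial via a recursive state equation x[n+1] = A x[n]."""
    c, s = np.cos(omega), np.sin(omega)
    x = np.array([1.0, 0.0])  # initial state (unit amplitude, zero phase)
    out = np.empty(n_samples)
    for n in range(n_samples):
        A = np.array([[c, -s], [s, c]])  # deterministic rotation matrix
        # stochastic dynamics: noise injected into the matrix entries
        A += rng.normal(0.0, np.sqrt(noise_var), size=(2, 2))
        x = A @ x
        x /= max(np.linalg.norm(x), 1e-12)  # keep the state on the unit circle
        out[n] = x[0]
    return out

clean = synthesize(0.0)   # pure sinusoid: deterministic frequency evolution
rough = synthesize(1e-4)  # noisy dynamics matrix -> audible roughness
```

Where the noise enters the matrix, how it propagates, and its variance are the handles the paper identifies for shaping the resulting roughness; this sketch only varies the variance, applied uniformly to all entries.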
- Doug Van Nort and Marcelo Wanderley, Exploring the Effect of Mapping Trajectories on Musical Performance, in the International Conference of Sound and Music Computing (SMC 06), Marseille, France, May 18-20, 2006.
The role of mapping as a determinant of expressivity is examined. Issues surrounding the mapping of real-time control parameters to sound synthesis parameters are discussed, including several representations of the problem. Finally, a study is presented which examines the effect of mapping on musical expressivity, on the ability to navigate and explore a sonic space, and on visual feedback.
- Doug Van Nort, The Contemporary Production of Noise and the Role of the System (A), in Electroacoustic Music Studies 2005 (EMS-05) International Conference, Universite de Montreal, Montreal, QC, October 19-22, 2005.
The term "noise" has taken on disparate meanings in its use as sonic referent. It is wholly subjective and context dependent, and yet noise as concept has been a recurrent thread in the development of experimental musics of the past 100 years in general and electroacoustic music in particular. Without attempting to define noise in absolute terms, this paper will examine its conceptualization in certain periods of 20th century music, and relate this to a more current trend that one might consider as a contemporary production of noise: the subversion of a musical system and the (mis)use of technology to achieve this.
- Stacy Denton and Doug Van Nort, Music, noise and the (de)socialization of sound (A), in In and Out of the Sound Studio Conference, Concordia University, Montreal, QC, July 25-29, 2005.
From before the time we are born, we are exposed to the infinite possibilities of our sound environment. Through our interaction with society at large, we begin to experience sound not as itself but as an object given meaning through a dominant representation that we call music. As a result, an analytic mode of listening is privileged over a holistic one, producing a hierarchy wherein different "levels" of listening exist. Not only do certain sounds and relations to sound get relegated to the background, but listening identities that do not rely on analysis become marginalized as well. This notion of "listening identity" refers here to one's unique way of listening that is informed by their larger social identity. We maintain that this privileged analytic mode of listening reflects a bourgeois ideal of individualism in contrast to holistic listening, which can be seen as contextual and relational. For the purpose of this presentation we will be focusing on gendered and classed identities as they relate to music, sound and modes of listening. One way to de-emphasize the aforementioned standard for interacting with a musical work is to introduce noise as an element of composition. Noise is not (and can not be) directly defined here, as it refers to a subjective quality that is rooted in one's process of listening. Nevertheless, as there exists a dominant musical representation there also exists an outside: one that cannot be understood through the symbols that define time and frequency (pitch) relationships. In the context of electroacoustic music, this noise is the de-structuring of the tools and systems of production themselves - the sound that results from the misuse of technology. Not only does this subvert the accepted means to a musical end, but it is subversive in that its reception does not rely on one established sonic viewpoint. 
Within this space of noise as music, the individual is allowed to navigate their own listening identity - be it analytic, holistic or otherwise - and create their own meaning that is not solely determined by dominant ideology. In this presentation we will address issues surrounding the role that noise can play in the formation of alternative listening identities, specifically in the context of electroacoustic music practice.
- Doug Van Nort, Marcelo M. Wanderley and Philippe Depalle. On the Choice of Mappings based on Geometric Properties. Proc. of the 2004 International Conference on New Interfaces for Musical Expression (NIME 04), Hamamatsu, Japan, June 3-5, 2004.
The choice of mapping strategies to effectively map controller variables to sound synthesis algorithms is examined. Drawing from underlying mathematical theory, this paper seeks to establish a framework through which these strategies can be compared, with the goal of achieving an appropriate match between mapping and musical performance context. This method of comparison is applied to existing work, while a suggestion is offered on how to integrate and extend this work. Specifically, existing implementations which give control and synthesis parameter spaces a geometric representation are the focus.