My research proposes a new pathway for the use of music in the development of human-computer interfaces. Because of its demonstrated capacity to both communicate and modulate affective and physiological states, music has unique potential as a novel information channel between humans and computers, and between humans whose communication is computer-mediated. I create brain-body music technologies that translate between neurophysiological signals (e.g., electroencephalography (EEG), electrocardiography (ECG), electrodermal activity (EDA), respiration) and musical sound in real time, allowing emotion to be communicated between humans and computers. I develop the scientific methods needed to analyze physiological data, to measure the therapeutic mechanisms behind beneficial brain-body music entrainment, and to optimize the translation of physiological data into music for therapeutic ends. In my experiments, I engineer musical sounds to influence the rhythms that drive physiological processes in the brain, heart, and autonomic nervous system; in my music, I explore the expressive potential of sonifying (converting into sound) the physiological signals that accompany affective communication.
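As a minimal sketch of what sonifying a physiological signal can mean in practice, the snippet below linearly maps each sample of a (simulated) slow physiological trace onto a pitch in a fixed frequency range. The function name, frequency range, and the use of a sine wave as a stand-in for a respiration trace are all illustrative assumptions, not the actual mapping used in this research.

```python
import math

def sonify(signal, lo_hz=220.0, hi_hz=880.0):
    """Illustrative parameter mapping: scale each sample of a physiological
    signal into a pitch (in Hz) between lo_hz and hi_hz. A real system would
    drive a synthesizer with these values in real time."""
    s_min, s_max = min(signal), max(signal)
    span = (s_max - s_min) or 1.0  # guard against a flat signal
    return [lo_hz + (x - s_min) / span * (hi_hz - lo_hz) for x in signal]

# A slow sine oscillation standing in for a respiration trace
# (one breath cycle every 16 samples).
breath = [math.sin(2 * math.pi * t / 16) for t in range(32)]
pitches = sonify(breath)
```

In this toy mapping the troughs of the "breath" land on 220 Hz and the peaks on 880 Hz, so the rise and fall of the signal is heard directly as rising and falling pitch; richer mappings could target timbre, rhythm, or harmony instead.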
Human-centered computing, human-computer interfaces, electronic music, algorithmic composition, audio engineering, music perception and cognition, EEG, digital signal processing