A9 Generating music with resting-state fMRI data
Caroline Froehlich, Gil Dekel, Daniel S. Margulies, R. Cameron Craddock
Computational Neuroimaging Lab, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Center for the Developing Brain, Child Mind Institute, New York, NY, USA; City University of New York-Hunter College, New York, NY, USA; Max Planck Research Group for Neuroanatomy & Connectivity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Correspondence: Caroline Froehlich (cfrohlich@nki.rfmh.org) – Computational Neuroimaging Lab, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA
GigaScience 2016, 5(Suppl 1):A9


Introduction
Resting-state fMRI (rsfMRI) data generate time courses with unpredictable hills and valleys. People with musical training may notice that, to some degree, these time courses resemble the notes of a musical scale. Taking advantage of this similarity, and using only rsfMRI data as input, we use basic rules of music theory to transform the data into musical form. Our project is implemented in Python using the midiutil library [https://code.google.com/p/midiutil/].
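To make the output stage concrete, here is a minimal sketch of writing notes with midiutil; the pitches, durations, and volumes below are placeholder values, not the mappings described under Approach:

```python
# Minimal midiutil sketch: write a short sequence of notes to a MIDI file.
from midiutil.MidiFile import MIDIFile

midi = MIDIFile(1)                 # a single track
track, channel, time = 0, 0, 0
midi.addTempo(track, time, 120)    # 120 beats per minute

# Placeholder (pitch, duration in beats, volume) triples.
for pitch, duration, volume in [(60, 1.0, 100), (63, 0.5, 50), (67, 2.0, 100)]:
    midi.addNote(track, channel, pitch, time, duration, volume)
    time += duration

with open("example.mid", "wb") as f:
    midi.writeFile(f)
```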

Approach
Data
We used open rsfMRI data from the ABIDE dataset [1], preprocessed by the Preprocessed Connectomes Project [2]. We randomly chose 10 individual datasets preprocessed with the C-PAC pipeline [3] under 4 different strategies. To reduce the dimensionality of the data, we used the CC200 atlas [4] to downsample the voxel data to 200 regions of interest (ROIs).

Processing
The 200 fMRI time courses were analyzed to extract pitch, tempo, and volume: three attributes important for generating music. For pitch, we mapped time course amplitudes to Musical Instrument Digital Interface (MIDI) values in the range 36 to 84, corresponding to piano keys within a pentatonic scale. The key of the scale was determined from the global mean ROI value (calculated across all timepoints and ROIs) using the equation (global mean % 49) + 36, where 49 is the number of keys in the range 36 to 84. The lowest tone that can be played in a given key was calculated as (key % 12) + 36, and the set of playable tones was then built up from the lowest tone using a scale. For example, the minor-pentatonic set of tones was calculated by adding 0, 3, 5, 7, or 10 to the lowest tone, skipping to the next octave, and repeating the process until the value 84 was reached. An fMRI time course was mapped onto these tones by rescaling its amplitude to the range between the smallest and largest tones in the set; if a time point mapped to a tone not in the set, it was shifted to the closest allowable tone (see the pitch-mapping sketch below). An example of an allowed set of tones is shown in Fig. 8.

For tempo, we used the first temporal derivative to set the length of each note, allowing four lengths (whole, half, quarter, and eighth note). If the absolute difference between time points t and t + 1 was large, we interpreted it as a fast note (eighth); if the difference was close to zero, we interpreted it as a slow note (whole). All other notes were mapped in between. For volume, we used a simple rule that addresses a problem we had with fast notes, whose sound is cut off by their short duration: the faster the note, the lower the volume. On a 0 to 100 scale, a whole note has volume 100 while an eighth note has volume 50.

Finally, we selected the brain regions that would play. Listeners found the result unpleasant when two strongly correlated regions played together, since the brain then effectively produces the same music twice; when the regions are distinct, the music is more pleasant. We therefore used FastICA [5] to choose brain regions with maximally uncorrelated time courses (see the region-selection sketch below).

Results
A framework for generating music from fMRI data, based on music theory, was developed and implemented as a Python tool yielding several audio files. Listening to the results, we noticed that the music differed across individual datasets, whereas music generated from the same individual under the 4 preprocessing strategies remained similar. Our results sound different from the music obtained in a similar study using EEG and fMRI data [6].

Conclusions
In this experiment, we established a way of generating music from open fMRI data following basic music theory principles. This resulted in a somewhat naïve but pleasant musical experience. Our results also demonstrate an interesting possibility for providing auditory feedback from fMRI activity in neurofeedback experiments.
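The pitch, tempo, and volume mappings above can be sketched as follows. This is a simplified reconstruction: the function names, the binning of derivative values into four levels, and the two intermediate volume levels are our assumptions, since the text only fixes the endpoints (whole note = 100, eighth note = 50).

```python
import numpy as np

LOW, HIGH = 36, 84                    # playable MIDI piano keys
PENTATONIC_MINOR = (0, 3, 5, 7, 10)   # semitone offsets within one octave

def allowed_tones(global_mean):
    """Build the set of playable tones from the global mean ROI value."""
    key = (int(global_mean) % 49) + LOW   # one of the 49 keys in [36, 84]
    lowest = (key % 12) + LOW             # lowest playable tone for that key
    tones, octave = [], lowest
    while octave <= HIGH:
        tones += [octave + o for o in PENTATONIC_MINOR if octave + o <= HIGH]
        octave += 12                      # skip to the next octave
    return np.array(tones)

def to_pitches(timecourse, tones):
    """Rescale a time course to [min(tones), max(tones)], then snap each
    time point to the closest allowed tone."""
    x = np.asarray(timecourse, dtype=float)
    scaled = np.interp(x, (x.min(), x.max()), (tones.min(), tones.max()))
    return tones[np.abs(tones[:, None] - scaled[None, :]).argmin(axis=0)]

DURATIONS = np.array([4.0, 2.0, 1.0, 0.5])     # whole, half, quarter, eighth (beats)
VOLUMES = np.linspace(100, 50, 4).astype(int)  # slower notes play louder

def to_durations_volumes(timecourse):
    """Large |first derivative| -> fast, quiet note; near zero -> slow, loud note."""
    d = np.abs(np.diff(np.asarray(timecourse, dtype=float)))
    edges = np.linspace(d.min(), d.max(), 5)[1:-1]  # 3 inner edges -> 4 bins
    bins = np.digitize(d, edges)                    # 0 = slowest ... 3 = fastest
    return DURATIONS[bins], VOLUMES[bins]
```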
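Region selection can likewise be sketched with scikit-learn's FastICA. Keeping the ROI that loads most strongly on each independent component is our own simplification of "maximally uncorrelated"; the tool's exact criterion may differ:

```python
import numpy as np
from sklearn.decomposition import FastICA

def pick_regions(roi_timecourses, n_regions=5):
    """roi_timecourses: array of shape (timepoints, n_rois), e.g. (T, 200).
    Returns the indices of the ROIs chosen to play."""
    ica = FastICA(n_components=n_regions, random_state=0)
    ica.fit(roi_timecourses)
    # mixing_ has shape (n_rois, n_components): one column per independent
    # component; keep the ROI with the largest loading on each component.
    return np.abs(ica.mixing_).argmax(axis=0)
```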

Availability of Supporting Data
More information about this project can be found at: https://github.com/carolFrohlich/brain-orchestra

Introduction
Nitime is a Python-based software package for performing time-series analysis on neuroscience data. Implementation of useful time-series features in Python, and their potential integration with Nitime, would not only facilitate their use by the neuroscience community, but also their maintenance and development within an open-source framework.

Approach
An illustration of the approach is shown in Fig. 9. Each time series is converted to a vector of thousands of informative features using the hctsa package; machine-learning methods can then be used to determine the most useful features (e.g., those that best discriminate patient groups, and where in the brain the best discrimination occurs).
In this project, we wanted to demonstrate a feasible pathway for incorporating these useful features into the Nitime package.
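A schematic of this feature-based pipeline in Python might look as follows; the three feature functions are toy stand-ins for the thousands that hctsa provides, the data are synthetic, and the classifier choice is ours:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy features standing in for the hctsa feature library.
FEATURES = {
    "mean": np.mean,
    "std": np.std,
    "mean_abs_diff": lambda x: np.abs(np.diff(x)).mean(),
}

def featurize(timeseries_list):
    """Convert each time series into a fixed-length feature vector."""
    return np.array([[f(ts) for f in FEATURES.values()] for ts in timeseries_list])

# Hypothetical data: 20 subjects' time series with binary group labels.
rng = np.random.default_rng(0)
X = featurize([rng.standard_normal(200) for _ in range(20)])
y = rng.integers(0, 2, 20)

clf = RandomForestClassifier(random_state=0).fit(X, y)
# Rank features by how well they discriminate the two groups.
for name, score in sorted(zip(FEATURES, clf.feature_importances_),
                          key=lambda p: -p[1]):
    print(name, round(score, 3))
```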
Results
I successfully implemented a handful of basic time-series analysis functions from Matlab in Python using partials (a partial is a Python construct that freezes a given set of input arguments of a more general function).
The proof-of-principle implementation has full support for vectors of data stored in numpy arrays, and basic support for the Nitime data format (extracting the data vector from the Nitime TimeSeries class for evenly sampled data).
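As an illustration of the partials pattern, with a toy feature in place of an actual hctsa function (the autocorrelation helper and its frozen variants below are ours):

```python
from functools import partial

import numpy as np
import nitime.timeseries as nts

def autocorrelation(x, lag=1):
    """Sample autocorrelation of a 1-D array at a given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Freeze the lag argument to obtain parameter-free features, hctsa-style.
AC_1 = partial(autocorrelation, lag=1)
AC_2 = partial(autocorrelation, lag=2)

data = np.random.randn(250)
series = nts.TimeSeries(data, sampling_interval=2.0)  # e.g., TR = 2 s

# Works on plain numpy arrays, and on Nitime TimeSeries objects by
# extracting their data vector.
print(AC_1(data), AC_2(series.data))
```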

Conclusions
Our results demonstrate that time-series analysis methods discovered using the hctsa package [https://github.com/benfulcher/hctsa] can be implemented natively in Python in a systematic way, with basic support for the time-series format used in Nitime. This will make it straightforward to incorporate future time-series analysis work into this open-source environment.
Although there are no plans to reimplement the full hctsa feature library in Python, our hope is that published work describing useful time-series features (discovered using the hctsa library) can also contribute to a Python implementation, promoting its use by the neuroscience community.
Availability of supporting data
More information about this project can be found at: https://github.com/benfulcher/hctsa_python

Competing interests
None.
Author's contributions
BF wrote the software and the report.

Fig. 8 (abstract A9), panels b and c: b The first 10 notes of the same ROI as sheet music. c All possible piano keys the brain can play, from 36 to 84 (in pink); in red, all possible tones for a C minor-pentatonic scale in that range. In that case, the lowest key is 36.