AI Model Suggests Innate Musical Instinct

Overview:
Scientists have reported a major finding using an artificial neural network model: musical instinct may emerge naturally from the human brain. By training the network on a wide range of natural sounds from Google’s AudioSet, the team found that certain neurons in the network responded selectively to music, mimicking the behavior of the auditory cortex in real brains.

The spontaneous development of music-selective neurons suggests that our capacity to process music could be an innate cognitive function, shaped as an evolutionary adaptation for processing natural sounds more effectively.

Important Points:

  1. The study used an artificial neural network to show that music-selective neurons can develop spontaneously, without the network ever being taught music.
  2. These neurons behaved much like those in the human auditory cortex, responding selectively to music across many different genres.
  3. The study suggests that musical ability may be an innate brain function that evolved to improve the processing of natural sounds.

Source: KAIST

Music, frequently described as the universal language, is a shared feature of all known societies. So could ‘musical instinct’ be universal despite vast cultural variation?

On January 16, a team of researchers from KAIST, led by Professor Hawoong Jeong of the Department of Physics, announced that they had discovered a principle by which musical instinct can emerge naturally from the human brain, without specialized training, using an artificial neural network model.

The neurons in the artificial neural network model showed reactive behaviors similar to those in the auditory cortex of a real brain. Credit: Neuroscience News

Many researchers have previously sought to identify the similarities and differences among the music of various cultures and to uncover the origins of its universality.

A research paper published in Science in 2019 found that music is produced in ethnographically distinct cultures around the world and that similar forms of beat and melody are used across them. Neuroscientists have also established that a particular region of the human brain, the auditory cortex, is responsible for processing musical information.

Professor Jeong’s team used an artificial neural network model to demonstrate that cognitive functions for music emerge spontaneously as a by-product of processing auditory information from nature, without the network being explicitly taught music.

The research team used AudioSet, a large-scale collection of sound data provided by Google, and trained the artificial neural network to recognize the various sounds. Intriguingly, the team observed that certain neurons within the network responded selectively to music.
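
The article does not include the training details, but the general setup is easy to illustrate. Below is a minimal PyTorch sketch of the kind of pipeline described: a small convolutional network trained for multi-label natural-sound classification on AudioSet-style spectrograms. The architecture, layer sizes, and the random stand-in data are illustrative assumptions, not the authors’ actual model.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 527                      # size of AudioSet's label vocabulary
BATCH, N_MELS, N_FRAMES = 8, 64, 100   # assumed log-mel spectrogram shape

class SoundClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # one activation per channel ("unit")
        )
        self.head = nn.Linear(64, NUM_CLASSES)

    def forward(self, x):
        h = self.features(x).flatten(1)   # per-unit activations, shape (B, 64)
        return self.head(h), h

model = SoundClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()   # multi-label: independent sigmoid per class

# Toy stand-ins for one AudioSet batch: random spectrograms, sparse multi-hot labels.
x = torch.randn(BATCH, 1, N_MELS, N_FRAMES)
y = (torch.rand(BATCH, NUM_CLASSES) < 0.01).float()

logits, _ = model(x)
loss = loss_fn(logits, y)
loss.backward()
optimizer.step()
```

Note that music appears here only as one sound category among hundreds; nothing in the objective singles it out, which is what makes the emergence of music-selective units notable.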

Put simply, neurons formed spontaneously that reacted only minimally to other sounds, such as those of animals, nature, or machines, but responded strongly to various forms of music, both instrumental and vocal.
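
One simple way to quantify such selectivity, sketched below, is a contrast index comparing each unit’s mean activation for music clips against its mean activation for all other sounds. The index form, the 0.3 cutoff, and the random stand-in activations are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 64
music_acts = rng.gamma(2.0, 1.0, size=(500, n_units))  # responses to music clips
other_acts = rng.gamma(2.0, 1.0, size=(500, n_units))  # animals, nature, machines...

mu_music = music_acts.mean(axis=0)
mu_other = other_acts.mean(axis=0)

# Contrast index in [-1, 1]: values near +1 mean a strong preference for music.
selectivity = (mu_music - mu_other) / (mu_music + mu_other + 1e-9)
music_selective = np.where(selectivity > 0.3)[0]        # assumed cutoff
print(f"{len(music_selective)} putative music-selective units")
```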

The neurons in the artificial neural network model showed reactive behaviors similar to those in the auditory cortex of a real brain. For example, the artificial neurons responded less to music that had been cut into short segments and rearranged.

This implies that the spontaneously formed music-selective neurons encode the temporal structure of music. This property was not confined to a specific genre but appeared across 25 genres, including classical, pop, rock, jazz, and electronic music.
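
The scrambling control described above is straightforward to sketch: cut the waveform into short segments and shuffle them, which destroys longer-range temporal structure while preserving short-term spectral content. The 50 ms segment length below is an assumption, not the paper’s value.

```python
import numpy as np

def scramble(waveform: np.ndarray, sr: int, seg_ms: int = 50, rng=None) -> np.ndarray:
    """Cut a waveform into seg_ms segments and shuffle their order."""
    rng = rng or np.random.default_rng()
    seg_len = int(sr * seg_ms / 1000)
    n_segs = len(waveform) // seg_len
    segs = waveform[: n_segs * seg_len].reshape(n_segs, seg_len)
    return segs[rng.permutation(n_segs)].reshape(-1)

# If music-selective units encode temporal structure, their activation for
# scramble(clip, sr) should drop relative to the intact clip.
```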

Moreover, suppressing the activity of the music-selective neurons substantially impaired recognition accuracy for other natural sounds. In other words, the neural function that processes musical information also helps to process other sounds, suggesting that ‘musical ability’ may be an innate capability that arose as an evolutionary adaptation for better processing of sounds from nature.
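
That ablation test can be sketched by zeroing out the putative music-selective units and re-reading the classifier output. The snippet below reuses the hypothetical SoundClassifier and music_selective indices from the earlier sketches; a large accuracy drop on non-music sounds would mirror the reported result.

```python
import torch

@torch.no_grad()
def accuracy_with_ablation(model, x, y, ablate_idx=None, threshold=0.5):
    """Multi-label accuracy, optionally with selected units silenced."""
    logits, h = model(x)
    if ablate_idx is not None:
        h = h.clone()
        h[:, torch.as_tensor(ablate_idx, dtype=torch.long)] = 0.0  # silence units
        logits = model.head(h)   # re-run only the readout layer
    preds = (torch.sigmoid(logits) > threshold).float()
    return (preds == y).float().mean().item()

# acc_full    = accuracy_with_ablation(model, x, y)
# acc_ablated = accuracy_with_ablation(model, x, y, ablate_idx=music_selective)
```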

Professor Hawoong Jeong, who supervised the research, said, “The outcomes of our study suggest that evolutionary pressure has contributed to forming the universal basis for processing musical information across cultures.”

Regarding the significance of the research, he explained, “We hope that this artificially built model with human-like musicality can serve as an original model for various applications, including AI music generation, music therapy, and research in musical cognition.”

He also noted its limitations, adding, “This research does not consider the developmental process that follows the learning of music, and it should be understood as a study of the foundation for processing musical information that is present in early development.”

This research, undertaken by the first author Dr. Gwangsu Kim of the KAIST Department of Physics (current affiliation: MIT Department of Brain and Cognitive Sciences) and Dr. Dong-Kyum Kim (current affiliation: IBS), was published in Nature Communications under the title, “Spontaneous emergence of rudimentary music detectors in deep neural networks.”

Funding: This investigation was supported by the National Research Foundation of Korea.

About this AI and music research news

Author: Yoonju Hong
Source: KAIST
Contact: Yoonju Hong – KAIST
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Spontaneous emergence of rudimentary music detectors in deep neural networks” by Hawoong Jeong et al., Nature Communications


Summary

Spontaneous emergence of rudimentary music detectors in deep neural networks

Music exists in almost every society, has universal acoustic features, and is processed by distinct neural circuits in humans even with no experience of musical training.

However, it remains unclear how these innate characteristics emerge and what functions they serve. Here, using an artificial deep neural network that models the auditory information processing of the brain, we show that units tuned to music can spontaneously emerge by learning natural sound detection, even without learning music.

The music-selective units encoded the temporal structure of music in multiple timescales, following the population-level response characteristics observed in the brain.

We found that the process of generalization is critical for the emergence of music-selectivity and that music-selectivity can work as a functional basis for the generalization of natural sound, thereby elucidating its origin.

These findings suggest that evolutionary adaptation to process natural sounds can provide an initial blueprint for our sense of music.



