AI-Produced Music: The Consequences for Our Perception of Sound

Posted by Peter Rudin on 31. May 2024 in Essay

A shift to generative music produced by AI may influence current forms of musical culture, just as the era of recorded music diminished the importance of live performance, which, until then, was the only way to hear music. As this new kind of music culture grows, we may see reduced engagement with traditional music produced by artists and orchestras. While it is too early to tell what the impact will be, one should keep in mind that the effort to defend artists’ intellectual property is part of this equation. Policymakers drafting AI regulation may need to understand how music works socially in order to ensure that our musical cultures remain vibrant, sustainable and meaningful for individuals and communities alike.

Acoustics and the Fundamentals of Hearing and Perception

In physics, sound is a vibration that propagates as an acoustic wave through a transmission medium such as a gas, liquid or solid. In human physiology, sound refers to the reception of such waves and their perception by the brain. Human hearing relies on the ability of the ear and the neural system to sense and process variations in sound. Accordingly, the act of hearing has both subconscious and conscious effects. Subconscious effects, such as hearing loss, result from prolonged exposure to high sound pressure levels. Conscious effects are a direct result of the ear’s acute response to a sound and how the cognitive part of the brain evaluates it.

While the physical characteristics of sound are defined by pressure level, frequency and duration, the corresponding hearing characteristics are loudness, pitch and duration. However, human hearing is a complex system, and many acoustic sensations do not correlate directly with a single physical characteristic. Much of the challenge in the field of psychoacoustics stems from the fact that different listeners perceive identical sounds differently: age, gender, nationality and many other diversity factors affect human perception. In addition to this heterogeneity, consumer expectations vary with the type of product purchased. For example, a customer expects different sound characteristics from motorcycles, stereo equipment or personal computers. Therefore, sound quality evaluations are usually specific to the type of product and the consumers targeted. The goal of sound quality measurement is to develop a preference model of the targeted individual. Because hearing is one of the integral processes through which humans receive information, and because the sound of a product carries much of that information, ongoing research classifies hearing sensations and correlates them to the physical characteristics of the signal. As a result, improved sound quality metrics that correlate with neural processes will better relate to human perception.
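The mapping between physical and perceptual characteristics mentioned above can be illustrated with two standard acoustics formulas: sound pressure level in decibels relative to the 20 µPa hearing threshold, and the logarithmic relation between frequency and perceived pitch (here expressed as MIDI note numbers). A minimal sketch:

```python
import math

REF_PRESSURE_PA = 20e-6  # standard reference: threshold of hearing, 20 micropascals


def sound_pressure_level(pressure_pa: float) -> float:
    """Convert sound pressure (Pa) to sound pressure level (dB SPL)."""
    return 20.0 * math.log10(pressure_pa / REF_PRESSURE_PA)


def frequency_to_midi_pitch(freq_hz: float) -> float:
    """Map frequency (Hz) to pitch on the MIDI scale (A4 = 440 Hz = note 69)."""
    return 69.0 + 12.0 * math.log2(freq_hz / 440.0)


# Perception is logarithmic: doubling the frequency raises the perceived
# pitch by exactly one octave (12 semitones), not by "twice as much pitch".
print(sound_pressure_level(1.0))        # 1 Pa is roughly 94 dB SPL
print(frequency_to_midi_pitch(440.0))   # 69.0 (concert A)
print(frequency_to_midi_pitch(880.0))   # 81.0 (one octave higher)
```

Both formulas are logarithmic, which is one reason loudness and pitch do not track pressure and frequency linearly.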

Types of Musical Instruments

Instruments are commonly grouped into five families based on their physical characteristics and how their sounds are produced: percussion, woodwind, string, brass and keyboard. A related, more formal classification scheme, the Hornbostel-Sachs system, was created by Erich Moritz von Hornbostel and Curt Sachs and first published in 1914. The following provides a brief summary of three families:

Percussion comprises not only drums but also bells, cymbals, woodblocks, the marimba and the xylophone. Percussion sounds are produced by striking the surface of the instrument, either by hand or with a stick, creating a vibration. Tuning the instrument, or hitting different parts of it with varying force, produces different tones. Drum kits, similar to those found at a rock concert, typically consist of five to seven drums, each creating a different range of notes. Different cultures have developed their own drums, such as the taiko drums of Japan, prominently featured in ceremonial events.

Despite their name, woodwinds can be made of metal or plastic, not just wood, and include the flute, clarinet, oboe and saxophone. Woodwinds produce sound when the musician blows into or across a mouthpiece: in recorders and similar flutes, a channel called a fipple splits the air stream, while clarinets, oboes and saxophones rely on vibrating reeds.

The four most commonly used instruments in the string family are the violin, the viola, the cello and the double (string) bass. They are all made by gluing pieces of wood together to form a hollow sound box. The quality of sound depends on the box’s shape, the wood it is made of and the thickness of both the top and the back of the instrument. The complexity of Beethoven’s music created the need for a conductor to keep the orchestra together, acting as an interpreter for both the musicians and the audience.

How AI Music Generators Are Tuning Into Our Feelings

Music has always been a powerful medium to express and influence emotions. Now AI is taking this fact to the next level by providing music generators that can adapt to the listener’s emotional state. These systems use AI-algorithms to analyse voice sounds, facial expressions, physiological signals, and even contextual data to gauge the listener’s mood and respond with the perfect musical creation.

The following provides some information about the impact of AI-based music generators:

Emotion Detection: Using advanced machine learning models, these AI-systems can interpret your emotional cues from your voice, facial expressions, or biometric data like heart rate and skin conductivity.
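As a rough illustration of how biometric cues might be mapped to an emotional label, here is a minimal rule-based sketch. The signals, thresholds and labels are hypothetical simplifications for illustration only; real systems use trained machine learning models, not hand-written rules:

```python
def estimate_mood(heart_rate_bpm: float, skin_conductance_us: float) -> str:
    """Toy mood estimate from two biometric signals.

    heart_rate_bpm: heart rate in beats per minute.
    skin_conductance_us: galvanic skin response in microsiemens.
    All thresholds are illustrative assumptions, not clinical values.
    """
    aroused = heart_rate_bpm > 100 or skin_conductance_us > 10.0
    calm = heart_rate_bpm < 70 and skin_conductance_us < 4.0
    if aroused:
        return "excited_or_stressed"
    if calm:
        return "relaxed"
    return "neutral"


print(estimate_mood(110, 12.0))  # excited_or_stressed
print(estimate_mood(62, 2.5))    # relaxed
```

In practice the interesting work lies in the classifiers that produce such labels from raw audio, video or sensor streams; the sketch only shows the shape of the final mapping.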

Learning and Adapting: AI music generators learn from interactions, refining their understanding of your musical preferences and emotional responses over time to provide increasingly personalized experiences, much like the recommendation engines of the major music streaming platforms.
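The learn-and-adapt loop can be sketched as a running preference score per genre, updated after each listening session with an exponential moving average. This is a toy stand-in for the far richer models streaming platforms actually use:

```python
class PreferenceModel:
    """Toy per-genre preference tracker using an exponential moving average."""

    def __init__(self, learning_rate: float = 0.2):
        self.learning_rate = learning_rate
        self.scores: dict[str, float] = {}  # genre -> preference in [0, 1]

    def update(self, genre: str, rating: float) -> None:
        """Blend a new listener rating (0..1) into the stored score."""
        old = self.scores.get(genre, 0.5)  # start from a neutral prior
        self.scores[genre] = old + self.learning_rate * (rating - old)

    def favorite(self) -> str:
        """Return the genre with the highest current score."""
        return max(self.scores, key=self.scores.get)


model = PreferenceModel()
for genre, rating in [("ambient", 0.9), ("jazz", 0.4), ("ambient", 1.0)]:
    model.update(genre, rating)
print(model.favorite())  # ambient
```

The exponential moving average weights recent interactions more heavily, which is why the personalization improves "over time": each new session nudges the stored preference toward the latest feedback.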

Biofeedback Integration: Incorporating biofeedback, such as heart rate variability or galvanic skin response, AI music generators can adjust music in real-time to help the listener achieve a desired emotional or physiological state. This application is particularly promising in therapeutic settings, where music can be used as a tool for emotional regulation or stress reduction.
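One way such real-time adjustment could work, sketched very simply: nudge the music's tempo from the listener's current heart rate toward a target rate, a relaxation technique sometimes called entrainment. The target value and step size here are illustrative assumptions, not clinically derived parameters:

```python
def next_tempo(current_hr_bpm: float, target_hr_bpm: float = 60.0,
               step: float = 0.25) -> float:
    """Choose the next musical tempo (BPM) to gently pull heart rate
    toward a target, closing a fraction `step` of the gap per update.
    All values are illustrative, not clinically derived.
    """
    return current_hr_bpm + step * (target_hr_bpm - current_hr_bpm)


hr = 90.0
for _ in range(3):       # simulate three feedback updates, assuming the
    hr = next_tempo(hr)  # listener's heart rate follows the music's tempo
print(hr)                # drifts from 90 toward the 60 BPM target
```

A real therapeutic system would feed continuously measured heart rate variability or galvanic skin response back into this loop rather than assume the listener perfectly follows the tempo.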

By leveraging these advanced capabilities, AI music generators are not just reshaping our relationship with music. They are creating a new paradigm in which technology understands and interacts with our emotions in profound and meaningful ways. As with all AI-applications, ethical considerations around privacy, data security and emotional manipulation are paramount. Ensuring that these technologies are developed and used with care to respect user consent and data protection is crucial for their acceptance and success.

How Music Unleashes Creativity

Music’s ability to foster creativity is a phenomenon that has been observed and studied for centuries. Researchers have discovered that engaging with music, whether through listening or active participation, stimulates multiple areas of the brain responsible for creativity and cognitive functioning. Here are some key aspects of this intriguing connection:

Brain activation and neuroplasticity: Music has been found to activate various regions of the brain, including those responsible for emotional processing, memory and creativity. This activation promotes the development of new neural connections, a process known as neuroplasticity. As a result, engaging with music has been shown to enhance cognitive abilities such as problem-solving, critical thinking and creative ideation.

Flow state and music: The psychological concept of “flow” refers to a mental state of optimal concentration and absorption in an activity, leading to heightened creativity and productivity. Music can help individuals achieve this flow state by providing an immersive and emotionally engaging experience. When musicians perform or compose, they often enter a flow state, allowing for the spontaneous generation of new ideas and artistic expression.

Emotional resonance and creative expression: Music’s ability to evoke strong emotions and connect with listeners on a deep level plays a significant role in its creative power. By triggering emotions and memories, music can serve as a catalyst for creative expression, inspiring individuals to create their own compositions.

Cognitive development: As music stimulates the brain and enhances cognitive abilities, individuals who regularly engage in music can experience improved memory, attention and problem-solving skills. This cognitive development can have a positive impact on various aspects of life, from academic performance to professional success.


The integration of AI in music generators responsive to our emotions signifies a remarkable fusion of technology, art and psychology. As we continue to explore the capabilities of AI, emotion-responsive music generators stand out as a harmonious blend of innovation and empathy. The symbiotic relationship between music and creativity is a testament to the power of artistic expression and the boundless potential of the human imagination. By tapping into the creative energy that music provides, individuals can experience personal growth, cognitive enhancement and a deepened connection to the world around them, provided they are protected against malicious misuse by bad actors.
