Arielle Euringer
PSY 323
Chapter 10: Hearing in the Environment

Summary

Listeners use small differences in time and intensity across the two ears to determine the direction in the horizontal plane (azimuth) from which a sound comes.

Time and intensity differences across the two ears are not sufficient to fully indicate the location from which a sound comes. In particular, they are not sufficient to indicate whether sounds come from the front or the back, or from higher or lower (elevation). The pinna, ear canal, head, and torso alter the intensities of different frequencies for sounds coming from different places in space, and listeners use these changes in intensity across frequency to identify the location from which a sound comes.

Perception of auditory distance is similar to perception of visual depth in that no single characteristic of the signal can inform a listener about how distant a sound source is. Listeners must combine intensity, spectral composition, and the relative amounts of direct and reflected energy to estimate the distance to a sound source.

Many natural sounds, including musical instruments and human speech, have rich harmonic structure, with energy at integer multiples of the fundamental frequency, and listeners are especially good at perceiving the pitch of harmonic sounds. Important perceptual qualities of complex sounds are timbre, conveyed by the relative amounts of energy at different frequencies, and the onset and offset properties of attack and decay, respectively.

Because all the sounds in the environment are summed into a single waveform that reaches each ear, a major challenge for hearing is to separate the sound sources in the combined signal. This general process is known as auditory scene analysis. Sound source segregation succeeds by using multiple characteristics of sounds, including spatial location, similarity in frequency and timbre, and onset properties.
In everyday environments, sounds to which a person is listening often are interrupted by other, louder sounds. Perceptual restoration is a process by which missing or degraded acoustic signals are perceptually replaced.

Key Terms

Attack: The part of a sound during which amplitude increases (onset).

Auditory Stream Segregation: The perceptual organization of a complex acoustic signal into separate auditory events, each of which is heard as a separate stream.

Azimuth: The angle of a sound source on the horizontal plane relative to a point in the center of the head between the ears. Measured in degrees, with 0 degrees being straight ahead; the angle increases clockwise toward the right, with 180 degrees being directly behind.

Cone of Confusion: The region of positions in space where all sounds produce the same interaural time and level (intensity) differences (ITDs and ILDs).

Decay: The part of a sound during which amplitude decreases (offset).

Head-Related Transfer Function (HRTF): A function that describes how the pinna, ear canal, head, and torso change the intensity of sounds with different frequencies that arrive at each ear from different locations in space (azimuth and elevation).

Interaural Level Difference (ILD): The difference in level (intensity) between a sound arriving at one ear versus the other. Helps with the process of sound localization.

Interaural Time Difference (ITD): The difference in time between a sound arriving at one ear versus the other. ITD is important for localizing sound.

Inverse-Square Law: A principle stating that as distance from a source increases, intensity decreases in proportion to the square of the distance. This general law also applies to optics and other forms of energy.

Lateral Superior Olive (LSO): A relay station in the brain stem where inputs from both ears contribute to the detection of the interaural level difference.
Medial Superior Olive (MSO): A relay station in the brain stem where inputs from both ears contribute to the detection of the interaural time difference. Critical for localizing sound.

Source Segregation (Auditory Scene Analysis): Processing an auditory scene consisting of multiple sound sources into separate sound images.

Timbre: The psychological sensation by which a listener can judge that two sounds with the same loudness and pitch are dissimilar. Timbre quality is conveyed by harmonics and other high frequencies.

Sound Localization

Q: What happens to sound information traveling to the ears after a single synapse in the cochlear nucleus?
A: The information from each ear travels to both the medial superior olive and the lateral superior olive on each side of the brain.

Q: Why is the cone of confusion confusing?
A: The cone of confusion is the region of positions in space where all sounds produce the same time and level (intensity) differences. In such a situation it is difficult to localize sound, which is confusing to the listener.

Q: How is the spectral composition of sounds a possible cue for auditory distance?
A: The sound-absorbing qualities of air dampen high frequencies more than low frequencies, so when sound sources are far away, higher frequencies lose more energy than lower frequencies as the sound waves travel from the source to the ear. This variation across the frequencies reaching the ear helps the listener estimate the distance between themselves and the sound source.

Q: How do the relative amounts of direct vs. reverberant energy provide a cue for auditory distance?
A: When a sound source is close to the listener, most of the energy reaching the ear is direct, whereas reverberant energy provides a greater proportion of the total when the sound source is farther away.

Complex Sounds

"Missing Fundamental": Phenomenon in which listeners still hear the pitch of the fundamental frequency of a harmonic sound even if the fundamental is removed.
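The missing-fundamental effect can be illustrated numerically: a waveform built only from higher harmonics still repeats at the period of the absent fundamental, which is why listeners report a pitch at that frequency. Below is a minimal Python sketch; the 200 Hz fundamental, 8 kHz sample rate, and choice of harmonics 2-4 are illustrative assumptions, not values from the chapter.

```python
import math

F0 = 200   # the "missing" fundamental in Hz (assumed for illustration)
FS = 8000  # sample rate in Hz (assumed)
N = 2048   # number of samples analyzed

# Sum harmonics 2, 3, and 4 of F0; note the fundamental itself is omitted.
signal = [sum(math.sin(2 * math.pi * h * F0 * i / FS) for h in (2, 3, 4))
          for i in range(N)]

def best_period(x, min_lag, max_lag):
    """Return the lag (in samples) with the highest autocorrelation."""
    def corr(lag):
        return sum(x[i] * x[i + lag] for i in range(len(x) - lag))
    return max(range(min_lag, max_lag + 1), key=corr)

period = best_period(signal, 10, 100)
print(FS / period)  # 200.0 -- the repetition rate matches the absent F0
```

Even though no energy is present at 200 Hz, the waveform's strongest periodicity falls at 40 samples (8000 / 200), matching the pitch listeners hear.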
Attack vs. Decay of a Sound: The attack of a sound is the part during which amplitude increases, whereas the decay of a sound is the part during which amplitude decreases.

Auditory Scene Analysis

Q: How does source segregation help us to distinguish various sounds in our environment?
A: Source segregation is the processing of an auditory scene consisting of multiple sound sources into separate sound images. This process ultimately helps us distinguish between the different sound sources in our environment.

Auditory Stream Segregation: The perceptual organization of a complex acoustic signal into separate auditory events, each of which is heard as a separate stream.

Q: What happens when a sequence of notes with increasing and decreasing frequencies is presented and some tones deviate from the rising/falling pattern?
A: The deviating tones are heard to "pop out" of the sequence because they do not share the same timbre as the rest of the notes in the group.
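The interaural time difference defined in the key terms above can be put in rough numbers with a standard spherical-head approximation (the Woodworth model, which is not derived in this chapter; the head radius and speed of sound below are assumed illustrative values):

```python
import math

HEAD_RADIUS = 0.0875    # meters, an assumed average head radius
SPEED_OF_SOUND = 343.0  # meters per second in air at about 20 degrees C

def itd_seconds(azimuth_deg):
    """Approximate ITD for a distant source at the given azimuth.

    Woodworth spherical-head model: ITD = (r / c) * (theta + sin(theta)),
    with azimuth 0 degrees straight ahead, increasing toward one side.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

print(itd_seconds(0))         # 0.0 -- no difference when the source is straight ahead
print(itd_seconds(90) * 1e6)  # roughly 650 microseconds for a source at one side
```

This matches the scale of the cue the MSO is thought to detect: ITDs range from zero for sources straight ahead (or directly behind, one reason for the cone of confusion) up to several hundred microseconds for sources off to the side.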