What does comb filter mean?

This section will explain and demonstrate the real-world situations that cause comb filtering. There are three main categories: reflections, multiple speakers, and multiple microphones. Any time a sound is created, it radiates outward from the source and bounces off the surfaces in the room. When a sound takes multiple routes to a microphone, the extra delay of the longer routes puts some frequencies out of phase. Imagine you are recording a snare drum. When the drummer hits the drum, the sound travels in a straight line from the drum to the microphone.

The sound of the drum also reflects off the walls and then travels to the microphone. The reflected sound covers a greater distance than the direct sound and therefore arrives at the microphone later.

Both signals are the same, but one is delayed by a few milliseconds. This creates a comb filter, where some frequencies cancel and others reinforce each other. If you are recording a podcast at a desk, the sound of your voice will not only travel directly to the microphone, but will also reflect off the table and arrive at the microphone slightly later.
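To make this concrete, the notch frequencies can be estimated from the extra distance the reflection travels. Here is a minimal sketch of that calculation (assuming a speed of sound of roughly 343 m/s; the 0.5 m path difference is only an example value):

```python
# Estimate comb-filter notch frequencies for a direct sound plus one reflection.
# Assumptions: speed of sound ~343 m/s; the 0.5 m extra path length is an example.

SPEED_OF_SOUND = 343.0  # m/s

def notch_frequencies(extra_path_m, max_hz=20000.0):
    """First few frequencies that cancel when a reflection arrives late.

    A reflection delayed by t seconds cancels at (2k + 1) / (2t) Hz,
    because those frequencies arrive exactly half a cycle out of phase.
    """
    delay_s = extra_path_m / SPEED_OF_SOUND
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)
        if f > max_hz:
            break
        notches.append(f)
        k += 1
    return delay_s, notches

delay, notches = notch_frequencies(0.5)
print(f"Delay: {delay * 1000:.2f} ms")
print("First notches (Hz):", [round(f) for f in notches[:5]])
```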

This is also a problem when recording guitar amplifiers on the floor. Any surface near the sound source or the microphone can create comb filtering in your recording. For example, musicians might place music stands in front of themselves for sheet music, or a voice actor might be reading from a script. These surfaces create another pathway for the sound to reach the microphone, which can have a detrimental effect on the recording.

Here are a few tips to help you improve the sound quality recorded in a reflective recording space. Remember that the level of a sound wave decreases as it travels over distance. You can use this to your advantage. Try to place the microphone as close to the sound source as you can, so that the direct sound level is significantly louder than the level of the reflected sounds.
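As a rough illustration of why close miking helps, here is a small sketch comparing direct and reflected levels (assuming simple free-field 1/r attenuation; the distances are example values, and real rooms behave less neatly):

```python
import math

# Compare direct and reflected sound levels, assuming free-field 1/r attenuation.
# The distances below are illustrative examples only.

def level_difference_db(direct_path_m, reflected_path_m):
    """dB by which the reflected path arrives below the direct path."""
    return 20.0 * math.log10(reflected_path_m / direct_path_m)

# Mic 10 cm from a snare drum, reflection travelling 4 m via a wall:
close = level_difference_db(0.10, 4.0)
# Same reflection, but the mic is 1 m from the drum:
far = level_difference_db(1.0, 4.0)

print(f"Close mic: reflection is {close:.1f} dB down")   # ~32 dB
print(f"Distant mic: reflection is {far:.1f} dB down")   # ~12 dB
```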

The sounds we hear every day are a mix of direct sound and reflected sounds. If you completely remove the reflections, the result might sound unnatural. Focus on absorbing the early reflections, meaning the first reflections that reach the listener or microphone. Then try to break up, or diffuse, the remaining reflections.

A diffusive surface reflects sound in many directions rather than focusing all of the energy into one particular direction. There are many ways to diffuse and absorb reflections. I wrote a whole article here on the Audio University website about the best ways to treat a room for better acoustics.

Any time you send the same signal to multiple speakers, you run the risk of causing comb filtering.

In professional audio, we do this a lot. Here are a few examples of situations where comb filtering is caused by multiple speakers producing the same signal. Stereo is a very popular format for mixing music and sound for video.

The following listening examples use a male voice originally recorded in mono. In the first example, the signal is added to itself at the same level and with delays of 0 ms, 1 ms, 10 ms, 20 ms, 50 ms, and 100 ms respectively.

Notice that the sound is clear and well defined when the added copy is not delayed. However, at a 1 ms delay, the timbre of the sound is colored. When the delay reaches 50 ms, the ear begins to perceive the delayed sound as an echo, which is even more evident at a 100 ms delay.

In the next example, the signal is added to itself at a reduced level (5 dB attenuation) and with delays of 0 ms, 1 ms, 10 ms, 20 ms, 50 ms, and 100 ms respectively. The coloration of the 1 ms delay is now less noticeable.

However, when the delay increases, the effect is still very audible.

Next, the signal is added to itself at a reduced level (10 dB attenuation) and with delays of 0 ms, 1 ms, 10 ms, 20 ms, 50 ms, and 100 ms respectively. Notice that the 3:1 rule provides approximately a 10 dB reduction at a microphone that is three times further away. The coloration of the 1 ms delay is now almost unnoticeable. However, when the delay increases, the effect is still audible.

Finally, the signal is added to itself at a reduced level (15 dB attenuation) and with delays of 0 ms, 1 ms, 10 ms, 20 ms, 50 ms, and 100 ms respectively. The coloration of the 1 ms delay is inaudible.

Even the 10 ms and 20 ms delays are hardly noticeable. However, when the delay increases to 50 and 100 ms, the effect is still clearly heard.

General rules from psychoacoustics

We know from various psychoacoustic studies that any delayed sound arriving within the first 15 ms after the direct sound (for instance as a reflection, or from two microphones recorded to one channel) should be attenuated by 15 dB. Another rule, from a different study, is that all reflections within the first 20 ms should be attenuated by at least 20 dB.
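If you want to recreate this kind of listening test yourself, the sketch below mixes a signal with a delayed, attenuated copy of itself (it assumes the recording is already available as a mono NumPy array; the test tone at the end is just a stand-in for the male-voice file):

```python
import numpy as np

def add_delayed_copy(signal, sample_rate, delay_ms, attenuation_db):
    """Mix a signal with a delayed, attenuated copy of itself.

    Reproduces the kind of comb-filter listening example described above:
    e.g. delay_ms=1 with attenuation_db=0 gives strong coloration,
    while attenuation_db=15 makes the 1 ms coloration essentially inaudible.
    """
    delay_samples = int(round(sample_rate * delay_ms / 1000.0))
    gain = 10.0 ** (-attenuation_db / 20.0)
    delayed = np.concatenate([np.zeros(delay_samples), signal]) * gain
    padded = np.concatenate([signal, np.zeros(delay_samples)])
    return padded + delayed

# Example: a 1 kHz test tone instead of the male-voice recording.
sr = 48000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
mixed = add_delayed_copy(tone, sr, delay_ms=1, attenuation_db=5)
```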

The reason we only require a 10 dB reduction in the general 3:1 microphone technique is that other sounds often mask the coloration sufficiently, especially in sound reinforcement.

Delay due to the 3:1 rule

In the diagram below, you can see the delay between two microphones, one of which is placed three times further away than the nearest microphone in accordance with the 3:1 rule.
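If you would rather calculate than read the chart, the delay and level difference for a 3:1 placement can be computed directly. A small sketch, assuming a speed of sound of roughly 343 m/s and simple 1/r attenuation:

```python
import math

SPEED_OF_SOUND_M_PER_S = 343.0  # assumption; varies slightly with temperature

def three_to_one(nearest_distance_m):
    """Delay and level difference for a second mic at 3x the distance (3:1 rule)."""
    far_distance_m = 3.0 * nearest_distance_m
    delay_ms = (far_distance_m - nearest_distance_m) / SPEED_OF_SOUND_M_PER_S * 1000.0
    level_drop_db = 20.0 * math.log10(far_distance_m / nearest_distance_m)  # ~9.5 dB
    return delay_ms, level_drop_db

delay_ms, drop_db = three_to_one(0.30)  # nearest mic 30 cm from the source
print(f"Extra delay at the far mic: {delay_ms:.2f} ms")     # ~1.75 ms
print(f"Level reduction at the far mic: {drop_db:.1f} dB")  # ~9.5 dB
```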

How to use this diagram: find the distance to the nearest microphone, in cm or inches, on the X-axis. From that point, go vertically upwards until you hit the curve (blue if you use cm, red if you use inches) and read off the corresponding delay.

Feedforward Comb Filters

In a feedforward comb filter, the output is a linear combination of the direct and delayed signal. Note that the feedforward comb filter can implement a simple echo simulator.

Thus, it is a computational physical model of a single discrete echo. This is one of the simplest examples of acoustic modeling using signal processing elements. Similarly, when air absorption needs to be simulated more accurately, the constant attenuation factor can be replaced by a linear, time-invariant filter that gives a different attenuation at every frequency. Due to the physics of air absorption, such a filter is generally lowpass in character.
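Here is a minimal feedforward comb filter sketch along these lines (the coefficient names b0 and g and the use of NumPy are my own choices, not taken from the original figures); with a constant g it acts as a single discrete echo, and g could in principle be swapped for a lowpass filter to model air absorption:

```python
import numpy as np

def feedforward_comb(x, delay_samples, b0=1.0, g=0.7):
    """Feedforward comb filter: y[n] = b0 * x[n] + g * x[n - delay_samples].

    With g < 1 this behaves like a single discrete echo; replacing the
    constant g with a lowpass filter would model air absorption more closely.
    """
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        y[n] = b0 * x[n]
        if n >= delay_samples:
            y[n] += g * x[n - delay_samples]
    return y

# Example: filter one second of white noise with a 5 ms echo at 48 kHz.
sr = 48000
noise = np.random.randn(sr)
echoed = feedforward_comb(noise, delay_samples=int(0.005 * sr))
```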

Feedback Comb Filters

The feedback comb filter uses feedback instead of a feedforward path. For stability, the feedback coefficient must be less than 1 in magnitude. Otherwise, if its magnitude is 1 or more, each echo will be louder than the previous echo, producing a never-ending, growing series of echoes.
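A matching feedback comb filter sketch, including the stability check on the feedback coefficient (again, the parameter names and sign convention are my own assumptions):

```python
import numpy as np

def feedback_comb(x, delay_samples, feedback=0.7):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay_samples].

    Each pass around the loop produces another echo, so |feedback| must be
    below 1 or the echoes grow without bound.
    """
    if abs(feedback) >= 1.0:
        raise ValueError("feedback coefficient must be less than 1 in magnitude")
    y = np.zeros(len(x), dtype=float)
    for n in range(len(x)):
        y[n] = x[n]
        if n >= delay_samples:
            y[n] += feedback * y[n - delay_samples]
    return y

# Example: a decaying series of echoes from a single impulse.
impulse = np.zeros(48000)
impulse[0] = 1.0
echoes = feedback_comb(impulse, delay_samples=4800, feedback=0.5)
```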

Sometimes the output signal is taken from the end of the delay line instead of the beginning, in which case the input term of the difference equation is delayed as well.

The simplistic explanation of phase given so far describes what happens with sine waves, but typical music waveforms comprise a complex blend of frequencies.

If we examine the same scenario, in which two versions of a musical signal are summed with a slight delay, some frequencies will add, while others will cancel. A frequency-response plot would show a sequence of peaks and dips extending up the audio spectrum, their position depending on the time difference between the two waveforms.
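To see those peaks and dips concretely, here is a short sketch that computes the magnitude response of a signal summed with a 1 ms delayed copy (the delay and sample rate are arbitrary example values):

```python
import numpy as np

# Magnitude response of y[n] = x[n] + x[n - M]: an impulse response of
# [1, 0, ..., 0, 1] whose spectrum shows the comb of peaks and dips.

sr = 48000
delay_ms = 1.0
M = int(sr * delay_ms / 1000.0)

impulse_response = np.zeros(M + 1)
impulse_response[0] = 1.0
impulse_response[M] = 1.0

spectrum = np.fft.rfft(impulse_response, n=8192)
freqs = np.fft.rfftfreq(8192, d=1.0 / sr)
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)

# Peaks sit near multiples of 1/delay (1 kHz, 2 kHz, ...),
# dips halfway between them (500 Hz, 1.5 kHz, ...).
for f in (500, 1000, 1500, 2000):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:>5} Hz: {magnitude_db[idx]:+.1f} dB")
```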

That's how a flanger works: a delayed version of a signal is added to a non-delayed version of itself, deliberately to provoke this radical filtering effect, which, because of the haircomb-like appearance of its response curve, is affectionately known as 'comb filtering'. Varying the time delay makes the comb filter sweep through its frequency range, picking out different harmonics as it moves.
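For the curious, here is a bare-bones flanger sketch along those lines, using a sinusoidally modulated delay with no interpolation or feedback (the parameter values are arbitrary examples):

```python
import numpy as np

def flanger(x, sample_rate, max_delay_ms=5.0, rate_hz=0.25, mix=0.7):
    """Mix the input with a copy whose delay sweeps slowly up and down.

    The sweeping delay moves the comb filter's peaks and notches through
    the spectrum, producing the familiar flanging sweep.
    """
    max_delay = max_delay_ms / 1000.0 * sample_rate
    y = np.copy(x).astype(float)
    for n in range(len(x)):
        # Delay oscillates between 0 and max_delay samples.
        lfo = 0.5 * (1.0 + np.sin(2 * np.pi * rate_hz * n / sample_rate))
        d = int(lfo * max_delay)
        if n >= d:
            y[n] += mix * x[n - d]
    return y

sr = 44100
noise = np.random.randn(2 * sr)  # two seconds of noise to hear the sweep
swept = flanger(noise, sr)
```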

A less severe form of comb filtering occurs when the outputs from two microphones set up at different distances from a sound source are combined — a situation familiar to anyone who has miked up a drum kit, for example. Because the more distant mic receives less level than the close mic, the depth of the filtering isn't as pronounced as in our flanger example, but it can still compromise the overall sound.
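The depth of those peaks and notches follows directly from the level difference between the two microphones. A short sketch of that relationship, assuming the two signals simply sum:

```python
import math

def comb_depth_db(level_difference_db):
    """Peak-to-notch depth when a full-level signal meets a quieter copy.

    g is the linear gain of the quieter copy; the summed level swings
    between (1 + g) at the peaks and (1 - g) at the notches.
    """
    g = 10.0 ** (-level_difference_db / 20.0)
    return 20.0 * math.log10((1.0 + g) / (1.0 - g))

for diff in (1, 3, 6, 10, 15, 20):
    print(f"{diff:>2} dB level difference -> {comb_depth_db(diff):5.1f} dB peak-to-notch")
```

The bigger the level difference between the two paths, the shallower the comb, which is why the distant mic's lower level softens the effect rather than removing it.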


