From Fedora Project Wiki
These terms are used in many different audio contexts.  Understanding them is important to knowing how to operate audio equipment in general, whether computer-based or not.
<!--
Address: User:Crantila/FSC/Audio_Vocabulary
DocBook: "Audio_Vocabulary.xml"
-->
=== MIDI Sequencer ===
A '''sequencer''' is a device or software program that produces signals that a synthesizer turns into sound.  You can also use a sequencer to arrange MIDI signals into music.  The Musicians' Guide covers two digital audio workstations (DAWs) that are primarily MIDI sequencers: Qtractor and Rosegarden.  All three DAWs in this guide use MIDI signals to control other devices or effects.


=== Busses, Master Bus, and Sub-master Bus ===
<!-- [[File:FMG-bus.xcf]] -->
<!-- [[File:FMG-master_sub_bus.xcf]] -->
[[File:FMG-bus.png|200px|How audio busses work.]]
[[File:FMG-master_sub_bus.png|200px|The relationship between the master bus and sub-master busses.]]


An '''audio bus''' sends audio signals from one place to another.  Many different signals can be inputted to a bus simultaneously, and many different devices or applications can read from a bus simultaneouslySignals inputted to a bus are mixed together, and cannot be separated after entering a busAll devices or applications reading from a bus receive the same signal.
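The mixing behaviour described above can be sketched in a few lines of Python.  This is only an illustrative model (the function name and sample values are invented for the example), not how any particular audio system implements a bus:

```python
def mix_bus(*signals):
    """Mix equal-length signals into one by summing corresponding
    samples.  The originals cannot be recovered from the sum alone,
    just as signals cannot be separated after entering a bus."""
    return [sum(samples) for samples in zip(*signals)]

vocals = [1, 2, -1]      # toy integer samples
guitar = [3, -2, 1]
bus_output = mix_bus(vocals, guitar)
print(bus_output)        # [4, 0, 0] - every reader of the bus receives this same signal
```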


All audio routed out of a program passes through the master bus.  The '''master bus''' combines all audio tracks, allowing for final level adjustments and simpler mastering.  The primary purpose of the master bus is to mix all of the tracks into two channels.


A '''sub-master bus''' combines audio signals before they reach the master bus.  Sub-master busses are optional; they let you adjust several tracks in the same way without affecting all of the tracks.


Audio busses are also used to send audio into effects processors.


=== Level (Volume/Loudness) ===
The perceived '''volume''' or '''loudness''' of sound is a complex phenomenon, not entirely understood by experts.  One widely agreed-upon method of assessing loudness is to measure the sound pressure level (SPL) in decibels (dB) or bels (B; one bel equals ten decibels).  In audio production communities, this is called "level."  The '''level''' of an audio signal is one way of measuring the signal's perceived loudness.  The level is part of the information stored in an audio file.
 
There are many different ways to monitor and adjust the level of an audio signal, and there is no widely-agreed practice.  One reason for this situation is the technical limitations of recorded audio.  Most level meters are designed so that the average level is -6&nbsp;dB on the meter, and the maximum level is 0&nbsp;dB.  This practice was developed for analog audio.  We recommend using an external meter and the "K-system," described in a link below.  The K-system for level metering was developed for digital audio.
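The decibel scale used by these meters is logarithmic; in digital audio, level is often expressed relative to the loudest representable signal ("full scale," which sits at 0&nbsp;dB).  A short Python sketch of that relationship (the function name is invented for illustration):

```python
import math

def level_in_db(amplitude, full_scale=1.0):
    """Level in decibels relative to full scale: 0 dB is the maximum,
    and quieter signals have negative values."""
    return 20 * math.log10(amplitude / full_scale)

print(round(level_in_db(1.0), 1))   # 0.0  -> the digital maximum
print(round(level_in_db(0.5), 1))   # -6.0 -> half amplitude, near the average level above
```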
 
In the Musicians' Guide, this term is called "volume level," to avoid confusion with other levels, or with perceived volume or loudness.
 
For more information, refer to these web pages:
* [http://www.digido.com/level-practices-part-2-includes-the-k-system.html "Level Practices"] (the type of meter described here is available in the "jkmeter" package from Planet CCRMA at Home).
* [http://en.wikipedia.org/wiki/K-system "K-system"]
* [http://en.wikipedia.org/wiki/Headroom_%28audio_signal_processing%29 "Headroom"]
* [http://en.wikipedia.org/wiki/Equal-loudness_contour "Equal-loudness contour"]
* [http://en.wikipedia.org/wiki/Sound_level_meter "Sound level meter"]
* [http://en.wikipedia.org/wiki/Listener_fatigue "Listener fatigue"]
* [http://en.wikipedia.org/wiki/Dynamic_range_compression "Dynamic range compression"]
* [http://en.wikipedia.org/wiki/Alignment_level "Alignment level"]


=== Panning and Balance ===
[[File:FMG-Balance_and_Panning.png|200px|left|The difference between adjusting panning and adjusting balance.]]
<!-- [[File:FMG-Balance_and_Panning.xcf]] -->
 
'''Panning''' adjusts the portion of a channel's signal that is sent to each output channel.  In a stereophonic (two-channel) setup, the two channels represent the "left" and the "right" speakers.  Two channels of recorded audio are available in the DAW, and the default setup sends all of the "left" recorded channel to the "left" output channel, and all of the "right" recorded channel to the "right" output channel.  Panning sends some of the left recorded channel's level to the right output channel, or some of the right recorded channel's level to the left output channel.  Each recorded channel has a constant total output level, which is divided between the two output channels.
 
The default setup for a left recorded channel is for "full left" panning, meaning that 100% of the output level is output to the left output channel.  An audio engineer might adjust this so that 80% of the recorded channel's level is output to the left output channel, and 20% of the level is output to the right output channel.  An audio engineer might make the left recorded channel sound like it is in front of the listener by setting the panner to "center," meaning that 50% of the output level is output to both the left and right output channels.
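The percentages above can be modelled as a simple linear split of a constant total level.  This is a deliberately simplified sketch (real DAWs often use other pan laws, and the function name is invented for illustration):

```python
def pan(level, left_fraction):
    """Divide a channel's constant total output level between the
    left and right output channels.  left_fraction=1.0 is "full left";
    0.5 is "center"."""
    left_out = level * left_fraction
    right_out = level * (1.0 - left_fraction)
    return left_out, right_out

print(pan(1.0, 0.5))   # (0.5, 0.5) - "center": the level is split equally
```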
 
Balance is sometimes confused with panning, even on commercially available audio equipment.  Adjusting the '''balance''' changes the volume level of the output channels, without redirecting the recorded signal.  The default setting for balance is "center," meaning 0% change to the volume level.  As you adjust the dial from "center" toward the "full left" setting, the volume level of the right output channel decreases, and the volume level of the left output channel remains constant.  As you adjust the dial from "center" toward the "full right" setting, the volume level of the left output channel decreases, and the volume level of the right output channel remains constant.  If you set the dial to "20% left," the equipment reduces the volume level of the right output channel by 20%, which makes the left output channel seem louder by comparison.
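By contrast with panning, balance only attenuates one output channel; the other keeps its level.  A minimal sketch (the function name and the -1.0 to +1.0 setting scale are invented for illustration):

```python
def balance(left_level, right_level, setting):
    """setting runs from -1.0 (full left) through 0.0 (center) to
    +1.0 (full right).  The side you turn toward keeps its level;
    the opposite side is attenuated."""
    if setting < 0:                       # toward "left": cut the right channel
        return left_level, right_level * (1.0 + setting)
    return left_level * (1.0 - setting), right_level

print(balance(1.0, 1.0, 0.0))    # (1.0, 1.0) - center: both channels unchanged
print(balance(1.0, 1.0, -1.0))   # (1.0, 0.0) - full left: right channel silent
```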
 
You should adjust the balance so that you perceive both speakers as equally loud.  Balance compensates for poorly set up listening environments, where the speakers are not equal distances from the listener.  If the left speaker is closer to you than the right speaker, you can adjust the balance to the right, which decreases the volume level of the left speaker.  This is not an ideal solution, but sometimes it is impossible or impractical to set up your speakers correctly.  You should adjust the balance only at final playback.


=== Time, Timeline and Time-Shifting ===
There are many ways to measure musical time.  The four most popular time scales for digital audio are:
* Bars and Beats: Usually used for MIDI work, and called "BBT," meaning "Bars, Beats, and Ticks."  A tick is a partial beat.
* Minutes and Seconds: Usually used for audio work.
* SMPTE Timecode: Invented for high-precision coordination of audio and video, but can be used with audio alone.
* Samples: Relating directly to the format of the underlying audio file, a sample is the shortest possible length of time in an audio file.  See [[User:Crantila/FSC/Sound_Cards#Sample_Rate|this section]] for more information on samples.
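These time scales are interconvertible once you know the session's sample rate and tempo.  A brief sketch (the sample rate and tempo here are only example settings):

```python
SAMPLE_RATE = 48000   # samples per second (example setting)
TEMPO = 120           # beats per minute (example setting)

def samples_to_seconds(n_samples):
    """Convert a sample count to seconds on the minutes-and-seconds scale."""
    return n_samples / SAMPLE_RATE

def seconds_to_beats(seconds):
    """Convert seconds to beats on the bars-and-beats scale."""
    return seconds * TEMPO / 60.0

print(samples_to_seconds(48000))                    # 1.0 second
print(seconds_to_beats(samples_to_seconds(48000)))  # 2.0 beats at 120 BPM
```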


Most audio software, particularly digital audio workstations (DAWs), allows you to choose whichever scale you prefer.  DAWs use a '''timeline''' to display the progression of time in a session, allowing you to do '''time-shifting''': adjusting the point in the timeline at which a region starts to play.


Time is represented horizontally, where the leftmost point is the beginning of the session (zero, regardless of the unit of measurement), and the rightmost point is some distance after the end of the session.


=== Synchronization ===
'''Synchronization''' means coordinating the operation of multiple tools, most often the movement of the transport.  Synchronization also controls automation across applications and devices.  MIDI signals are usually used for synchronization.
 
=== Routing and Multiplexing ===
[[File:FMG-routing_and_multiplexing.png|200px|left|Illustration of routing and multiplexing in the "Connections" window of the QjackCtl interface.]]
<!-- [[FMG-routing_and_multiplexing.xcf]] -->
 
'''Routing''' audio transmits a signal from one place to another - between applications, between parts of applications, or between devices.  On Linux systems, the JACK Audio Connection Kit is used for audio routing.  JACK-aware applications (and PulseAudio ones, if so configured) provide inputs and outputs to the JACK server, depending on their configuration.  The QjackCtl application can adjust the default connections.  You can easily reroute the output of a program like FluidSynth so that it can be recorded by Ardour, for example, by using QjackCtl.


'''Multiplexing''' allows you to connect multiple devices and applications to a single input or output.  QjackCtl allows you to easily perform multiplexing.  This may not seem important, but remember that only one connection is possible with a physical device like an audio interface.  Before computers were used for music production, multiplexing required physical devices to split or combine the signals.
 
=== Multichannel Audio ===
An '''audio channel''' is a single path of audio data.  '''Multichannel audio''' is any audio which uses more than one channel simultaneously, allowing the transmission of more audio data than single-channel audio.
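In practice, the samples of a multichannel stream are usually interleaved, channel by channel, into successive frames; this is the layout used by stereo WAV files and most audio APIs.  A small sketch of that layout (the function name and sample values are invented for illustration):

```python
def interleave(channels):
    """Combine per-channel sample lists into one interleaved stream:
    frame 1 holds the first sample of every channel, frame 2 the
    second sample of every channel, and so on."""
    return [sample for frame in zip(*channels) for sample in frame]

left = [10, 20, 30]    # toy integer samples for the left channel
right = [11, 21, 31]   # toy integer samples for the right channel
print(interleave([left, right]))   # [10, 11, 20, 21, 30, 31]
```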


Audio was originally recorded with only one channel, producing "monophonic," or "mono," recordings.  Beginning in the 1950s, stereophonic recordings, with two independent channels, began replacing monophonic recordings.  Since humans have two independent ears, it makes sense to record and reproduce audio with two independent channels, involving two speakers.  Most sound recordings available today are stereophonic, and people have found this mostly satisfying.


There is a growing trend toward five- and seven-channel audio, driven primarily by "surround-sound" movies; surround-sound music is not yet widely available.  Two "surround-sound" formats exist for music: DVD Audio (DVD-A) and Super Audio CD (SACD).  The development of these formats, and of the devices to use them, is held back by the proliferation of headphones with personal MP3 players, a general lack of consumer desire for improvement in audio quality, and the copy-protection measures put in place by record labels.  The result is that, while some consumers are willing to pay higher prices for DVD-A or SACD recordings, only a small number of recordings are available.  Even if you buy a DVD-A or SACD-capable player, you would need to replace all of your audio equipment with models that support proprietary copy-protection software.  Without this equipment, the player is often forbidden from outputting audio at a higher sample rate or sample format than a conventional audio CD.  None of these factors, unfortunately, seems likely to change in the near future.

Latest revision as of 05:55, 2 August 2010