What Is Audio Signal Flow? The Full Beginners' Guide

Signal flow is one of the most important concepts to understand in music production and audio more generally. It's the basis of how all audio systems work. Whether you're just starting out or you want to deepen your knowledge of signal flow, you've come to the right place.

What is audio signal flow? Signal flow is effectively the path of an audio signal from its origin to its final output, taking into account the routing between different audio devices and sometimes the sound source and transducer(s). Understanding signal flow is critical for simple and complex systems alike.

In this article, we'll discuss audio in detail, how it flows, and how we can use our knowledge of signal flow to our advantage in a few basic examples.


What Is Audio?

Before getting into the depths of audio signal flow, let's clear up any confusion about what audio actually is.

Audio is most simply described as electrical energy (active or potential) that represents sound through either analog or digital means.

Analog audio represents sound as an electric AC voltage (whether active or potential).

Digital audio represents sound as a series of binary numbers.

Audio waveforms are analogous to sound waves. They're often complex waves with frequencies concentrated within the audible range of human hearing (20 Hz to 20,000 Hz).

Audio can be recorded via transducers and synthesis. Transducers (microphones, electrical pickups, etc.) effectively convert sound wave energy into analog audio. Synthesizers create audio signals (analog or digital), taking energy from a power source.

Audio can be stored via analog or digital means. Analog means include tape and vinyl, while digital means include compact discs (CDs) and streaming.

Audio can be played back via transducers such as speakers and headphones. Note that digital audio must be converted to analog audio in order to properly drive such transducers.

So, put differently, digital audio can be thought of as a discrete representation (having sample rate and bit depth) of continuous analog audio, which in turn is a representation of continuous sound waves.

Analog audio can be converted to digital via an analog-to-digital converter (ADC), just like digital audio can be converted to analog audio via a digital-to-analog converter (DAC).
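To make the analog/digital relationship concrete, here's a minimal Python sketch (assuming NumPy is available) of a simplified ADC and DAC. The function names and parameter values are purely illustrative, not a model of any real converter:

```python
import numpy as np

def adc(analog_signal, bit_depth=16):
    """Quantize a normalized (-1.0 to +1.0) signal to integer codes,
    a simplified stand-in for an analog-to-digital converter."""
    levels = 2 ** (bit_depth - 1)  # e.g. 32,768 steps per polarity at 16-bit
    return np.round(analog_signal * (levels - 1)).astype(int)

def dac(codes, bit_depth=16):
    """Map integer codes back to continuous-valued amplitudes (simplified DAC)."""
    levels = 2 ** (bit_depth - 1)
    return codes / (levels - 1)

sample_rate = 44100                       # discrete time: samples per second
t = np.arange(0, 0.01, 1 / sample_rate)   # 10 ms of sample times
analog = np.sin(2 * np.pi * 440 * t)      # a 440 Hz sine standing in for analog audio

digital = adc(analog)     # discrete in both time (sample rate) and amplitude (bit depth)
restored = dac(digital)   # close to the original, within quantization error
```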


What Is Signal Flow?

Signal flow, put simply, is the path an electrical signal will take from its source to its final output. We can get as clinical as the individual electrical components and wires the signal passes through or, as we'll focus on in this article, as big-picture as the devices the signal flows through. The important part here is that we get the order of the devices correct from the source to the final output.

Audio signal flow, then, is the path an audio signal takes from its source (a transducer, synthesizer or stored medium, etc.) to its final output (a transducer, a mixer output, a storage medium, etc.).

Signal flow can be relatively simple, like an electric guitar plugged into a combo amp, or very complex, like a live sound mixing console with a wide variety of inputs, outputs and auxiliary tracks. While I won't be able to show every single example of signal flow, we'll consider a few examples later in this article to gather some practical insight.

It's important to keep the phrase “from its source to its final output” in mind as we move through the basics. In complex systems, we'll have multiple sources and perhaps even multiple outputs or, in other words, multiple different signal paths within the overall signal flow of the system.


The Recording Chain

Starting off, we have the recording chain. In this instance, as the name would suggest, the “final output” is actually the printing or recording to analog or digital media.

We can opt to record many different audio signals from many different sources. Oftentimes the source will be a sound wave captured by a microphone or perhaps some other vibration captured by an electrical pickup. Other times it will be synthesized audio, and still other times it will be pre-recorded material being re-recorded (think of sampling).

Let's break down these four common recording chains and what may or may not be necessary for the signal flow. I'll also discuss a bit about vinyl pressing in this section.

Recording Sound Waves: Signal Flow

Starting with recording sound waves, we typically use microphones for this purpose. A microphone acts as a transducer to convert sound waves into audio signals. This is achieved via a sensitive diaphragm (or diaphragms) that oscillates according to the sound waves and produces a coinciding audio signal. Unless the microphone has a built-in ADC, it will output an analog signal.

But microphone level signals are too low for use with consumer or professional “line level” equipment. Hence, we need a preamplifier to add gain to the mic signal before it can be recorded effectively. Of course, we can record a mic level signal as is, though it will require gain at some point, and applying preamplification upfront is standard practice.
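Because gain is specified in decibels, it can help to see the underlying math. Here's a quick hypothetical sketch in Python (the signal and gain values are made up for illustration):

```python
def apply_gain(signal_volts, gain_db):
    """Convert a decibel gain figure to a linear voltage multiplier and apply it."""
    return signal_volts * 10 ** (gain_db / 20)

mic_level = 0.005                       # a 5 mV mic signal (hypothetical)
line_level = apply_gain(mic_level, 48)  # +48 dB of preamp gain
print(f"{line_level:.3f} V")            # ~1.256 V, near pro line level (+4 dBu = 1.228 V)
```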

So, thus far, we have a sound source being picked up by a microphone, which is connected to a mic preamp via a cable.

A standard standalone mic preamp will output a line level signal that can be processed by outboard gear (compressors, EQs, etc.) before ultimately being recorded.

Note that modern audio interfaces will have built-in mic preamps in their inputs, which will add gain to the signal before the ADC.

If we're going the analog route, the pre-amplified audio signal will generally pass through a mixer channel (and any processing within the channel strip), through any internal mixer routing, and finally to its assigned channel on the tape machine, where it will be recorded to tape.

If we're going the digital route, the signal will be digitized at the audio interface and passed to the computer or converted just after the channel mic input on a digital hardware mixer. From there, the signal will be assigned a digital channel where it can be recorded with or without inserted effects.

I understand this can be overwhelming. We'll be discussing routing, inserts and more in the upcoming sections for more clarity.

Here are a few diagrams to illustrate what I'm talking about here. Follow the arrows to follow the signal flow:

Diagram showing signal flow from a sound source, through a microphone, audio interface and into a computer.
Signal flow: Sound source recorded to Computer/DAW
Diagram showing signal flow from a sound source, through a microphone, microphone preamp, outboard gear (EQs, compressors, etc.), mixing console and into a tape machine.
Signal flow: Sound source recorded to tape

Recording Other Vibrations: Signal Flow

Really, what I'm getting to here is recording electric string instruments, commonly bass guitar and guitar.

In this case, an electromagnetic pickup converts the oscillating magnetic flux of a vibrating magnetic string into an electric (analog) audio signal.

From there, the signal can be amplified and buffered to a lower output impedance via internal circuitry (in the case of active pickups) or passed along as is (in the case of passive pickups).

The signal is then passed along the connected cable to either a guitar amplifier, a direct inject (DI) box or directly to an instrument (or line) input of an interface or mixer.

In the case of recording guitar amplifiers, we'd actually have a situation where we'd be converting the vibrations of magnetic strings into audio, amplifying that audio, converting it into sound waves, and then reconverting those sound waves back into audio. It's quite the process.

In the case of direct connections, including if we are using a line output from the guitar amplifier, we can effectively keep the audio signal as audio until it's recorded.

Additionally, in the case of electric instruments, we can utilize effects pedals in addition to our typical outboard gear. These pedals often accept instrument level and are best put in line before the preamp section of the guitar amplifier or in the effects loop that runs between the preamp and power amp of the guitar amplifier.

Let's consider a few more diagrams, this time of a few different ways we can record electric instruments. Notice the contrast in complexity between the two following examples:

Diagram showing signal flow from a guitar, straight into an audio interface and into a computer.
Signal flow: Electric instrument recorded to Computer/DAW
Diagram showing signal flow from a guitar into guitar amp (preamp section), into a series of guitar pedals, back into the guitar amp (power amp section), through the guitar cabinet/speakers, into a microphone, through a mic preamp, outboard gear (compressor, EQ, etc.), mixing console and into a tape machine.
Signal flow: Electric guitar cabinet recorded to tape

The idea here is that audio signal flows can be short or long, and there are many different methods for recording. I'm simply providing examples to help make my explanations easier to understand.

Recording Synthesized Audio: Signal Flow

Synthesized audio is another common source for recording. Depending on their design, synthesizers can create either analog or digital audio signals for recording.

While it's certainly possible to send a synthesizer signal through a speaker and capture the sound with a microphone, synths are most commonly recorded direct. In other words, the audio signal will remain an audio signal (though some amount of analog-to-digital conversion, or vice versa, may take place) throughout the recording signal chain.

Software synthesizers (as virtual instruments) will have a virtual signal flow within the host digital audio workstation. A soft synth can be inputted (virtually) into a channel within a DAW. Oftentimes, these soft synths are controlled with MIDI information, and so to record these virtual instruments as audio, we'll often have to route the MIDI track to an audio track and route that audio track for recording (or bounce the MIDI track in place as audio).

Here's a small screenshot from Logic Pro X showing this basic setup. Notice the “Soft Synth” MIDI/Instrument track is being routed to Bus 1, and the Audio track's input is set as Bus 1:

A screenshot of Logic Pro X showing a virtual instrument synth being recorded onto an audio track.
Signal flow: Software synth recorded within DAW

I'll discuss mixer routing in more detail in the upcoming section on Routing Options.

Hardware synthesizers generally output instrument level signals that can be inputted directly into an audio interface or hardware mixer before being sent to where they need to go for recording.

For example, we could have the following signal flow diagram for recording an analog synth to tape:

Diagram showing signal flow from a synthesizer directly into a mixing console and then into a tape machine.
Signal flow: Hardware synth recorded to tape

Recording Pre-Recorded/Sampled Audio: Signal Flow

Sampling has been around for a long time, and we've been recording pre-recorded material for even longer when we consider duplication/reproduction (more on that in the next section).

Oftentimes, we can simply add samples into our digital audio workstation arrangement window and move on. Even in analog systems, we can insert pre-recorded material by splicing or dubbing tape.

But if we're using dedicated samplers, we can send the audio output from the sampler directly to an analog mixer or audio interface. See the diagrams above and replace the directly-inserted instrument with a sampler in this case.

Recording/Pressing To Vinyl And Tape

Mastering is the final stage of the music production process, aimed at providing a single master digital file or analog tape for reproduction. The master is processed in a way to help improve the mix in terms of continuity between songs on an album, translatability across different playback systems, and overall fidelity.

Digital reproduction is easy. The same file can be reproduced an infinite number of times without any degradation, and the file can be converted to different formats if necessary.

In the analog world, a master tape will be recorded from the mixing console. This master will then be used to record duplicates, often either vinyl via a vinyl press or tape via secondary tape machines.

In this case, we will have a signal flow similar to recording samples, and the “final output” will be the media by which the final product is sold.


Audio Signal Flow: Inserts (The Effects Chain)

Now that we understand how to input different audio sources and record a variety of sound sources, let's dive into the central device for mixing, the mixer, and, most notably, inserts.

Before moving forward, I should mention that we aren't always using mixers to record. Oftentimes, the material has already been recorded, and we're simply mixing it. Other times, we're mixing live without recording anything. Consider these differences for yourself when thinking of the overall signal flow.

An insert, in mixers and digital audio workstations, is a patch point (real or virtual) after the input/preamp of a channel, made up of both an output (from the channel) and an input (back into the channel), that allows us to insert a line level device (a hardware effects unit, processor or plugin) into the signal path.

In other words, an insert is a processor that is inserted into the effects chain or signal flow path.

For example, we can have a microphone being inputted into Input 1 of our DAW and have an EQ, compressor and de-esser inserted directly on the channel. The signal flow within that channel would be as follows:

  • Pre-amplified microphone signal into the channel
  • Signal into the input of insert 1 (EQ)
  • Signal out of the output of insert 1 (EQ)
  • Signal into the input of insert 2 (compressor)
  • Signal out of the output of insert 2 (compressor)
  • Signal into the input of insert 3 (de-esser)
  • Signal out of the output of insert 3 (de-esser)
  • Signal out of the channel (to aux channels, subgroups, mix bus, etc.)

Here's a screenshot from Logic Pro X as an example:

Screenshot of Logic Pro X with an EQ (FabFilter Pro-Q 3), compressor (Waves CLA-76) and DeEsser (Waves DeEsser) inserted back-to-back on a single mono audio track.
Signal flow: Virtual inserts on microphone channel in LPX
(FabFilter Pro-Q 3 EQ, Waves CLA-76 compressor, and Waves DeEsser)

Audio Signal Flow: Serial Processing

In the previous section, we discussed inserts and how the signal flows through the inserted processors in order. This is what is known as serial processing.

Sending an audio signal through different processes and gain stages in series is the most common way to record and mix music.

In terms of serial processing in a mixer, we could have the following signal path:

  • Signal from recorded audio inputted for playback on Channel 1
  • Signal into the input of insert 1 of Channel 1
  • Signal out of the output of insert 1 of Channel 1
  • Signal into the input of insert 2 of Channel 1
  • Signal out of the output of insert 2 of Channel 1
  • Signal into the input of insert 3 of Channel 1
  • Signal out of the output of insert 3 of Channel 1
  • Signal outputted from Channel 1 on Bus 1
  • Signal inputted to Subgroup 1 on Bus 1
  • Signal into the input of insert 1 of Subgroup 1
  • Signal out of the output of insert 1 of Subgroup 1
  • Signal outputted from Subgroup 1 on Bus 2
  • Signal inputted to Mix Bus on Bus 2
  • Signal into the input of insert 1 of Mix Bus
  • Signal out of the output of insert 1 of Mix Bus
  • Signal into the input of insert 2 of Mix Bus
  • Signal out of the output of insert 2 of Mix Bus
  • Signal outputted via the Stereo Output of the mixer

In this case, the signal passes through each processor and routing path one after another or, in other words, in series.
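To make the idea of series flow concrete, here's a minimal Python sketch. The processor functions are hypothetical placeholders (not models of real plugins); the point is simply that each insert's output feeds the next insert's input:

```python
import numpy as np

def eq_insert(signal):
    """Placeholder 'EQ': just a small broadband boost for illustration."""
    return signal * 1.1

def compressor_insert(signal, threshold=0.5, ratio=4.0):
    """Heavily simplified compressor: scale down the portion above the threshold."""
    over = np.maximum(np.abs(signal) - threshold, 0.0)
    return np.sign(signal) * (np.minimum(np.abs(signal), threshold) + over / ratio)

def serial_chain(signal, inserts):
    """Pass the signal through each insert in order: out of one, into the next."""
    for insert in inserts:
        signal = insert(signal)
    return signal

channel_1 = np.array([0.2, 0.8, -0.9, 0.4])  # recorded audio on Channel 1
to_mix_bus = serial_chain(channel_1, [eq_insert, compressor_insert])
```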

The bullet points above would look like this in the case of Logic Pro X (notice the inputs and outputs and the inserts of each labelled channel). Notice, too, that the inserts section in LPX is labelled as “Audio FX”:

A screenshot of Logic Pro X showing the basics of output routing. Channel 1 is outputted into Subgroup 1, which is outputted to the Mix Bus.
Signal flow: Virtual inserts and basic routing in series in Logic Pro X

Serial processing is fairly simple, so I'll leave it at that.

By the way, I'm using screenshots from Logic Pro X because it's my DAW of choice. You can learn why I choose LPX in this article and the video below:


Audio Signal Flow: Parallel Processing

If you've studied electronics in any capacity, you'll likely know that the other basic type of signal (electric) flow is parallel.

Parallel processing happens when we split the same signal into two different signal paths (often to converge later on in the overall signal flow) for different processing.

In mixers, this is most often done with the help of auxiliary tracks.

An aux track is a track/channel with a specified bus input that can take in audio from other channels within the mixer. We can route multiple tracks to a single aux channel via pre or post-fader sends (or via their outputs, as we saw in the previous section, where we sent Channel 1's output to Subgroup 1's input via Bus 1 and Subgroup 1's output to the Mix Bus via Bus 2). We can also route a single track to multiple aux channels via pre or post-fader sends (or via their outputs).

Additionally, each aux track output can then be routed elsewhere.

Each channel will have a send control (typically a potentiometer) for each possible aux track we can send to. This affords us control over how much level we send to a given aux track, in addition to the fader control on that aux track.

Pre-fader send means that we're sending levels independently of the fader of the given track being sent. The level is taken from the output of the last insert in the channel rather than after the fader.

Post-fader, conversely, is dependent on the fader level of the track, so in addition to the send level control, the level being sent also scales with the fader of the track being sent.
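Here's a tiny Python sketch of the difference, with linear gain values made up for illustration (real consoles work in dB, but the principle is the same):

```python
def send_levels(insert_output, fader_gain, send_gain):
    """Compute pre-fader and post-fader send levels for one channel."""
    pre_fader = insert_output * send_gain                # ignores the channel fader
    post_fader = insert_output * fader_gain * send_gain  # rides up and down with the fader
    return pre_fader, post_fader

# Pull the fader to 50%: the post-fader send drops with it; the pre-fader send doesn't.
pre, post = send_levels(insert_output=1.0, fader_gain=0.5, send_gain=0.8)
print(pre, post)  # 0.8 vs 0.4
```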

This lays the groundwork for parallel processing.

Consider sending a single track to an auxiliary track. We've effectively split the signal and sent it to two different places (the original channel and the aux channel). This is parallel processing.

Now we can process the auxiliary channel differently than the original channel. The aux track (often referred to as the effects return) will typically have its own insert slots, send controls, fader, pan pot, and other functionality.

So, if we take parallel compression, for example (a common practice for parallel processing), we could have the following signal flow system:

  • Signal from recorded audio inputted for playback on Channel 1
  • Signal from Channel 1 is outputted from Channel 1 via Bus 1 to the Mix Bus
    • Signal is inputted into the Mix Bus via Bus 1
    • Signal is outputted from the Mix Bus via the Stereo Output of the mixer
  • Signal from Channel 1 is also sent from Channel 1 via Bus 2 to Return 1
    • Signal is inputted into Return 1 via Bus 2
    • Signal is inputted into the input of insert 1 of Return 1 (compressor)
    • Signal is outputted from the output of insert 1 of Return 1 (compressor)
    • Signal from Return 1 is outputted via Bus 1 to the Mix Bus
    • Signal is outputted from the Mix Bus via the Stereo Output of the mixer

This way, we effectively get a clear/original or “dry” version of the signal on Channel 1 and a “wet” version of the signal being compressed in parallel on Return 1.
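As a rough Python sketch of that signal flow (the compressor and fader values are made-up placeholders), the dry and wet paths split, get processed differently, and sum back together at the Mix Bus:

```python
import numpy as np

def compress(signal, threshold=0.3, ratio=8.0):
    """Heavily simplified compressor for illustration."""
    over = np.maximum(np.abs(signal) - threshold, 0.0)
    return np.sign(signal) * (np.minimum(np.abs(signal), threshold) + over / ratio)

dry = np.array([0.1, 0.9, -0.7, 0.3])  # Channel 1, untouched, routed to the Mix Bus
wet = compress(dry)                    # Return 1: the same signal, compressed in parallel
return_fader = 0.6                     # blend amount set by Return 1's fader

mix_bus = dry + return_fader * wet     # both paths converge (sum) at the Mix Bus
```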

Here's what this basic setup would look like in Logic Pro X. Note that the send to Bus 2 is post-fader and that the Return 1 has its own set of inserts, send options, output, pan pot and fader:

Screenshot of Logic Pro X showing an effects send and return channel. In this case, Channel 1 is sending audio to Return 1, and both channels are ultimately outputted to the Mix Bus.
Signal flow: Basic parallel processing in Logic Pro X

I talk about parallel processing in more detail in the following video:


Audio Signal Flow: Routing Options

The routing capabilities are immense in modern digital systems, including our fully-featured digital audio workstations. That means the signal flow can be extremely complex if we'd like it to be.

So, in this section, I'd like to quickly go over the various routing options we have available to further our understanding. Forgive me if I repeat myself in my explanation of the following routing options:

I have an in-depth video on routing options (embedded below) if you'd prefer that format:

Routing Option: Inputs And Outputs

The most essential routing tools we have are our inputs and outputs. While a mixer (or audio interface connected to a DAW) will have its own inputs and outputs, we can think of the internal routing of these systems as having “inputs” and “outputs” as well.

An input is a connection point (physical or virtual) that takes in a signal. In other words, the signal flows into an input.

An output is a connection point (physical or virtual) that puts out a signal. In other words, a signal flows out of an output.

With inputs and outputs as our basis, we can route audio signals to and from different devices.

Routing Option: Inserts

An insert, as was discussed earlier, is a patch point (physical or virtual) after the input of a channel that allows us to insert a processor (a hardware effect unit, processor or plugin). An insert is made up of both an output (from the channel) and input (back into the channel), and channels typically offer multiple inserts in series.

By using inserts, we can process the audio of a channel with the processors/effects we want in order to mix the signal appropriately.

Routing Option: Buses

A bus, in audio, is a signal path (physical or virtual) that can carry audio from multiple sources from one place to another. Buses, themselves, are not channels. Rather, they are signal paths that can connect channels.

For example, we can route the outputs of our channels to specific buses and have those same buses set as the inputs of specified subgroups, submixes, outputs and the mix bus.

As another example, we can route the auxiliary sends of our channels to specific buses and have those same buses set as the inputs of specified auxiliary tracks (effects returns).
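As a quick sketch of the concept in Python (the channel names, signals and bus assignments are hypothetical), a bus simply carries the sum of everything routed onto it:

```python
import numpy as np

# Hypothetical routing table: each channel's output signal and its destination bus.
channel_outputs = {
    "Kick":  (np.array([0.5, -0.2]), "Bus 1"),
    "Snare": (np.array([0.1,  0.4]), "Bus 1"),
    "Vocal": (np.array([0.3,  0.3]), "Bus 2"),
}

def sum_bus(bus_name, routing):
    """A bus carries every signal routed onto it, summed, to its destination."""
    signals = [signal for signal, bus in routing.values() if bus == bus_name]
    return np.sum(signals, axis=0)

drums_subgroup_input = sum_bus("Bus 1", channel_outputs)  # Kick + Snare arrive together
```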

Let's consider the following screenshot from Logic Pro X, where we have the outputs of 7 drum tracks (Kick1, Kick In, Snr Top, Snr Btm, HH, OH and Room) being routed to the Drums subgroup via Bus 1 (pay attention to the Input and Output section of the mixer).

Furthermore, in the Sends section, we can see that the Snares (Snr Top and Snr Btm) are being bussed to Bus 11, which sends the snare signals to a plate reverb effects return channel (Snr Plate). We also see that every track except the Room is being sent through Bus 12 to a parallel compression return channel (Drum PC).

Beyond that, we see that the Drums subgroup and the Snr Plate and Drum PC return channels are outputted to the Stereo Out bus, which, in Logic Pro X, is the default mix bus.

Routing Option: Subgroups

Subgroups (also known as submixes) are groups of typically similar tracks/channels summed together. These tracks are bussed together on the same bus, and the subgroup is the channel with that bus as the input.

In the previous example pictured above, we saw several drum tracks being routed to a dedicated “Drums” subgroup. The outputs of the tracks labelled Kick1, Kick In, Snr Top, Snr Btm, HH, OH, and Room were all being routed via Bus 1 to the subgroup labelled Drums.

We can also see that, on the stereo Drums subgroup, there are a few inserts that act to process all incoming audio (a mix of each track being fed into the subgroup via Bus 1). The inserts, in this case, are plugins I'll commonly opt for on my drum subgroups.

Routing Option: Monitor Mixes

Monitor mixes are common in recording sessions and live performances where each musician wants their own dedicated mix. If the hardware mixer, audio interface connected to the DAW, or any other system we're using has enough outputs and routing capabilities, we can likely offer multiple different mixes from the available signals within the system.

For example, we could have a simple 4-piece band with a vocalist, guitarist, bassist and drummer. Each member may want their own monitor mix, whether the monitoring is done through foldback speakers, in-ears or standard studio headphones. The monitor mixes could look like this (in relative, qualitative terms):

Vocalist monitor mix:

  • More vocal
  • More guitar
  • Less bass
  • Less drums

Guitarist monitor mix:

  • More guitar
  • More bass
  • Less vocal

Bassist monitor mix:

  • More bass
  • More drums

Drummer monitor mix:

  • More drums
  • More bass

Of course, we can get into much finer detail than that mentioned above (on a track-by-track basis), but I think it makes the point.
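In code terms, each monitor mix is just a differently weighted sum of the same source signals. Here's a hypothetical Python sketch (the send values are arbitrary, chosen to echo the qualitative lists above):

```python
# Hypothetical per-musician send levels (0.0 to 1.0) from each source channel.
monitor_sends = {
    "vocalist":  {"vocal": 1.0, "guitar": 0.8, "bass": 0.3, "drums": 0.3},
    "guitarist": {"guitar": 1.0, "bass": 0.8, "vocal": 0.4, "drums": 0.6},
    "bassist":   {"bass": 1.0, "drums": 0.9, "vocal": 0.5, "guitar": 0.5},
    "drummer":   {"drums": 1.0, "bass": 0.9, "vocal": 0.5, "guitar": 0.5},
}

def build_monitor_mix(sources, sends):
    """Weight each source by its send level and sum them into one monitor feed."""
    return sum(sends.get(name, 0.0) * signal for name, signal in sources.items())

sources = {"vocal": 0.2, "guitar": 0.3, "bass": 0.25, "drums": 0.4}  # momentary levels
vocalist_mix = build_monitor_mix(sources, monitor_sends["vocalist"])
```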

Routing Option: Auxiliary Tracks

We've already discussed auxiliary tracks in some detail. Let's dive a bit deeper here.

An auxiliary channel or “aux track” is designed as a flexible routing option/bus destination to take “sends” or outputs from the tracks of the mixer.

These auxiliary tracks are most commonly used for parallel processing and effects returns, though they can also be used for monitor mixes.

Signal flow to and from auxiliary tracks is often referred to as “sending” and “returning” within the mixer (whether physical or virtual).

An aux send is a bus path that can send audio from a channel independently of that channel's output, either pre or post-fader. This bus feeds an auxiliary track. If that auxiliary track's output is routed back into the mix (rather than to separate mixer outputs), it is considered a “return” channel.

Returning to the most recent example of drum routing pictured in the Buses section, we can see that auxiliary tracks are set up for a snare plate reverb effects return (via Bus 11) and parallel drum compression (via Bus 12).

Routing Option: VCAs

In mixing, a VCA (voltage-controlled amplifier) or VCA group is an independent fader that we can use to control the signal levels of multiple channels simultaneously without altering the channels' faders and without having to route the channels to their own subgroup.

A VCA is simply a single fader that can control the levels of multiple tracks at once. There are no inserts, pan pots, send options, or any other routing options from a VCA.
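A sketch of the idea in Python (with made-up linear gain values): the VCA multiplies each member channel's level without ever touching the channel faders themselves:

```python
# Channel faders stay where the engineer set them (linear gains, hypothetical values).
channel_faders = {"Kick": 0.8, "Snare": 0.7, "OH": 0.5}
vca_fader = 0.6  # one VCA fader assigned to all three drum channels

# A VCA doesn't route or sum any audio; it only scales each member channel's gain.
effective_gains = {name: fader * vca_fader for name, fader in channel_faders.items()}
print(effective_gains)  # all channels come down together; the faders themselves don't move
```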


The Playback Chain

Now that we've considered the recording phase and the mixing phase, we should consider the audio signal flow in our playback chains.

Audio playback is the process of converting audio signals into sound waves so that we can experience them through our sense of hearing (and, to a lesser extent, our sense of touch). The playback chain may have a variety of different sources, though the “final output” is always a transducer, generally a loudspeaker or a headphone.

It's important to note that loudspeaker and headphone transducers convert analog audio to sound waves, so it's vital that we convert digital audio to analog audio before our speakers in the playback chain.

As for the source, the playback material can either be recorded to a digital or analog medium or be performed live.

Before we get to a few playback chain examples, let's consider a few factors that will determine the potential signal flow.

The first to consider is that most professional audio devices output line level signals. Speakers (including headphones) require stronger signals than line level. Therefore, we generally need an amplification stage.

If the signal level is even lower than line level, like the typical phono outputs of vinyl record players (we'll get to those shortly), a preamplification stage will also be necessary. This preamp stage may require a separate preamp unit, or it may be built into the record player/turntable itself.

In the case of headphones, many headphone outputs will have built-in digital-to-analog converters (in digital systems) and headphone amplifiers to effectively drive the connected headphones with appropriate signal levels and impedance. In other cases, a dedicated headphone amplifier may be necessary.

This brings up a second point, which is the difference between passive and active speakers. Basically, some speakers have built-in amplifiers to drive their drivers appropriately (these are referred to as active speakers), while other speakers depend on external amplifiers with appropriate specifications to do so.

Another aspect worth consideration is the transmission medium/media. It's common for playback audio to be carried via cable, though we also have to think about wireless options, including radio frequencies and the Bluetooth protocol.

I believe those are the main factors worth mentioning here. Let's now move on to a few playback signal flow examples.

For simplicity's sake, let's take the signal flow from the output of whatever system the audio is being played back from. A few common devices that audio could be outputted from include, but are definitely not limited to:

  • Mixers/mixing consoles (hardware or DAW/audio interface setups)
  • Smartphones and computers
  • Record players/turntables
  • Radios

Let's consider these playback devices and the potential signal flow from their outputs.

Playback Device: Mixers/Mixing Consoles

We've already covered a lot of what goes on within a mixer (physical or virtual). Depending on the setup, we also know that we can have multiple outputs (either from a hardware mixer or through an audio interface connected to a DAW).

So, depending on the capabilities of our gear and the requirements of the situation, we can make our playback/monitoring signal flow as simple or as complex as it needs to be.

In general, we'll at least have a main stereo output. This can be sent to our headphones, studio monitors, front-of-house PA system (perhaps summed to mono) or any other transducer for playback.

In addition to the main output, we may also have dedicated submixes, and we may even want to output our subgroups and aux channels to their own speakers (though this is a bit advanced).

To draw out a few examples, let's consider a live sound situation with a hardware mixer and a studio mix session situation with a digital audio workstation.

A complex diagram showing the mixing console as the centrepiece. It is outputting 4 wireless in-ear monitor mixes, 1 headphone output, 1 subwoofer output (which connects to subwoofer amps and subwoofer), 1 stereo main output (which connects to power amps and front-of-house speaker arrays), and 4 monitor mixes for powered foldback monitors.
Signal flow: Live sound (mixer outputs only)

In the live sound example pictured above, we can see the following:

  • The main outputs are being sent to power amplifiers that are driving two line-array speaker systems.
  • In this example, the theoretical mixer offers a dedicated sub output (in other systems, we may only have the main outputs and, therefore, be required to utilize a crossover network on the main outputs to split the frequencies between the line arrays and the subs).
  • We also have a pair of headphones connected to the mixer's headphone output (with a built-in headphone amp) for easy monitoring at the board.
  • Additionally, we have four different submixes being sent to four different foldback monitors (with built-in amps).
  • We also have four different submixes being sent wirelessly (at different radio frequency carriers) to four different pairs of in-ear monitors.

This is, of course, overly simplified (we skipped over all the inputs and internal routing within the mixer), but it shows the basics of how typical playback/monitoring signal flow may work in a live sound environment.

A complex diagram showing an audio interface as the centrepiece, connected to a computer. It outputs 1 headphone out, and two stereo monitor outputs (A and B).
Signal flow: Home studio (mixer outputs only)

In the home studio example pictured above, we can see the following:

  • The computer (and the DAW software) is connected to an audio interface with multiple outputs.
  • We're sending signal from the interface to a pair of headphones. The headphone output has its own DAC and built-in headphone amp.
  • We are also sending signal to two different pairs of powered studio monitors so that we can toggle between the two for different monitoring options (assuming the interface allows for switching between the two outputs).

In this case, we're sending the same “mix bus” output to the two pairs of monitors and the headphones, though we could opt to send different submixes if we wanted to (assuming the interface allowed for such capabilities).

Playback Device: Smartphones/Computers

Smartphones and modern computers typically have built-in speakers. In this case, the digital audio (from streaming or stored files) is effectively converted via the built-in DAC, amplified by a small power amplifier, and reproduced by the internal speakers.

Beyond that, there are many other ways to play back audio from our electronic devices that involve external speakers (which typically offer greater fidelity and volume than the built-in options). While going over every possible scenario would be exhaustive, I'd like to cover two common setups: the consumer-grade computer speaker setup and the Bluetooth speaker setup.

A simple diagram showing signal flow from a computer to computer speakers.
Signal flow: Computer speakers

In the simple computer speaker example pictured above, we can see the following:

  • The digital audio from the computer is converted to analog audio via the built-in output DAC.
  • The analog audio is sent to the central speaker (often the subwoofer) of the powered computer speaker set — “powered speakers” feature a multi-channel amp that feeds multiple passive speakers.
  • The signal is amplified and split by the internal passive crossover, and the appropriate frequencies are reproduced by the appropriate speaker drivers.

This one's pretty simple but worth mentioning here.
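As an aside, the frequency-splitting job of the crossover mentioned in the list above can be sketched in code. This is a crude first-order digital approximation (a one-pole lowpass plus its complement), not a model of a real passive LC crossover network:

```python
import numpy as np

def simple_crossover(signal, alpha=0.1):
    """Split a signal into lows (for the woofer) and highs (for the tweeter)."""
    lows = np.empty_like(signal)
    state = 0.0
    for i, x in enumerate(signal):
        state += alpha * (x - state)  # one-pole lowpass (running smoothing)
        lows[i] = state
    highs = signal - lows             # complementary highpass: what's left over
    return lows, highs

program = np.random.randn(1000)       # stand-in for program audio
to_woofer, to_tweeter = simple_crossover(program)
```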

A simple diagram showing signal flow from a smartphone to a Bluetooth speaker.
Signal flow: Bluetooth speakers

In the seemingly simple smartphone-to-Bluetooth-speaker example pictured above, we can see the following:

  • The smartphone and Bluetooth speaker are connected via Bluetooth.
  • The audio played back by the smartphone is encoded according to the Bluetooth protocol and transmitted on a short-range radio frequency carrier in the frequency band between 2.400 and 2.485 GHz.
  • The audio is received and decoded by the Bluetooth speaker and converted to analog audio via a built-in DAC.
  • The audio is then amplified to drive the speaker drivers appropriately.

Playback Device: Record Players/Turntables

Record players and turntables are designed to play back the audio stored on vinyl records. In this case, the audio information is pressed into the grooves of the record, and the cartridge (made up of the stylus, magnets, coils, cantilever, and body) effectively reads this information and converts it into electrical energy (analog audio signals).

This audio is at “phono level” and requires a dedicated phono preamp to bring the audio signal up to line level. From there, the audio can be amplified further by a power amp and reproduced as sound via speakers.

Note that some record players have all these individual components combined into a single unit. However, most high-end equipment is highly specialized, so putting together the full signal chain would typically require multiple units (as mentioned above).

To break down the full basic signal flow, let's consider the following diagram:

A diagram showing signal flow from a record player, through a phono preamp, power amp, and to a pair of passive speakers.
Signal flow: Record player playback

Again, the record player itself acts to spin the vinyl record and effectively convert the information within the vinyl record's grooves into an audio signal. This audio signal is then pre-amplified by a phono preamp before being amplified for playback through speakers (or headphones).

Note that the power amp stage could be built into the speaker if the speaker is active or powered.

Playback Device: Radios

Radios are a bit different, as the sound source of the playback signal will come from a seemingly disconnected device.

Basically, a radio receiver (what we generally think of as a radio) will be tuned to resonate at a specific radio frequency and will be designed to demodulate the radio signal appropriately to detect any audio signal being carried by that radio frequency.

We won't get into the nitty gritty of radio transmission here, but I will state that it's based on a carrier signal (the radio frequency) and a modulator signal (the audio, in our case). There are a few different styles of modulation, such as amplitude modulation (AM), frequency modulation (FM) and even digital options.
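To illustrate the carrier/modulator relationship, here's a rough Python sketch of amplitude modulation with a crude envelope detector standing in for demodulation. The frequencies are chosen purely for the example and don't reflect real broadcast allocations:

```python
import numpy as np

sample_rate = 1_000_000                     # 1 MHz sampling, just for this illustration
t = np.arange(0, 0.002, 1 / sample_rate)    # 2 ms of time

audio = 0.5 * np.sin(2 * np.pi * 1000 * t)  # modulator: a 1 kHz tone
carrier_hz = 100_000                        # carrier: a toy 100 kHz radio frequency

# Amplitude modulation: the audio signal varies the carrier's amplitude.
am_signal = (1.0 + audio) * np.cos(2 * np.pi * carrier_hz * t)

# Crude envelope detection (demodulation): rectify, then smooth away the carrier.
rectified = np.abs(am_signal)
recovered = np.convolve(rectified, np.ones(50) / 50, mode="same")
# "recovered" now tracks a scaled, offset copy of the original audio tone.
```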

So the radio transmitter (often a “radio station” in the case of music) will emit an amplified radio frequency via an antenna. This modulated radio frequency travels through space, and radio receivers with antennae tuned to that frequency can receive the signal, and demodulation can take place.

Note that the radio must be able to be tuned to the appropriate radio frequency and demodulate the radio signal. However, AM and FM, for example, are transmitted in different frequency bands, so there isn't much of an issue here.

Once the receiver demodulates the audio signal from the radio signal, it can send the audio to a power amp, which can then drive speakers (or a headphone amp to drive headphones) for playback. As we may expect, in some cases, all of these devices are combined into a single unit.

So, to break down the full basic signal flow of a radio, let's consider the following diagram:

A diagram showing signal flow from a radio transmitter to a radio receiver and through a power amp to a pair of passive stereo speakers.
Signal flow: Radio transmitter/receiver playback

To reiterate, a radio frequency is modulated by an audio signal, amplified and transmitted from the station transmitter. The transmitted radio signal is received by a properly tuned receiver and the audio signal (modulator) is taken from the carrier signal in the demodulation process.

From there, the audio signal is amplified (if the modulator is digital, then a DAC will be required before the amplification stage) and sent to drive the speakers of the system. Note that headphones would require a proper headphone amp to be driven appropriately.


Feedback Issues

Positive feedback loops happen when the output of a system (or part of a system) is fed back into its input, and the signal level rapidly increases to the point where it overloads the system (or part of the system).

Feedback generally sounds horrendous (though it can be used for effect in certain musical contexts) and has the potential to damage components and devices within our audio system. Therefore, it's typically best avoided.

So when we're patching our audio system (physically or virtually), it's critical that we pay attention not to cause feedback loops. Avoid looping any signal path back on itself within your routing if possible, and do your best to keep the different signal paths within a mixer (as an example) flowing toward the output.

Some systems will have some amount of protection against feedback loops, but it's always best practice to avoid them altogether.
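The runaway behaviour comes down to loop gain. Here's a tiny Python sketch with made-up numbers: every trip around the loop multiplies the signal by the loop gain, so anything at or above unity grows until something overloads:

```python
def feedback_level(initial_level, loop_gain, passes):
    """Level after the signal has travelled around the loop a number of times."""
    level = initial_level
    for _ in range(passes):
        level *= loop_gain
    return level

print(feedback_level(0.001, loop_gain=0.8, passes=20))  # decays toward silence: stable
print(feedback_level(0.001, loop_gain=1.2, passes=20))  # ~0.038 and climbing: runaway
```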

For example, in the following screenshots from Logic Pro X, you can see that I created a simple feedback loop within my mixer. The FB Track, which has audio on it, is being outputted to the Stereo Out, but it's also being sent to the FB Aux via Bus 2. The FB Aux, which is being fed by the FB Track, is being output back into the input of the FB Track via Bus 1.

However, LPX has a safety measure where the feedback loop won't be completed unless I engage Input Monitoring (the orange “I” just above the Mute “M” and Solo “S” controls). You can see the safety in place in the left image and the feedback loop in full force in the right image:

A dual-screenshot from Logic Pro X, where the left side has safety against internal feedback issues and the right side does not.
Signal flow: Basic feedback loop in Logic Pro X (safety in left image, no safety in right image)

We also need to be aware of potential feedback loops between our transducers within a system. Any time we have a transducer that converts sound waves into audio signals (microphones, pickups, etc.) and a transducer that converts audio signals into sound waves (headphones, speakers, etc.), we need to be careful.

Take the typical live sound arrangement, for example. We'll generally have live microphones (for vocals and instrument reinforcement) and live speakers (for front of house and for monitoring on stage). If too much sound energy from the speakers is picked up by a microphone, we can get a positive feedback loop, which causes overloading in the system and the nasty squealing of mic feedback.

The same is true of guitar amplifiers, especially when there's a lot of gain applied to the circuit (for distortion, for example). The sound waves coming from the cabinet are strong enough to vibrate the strings of the guitar (thanks, in large part, to resonances). This resonant oscillation, if left alone, will send more signal to the amp and cabinet, which will cause the strings to vibrate more, which will lead to feedback.

Recap On Feedback

Be very careful, when working with potential feedback loops, to keep them under control within your audio systems and avoid overloading any signal paths.

Call To Action!

Consider the signal flow of your home studio and draw it out on a sheet of paper or in a computer document. Include your hardware and how signals are routed into and out of your computer's DAW.

Next, consider how signals are routed inside your DAW. Again, it can be highly beneficial to draw a schematic that represents how the signals flow within the DAW and through the audio interface. Pay special attention to the send and return channels within your DAW.

Leave A Comment

Have any thoughts, questions or concerns? I invite you to add them to the comment section at the bottom of the page! I'd love to hear your insights and inquiries and will do my best to add to the conversation. Thanks!


What are the different types of audio signal levels? The different types of signal levels are based on analog voltage and impedance characteristics and include mic, line (pro and consumer), instrument and speaker level.

  • Mic level is generally between 1 and 100 mVRMS (-60 to -20 dBV)
  • Line level (professional) is nominally at 1.228 VRMS (+4 dBu)
  • Line level (consumer) is nominally at 316 mVRMS (-10 dBV)
  • Instrument level varies a lot but can be defined as nominally at 77.5 mVRMS (-20 dBu)
  • Speaker level also varies tremendously, often nominally between 10 VRMS (20 dBV) and 100 VRMS (40 dBV)
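These reference levels convert to voltages with simple formulas: dBV is referenced to 1 VRMS, and dBu is referenced to 0.7746 VRMS (1 mW into 600 ohms). A quick Python check:

```python
def dbv_to_volts(dbv):
    """dBV is referenced to 1 VRMS."""
    return 10 ** (dbv / 20)

def dbu_to_volts(dbu):
    """dBu is referenced to 0.7746 VRMS (1 mW into a 600 ohm load)."""
    return 0.7746 * 10 ** (dbu / 20)

print(dbv_to_volts(-60), dbv_to_volts(-20))  # mic level: ~0.001 V to ~0.1 V
print(dbu_to_volts(4))                       # pro line level: ~1.228 V
print(dbv_to_volts(-10))                     # consumer line level: ~0.316 V
print(dbu_to_volts(-20))                     # instrument level: ~0.0775 V
```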

What is the difference between sound and audio? The key difference between sound and audio is their form of energy. Sound is mechanical wave energy (longitudinal sound waves) that propagate through a medium, causing variations in pressure within the medium. Audio is made of electrical energy (analog or digital signals) that represents sound electrically.
