5 Airwindows Plugins for Sound Design

As I perpetually fight the temptation to buy all kinds of software and hardware I really don’t need, I’ve found that one way to sate that particular urge is exploring software that generous people have been kind enough to release on the internet for free.

A while back, I encountered a particularly prolific example of this: Chris Johnson of Airwindows, who has been releasing his catalog of interesting and often very quirky plugins for free with Patreon support. I believe I was first introduced to Airwindows by this post on The REAPER Blog.

Airwindows plugins are a bit unusual in that they don’t really have GUI designs as such: just sliders and some numbers. No meters or visualizations. No knobs. Indeed, some have no controls at all. As a regular user of the various ReaPlugs effects plugins, I’ve never been terribly attached to nice graphic design in audio plugins anyway. The sound is what matters most, and these plugins sound very good.

As I think a lot of Airwindows users (and Chris himself) are mostly focused on applications for music mixing and mastering (like the post above), I thought I would cover a few plugins that I’ve found very useful specifically for sound design.


ADClip

ADClip is a clipper and loudness maximizer. Pushing it hard works particularly well on impacts and really any atonal sound with sharp transients, but with more conservative settings it can also be very transparent on a more complex mix. If you want something to be loud and punchy, this will do that.

It also features three different listening modes that allow you to dial in exactly the balance of loudness and distortion you want, and to get a good sense of what the Boost, Soften, and Enhance controls are really doing.

Normal is the normal clipped output you would expect.

The Attenuated mode matches the output gain to the input gain, allowing you to listen to the results without the increased loudness coloring your perception of them.

Clip Only mode allows you to hear, well, only the clips. This allows you to get a very clear sense of the artifacts the clipping is introducing to your sound.


FathomFive

FathomFive is a powerful bass enhancer which generates a “controllable tape-like fullness and bass depth.” It features quite a few controls that allow you to really dial in an organic low-end presence on a specific sound or a full mix.

I’ve found it very useful for adding a bit more low end to sounds that might lack it or making up for other effects that might eat up the bass frequencies.


Point

Point is a unique little transient designer and, really, you can never have too many of those. It has only three controls: Input Trim, Point, and Reaction.

Unlike a lot of other transient designers, it doesn’t quite operate with straightforward Attack and Sustain controls. Instead, you have Point, which is roughly equivalent to attack, ranging from -1.0 to 1.0 (don’t set it to 1.0 unless you are prepared for it to get REALLY loud).

Where it gets interesting is with Reaction, which as far as I understand it determines where in your transients the Point effect engages. You can get extremely precise with this and find some very small windows with unexpected, but very interesting results.

Voice of the Starship

Voice of the Starship is a simple plugin with two controls for generating and filtering a wide range of algorithmic noise, which I’ve found to have very pleasing sonic characteristics. It could be used on its own for subtle ambience (such as starship engine rumble) or as a sound source at the beginning of a more complex effects chain.


StarChild

StarChild is a very unusual delay/ambience effect with a distinctly and intentionally unnatural character. It’s unique and fun to use. Apparently inspired by the Ursa Major Space Station, this plugin doesn’t really try to create a sense of organic space, instead producing a weirdly tonal, lo-fi, otherworldly kind of sound.

Turn the Sustain up for something resembling a reverb effect; keep it low for a fast stereo delay.

The Grain control is interesting, and not really explained, but values below 0.5 add a bit-crusher-like grit and choppiness to the sound, while values closer to 1.0 are much smoother.


That’s not all

This is really only a very small selection of the interesting Airwindows plugins I’ve used. Chris has released dozens of useful and often very unique plugins, including a wide selection of tape, saturation, and analogue emulations.

There’s a lot to like over at Airwindows.

Sound Effect Variation in Unity

One of the most fundamental elements of satisfying interactive audio implementation is variation. When you play the same sound effect repeatedly, it tends to draw attention to itself through its artificial precision. Natural sounds have all kinds of subtle inconsistencies that we take for granted.

To address this problem, games use multiple pre-recorded variants of most of their sound effects and some additional modifications on playback. In this post, I’m going to walk through implementing basic variations within native Unity audio, assuming you already have a set of sound effects to use.

Setting Up

First, we’ll start with simple round robin, or playing each variation in a predefined, looped order. Next, we’ll randomize the playing order and ensure that the same audio clip doesn’t play back-to-back. Finally, we’ll look at using pitch and volume randomization to add some more subtle differences every time these sounds are played.

Before we do any specific implementation, we need a game object with an Audio Source component attached to it. Create a new one or use an existing object in your game that needs to make some noise.

Round Robin

To play our variations, we need to store them in a container Unity can work with: an array of AudioClips.

public AudioClip[] clipArray;

We also need to define an Audio Source to play our sound effects.

public AudioSource effectSource;

And an integer variable to keep track of our current array index.

private int clipIndex;

In the inspector, set Effect Source to your AudioSource component, set the Clip Array size based on the number of variations you’re using, and add your AudioClips to the array.

Now, for the playback itself. We’ll create a new method, PlayRoundRobin(). Every time this method is called, it will play the clip at the current array index, then increment the clipIndex value. If the index is beyond the bounds of the array, it is reset to 0 and the round robin continues.

void PlayRoundRobin() {
    if (clipIndex >= clipArray.Length)
        clipIndex = 0;

    effectSource.PlayOneShot(clipArray[clipIndex]);
    clipIndex++;
}

Each time this method is called, it will play the next variation in whatever order you’ve established in the array, starting over when it reaches the end of the array.

Clip Randomization

Next, we’ll write another simple method in the same script to play the clips from the array in a (pseudo) random order.

void PlayRandom() {
    clipIndex = Random.Range(0, clipArray.Length);
    effectSource.PlayOneShot(clipArray[clipIndex]);
}

This will generate a random number from 0 up to (but not including) the array length — that is, a valid array index — then play the audio clip stored at that index.

Avoiding Repetition

While this barebones randomization is serviceable, it still has a clear limitation: there is nothing preventing it from generating the same random index multiple times in a row, exactly the kind of thing we’re trying to avoid with randomization.

Fortunately, the solution to this problem is pretty simple.

We’ll compare each new random number to the last index used. If they are equal, we’ll generate a new one. To do this, we’ll create a new method, RepeatCheck(), that takes the last used index and the random range (for our purposes, this is the array length).

int RepeatCheck(int previousIndex, int range) {
    int index = Random.Range(0, range);

    while (index == previousIndex)
        index = Random.Range(0, range);

    return index;
}

This will continue to generate random numbers within the given range until it generates a value that is not equal to our previous index. It then returns that value. So, now we can plug this into our PlayRandom() method.

void PlayRandom2() {
    clipIndex = RepeatCheck(clipIndex, clipArray.Length);
    effectSource.PlayOneShot(clipArray[clipIndex]);
}

When this method is called, it will never play the same random variation back-to-back.

Pitch and Volume Randomization

Now, the last thing we’re going to do is randomize the pitch and volume of our Audio Source each time we play our sound effects. We’ll need a few more variables so we can fine-tune this effect easily in Unity.

public float pitchMin, pitchMax, volumeMin, volumeMax;

Set these values in the Inspector according to preference. This may vary depending on the specific sound effect, but in general very small ranges will sound more natural. Larger ranges sound jarring and unnatural (though that could be a useful effect in some cases). Set these values to 1 for no effect at all.

Add these two new lines to the beginning of the PlayRandom() and PlayRoundRobin() methods:

effectSource.pitch = Random.Range(pitchMin, pitchMax);
effectSource.volume = Random.Range(volumeMin, volumeMax);

Each variation will now play back at a random pitch and volume according to the ranges you set, which you can easily adjust in the Unity editor.
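For reference, here is one way the whole variation script might fit together (a minimal sketch; the class name SoundEffectVariation is arbitrary, and PlayOneShot() is assumed as the playback call):

```csharp
using UnityEngine;

public class SoundEffectVariation : MonoBehaviour
{
    public AudioClip[] clipArray;     // sound effect variations
    public AudioSource effectSource;  // Audio Source used for playback
    public float pitchMin = 1f, pitchMax = 1f, volumeMin = 1f, volumeMax = 1f;

    private int clipIndex;

    public void PlayRoundRobin()
    {
        // Randomize pitch and volume within the configured ranges
        effectSource.pitch = Random.Range(pitchMin, pitchMax);
        effectSource.volume = Random.Range(volumeMin, volumeMax);

        // Wrap around at the end of the array, then play and advance
        if (clipIndex >= clipArray.Length)
            clipIndex = 0;
        effectSource.PlayOneShot(clipArray[clipIndex]);
        clipIndex++;
    }

    public void PlayRandom2()
    {
        effectSource.pitch = Random.Range(pitchMin, pitchMax);
        effectSource.volume = Random.Range(volumeMin, volumeMax);

        // Pick a random index that differs from the previous one
        clipIndex = RepeatCheck(clipIndex, clipArray.Length);
        effectSource.PlayOneShot(clipArray[clipIndex]);
    }

    int RepeatCheck(int previousIndex, int range)
    {
        int index = Random.Range(0, range);
        while (index == previousIndex)
            index = Random.Range(0, range);
        return index;
    }
}
```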

Testing it Out

If you want to make sure this is working without assets or systems to go with it, you can just drop a game object with an Audio Source and the script attached into an empty scene. Then add an Update() method like this to the script.

void Update() {
    if (Input.GetButtonUp("Fire1")) PlayRoundRobin();
    if (Input.GetButtonUp("Fire2")) PlayRandom2();
}

Now you can trigger each method with the Fire1 and Fire2 bindings respectively, which are Mouse 0/Left Ctrl and Mouse 1/Left Alt by default.

Looping Music in Unity

Unity’s default audio capabilities have never been a particular strength. Some much-needed functionality has been added over its lifespan, like audio mixers, sends, and insert effects, but it’s still extremely limited compared to the feature sets found in widely used audio middleware and even other game engines.

In this post, I’m going to talk about three potential approaches, of gradually increasing complexity, for looping music with native Unity audio. Hopefully, there will be something useful here for a variety of experience levels.

First, we’ll cover Unity’s default loop functionality. Second, we’ll use a bit of audio editing and AudioSource.PlayScheduled() to create a seamless loop. Lastly, we’ll calculate a loop point given the beats per minute (BPM), beats per measure (time signature), and the total number of measures in the track, and create a simple custom looping system, again using PlayScheduled().

Before starting, it should be noted that the mp3 format is ill-suited for this application for technical reasons beyond the scope of this post and should be avoided. Ogg and WAV are good options that handle seamless looping well in Unity.

1. Default Loop

This is the simplest option, requiring no scripting at all, but it’s also the most limited. For music with no reverb or tail to speak of, or music that doesn’t need to restart exactly on a measure, this can be serviceable. A quick fade at the end of the last bar can work for less ideal music tracks, but it will result in an obvious and unnatural loop point.

Create a new object in your scene, or use one that already exists. Whatever is appropriate.

Add an AudioSource component to it and set the AudioClip to your music file, either from the menu to the right of the field or by dragging and dropping it from the project browser.

Make sure that “Play On Awake” and “Loop” are enabled. Your music will play when you start the scene and loop at the end of the audio file.

2. Manual Tail/Release Overlap

This method requires some work outside of Unity with an audio editor or Digital Audio Workstation (DAW). Here we’ll still use Unity’s default looping functionality, after playing an introductory variation of the looped track.

Before doing anything in Unity, you need two separate versions of the music track in question: one with the tail cut at the exact end time of the last bar/measure, and another with that tail moved to the beginning of the track, so that it overlaps with the start.

Ensure that the start and end of these tracks are at a zero crossing, to avoid any discontinuities (audible pops) during playback. This can be accomplished with extremely short fades at the start and end points. This second track will transition seamlessly from the introductory track and loop seamlessly as well.

Add an AudioSource to an object as in the previous section and set the second edit of the track (with the tail overlapping the start) as the default AudioClip. “Play On Awake” should NOT be enabled.

This is where a bit of scripting is required. Create a C# script and add it to the same game object as your AudioSource.


Open it in your IDE of choice. This will only require a few lines of code. First, declare two public variables: an AudioSource and an AudioClip.

public AudioSource musicSource;
public AudioClip musicStart;

Save this and switch back to the Unity editor. There will be two new fields for the C# Script component in the Inspector: “Music Source” and “Music Start.”

Click and drag the AudioSource you added to your game object earlier into the “Music Source” field on your script. Do the same with “Music Start,” using the intro edit of the clip (without a tail at the start or end).

This is where the code that makes noise comes in.

void Start() {
    // Play the intro clip immediately
    musicSource.PlayOneShot(musicStart);

    // Schedule the looping clip to start exactly when the intro ends
    musicSource.PlayScheduled(AudioSettings.dspTime + musicStart.length);
}

When the scene starts, the first clip will play once and the second clip will be scheduled to play as soon as the first has ended. The start time is determined simply by adding the length in seconds of the first clip to dspTime (the current time of the audio system in seconds, based on the actual number of samples the audio system has processed).

From that point, the track will loop normally with Unity’s default loop functionality.

3. Calculating the Loop Point and Looping Manually

The last approach requires more scripting work, and some extra information about the music itself, but does not require any specific editing of the audio file. We’ll be creating a simple custom looping solution using two AudioSources and AudioSource.PlayScheduled() that calculates the end of the last bar or measure based on some data entered in the Inspector and uses that to determine the loop interval.

Add two AudioSources to your game object and set the default AudioClip for both to the music track you’re going to loop. This will allow each repeat to overlap with the tail of the previous one as it plays out.

Add a new script to your game object and open it in your IDE. First, we need some public variables that we can set in the Inspector: an array of AudioSources and three integer values which correspond to simple properties of the music composition itself.

public AudioSource[] musicSources;
public int musicBPM, timeSignature, barsLength;

In the inspector, set the Size of the Music Sources array to 2 and drag the two AudioSources you’ve created to the Element 0 and Element 1 fields.

Then enter a few music properties. Music BPM is the tempo of the track in beats per minute (BPM), Time Signature is the number of beats per bar/measure, and Bars Length is the number of bars/measures in the track. You need to know these values for the calculation to work. For example, an 8-bar track in 4/4 at 120 BPM loops every (8 × 4) / 120 ≈ 0.267 minutes, or 16 seconds.

Next, we need some private variables for some values we will be calculating in the script itself.

private float loopPointMinutes, loopPointSeconds;
private double time;
private int nextSource;

The loopPoint values will be used to store the loop interval once it has been calculated. time will hold the value of dspTime at the start of the scene and be incremented by loopPointSeconds for each PlayScheduled() call. And nextSource will be used to keep track of which AudioSource needs to be scheduled next.

Now, in the Start() method we need the script to calculate the loop interval, play the first AudioSource, and initialize the time and nextSource values.

void Start() {
    // Cast to float to avoid integer division truncating the result
    loopPointMinutes = (float)(barsLength * timeSignature) / musicBPM;
    loopPointSeconds = loopPointMinutes * 60;

    time = AudioSettings.dspTime;
    musicSources[0].Play();

    nextSource = 1;
}

The custom loop functionality itself is defined in the Update() method, which is called every frame.

void Update() {
    if (!musicSources[nextSource].isPlaying) {
        time = time + loopPointSeconds;

        musicSources[nextSource].PlayScheduled(time);

        nextSource = 1 - nextSource; // Switch to the other AudioSource
    }
}

First, we check if the nextSource is still playing. Then, if it is NOT:

  1. Increment the time by the loop interval (loopPointSeconds).
  2. Schedule the nextSource AudioSource to play at that time.
  3. Toggle the value of nextSource (from 1 to 0 or from 0 to 1), so the script will check and schedule the other audio source.

And that’s it. The music track should begin playing at the start of the scene and continue to repeat at the loop point until the object is destroyed.
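Put together, the whole looping script might look something like this (a minimal sketch; the class name MusicLooper is arbitrary):

```csharp
using UnityEngine;

public class MusicLooper : MonoBehaviour
{
    public AudioSource[] musicSources;              // two sources sharing the same clip
    public int musicBPM, timeSignature, barsLength; // properties of the music track

    private float loopPointMinutes, loopPointSeconds;
    private double time;
    private int nextSource;

    void Start()
    {
        // Cast to float so integer division doesn't truncate the result
        loopPointMinutes = (float)(barsLength * timeSignature) / musicBPM;
        loopPointSeconds = loopPointMinutes * 60f;

        // Mark the start time and begin playback on the first source
        time = AudioSettings.dspTime;
        musicSources[0].Play();
        nextSource = 1;
    }

    void Update()
    {
        if (!musicSources[nextSource].isPlaying)
        {
            // Schedule the next repeat one loop interval ahead
            time = time + loopPointSeconds;
            musicSources[nextSource].PlayScheduled(time);
            nextSource = 1 - nextSource; // switch to the other AudioSource
        }
    }
}
```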

Rhetorical Sound Design

Hearing is weird. It’s abstract in a way that sight isn’t. A picture can clearly communicate a sense of size and space. A series of pictures can communicate speed and distance. Sound is only movement. Almost any movement. It’s the vibrations people and things make when they pass through the air and come into contact with each other.

Hearing is also different from sight in part because we have less control over what we hear. We don’t open and close our ears, though we can try to block them. We don’t really focus our ears in the way that we do our eyes. We’re always hearing (as long as we are able to), and so we often become so used to sound that we don’t actively notice it unless we make the effort. We learn to tune a lot of sounds out, but instinctually notice when they are absent.

I think this is why sound design often goes unnoticed unless it is so incongruent that it breaks the audience’s immersion. Effective sound design sells the argument that what they are seeing with their eyes is real. It reinforces all of the concrete information about size, space, and action that they see on a screen. It is simply expected to be there and to sound “right.”

For that reason, it can be useful to have certain heuristics to apply to this problem; the problem of making things sound “right.” I’m loosely adapting Aristotle’s main rhetorical appeals—logos, ethos, and pathos—as a framework for thinking about effective sound design, with a particular focus on game audio. There is overlap between these appeals, because they are all fundamentally related (emotion and logic are never truly separate) and because each sound effect is essentially its own argument that should ideally succeed on multiple levels.

Pathos: The Emotional Appeal

An important function of any synchronized or reactive audio is to reinforce the emotional experience of the scene. This is where the role and function of sound design overlaps most with that of the musical score. Does an impact feel big? Does the gun the player is firing feel powerful? Does the giant monster they’re fighting feel enormous and deadly? Does that abandoned mansion feel haunted? This is the visceral, “game feel” component of game sound effects.

This has important implications for game design. Emotionally satisfying audio cues influence player behavior in a variety of ways.

  • A feeling of constant or impending danger can make players play slower and more cautiously.
  • A powerful-sounding weapon can inspire confidence and encourage players to play more aggressively.
  • A weak-sounding weapon might be used less often, regardless of its practical functionality.

Zander Hulme told a relevant story along these lines at a panel at PAX Aus 2016 about multiplayer weapon sound effects in a Wolfenstein game.

The players with the weaker-sounding weapon believed they were at a disadvantage and performed worse, despite both teams having functionally identical weapons. Replacing the weaker sound effects with something more satisfying fixed the perceived weapon imbalance. Game audio doesn’t simply play a passive support role in game design.

Logos: The Logical Appeal

Another important function in game audio in particular is the ability to communicate factual information to the audience. What exactly is making the sound? What direction is the sound coming from? From how far away? In what kind of space? Is the audience in that space or a different space? Can your audience discern all of these things or are they intended to? Lack of clarity and focus should be an intentional choice, not the result of carelessness or oversight.

Much like the emotional appeal, this too is a practical game design consideration. Audio information provided to the player can directly influence their decision-making and behavior in the game space, in a wide variety of contexts.

  • The recognizable sound of an enemy charging a powerful attack helps the player discern when to evade.
  • The distinct sound of a sniper rifle being fired makes them reconsider peeking around a corner.
  • The sudden loud crack of their footsteps on a tile floor tells them that sneaking will be difficult and may require them to slow down.
  • The clarity, volume, and propagation of sounds in competitive multiplayer games can significantly impact what kind of information players have about strategies of their opponents, even without line of sight.

In Counter-Strike, for example, players have to be mindful of moving at full speed, because running footsteps and jump landings can give away valuable information to opponents within hearing range and inform counter-strategies. At the same time, being aware of this fact allows players to intentionally make noise to create misinformation.

Below is a clip of a CS:GO streamer, DaZeD, faking a drop by jumping on the ledge above. The opposing players throw a flash grenade and attempt to retake the room, expecting him to be below and blinded, but they don’t predict his superior positioning and lose the fight.

This only works because both teams are aware of the landing sounds and because these sounds are audible from positions outside of the room.

A subsequent update added unique landing sounds per surface, which complicates this scenario. In this clip, he actually jumps on a wood surface at the end of the upper tunnel. Now, an observant player could note that this surface sound effect is not what they would hear when opposing players drop onto the stone floor below. If he instead faked further to the left, the sounds would match as they did in older versions of the game.

Sound effects can provide extremely valuable information to players beyond the limitations of line of sight. It’s important to keep this in mind, even for members of the development team who don’t deal directly with audio. If footstep propagation distance determines when and where players can afford to move at full speed, this can influence how major routes through the map are designed. If this isn’t accounted for, it can have unintended consequences on player behavior and map flow. This applies in many other seemingly non-audio design contexts as well.

Ethos: The Appeal to Character

In the context of sound design, it’s useful to think of ethos as authenticity. Does the audience accept that this sound belongs in the space? Does it fit the art direction of the game? What stylistic considerations must be made to ensure that is the case? If the game is heavily stylized, there is plenty of room for stylized sound effects. If the game strives for pseudo-realism and photo-realistic graphics, it is probably appropriate to keep the sound effects relatively grounded. Often, however, what the audience expects is very different from reality. Authenticity is what it seems like something should sound like, rather than necessarily what it actually does sound like.

Practically, this has a large degree of overlap with Pathos, the emotional appeal, in that the most emotionally resonant sounds should also be authentic, but they are distinct. An ambience could be suitably unsettling, but not feel authentic in the wrong space. Creaking wood and howling wind might suit a creepy, old house, but be very much out of place in an abandoned space station, even though both evoke a lonely, isolated atmosphere. An impact could be distinct and punchy, but not fit the style of the game or the source object or actor.


A very common example of all of these elements in action is effective gunshot sound effects, particularly for real-world weapons. Firearm field recordings on their own are rarely very interesting or particularly distinct, in part because of the difficulty of capturing the character and impact of sounds at extreme volume levels. To account for this, sound designers need to create hyper-realistic gunshot sounds, layering a variety of explosive, mechanical, and environmental elements and processing to produce the explosive, powerful sounds that audiences expect. This is both more authentic than a simple gunshot field recording and more emotionally impactful. A core goal of satisfying weapon sounds is to recreate the visceral, explosive impact of firing them.

Given that, in situations where a large variety of weapons is called for, the sound designer will need to differentiate each of them. This is especially true of games with a large selection of realistic weapons. It is important both to establish a unique character for each and to communicate that distinction to the player, who should ideally be able to tell what weapon is being fired at them from the sound alone. A sniper rifle might have an exaggerated, long reverb tail to really sell its firepower. Pistols and submachine guns might emphasize the mechanical elements over the explosive punch and the reverb tail to make them feel smaller. An assault rifle might lie somewhere in between.

Establishing these rhetorical choices and applying them consistently provides emotional satisfaction, authenticity, and clarity to the player.