Dear friends, the time has come to say goodbye, at least regarding this article series. This last article is a sort of summary of what we've seen throughout the series, which I hope has been useful.
We started our journey into sound synthesis by recalling that sound is made up of waveforms. Depending on the elements that constitute them, such as harmonics, waveforms can have very different natures. When a waveform repeats itself, it is called periodic; audio signals that aren't periodic are usually considered noise. Every periodic waveform has a fundamental frequency that determines its pitch. A sine wave is a waveform with no harmonics apart from the fundamental frequency, which is why it is considered a simple waveform. And according to Fourier's theory, every audio signal can be broken down into a sum of sine waves.
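If you'd like to see Fourier's idea in action, here's a tiny Python sketch (not from the series itself, just an illustration): it builds an approximation of a square wave by summing its odd harmonics, each a plain sine wave. The function name and sample rate are made up for the example.

```python
import math

SAMPLE_RATE = 1000  # samples per second, arbitrary for this illustration

def square_approx(freq, t, n_harmonics):
    """Approximate a square wave by summing its odd harmonics
    (Fourier series: sin(f) + sin(3f)/3 + sin(5f)/5 + ...)."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * freq * (2 * k + 1) * t) / (2 * k + 1)
        for k in range(n_harmonics)
    )

# With more harmonics, the sum gets closer to an ideal square wave (+1/-1)
samples = [square_approx(1.0, n / SAMPLE_RATE, 50) for n in range(SAMPLE_RATE)]
```

Run it with 1, 5 and 50 harmonics and you can literally watch a sine wave turn into a square wave, which is the whole point of Fourier's theory in one picture.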
Waveforms are produced by so-called oscillators, which come in different kinds: 100% analog, analog but digitally controlled, or 100% digital. And don't forget LFOs (Low Frequency Oscillators), which produce signals below the hearing range (remember: theoretically, humans can hear frequencies from 20Hz to 20,000Hz). These waveforms aren't meant to be heard, but rather to modulate effects or other waveforms.
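To make the LFO idea concrete, here's a small sketch (again, just an illustration with made-up names and values): a 5 Hz LFO, far below the hearing range, modulates the amplitude of an audible 440 Hz tone. You don't hear the LFO itself, you hear its effect, a tremolo.

```python
import math

SR = 8000  # sample rate in Hz, kept low for the example

def tremolo(carrier_freq, lfo_freq, depth, dur):
    """An audible carrier whose amplitude is modulated by a sub-audio LFO."""
    out = []
    for n in range(int(SR * dur)):
        t = n / SR
        # The LFO swings the amplitude between (1 - depth) and 1
        amp = 1.0 - depth * (0.5 + 0.5 * math.sin(2 * math.pi * lfo_freq * t))
        out.append(amp * math.sin(2 * math.pi * carrier_freq * t))
    return out

samples = tremolo(440.0, 5.0, 0.5, 1.0)  # 440 Hz tone, 5 Hz tremolo
```

Point the same LFO at an oscillator's pitch instead of its amplitude and you get vibrato, at a filter's cutoff and you get a wah-like sweep: same tool, different target.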
Complex waves (square, triangle, sawtooth, etc.) can have some frequencies removed with filters in order to "sculpt" the sound. Filters come in different forms, too: they can let through only low frequencies (low-pass), only high frequencies (high-pass), a specific frequency range between two limits (band-pass), or everything above and below a particular range (notch). This method of shaping sound by removing frequencies is called subtractive synthesis.
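Here's what the simplest possible low-pass filter looks like in code, a sketch for illustration only (real synth filters are considerably fancier): each output sample moves only a fraction of the way toward the input, so fast wiggles, i.e. high frequencies, get smoothed away.

```python
import math

def one_pole_lowpass(signal, cutoff, sample_rate):
    """Simplest recursive low-pass: the output chases the input, and the
    cutoff decides how quickly it is allowed to chase."""
    # Coefficients from the standard one-pole (RC-style) discretization
    x = math.exp(-2.0 * math.pi * cutoff / sample_rate)
    a0, b1 = 1.0 - x, x
    out, prev = [], 0.0
    for s in signal:
        prev = a0 * s + b1 * prev
        out.append(prev)
    return out

SR = 8000
low  = [math.sin(2 * math.pi * 100  * n / SR) for n in range(SR)]  # 100 Hz tone
high = [math.sin(2 * math.pi * 3000 * n / SR) for n in range(SR)]  # 3 kHz tone
# A 500 Hz low-pass leaves the 100 Hz tone mostly intact but attenuates 3 kHz
filtered_low  = one_pole_lowpass(low,  500, SR)
filtered_high = one_pole_lowpass(high, 500, SR)
```

A high-pass is the mirror image (keep the fast wiggles, drop the slow ones), and chaining the two gives you a band-pass, which is exactly the "sculpting" logic of subtractive synthesis.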
Between the sound generation and processing section of a synthesizer (usually filters, although there are other ways to process sound, as you'll see below) and its audio outputs sits the amp stage.
The latter is usually made up of an analog or digital amp controlled by a so-called envelope. Envelopes feature several parameters that define how the amplitude (volume) of a sound evolves over time. In the classic ADSR model, these parameters are the attack, decay and release times of a note, plus its sustain level. But there are two things worth noting: first, the envelopes of certain devices can be much more complicated; and second, the behavior of a filter is often controlled by an envelope, too.
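A simple linear ADSR is easy to write down, so here's a sketch (illustrative only, and simplified: it assumes the note isn't released before reaching the sustain stage):

```python
def adsr(t, attack, decay, sustain_level, release, note_off):
    """Amplitude of a note at time t (seconds) for a simple linear ADSR.
    attack/decay/release are durations, sustain_level is a 0..1 amplitude,
    note_off is the moment the key is released."""
    if t >= note_off:            # release: fade from sustain level down to 0
        rel = t - note_off
        return max(0.0, sustain_level * (1.0 - rel / release))
    if t < attack:               # attack: ramp from 0 up to full amplitude
        return t / attack
    if t < attack + decay:       # decay: ramp from 1 down to the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain_level)
    return sustain_level         # sustain: hold steady until note off

# 0.1 s attack, 0.2 s decay, sustain at 0.6, 0.3 s release, key up at t = 1.0 s
env = [adsr(n / 100, 0.1, 0.2, 0.6, 0.3, 1.0) for n in range(150)]
```

Multiply each audio sample by this envelope and you have the amp stage; feed the same curve to a filter's cutoff instead and you have the second use mentioned above.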
Voices, monody, polyphony and multitimbrality
These oscillator/filter/amplifier combinations, together with the envelopes, are the elements that make up a “voice.” A pitch trigger, envelope trigger and gate are usually added to these in order to handle the triggering of the previous elements.
Old synths couldn't use more than one voice simultaneously, so you could only play one note at a time: they were "monodic." It wasn't until the end of the '70s that synths became polyphonic, and only in the '80s did they become multitimbral, meaning they could play different sounds simultaneously.
Digital technology, MIDI and built-in effects
All these evolutions were possible thanks to the use of digital technology and, especially, the MIDI standard, which allowed digital devices to communicate with one another and/or with a computer.
In addition to allowing you to compose (sequence) entire songs and have them played back by a synth or by virtual modules in a computer, MIDI also allows you to control the different effects available on electronic instruments and their virtual counterparts: pitch bend, unison, portamento, vibrato, etc.
Different forms of sound synthesis
I already mentioned subtractive synthesis, but it is by no means the only type of synthesis there is. Additive synthesis is the oldest synthesis technique, and its goal is to sum different simple waveforms, hence its name. However, the complexity of its implementation led several people to explore sampling, i.e. the digital storage of simple waveforms and even entire audio signals.
Some other people looked for ways to bypass another constraint of additive synthesis, namely the difficulty of synthesizing time-evolving sounds. And that's how granular synthesis came to be. It is based not only on simple waveforms, but also on their frequency and location in time; all of this together is called a "grain." This type of synthesis allows you to independently modify the pitch and duration of an audio signal (as you surely know, modifying the pitch of a signal usually entails a change of duration as well, and vice versa).
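To show how grains decouple pitch from duration, here's a deliberately naive sketch (real granular engines randomize grain positions, sizes and envelopes, none of which is done here): it doubles the duration of a tone by reading short windowed grains from the source but writing them out at a slower pace, so the pitch inside each grain is untouched.

```python
import math

SR = 8000  # sample rate in Hz

def tone(freq, dur):
    return [math.sin(2 * math.pi * freq * n / SR) for n in range(int(SR * dur))]

def granular_stretch(signal, factor, grain=400, hop=200):
    """Time-stretch by 'factor' without changing pitch: read short grains
    from the source, but lay them down at a stretched pace."""
    out = [0.0] * int(len(signal) * factor)
    # Hann window so overlapping grains cross-fade smoothly
    win = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain - 1)) for i in range(grain)]
    write = 0
    while write + grain <= len(out):
        read = int(write / factor)  # where in the source this grain comes from
        if read + grain > len(signal):
            break
        for i in range(grain):
            out[write + i] += signal[read + i] * win[i]
        write += hop
    return out

source = tone(440.0, 1.0)                  # one second of a 440 Hz tone
stretched = granular_stretch(source, 2.0)  # two seconds, same pitch
```

Do the opposite, resample each grain while keeping the output pace, and you change the pitch without changing the duration: that's the independence the paragraph above describes.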
Finally, yet another workaround to the problems of additive synthesis was found when someone realized that you could generate an extremely rich audio signal with just two oscillators, by using one to modulate the frequency of the other. Thus, FM synthesis was born.
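The whole trick fits in one line of math, so here's a sketch of it (illustrative only; the frequencies and the modulation index are arbitrary): a modulator oscillator varies the phase of a carrier oscillator, and the modulation index decides how many sidebands, and therefore how much harmonic richness, appear.

```python
import math

SR = 8000  # sample rate in Hz

def fm_tone(carrier, modulator, index, dur):
    """Classic FM: the modulator oscillator varies the phase of the carrier.
    With index = 0 you get a plain sine; raising it adds more partials."""
    return [
        math.sin(2 * math.pi * carrier * n / SR
                 + index * math.sin(2 * math.pi * modulator * n / SR))
        for n in range(int(SR * dur))
    ]

plain = fm_tone(220.0, 110.0, 0.0, 0.5)  # index 0: just a 220 Hz sine
rich  = fm_tone(220.0, 110.0, 5.0, 0.5)  # index 5: a much richer spectrum
```

Two sine oscillators, one multiplication and one addition per sample: that economy is exactly why FM made such a splash when it reached hardware.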
These are the main forms of synthesis, but there are many others: pulsar, phase-distortion, formant, linear arithmetic, stochastic, graphic, etc., although they are not as widespread.
Last but not least, research on the instruments themselves has also given birth to amazing tools, including a synth that weighed several tons and transmitted audio signals via telephone lines, a string synth, and even an instrument that doesn't require the musician to touch it in order to play it!