Sampling: Wavetables

Sound synthesis, sound design and audio processing - Part 18

In the previous article, we took a first look at sampling, from the point of view of Pulse Code Modulation (PCM). In this installment we'll study wavetables and sample banks.


The PCM standard is used in digital audio, computer music, CD production, and digital telephony. With PCM, the amplitude of the signal is measured at regular intervals and encoded digitally, with a precision that depends on the number of bits employed.
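
To make the idea concrete, here is a minimal sketch in Python of PCM encoding, assuming a 1 kHz sine wave as the input; the sample rate and bit depth below are illustrative values, not requirements.

# A toy PCM encoder: measure a 1 kHz sine at regular intervals and quantize each value.
# The sample rate and bit depth are illustrative, not prescribed by the article.
import math

SAMPLE_RATE = 44100      # measurements per second (the CD standard)
BIT_DEPTH = 16           # bits per sample
LEVELS = 2 ** BIT_DEPTH  # number of available quantization steps

def pcm_encode(duration_s=0.001, freq=1000.0):
    codes = []
    for n in range(int(duration_s * SAMPLE_RATE)):
        t = n / SAMPLE_RATE
        amplitude = math.sin(2 * math.pi * freq * t)             # instantaneous value in [-1, 1]
        codes.append(round((amplitude + 1) / 2 * (LEVELS - 1)))  # nearest integer code
    return codes

print(pcm_encode()[:8])  # the first few PCM codes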

We also saw in the last article that sampling is the basic element of digitization. And it’s here that the terminology plays a trick on us, so hold on tight!

A nice mess

There are two aspects of sampling that are often confusing. The first one concerns the notion of a “sample” itself. Indeed, a signal ─ made up of multiple samples (remember: 44,100 samples per second for a standard CD, for instance) ─ that has been digitized in its entirety is called…a sample, too! In which case? When the fully digitized signal is used as an audio source to simulate a real instrument. Collections of these “whole sounds” are referred to as “sample banks.”

And this is where the second confusing issue arises, because the banks in question must not be confused with “wavetables” (which are also made up of samples, as described above). This second confusion was fueled by certain marketing people in the '90s calling sample banks “wavetable modules” (like the Wave Blaster daughterboard for Creative Labs cards, for example).

If you’re a bit lost right now, don’t worry, we are here to straighten things out for you!

Wavetable Synthesis

Wavetable synthesis only works in the digital realm. It builds on the fact that a periodic waveform is repetitive by nature. Rather than wasting computing resources by having the synth/computer determine the sample values of an entire audio signal, wavetable synthesis only needs to determine the sample values for the first cycle of the waveform you want to reproduce. Each sample value is then stored in a “block.” Playing back each of these blocks consecutively amounts to reading all the data that constitutes one cycle. To reproduce the cycle endlessly, the system simply needs to read the block list in a loop. All these blocks together are called a wavetable.
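
As a minimal illustration, here is a sketch in Python of fixed wavetable playback; the 256-block table and the sine waveform stored in it are arbitrary choices made for the example.

# A single-cycle wavetable: 256 "blocks", each holding one sample value of a sine cycle.
# The table size and the sine shape are arbitrary choices for the example.
import math

TABLE_SIZE = 256
wavetable = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def play_cycles(num_cycles=3):
    # Reading the block list in a loop reproduces the cycle as many times as needed.
    output = []
    for _ in range(num_cycles):
        output.extend(wavetable)
    return output

print(len(play_cycles()))  # 3 cycles x 256 blocks = 768 sample values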

In the third article of this series, we already saw that the pitch of a periodic waveform depends directly on its frequency, in other words, the number of times its cycle is reproduced per second. When it comes to wavetable synthesis, the most effective method found to modify the pitch of a waveform is to read only a certain number of blocks of the table, skipping the others, thus increasing the playback speed of the cycle and, consequently, the frequency of the signal…and its pitch! This is the principle behind digital oscillators (DOs), which we already mentioned in article 6 of this series.
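
Here is a hedged sketch of that principle, again in Python: the oscillator steps through the table with an increment proportional to the desired frequency, so higher pitches simply skip more blocks per output sample. The table size and sample rate are the same illustrative values as above.

# A toy digital oscillator: the increment tells it how many table blocks to advance
# per output sample, so a higher frequency simply skips more blocks.
import math

SAMPLE_RATE = 44100
TABLE_SIZE = 256
wavetable = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(frequency, num_samples):
    increment = frequency * TABLE_SIZE / SAMPLE_RATE  # blocks to advance per output sample
    phase = 0.0
    output = []
    for _ in range(num_samples):
        output.append(wavetable[int(phase) % TABLE_SIZE])
        phase += increment
    return output

a440 = oscillator(440.0, SAMPLE_RATE)  # one second of A4
a880 = oscillator(880.0, SAMPLE_RATE)  # one octave higher: the table is read twice as fast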

The method I just described above corresponds to what is called fixed wavetable synthesis, which means it is limited to a single waveform. However, several wavetables, each holding a different sampled waveform, can coexist within a single synthesizer. Such synths can move from one wavetable to another ─ and thus from one waveform to another ─ thanks to a modulation parameter, creating very interesting transition effects! This is called multiple wavetable synthesis.
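
A minimal sketch of that idea, assuming just two tables (a sine and a sawtooth, chosen purely for illustration) and a single morph parameter acting as the modulation source:

# Two wavetables and a "morph" parameter acting as the modulation source:
# morph = 0.0 plays only the sine table, 1.0 only the saw table, in-between values blend.
import math

SAMPLE_RATE = 44100
TABLE_SIZE = 256
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]
saw_table = [2.0 * i / TABLE_SIZE - 1.0 for i in range(TABLE_SIZE)]

def morphing_oscillator(frequency, num_samples, morph=0.5):
    increment = frequency * TABLE_SIZE / SAMPLE_RATE
    phase = 0.0
    output = []
    for _ in range(num_samples):
        idx = int(phase) % TABLE_SIZE
        output.append((1.0 - morph) * sine_table[idx] + morph * saw_table[idx])
        phase += increment
    return output

# Sweeping `morph` over time (with an envelope or an LFO, say) produces the
# transition effects described above.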

Sample banks

Sample banks are composed of “whole” sounds, which have been prerecorded and can be loaded into memory ─ within a hardware module or a computer ─ to provide quick access to realistic sounds.

In this case, only certain “key” notes of an instrument are sampled, and an algorithm derives the pitch of the intermediate notes from them, which can sometimes have a negative impact on the quality of the sound played back. Nowadays, digital storage capacities are such that you can, for example, digitize an entire piano with not only one sample per note but almost one sample per velocity level.
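
As a rough illustration, here is a sketch in Python of how an intermediate note can be derived from the closest sampled “key” note; the MIDI note numbers and the nearest-sample rule are assumptions made for the example.

# Only a few "key" notes are assumed to have been sampled (every C here, as MIDI note
# numbers); an intermediate note is derived from the closest one by changing playback speed.
SAMPLED_KEYS = [36, 48, 60, 72, 84]  # hypothetical key notes present in the bank

def pick_sample_and_ratio(midi_note):
    nearest = min(SAMPLED_KEYS, key=lambda k: abs(k - midi_note))
    semitones = midi_note - nearest
    ratio = 2 ** (semitones / 12)  # playback-speed factor: +12 semitones doubles the speed
    return nearest, ratio

print(pick_sample_and_ratio(64))  # E4 -> played from the C4 sample, sped up by about 1.26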

Samplers handle all this in the background while you play, so that you don’t have to worry about anything. For example, if you’re playing a sampled piano and you hit a note, the software inside the hardware module, or your computer, chooses a sample ─ from the multiple samples that comprise that piano sound ─ that best matches your playing intentions in terms of such factors as attack or velocity. In addition, lots of samplers allow you to edit scripts to define playback rules ─ if you want it to play back a third above every note you play, for example.
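
Below is a hedged sketch of that selection logic, plus a toy “script” rule that adds a third above each note; the velocity-layer names, their boundaries and the rule itself are hypothetical, just to show the shape of the mechanism.

# A toy note-on handler: pick the velocity layer whose range contains the incoming
# velocity, then apply an optional "script" rule that adds a major third above the note.
# Layer names, ranges and the rule are hypothetical.
VELOCITY_LAYERS = {(1, 42): "piano_soft", (43, 84): "piano_medium", (85, 127): "piano_hard"}

def on_note(midi_note, velocity, play_third_above=False):
    layer = next(name for (low, high), name in VELOCITY_LAYERS.items()
                 if low <= velocity <= high)
    notes = [midi_note] + ([midi_note + 4] if play_third_above else [])
    return [(layer, note) for note in notes]

print(on_note(60, 90, play_third_above=True))  # [('piano_hard', 60), ('piano_hard', 64)]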

In practice


When I wrote earlier in this series that sampling came about only due to the need to find a more practical method than additive synthesis, and hinted that the first sampler was the Fairlight CMI, I was simplifying things a bit. In fact, ever since the 1920s, musicians and engineers had been looking for a way to manipulate pre-recorded sounds in more or less real time. Among the instruments that sprang from this quest, the most famous one is the Mellotron, introduced in the '60s, which used a length of magnetic tape per note and timbre, and was widely adopted by many bands at the time.

But it was certainly Fairlight that first introduced, within the scope of synthesis, the concept of digital sampling as we know it today, including the possibility of modifying the harmonic content of a sound with a simple tap of a light pen on a screen. It opened the door to the E-MU Emulators, to the famous Akai samplers of the S series, and even to modern software samplers like Native Instruments Kontakt or Spectrasonics Omnisphere.


Wavetable synthesis was mainly introduced and popularized by Wolfgang Palm with the PPG brand and the famous “Wavecomputer” and “Wave 2” synths in the early '80s, and also by Waldorf with the “Microwave” and the “Wave” (1993). But many manufacturers use it as an alternative sound production mode, and ─ given its digital nature ─ it is also found in many virtual synths, sometimes even side-by-side with sample-based synthesis.

Thus, as you can see in the screenshot, the first of the three sound generators of the famous Native Instruments Absynth uses a sample, in the form of a WAV file, as a component of the sound it’s producing.

All this brings us to conclude that, thanks to the galloping virtualization of everything, the frontier between synthesis based on basic waveforms and the playback of pre-recorded sound elements has never been as blurry as it is today.

