Introduction to the MIDI standard - Sound synthesis, sound design and audio processing - Part 11

In the previous article, we saw how the evolution of polyphony required, and hence led to, the introduction of computing in the design of sound synthesis devices. We also saw that the arrival of multi-timbrality marked a further step in the computerization of sound synthesis systems, given the need to use multiple data channels to control different sounds.


Due in large part to this need, the MIDI standard was born in 1983, mainly thanks to the support of Oberheim, Sequential Circuits and Roland. Considering how fast technology changes in the computing world, the fact that MIDI is still relevant after 32 years is nothing short of remarkable.

General definition


MIDI stands for Musical Instrument Digital Interface. This interface has both hardware and software components. In the next article we'll see the different types of messages that can be transmitted via MIDI. Today, we'll go through a general introduction to how it works and its different fields of application, as well as its limitations.

MIDI was thus conceived to allow data exchange between electronic musical instruments, effects processors and computers, in order to, for example, control several instruments from a single keyboard, record the notes played into sequencing software or, more recently, control virtual effects and instruments. But it wasn't originally conceived to transport sound itself, except within the very restricted frame of the "Sample Dump," which we'll come back to later.
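
To make that distinction concrete, here is a minimal sketch, in Python, of the kind of data MIDI actually carries: three bytes that say "play this note at this velocity." The byte values shown are standard MIDI 1.0, but the snippet itself is only an illustration; we'll cover the message types properly in the next article.

    # The kind of data MIDI carries: three control bytes, never audio.
    NOTE_ON = 0x90    # status byte: Note On, channel 1
    MIDDLE_C = 60     # note number for middle C
    VELOCITY = 64     # how hard the key was struck (0-127)

    note_on_message = bytes([NOTE_ON, MIDDLE_C, VELOCITY])
    print(note_on_message.hex(" "))  # -> "90 3c 40"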

However, the MIDI standard goes well beyond strictly audio applications: it also allows interconnection with other types of protocols (like SMPTE for syncing to video) or even other types of hardware, like lighting systems.

The hardware

Apart from the electronic instruments themselves, there are MIDI devices such as computer interfaces and MIDI controllers, which don't have any sound generation or internal processing capabilities: they're designed exclusively to control other devices. You'll also find MIDI accessories, like cables and patchbays, to multiply the connections.

The classic MIDI connectors are in DIN format (where DIN stands for Deutsches Institut für Normung, the German standards body), but they shouldn't be confused with the old audio connectors that were visually identical. MIDI cables carry data as an electrical current loop, opto-isolated on the receiving side, and they are not interchangeable with audio cables.


There are three types of MIDI connectors. MIDI IN handles all incoming data and MIDI OUT all outgoing data, while MIDI THRU, only available on hardware instruments and effects, retransmits whatever arrives at the IN, which theoretically allows you to daisy-chain several devices, even if in real life such a chain can suffer significant data loss. Generally speaking, other types of MIDI gear only feature a MIDI IN and an OUT, and controllers sometimes have only the latter.

Nowadays, it's more and more common for devices to transmit MIDI data over a USB cable. USB allows a device to transmit other types of data besides MIDI control messages: for instance, a device can send its ID so the computer launches a device-specific control panel, or even transmit the sound it generates over the same cable. However, USB doesn't allow you to connect two electronic instruments or effects directly, since it requires a host, so the computer has become an ever more important hub for transmitting and working with MIDI data.

The software

From a software point of view, MIDI drivers allow computers to support the protocol, and they have long been integrated into operating systems.


The main audio application that makes use of MIDI is the sequencer (Cubase, Logic, Live, Pro Tools, Digital Performer, Reaper, and all the rest), whose original function (recording the notes played on a hardware synth so they can be modified and/or recalled later) has developed over time into what we now call the digital audio workstation (DAW), a true virtual production studio. Together with this "virtualization" of production came software instruments and effects, which rely mainly on MIDI to be controlled by the MIDI keyboards and controllers mentioned above. Finally, there are control panels, both for virtual and hardware devices.

Of ports and channels

As we saw in the previous article, it was the need to use various data channels for multi-timbrality that rendered the creation of MIDI indispensable. MIDI uses up to 16 channels to transmit data. Each of these channels can be used to address a different device or to control a different timbre within a single multi-timbral unit.
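
As a rough illustration of where that number comes from: in a MIDI channel message, the channel is encoded in the lower four bits of the status byte, which is exactly what limits it to 2^4 = 16 values. Here's a minimal Python sketch (the helper function is our own, not part of any MIDI library):

    def note_on_status(channel: int) -> int:
        """Build a Note On status byte for a given channel (1-16)."""
        if not 1 <= channel <= 16:
            raise ValueError("MIDI channels run from 1 to 16")
        # The low nibble carries the channel, counted from 0 internally.
        return 0x90 | (channel - 1)

    print(hex(note_on_status(1)))    # 0x90 -> Note On, channel 1
    print(hex(note_on_status(16)))   # 0x9f -> Note On, channel 16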

However, 16 channels aren't always enough, considering that you sometimes need to use more than 16 instruments or timbres simultaneously, and that certain types of messages (which we'll see in the next article) can saturate a channel. To that end, MIDI interfaces and sequencers provide multiple ports, usually labeled with letters, each of which carries 16 channels of its own.
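
A quick sketch of what that means in practice (the port names here are hypothetical): a destination is identified by a port and a channel together, so each extra port adds 16 more addressable channels.

    # Each port on a MIDI interface carries its own 16 channels,
    # so a destination is really a (port, channel) pair.
    PORTS = ["A", "B", "C"]          # a 3-port interface, as an example

    destinations = [(port, channel)
                    for port in PORTS
                    for channel in range(1, 17)]

    print(len(destinations))                 # 48 -> 3 ports x 16 channels
    print(destinations[0], destinations[-1]) # ('A', 1) ('C', 16)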

The limitations

As its long existence shows, MIDI has proven a very popular and effective technology. But while the 16-channel issue could easily be solved by multiplying ports, other limitations are harder to overcome. Most notably, the MIDI standard encodes most parameters with a 7-bit value, i.e., in only 128 steps, which can prove insufficient to faithfully reproduce the travel of a potentiometer, for instance. Furthermore, many parameters, like the ones that make up envelopes, simply can't be modified via standard MIDI messages.
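
To see why 128 steps can be a problem, here's a minimal Python sketch (the function name is our own) of what happens when a continuous knob position is squeezed into a single 7-bit data byte:

    def to_midi_cc(position: float) -> int:
        """Quantize a knob position in [0.0, 1.0] to a 7-bit CC value."""
        return round(position * 127)

    # Two clearly different knob positions collapse to the same value:
    print(to_midi_cc(0.5000))  # 64
    print(to_midi_cc(0.5030))  # 64 -- the difference is lost in transmission

Any movement smaller than one step out of 127 is simply dropped, which can be audible on sensitive parameters like a filter cutoff.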

That’s it for today, but in the next article we’ll explore in detail the types of messages that can be transmitted via MIDI, as well as the different modes employed.

← Previous article in this series:
Polyphony, Paraphony and Multitimbrality
Next article in this series:
Understanding MIDI Modes and Messages →
