Tutorial: Introduction to MPE

Learn the basics of the MPE standard and how to implement a synthesiser that supports MPE. Hook your application up to a ROLI Seaboard Rise!

Level: Intermediate

Platforms: Windows, Mac OS X, Linux

Classes: MPESynthesiser, MPEInstrument, MPENote, MPEValue, LinearSmoothedValue

Getting started

Download the demo project for this tutorial here: tutorial_mpe_introduction.zip. Unzip the project and open it in your IDE.

If you need help with this step, see Tutorial: Getting started with the Projucer.

Note
It would be helpful to read Tutorial: Synthesiser using MIDI input first, as this is used as a reference point in a number of places.

The demo project

The demo project is a simplified version of the MPETest project in the juce/examples directory. To get the most out of this tutorial you will need an MPE-compatible controller. MPE stands for Multidimensional Polyphonic Expression, a specification that allows multiple dimensions of expressive control data to be communicated between audio products.

Some examples of such MPE compatible devices are ROLI's own Seaboard range (such as the Seaboard RISE).

Warning
The synthesiser may appear very quiet unless your controller transmits MIDI channel pressure and continuous controller 74 (timbre) in the way that the Seaboard RISE does.

With a Seaboard RISE connected to your computer the window of the demo application should look something like the following screenshot:

tutorial_mpe_introduction_screenshot1.png
The demo application

You will need to enable one of the MIDI inputs (here you can see a Seaboard RISE is shown as an option).

The visualiser

Any notes played on your MPE compatible device will be visualised in the lower portion of the window. This is shown in the following screenshot:

tutorial_mpe_introduction_screenshot2.png
The visualiser

One key feature of MPE is that each new MIDI note event is assigned its own MIDI channel, rather than all notes from a particular controller keyboard being assigned to the same MIDI channel. This allows each individual note to be controlled independently by control change messages, pitch bend messages, and so on. In the JUCE implementation of MPE, a playing note is represented by an MPENote object. An MPENote object encapsulates the following data:

  • The MIDI channel of the note.
  • The initial MIDI note value of the note.
  • The note-on velocity (or strike).
  • The pitch-bend value for the note: derived from any MIDI pitch-bend messages received on this note's MIDI channel.
  • The pressure for the note: derived from any MIDI channel pressure messages received on this note's MIDI channel.
  • The timbre for the note: typically derived from any controller messages on this note's MIDI channel for controller 74.
  • The note-off velocity (or lift): this is only valid after the note-off event has been received and until the playing sound has stopped.
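The practical consequence of the per-note channel assignment is that a channel-wide MIDI message such as pitch bend now affects exactly one note. The following plain C++ sketch (hypothetical names, not the JUCE API) illustrates that routing:

```cpp
#include <cassert>
#include <map>

// Hypothetical per-note state, loosely modelled on the MPENote data above.
struct SimpleNote
{
    int midiChannel = 0;
    int initialNote = 0;  // MIDI note number
    int pitchbend = 8192; // 14-bit value, 8192 = centre (no bend)
};

// Apply a pitch-bend message: because each sounding note owns its channel,
// only the note on the matching channel (if any) is affected.
inline void applyPitchbend (std::map<int, SimpleNote>& notesByChannel,
                            int channel, int pitchbend14bit)
{
    auto it = notesByChannel.find (channel);
    if (it != notesByChannel.end())
        it->second.pitchbend = pitchbend14bit;
}
```

In the real implementation the MPEInstrument class performs this routing for you; the sketch only shows why the one-channel-per-note rule makes independent per-note control possible.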

With no notes playing you can see that the visualiser represents a conventional MIDI keyboard layout. Each note is represented in the visualiser in the demo application as follows:

  • A grey filled circle represents the note-on velocity (a larger circle for higher velocity).
  • The MIDI channel for the note is displayed above the "+" symbol within this circle.
  • The initial MIDI note name is displayed below the "+" symbol.
  • An overlaid white circle represents the current pressure for this note (again, a larger circle for higher pressure).
  • The horizontal position of the note is derived from the original note and any pitch bend that has been applied to this note.
  • The vertical position of the note is derived from the timbre parameter for the note (from MIDI controller 74 on this note's MIDI channel).
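The last two bullet points might translate into screen coordinates as in the following plain C++ sketch. This is an assumption about the layout, not the demo's actual drawing code: the x position tracks the note number plus the current bend in semitones, and the y position tracks the timbre value:

```cpp
#include <cassert>

// Hypothetical visualiser mapping: x follows the note number plus the bend
// in semitones (one key = keyWidth pixels); y follows the timbre value
// (0..1, from MIDI controller 74), with timbre = 1 at the top of the view.
inline float noteCentreX (int noteNumber, float bendSemitones, float keyWidth)
{
    return ((float) noteNumber + bendSemitones) * keyWidth;
}

inline float noteCentreY (float timbre01, float displayHeight)
{
    return (1.0f - timbre01) * displayHeight;
}
```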

Other setup

Before delving further into other aspects of the MPE specification, which are demonstrated by this application, let's look at some of the other things our application uses.

First of all, our MainComponent class inherits from the AudioIODeviceCallback [1] and MidiInputCallback [2] classes:

class MainComponent : public Component,
                      private AudioIODeviceCallback,  // [1]
                      private MidiInputCallback       // [2]
{
public:
    //...
We also have some important class members in our MainComponent class:

    //...
    //==============================================================================
    AudioDeviceManager audioDeviceManager;         // [3]
    AudioDeviceSelectorComponent audioSetupComp;   // [4]
    Visualiser visualiserComp;
    Viewport visualiserViewport;
    MPEInstrument visualiserInstrument;
    MidiMessageCollector midiCollector;            // [5]
};

The AudioDeviceManager [3] class handles the audio and MIDI configuration on our computer, while the AudioDeviceSelectorComponent [4] class gives us a means of configuring this from the graphical user interface (see Tutorial: The AudioDeviceManager class). The MidiMessageCollector [5] class allows us to easily collect messages into blocks of timestamped MIDI messages in our audio callback (see Tutorial: Synthesiser using MIDI input).

It is important that the AudioDeviceManager object is listed first since we pass this to the constructor of the AudioDeviceSelectorComponent object:

    MainComponent()
        : audioSetupComp (audioDeviceManager, 0, 0, 0, 256,
                          true, // showMidiInputOptions must be true
                          true, true, false)
    {
        //...

Notice another important argument passed to the AudioDeviceSelectorComponent constructor: the showMidiInputOptions argument must be true to show our available MIDI inputs.

We set up our AudioDeviceManager object in a similar way to Tutorial: The AudioDeviceManager class, but we also need to add a MIDI input callback [6]:

        //...
        audioDeviceManager.initialise (0, 2, nullptr, true, String(), nullptr);
        audioDeviceManager.addMidiInputCallback (String(), this); // [6]
        audioDeviceManager.addAudioCallback (this);
        //...

The MIDI input callback

The handleIncomingMidiMessage() function is called for each MIDI message received from any of the MIDI inputs enabled in the user interface:

    void handleIncomingMidiMessage (MidiInput* /*source*/,
                                    const MidiMessage& message) override
    {
        visualiserInstrument.processNextMidiEvent (message);
        midiCollector.addMessageToQueue (message);
    }

Here we pass each MIDI message to both:

  • our visualiserInstrument member — which is used to drive the visualiser display; and
  • the midiCollector member — which in turn passes the messages to the synthesiser in the audio callback.

The audio callback

Before any audio callbacks are made, we need to inform the synth and midiCollector members of the device sample rate, in the audioDeviceAboutToStart() function:

    void audioDeviceAboutToStart (AudioIODevice* device) override
    {
        const double sampleRate = device->getCurrentSampleRate();
        midiCollector.reset (sampleRate);
        synth.setCurrentPlaybackSampleRate (sampleRate);
    }

The audioDeviceIOCallback() function appears to do nothing MPE-specific:

    void audioDeviceIOCallback (const float** /*inputChannelData*/, int /*numInputChannels*/,
                                float** outputChannelData, int numOutputChannels,
                                int numSamples) override
    {
        // make a buffer that wraps the output channels
        AudioBuffer<float> buffer (outputChannelData, numOutputChannels, numSamples);

        // clear it to silence
        buffer.clear();

        MidiBuffer incomingMidi;

        // get the MIDI messages for this audio block
        midiCollector.removeNextBlockOfMessages (incomingMidi, numSamples);

        // synthesise the block
        synth.renderNextBlock (buffer, incomingMidi, 0, numSamples);
    }
Note
In fact, this is rather similar to the SynthAudioSource::getNextAudioBlock() function in Tutorial: Synthesiser using MIDI input.

Core MPE classes

All of the MPE specific processing is handled by the MPE classes: MPEInstrument, MPESynthesiser, MPESynthesiserVoice, MPEValue, and MPENote (which we mentioned earlier).

The MPEInstrument class

The MPEInstrument class maintains the state of the currently playing notes according to the MPE specification. An MPEInstrument object can have one or more listeners attached and it can broadcast changes to notes as they occur. All you need to do is feed the MPEInstrument object the MIDI data and it handles the rest.

In the MainComponent constructor we configure the MPEInstrument in legacy mode and set the default pitch bend range to 24 semitones:

visualiserInstrument.enableLegacyMode (24);
Note
See Tutorial: MPE notes, zones and zone layouts for an introduction to more flexible approaches using zones.
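Legacy mode makes the whole instrument listen on MIDI channels 1 to 16 with a single, global pitch-bend range. The 24 semitones configured above scale the standard 14-bit pitch-wheel value (0 to 16383, centre 8192). A plain C++ sketch of that standard MIDI mapping:

```cpp
#include <cassert>
#include <cmath>

// Standard MIDI pitch-wheel mapping: a 14-bit value (0..16383, centre 8192)
// scaled by the configured bend range in semitones (24 in this tutorial).
inline double pitchWheelToSemitones (int wheel14bit, double rangeSemitones)
{
    return ((double) wheel14bit - 8192.0) / 8192.0 * rangeSemitones;
}

// The corresponding frequency ratio: one semitone is a factor of 2^(1/12).
inline double semitonesToRatio (double semitones)
{
    return std::pow (2.0, semitones / 12.0);
}
```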

In the MainComponent::handleIncomingMidiMessage() function we pass the MIDI messages on to our visualiserInstrument object:

visualiserInstrument.processNextMidiEvent (message);

In this example we are using an MPEInstrument object directly as we need it to update our visualiser display. For the purposes of audio synthesis we don't need to create a separate MPEInstrument object. The MPESynthesiser object contains an MPEInstrument object that it uses to drive the synthesiser.

The MPESynthesiser class

We set our MPESynthesiser with the same configuration as our visualiserInstrument object (in legacy mode with a pitch bend range of 24 semitones):

synth.enableLegacyMode (24);
synth.setVoiceStealingEnabled (false);

The MPESynthesiser class can also handle voice stealing for us, but as you can see here, we turn this off.

As we have already seen in the MainComponent::audioDeviceAboutToStart() function we need to set the MPESynthesiser object's sample rate to work correctly:

synth.setCurrentPlaybackSampleRate (sampleRate);

And as we have also already seen in the MainComponent::audioDeviceIOCallback() function, we simply pass it a MidiBuffer object containing messages that we want it to use to perform its synthesis operation:

synth.renderNextBlock (buffer, incomingMidi, 0, numSamples);

The MPESynthesiserVoice class

You can generally use the MPESynthesiser and MPEInstrument classes as they are (although both can serve as base classes if you need to override some behaviours). The most important class you must subclass in order to use the MPESynthesiser class is the MPESynthesiserVoice class. This is the class that actually generates the audio signal for each of your synthesiser's voices.

Note
This is similar to the SynthesiserVoice class that is used with the Synthesiser class, but it is customised to implement the MPE specification. See Tutorial: Synthesiser using MIDI input.

The code for our voice class is in the MPEDemoSynthVoice.h file within the Source directory of the demo project. Here we implement the MPEDemoSynthVoice class to inherit from the MPESynthesiserVoice class:

class MPEDemoSynthVoice : public MPESynthesiserVoice
{
//...

We have some member variables to keep track of values to control the level, timbre, and frequency of the tone that we generate. In particular, we use the LinearSmoothedValue class, which is really useful for smoothing out discontinuities in the signal that would otherwise be caused by abrupt value changes (see Tutorial: Sine synthesis).

    //...
    //==============================================================================
    LinearSmoothedValue<double> level, timbre, frequency;
    double phase, phaseDelta, tailOff;

    // some useful constants
    const double maxLevel = 0.05;
    const double maxLevelDb = 31.0;
    const double smoothingLengthInSeconds = 0.01;
};

In the constructor, we initialise some of our members to zero (the LinearSmoothedValue objects are initialised to zero by default):

    MPEDemoSynthVoice()
        : phase (0.0), phaseDelta (0.0), tailOff (0.0)
    {
    }

Starting and stopping voices

The key to using the MPESynthesiserVoice class is its protected MPESynthesiserVoice::currentlyPlayingNote member: an MPENote object that holds the control information about the note during the various callbacks. For example, we override the MPESynthesiserVoice::noteStarted() function like this:

    void noteStarted() override
    {
        jassert (currentlyPlayingNote.isValid());
        jassert (currentlyPlayingNote.keyState == MPENote::keyDown
                  || currentlyPlayingNote.keyState == MPENote::keyDownAndSustained);

        // get data from the current MPENote
        level.setValue (currentlyPlayingNote.pressure.asUnsignedFloat());
        frequency.setValue (currentlyPlayingNote.getFrequencyInHertz());
        timbre.setValue (currentlyPlayingNote.timbre.asUnsignedFloat());

        phase = 0.0;
        const double cyclesPerSample = frequency.getNextValue() / currentSampleRate;
        phaseDelta = 2.0 * double_Pi * cyclesPerSample;

        tailOff = 0.0;
    }
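Two of the calculations in noteStarted() can be checked in isolation. For the default 440 Hz reference tuning, MPENote::getFrequencyInHertz() corresponds to the usual equal-tempered formula, and the phase increment per sample follows from the frequency and sample rate. A plain C++ sketch (the helper names are ours, not JUCE's):

```cpp
#include <cassert>
#include <cmath>

// Equal-tempered MIDI note number (plus any bend, in semitones) to frequency
// in Hz, referenced to A440 (MIDI note 69).
inline double noteToFrequencyHz (double noteNumberWithBend)
{
    return 440.0 * std::pow (2.0, (noteNumberWithBend - 69.0) / 12.0);
}

// Phase increment per sample for an oscillator at the given frequency.
inline double phaseDeltaFor (double frequencyHz, double sampleRate)
{
    const double twoPi = 2.0 * std::acos (-1.0);
    return twoPi * (frequencyHz / sampleRate);
}
```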

The "five dimensions" of each note (strike, pressure, glide or pitch bend, slide or timbre, and lift) are stored in the MPENote object as MPEValue objects. The MPEValue class makes it easy to create values from 7-bit or 14-bit MIDI value sources, and to obtain those values as floating-point values in the range 0..1 or -1..+1.

Note
The MPEValue class stores the value internally using the 14-bit range.
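The arithmetic behind those conversions is straightforward. The sketch below shows one way a 14-bit value maps onto the 0..1 and -1..+1 ranges (a plain C++ illustration of the idea; the exact rounding inside MPEValue may differ):

```cpp
#include <cassert>

// A 7-bit value can be widened to 14 bits by shifting left by 7 bits.
// 16383 is the 14-bit maximum and 8192 the centre value.
inline float unsignedFrom14Bit (int value14)
{
    return value14 / 16383.0f; // 0..1
}

inline float signedFrom14Bit (int value14)
{
    // Map 0 -> -1, 8192 -> 0, 16383 -> +1 (piecewise linear about the centre).
    return value14 >= 8192 ? (value14 - 8192) / 8191.0f
                           : (value14 - 8192) / 8192.0f;
}
```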

The MPEDemoSynthVoice::noteStopped() function triggers the "release" of the note envelope (or stops the note immediately, if requested):

    void noteStopped (bool allowTailOff) override
    {
        jassert (currentlyPlayingNote.keyState == MPENote::off);

        if (allowTailOff)
        {
            // start a tail-off by setting this flag. The render callback will pick up on
            // this and do a fade out, calling clearCurrentNote() when it's finished.
            if (tailOff == 0.0) // we only need to begin a tail-off if it's not already doing so - the
                                // stopNote method could be called more than once.
                tailOff = 1.0;
        }
        else
        {
            // we're being told to stop playing immediately, so reset everything..
            clearCurrentNote();
            phaseDelta = 0.0;
        }
    }
Note
This is very similar to the SineWaveVoice::stopNote() function in Tutorial: Synthesiser using MIDI input. There isn't anything MPE-specific here.
Exercise
Modify the MPEDemoSynthVoice::noteStopped() function to allow the note-off velocity (lift) to modify the rate of release of the note. Faster lifts should result in a shorter release time.

Parameter changes

There are callbacks that tell us when the pressure, pitch bend, or timbre has changed for this note:

    void notePressureChanged() override
    {
        level.setValue (currentlyPlayingNote.pressure.asUnsignedFloat());
    }

    void notePitchbendChanged() override
    {
        frequency.setValue (currentlyPlayingNote.getFrequencyInHertz());
    }

    void noteTimbreChanged() override
    {
        timbre.setValue (currentlyPlayingNote.timbre.asUnsignedFloat());
    }

Again, we access the MPESynthesiserVoice::currentlyPlayingNote member to obtain the current value for each of these parameters.

Generating the audio

The MPEDemoSynthVoice::renderNextBlock() function actually generates the audio signal, mixing this voice's signal into the buffer that is passed in:

    void renderNextBlock (AudioBuffer<float>& outputBuffer,
                          int startSample,
                          int numSamples) override
    {
        if (phaseDelta != 0.0)
        {
            if (tailOff > 0.0)
            {
                while (--numSamples >= 0)
                {
                    const float currentSample = getNextSample() * (float) tailOff;

                    for (int i = outputBuffer.getNumChannels(); --i >= 0;)
                        outputBuffer.addSample (i, startSample, currentSample);

                    ++startSample;

                    tailOff *= 0.99;

                    if (tailOff <= 0.005)
                    {
                        clearCurrentNote();
                        phaseDelta = 0.0;
                        break;
                    }
                }
            }
            else
            {
                while (--numSamples >= 0)
                {
                    const float currentSample = getNextSample();

                    for (int i = outputBuffer.getNumChannels(); --i >= 0;)
                        outputBuffer.addSample (i, startSample, currentSample);

                    ++startSample;
                }
            }
        }
    }
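Note that the tail-off length is fixed by the two constants in the loop: the level is multiplied by 0.99 every sample until it falls below 0.005, so the fade always lasts the same number of samples regardless of sample rate. A quick check of that arithmetic:

```cpp
#include <cassert>
#include <cmath>

// Number of samples until a per-sample decay factor drives the level from
// 1.0 down to the given threshold: n = ceil(ln(threshold) / ln(decay)).
inline int tailOffLengthInSamples (double decayPerSample, double threshold)
{
    return (int) std::ceil (std::log (threshold) / std::log (decayPerSample));
}
```

With the values used above this gives 528 samples, which is about 12 ms at a 44.1 kHz sample rate. Making the decay factor depend on the note's lift value is one way to approach the earlier exercise.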

It calls MPEDemoSynthVoice::getNextSample() to generate the waveform:

    float getNextSample() noexcept
    {
        const double levelDb = (level.getNextValue() - 1.0) * maxLevelDb;
        const double amplitude = std::pow (10.0, 0.05 * levelDb) * maxLevel;

        // timbre is used to blend between a sine and a square.
        const double f1 = std::sin (phase);
        const double f2 = std::copysign (1.0, f1);
        const double a2 = timbre.getNextValue();
        const double a1 = 1.0 - a2;

        const float nextSample = float (amplitude * ((a1 * f1) + (a2 * f2)));

        const double cyclesPerSample = frequency.getNextValue() / currentSampleRate;
        phaseDelta = 2.0 * double_Pi * cyclesPerSample;
        phase = std::fmod (phase + phaseDelta, 2.0 * double_Pi);

        return nextSample;
    }

This simply crossfades between a sine wave and a (non-bandlimited) square wave, based on the value of the timbre parameter.
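The blend can be written as a tiny pure function for inspection. This is just the waveform part of getNextSample(), with the level handling removed:

```cpp
#include <cassert>
#include <cmath>

// Crossfade between a sine wave and a naive (non-bandlimited) square wave,
// which is simply the sign of the sine. timbre is expected in the range 0..1:
// 0 gives a pure sine, 1 a pure square.
inline double blendedSample (double phase, double timbre)
{
    const double f1 = std::sin (phase);        // sine
    const double f2 = std::copysign (1.0, f1); // square
    return (1.0 - timbre) * f1 + timbre * f2;
}
```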

Exercise
Modify the MPEDemoSynthVoice class to crossfade between two sine waves, one octave apart, in response to the timbre parameter.

Summary

In this tutorial we have introduced some of the MPE based classes in JUCE. You should now know:

  • What MPE is.
  • That MPE-compatible devices allocate each new note its own MIDI channel.
  • How the MPENote class stores information about a note including its MIDI channel, the original note number, velocity, pitch bend, and so on.
  • That the MPEInstrument class maintains the state of the currently playing notes.
  • That the MPESynthesiser class contains an MPEInstrument object that it uses to drive the synthesiser.
  • That you must implement a class that inherits from the MPESynthesiserVoice class to implement your synthesiser's audio code.

See also