Best Practices? MIDI File Player

I’m interested in using MIDI files as the basis for some backing parts, sequencing, automating parameters, etc. I’m interested in developing and using best practices. Here are some goals:

  • Composing and authoring the MIDI in a WYHIWYG (What you hear is what you get) way. I don’t want to edit the MIDI in a sequencer blind, load it in, play it, find mistakes, and try, try again.
  • Avoiding rewiring the rackspaces when going between authoring and performance modes.
  • Automatically changing song parts without missing a beat.
  • Possibly bridging songs (think Abbey Road, side 2 or most any concept album), though this could be done as one big song with song parts.

Let’s start with the micro: composing the MIDI. In my case, I’m running Logic on a Mac. To start, I created a virtual MIDI connection and named it “Virtual GP.” (On a Mac, you do this in the Audio MIDI Setup app: open the MIDI Studio window and add a port to the IAC Driver.) I can now add a new MIDI Input in GP and map it to Virtual GP. IMPORTANT: In the DAW, disable the Virtual GP input, or you risk a MIDI feedback loop. When creating a new MIDI track, map it to Virtual GP and set the channel number as you please. (You can use a convention, such as the drum kit on channel 10, but it’s arbitrary, really. It’s probably simplest to use channel 1 for track 1, and so on.)

I found that it was best to create a MIDI Channel Constrainer in GP for each instrument and put it between the Virtual GP MIDI input and that instrument. Load up your various sounds, and your DAW is now using GP as a virtual multi-instrument.
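For anyone curious what the constrainer is doing conceptually: channel filtering is just a comparison on the low nibble of each message’s status byte. Here is a minimal Python sketch of the idea (this is not GP’s implementation; the function name and message bytes are illustrative):

```python
def constrain_channel(message, channel):
    """Pass a raw MIDI channel-voice message only if it is on `channel`
    (1-16); system messages (status >= 0xF0) carry no channel and pass."""
    status = message[0]
    if status >= 0xF0:
        return message
    if (status & 0x0F) + 1 == channel:
        return message
    return None  # drop messages addressed to other channels

note_ch1 = bytes([0x90, 60, 100])    # Note On, channel 1, middle C
note_ch10 = bytes([0x99, 36, 100])   # Note On, channel 10 (drums)
clock = bytes([0xF8])                # MIDI clock, no channel

assert constrain_channel(note_ch1, 1) == note_ch1
assert constrain_channel(note_ch10, 1) is None   # filtered out
assert constrain_channel(clock, 1) == clock
```

One constrainer per instrument, each set to a different channel, is what turns the single Virtual GP stream into a multi-instrument rig.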

Now compose in your DAW as you normally would. Once complete (or partially complete), export the MIDI from all tracks, and save the file. You can now quit the DAW.

In the rackspace, add the MIDI File Player and load the MIDI file into it. Map each track to its channel. Now connect the MIDI File Player to each of the MIDI Channel Constrainers that feed your instruments. Hit Play in the MIDI File Player, and the result should sound just like what you heard when you composed it in the DAW. WYHIWYG!

You can add a play widget to a panel, connect it up, and start and stop playback. (I didn’t find a Pause control. That would be nice, but way more complex for the developers to implement than Stop.)

One thing to note is that the DAW’s mixer controls have no effect, since GP is now producing the audio. You need to pan and balance with a mixer in GP. The next step is to send volume and pan for each track over MIDI and have them control the mixer; I haven’t done that yet. This would let the DAW (and the MIDI file) set the mix. We might also add a way to tweak the mix live, just in case.

Okay, so much for the micro: the song that plays (WYHIWYG) within one song part and rackspace of one song. As we move to the macro, we need to consider how to bridge rackspaces, change song parts, etc. I haven’t gotten that far yet, and I’m really hoping that others can contribute their proven methods. And if there are better ways to implement the micro, I’m all ears. The key for me was to play the actual sounds in GP from the DAW, rather than manually loading similar plugins into both the DAW and GP.

Looking forward to building the skills and gigs to bring this all together!

Here is the next step…

I moved the MIDI File Player into the Global rackspace. I also found that I don’t need to do the channel mapping manually. It’s there by default. (I messed with it previously when troubleshooting.)

So, how to connect it to the local rackspaces? I connected it to the “Virtual GP” MIDI Output. (I created the virtual MIDI port in the previous post.) Make sure that the input and output ports are enabled in GP by going to the menu Options | MIDI Ports…

In the local rackspaces, connect the “Virtual GP” MIDI Input to the MIDI Channel Constrainers that feed the instruments. Here’s a cool thing: when you play the MIDI in GP, it plays through the “Virtual GP” MIDI port, and so does the DAW. This meets requirement #2 above, which states that we shouldn’t need to rewire when going from composing (in the DAW) to performing (with the MIDI File Player).

By hosting the MIDI File Player in the Global rackspace, we should also meet requirement #3: changing song parts (and rackspaces) without missing a beat.

So far so good. Next, I’ll work on getting the mixer in the DAW (and the volume automation in the MIDI files) to work. Clearly, this means wiring a mixer in the Global rackspace and wiring the local rackspaces to send the instruments’ individual audio outputs to the Global rackspace. I should then be able to create widgets to control the mix, and control those widgets via the MIDI volume parameter (CC 7, as I recall).
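For reference, here is one common convention for mapping CC 7 to fader gain. The General MIDI recommendation is a 40·log10(v/127) dB curve; GP widget positions are normalized 0..1, so a simple linear normalization may be all that’s needed on the widget side. A sketch (the function names are my own, not a GP API):

```python
import math

def cc7_to_widget(value):
    """Normalize a CC 7 value (0-127) to a 0.0-1.0 widget position."""
    return value / 127.0

def cc7_to_db(value):
    """General MIDI's recommended channel-volume curve:
    gain_dB = 40 * log10(value / 127)."""
    if value == 0:
        return float("-inf")   # CC 7 = 0 means silence
    return 40.0 * math.log10(value / 127.0)

assert cc7_to_widget(127) == 1.0
assert abs(cc7_to_db(127)) < 1e-12         # full scale = unity gain (0 dB)
assert round(cc7_to_db(64), 1) == -11.9    # half scale is roughly -12 dB
assert cc7_to_db(0) == float("-inf")
```

Whether the widget-to-gain curve inside GP matches this exactly is something to verify by ear against the DAW mix.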

Here we go…

Unfortunately, the global rackspace doesn’t yet support the ability to send MIDI the way it sends audio.


Yeah, to/from global/rackspaces for MIDI will be cool. In any case, it works today with a virtual port. By connecting my DAW to the same virtual port, I get the same behavior whether I play from the DAW or from the MIDI File Player in the Global Rackspace.

As to the mixer… Almost everything worked as planned. I routed the individual instruments back to the global rackspace, added a mixer, added widgets, connected the widgets from the “Virtual GP Port” CC7 to the mixer’s volume controls, and it works great from the DAW.

The problem I have now is that the CC messages don’t come through from the MIDI File Player. (I connected a MIDI monitor to check it. No volume, pitch bend, or anything.)

I assume that MIDI File Player plays CCs, right? In that case, Logic might not be writing the CCs to the MIDI file. Weird. I’ve tried automation for tracks and for regions as well as for all channels and the specific channel. Hmmm.
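One way to settle the question independently of any MIDI monitor is to scan the exported .mid file itself for Control Change events. Here is a stdlib-only Python sketch of a Standard MIDI File track scanner; it handles running status and skips meta and sysex events. (Feed it the body of one MTrk chunk, header stripped; the names are mine.)

```python
def read_varlen(data, i):
    """Read a MIDI variable-length quantity; return (value, next_index)."""
    value = 0
    while True:
        byte = data[i]
        i += 1
        value = (value << 7) | (byte & 0x7F)
        if not byte & 0x80:
            return value, i

def scan_track_for_ccs(track):
    """Return (controller, value, channel) for every Control Change
    message in the body of one MTrk chunk. Channels are 1-16."""
    ccs = []
    i, status = 0, 0
    while i < len(track):
        _, i = read_varlen(track, i)           # delta time
        if track[i] & 0x80:                    # explicit status byte
            status = track[i]
            i += 1
        if status == 0xFF:                     # meta event: type, length, data
            i += 1
            length, i = read_varlen(track, i)
            i += length
        elif status in (0xF0, 0xF7):           # sysex: length, data
            length, i = read_varlen(track, i)
            i += length
        else:
            kind = status & 0xF0
            if kind == 0xB0:                   # control change
                ccs.append((track[i], track[i + 1], (status & 0x0F) + 1))
            i += 1 if kind in (0xC0, 0xD0) else 2
    return ccs

# Tiny synthetic track: CC 7 = 100, note on/off (running status), end-of-track.
demo = bytes([0x00, 0xB0, 0x07, 0x64,
              0x00, 0x90, 0x3C, 0x64,
              0x00, 0x3C, 0x00,
              0x00, 0xFF, 0x2F, 0x00])
assert scan_track_for_ccs(demo) == [(7, 100, 1)]   # volume CC found on channel 1
```

If a scan like this finds no CC 7 events in the exported file, the problem is on the export side, not in the MIDI File Player.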

Yes, it even handles Sysex messages.

Yes. I confirmed the problem was with saving MIDI from Logic. Working out the details now…

The method for exporting from Logic is available here, in the “Prepare MIDI Regions for Export” section. You then “Export region as MIDI file.”

Frankly, the process is almost unbelievable. It’s gibberish. Shouldn’t there just be a checkbox in the export that says something like “Join regions and export automation data?”

FWIW, I wrote the volume automation per region and specified the track. To do this, make sure to select the MIDI regions of interest before writing the automation. No idea whether this was necessary.

Anyway, I have what I want. I can automate parameters, including volume. This lets me set the initial mix in Logic and even fade in, etc. My next step will be to add a second mixer after this one that includes my “live” mixer adjustments. I’ll need to be careful to set this second mixer to unity gain across the board when composing and setting the mix in the DAW. The second mixer will be controllable by faders so I can fine tune the mix live, if need be. I would also be able to mute tracks with this live mixer.

Note that I wouldn’t want some other controller trying to override the mixer for the DAW/MIDI File Player, as that would conflict with any volume automation.


Okay, I’ve added the second mixer, also in the Global rackspace. It follows the mixer driven by the DAW automation. I will probably want to add a macro that resets these second, live faders to 0 dB. When composing the MIDI file and when starting a live performance, I want them all to start in a neutral place. They are just for fine-tuning live, so I don’t have to go back and re-mix the MIDI file.

I recently got an Arturia KeyLab MkII. It includes three banks of faders. (Unfortunately, the third bank has the same CCs as the second by default, so I had to go into the MIDI Control Center to give them unique CCs and save that as a user configuration.) Anyway, I have Bank 1 faders for front of house, Bank 2 faders for my personal monitor, and Bank 3 for my backing submix from the MIDI (and eventually audio) files.

So far, it’s all coming together nicely. Next, I will try adding automation to change song parts. I expect that will be straightforward.

A challenge ahead is tempo and time signatures. I’m not sure how to make GP follow the MIDI file and the DAW. I can put my own metronome in the MIDI file if necessary, but it would be simpler to use GP’s metronome. Regarding tempo, the MIDI File Player itself follows the file’s tempo properly. It would be great to get that tempo into the rest of the system, not only for the metronome, but so that vibrato and delays follow it as well. Any tips for following MIDI tempo and time signature are appreciated!
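If scripting a tempo follower ever becomes practical, the information is already in the file: Set Tempo meta events (FF 51 03) carry microseconds per quarter note, and Time Signature events (FF 58 04) carry the numerator and a power-of-two denominator. A quick Python sketch of the decoding (function name is mine):

```python
def tempo_to_bpm(usec_per_quarter):
    """Convert a Set Tempo payload (microseconds per quarter note) to BPM."""
    return 60_000_000 / usec_per_quarter

# Set Tempo meta event: FF 51 03 tt tt tt
payload = bytes([0x07, 0xA1, 0x20])          # 0x07A120 = 500,000 µs
assert int.from_bytes(payload, "big") == 500_000
assert tempo_to_bpm(500_000) == 120.0        # the MIDI default tempo

# Time Signature meta event: FF 58 04 nn dd cc bb -> nn / 2**dd
nn, dd = 0x06, 0x03
assert (nn, 2 ** dd) == (6, 8)               # e.g. 6/8
```

Note that a file can contain many Set Tempo events (a full tempo map), so following it live means reacting to each one as playback reaches it.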

Okay, I hit a brick wall on MIDI playback when the instruments are in Local Rackspaces that are referenced by Song Parts. It works fine if MIDI Persist is disabled, but then notes get chopped off when I change rackspaces. (I copied the same MIDI instruments in the various local rackspaces for a given song.) Here’s what happens:

With MIDI Persist enabled, the MIDI playback is great, even with the song part changes. The GUI in the setlist view changes as one would expect. The problem is that the rackspaces aren’t actually changing. They continue to put out sound, so the whole rackspace seems to get “stuck.”

With MIDI Persist disabled, the local rackspaces change just fine, but notes get cut off.

I must be missing something. Live keyboard players must face this all the time. They don’t load all their synths globally, or just use variations, do they?

In my case, I play guitar and keyboards and might do some call and response between the two, so I really need to change rigs, unless…

I guess I need to stick within a single rackspace and use variations for a song or extended piece of music. This would mean rigging multiple amps/effects/instruments into a single rackspace and using mixers and other tricks to switch between sounds, with variations only for the song parts. I get it now!

Okay, dead end averted. I’ll sleep on it and keep building things out tomorrow…