Mute vs. BlockNoteOn for CPU Usage

While adjusting my horn patch in EastWest Play, I noticed that the CPU spiked up to 80% at one point. I know that’s single-core, and while I’m not super worried about it (I’m working with a Ryzen 7 3700X), I would like to avoid it. Initially, I thought it was the mic placements I had on, but then I noticed that the other instruments were triggering despite being muted. (A duh moment, but it didn’t cross my mind before.)

The rackspace I’m using has 15 instruments open, so I can use variations and just mute when I need to. I had the thought: instead of muting, what about BlockNoteOn? If it doesn’t send MIDI information, the instrument won’t get triggered, and theoretically, that 80% spike shouldn’t happen, right? Out of curiosity, I asked ChatGPT, and this was its response:

[…] it’s worth noting that blocking NoteOn messages can also have unintended consequences, such as preventing certain parts of a MIDI sequence from playing correctly. Additionally, the amount of CPU saved by blocking NoteOn messages may be negligible in some cases, depending on the specific instrument and the number and frequency of incoming MIDI messages.

For the instruments that aren’t being used, I’m not worried about potential silence or lack of note-overlap. What do y’all think? Is there a reason why BlockNoteOn isn’t recommended in this use case?

UPDATE: I tried it, and so far I’m saving at least 10% CPU. I’ll have a better idea after prolonged use. This is the code I used:

//$<AutoDeclare>
// DO NOT EDIT THIS SECTION MANUALLY
Var
   VCBlock : MidiInBlock
   CLBlock : MidiInBlock
   BCLBlock : MidiInBlock
   PNOBlock : MidiInBlock
   WWBlock : MidiInBlock
   FLBlock : MidiInBlock
   HNBlock : MidiInBlock
   RDSBlock : MidiInBlock
   VBBlock : MidiInBlock
   VLNBlock : MidiInBlock
   TREMBlock : MidiInBlock
   STRBlock : MidiInBlock
   CLBlock_1 : MidiInBlock
   WWBlock_1 : MidiInBlock
   HNBlock_1 : MidiInBlock
//$</AutoDeclare>

var
   muteMaster, vlnM, vlnTM, vcM, strM, vbM, wwLM, wwSM, flM, clLM, clSM, bclM, bsnM, rdsM, pnoM, synthM, hnLM, hnSM : widget
   muteGroup : widget Array = [vlnM, vlnTM, vcM, strM, vbM, wwLM, wwSM, flM, clLM, clSM, bclM, bsnM, rdsM, pnoM, synthM, hnLM, hnSM]

// Set every mute widget in muteGroup to its muted position (1.0)
function silence()
var
   index : integer
    For index = 0; index < Size(muteGroup); index = index + 1 Do
        SetWidgetValue(muteGroup[index], 1.0)
    End
End

// Master mute: once the widget moves past 0.6, mute everything at once
On WidgetValueChanged (newVal : double) from muteMaster
    If newVal > 0.6 Then
        silence()
    End
End

// Each handler below toggles NoteOn blocking (parameter 288) on its instrument's MIDI In block.

On WidgetValueChanged (newValue : double) from vlnM
    SetParameter(VLNBlock, 288, newValue)
End

On WidgetValueChanged (newValue : double) from vlnTM
    SetParameter(TREMBlock, 288, newValue)
End

On WidgetValueChanged (newValue : double) from vcM
    SetParameter(VCBlock, 288, newValue)
End

On WidgetValueChanged (newValue : double) from strM
    SetParameter(STRBlock, 288, newValue)
End

On WidgetValueChanged (newValue : double) from vbM
    SetParameter(VBBlock, 288, newValue)
End

On WidgetValueChanged (newValue : double) from wwLM
    SetParameter(WWBlock, 288, newValue)
End

On WidgetValueChanged (newValue : double) from wwSM
    SetParameter(WWBlock_1, 288, newValue)
End

On WidgetValueChanged (newValue : double) from flM
    SetParameter(FLBlock, 288, newValue)
End

On WidgetValueChanged (newValue : double) from clLM
    SetParameter(CLBlock, 288, newValue)
End

On WidgetValueChanged (newValue : double) from clSM
    SetParameter(CLBlock_1, 288, newValue)
End

On WidgetValueChanged (newValue : double) from bclM
    SetParameter(BCLBlock, 288, newValue)
End

On WidgetValueChanged (newValue : double) from rdsM
    SetParameter(RDSBlock, 288, newValue)
End

On WidgetValueChanged (newValue : double) from pnoM
    SetParameter(PNOBlock, 288, newValue)
End

On WidgetValueChanged (newValue : double) from hnLM
    SetParameter(HNBlock_1, 288, newValue)
End

On WidgetValueChanged (newValue : double) from hnSM
    SetParameter(HNBlock, 288, newValue)
End

There’s probably a way of consolidating the list of BlockNoteOn triggers, but I’m not a coder. 🙃
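
One possible way to consolidate them is sketched below. This is only a sketch: it assumes the multi-widget callback form that passes the widget’s position in the “from” list as an index is available in your GPScript version (check the GPScript documentation for the exact signature), and it reuses parameter 288 from the handlers above; the blocks in the array must be listed in the same order as the widgets.

// Blocks listed in the same order as the widgets in the callback's "from" list
var
   blockGroup : MidiInBlock Array = [VLNBlock, TREMBlock, VCBlock, STRBlock, VBBlock, WWBlock, WWBlock_1, FLBlock, CLBlock, CLBlock_1, BCLBlock, RDSBlock, PNOBlock, HNBlock_1, HNBlock]

// One callback for all the mute widgets: the index identifies which widget moved,
// so the matching MIDI In block gets its Block NoteOn parameter (288) set
On WidgetValueChanged (index : integer, newValue : double) from vlnM, vlnTM, vcM, strM, vbM, wwLM, wwSM, flM, clLM, clSM, bclM, rdsM, pnoM, hnLM, hnSM
    SetParameter(blockGroup[index], 288, newValue)
End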

This is also the wiring setup:

Out of curiosity, why not just use the built-in Block Note On option on the MIDI In blocks?

I wanted the mute button to mute and block notes at the same time. It’s redundant, but it seemed like a good idea at the time.

I’m not sure I understand what you are trying to accomplish. What exactly are you “muting”? The audio of a mixer block? That’s not going to save you much.

Blocking NoteOn events won’t save you much either unless the plugins that receive those events are extremely well behaved so as to reduce their impact when they are not making any sounds.

If you must have all your plugins in a single rackspace, then the only way to significantly reduce audio CPU cycles is to bypass synth and effects plugins, particularly the ones that are not well behaved.
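
As a rough sketch of what that could look like in a script (the SynthA, SynthB and bypassAll names below are made up for illustration, and you should confirm that the SetPluginBypassed() system function is available in your GPScript version before relying on it):

var
   SynthA, SynthB : PluginBlock   // hypothetical handles for two heavy plugins
   bypassAll : widget             // hypothetical widget that drives the bypass state

// Bypass (or un-bypass) the heavy plugins from a single widget
On WidgetValueChanged (newValue : double) from bypassAll
    SetPluginBypassed(SynthA, newValue > 0.5)
    SetPluginBypassed(SynthB, newValue > 0.5)
End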


I put all of the plugins in a single rackspace to save RAM and time. Most of the patch changes use the same instruments, just layered differently or with changes of register, articulation, etc. Even with saved rackspace and wiring-block favorites, it seemed to eat more time than just having everything in one spot. But I’m also still learning the ropes, so that factors in, too.

When I was adding a mic placement in Play (adding a Close mic to the Main mic) and played chords, there was a huge CPU spike, upwards of 80%. The only thing I could think of was that the spike was a result of MIDI information going to every plugin at the same time, even when they were muted at the mixer block, which makes sense. What I should have done is just switch all of the mute buttons to trigger BlockNoteOn instead of trying to script them to both mute and block NoteOn. My thought was that if the instruments aren’t all being triggered over omni, the spike wouldn’t happen. It seems to have helped, since I haven’t seen it spike that high again.

  1. Why do you need to save RAM? Are you planning to read your emails, surf the internet and play video games during a show???

  2. Putting the same plugin in multiple rackspaces does not significantly increase physical RAM because the DLLs are shared – only the instance data is duplicated – and unless you’re using a sample player that can’t stream from disk, the differences won’t matter that much in most cases

  3. Predictive loading can be used as a last resort to save RAM if that becomes critical, and you can still use multiple rackspaces

Were you doing that during a show (in which case it would matter) or while editing your rackspace (in which case, so what!)

a) Use bypass and/or
b) Use multiple rackspaces as the plugins in non-active rackspaces are bypassed


At the rate I was going (6GB at only two songs), I was worried about shooting past 32GB after about 4-5 songs.

This is what I’m confused about, I guess. My understanding was that using the same plugin in multiple rackspaces is frowned upon, and that we should be using variations where possible, because the alternative will bog down the system.

While editing the rackspace, which made me worried about what would happen during the show.

Again, this is where I had some confusion, since I thought I would be saving CPU and/or RAM by avoiding multiple instances of the same plugin.

I don’t know from where you got that understanding — reusing plugins in different rackspaces is the normal way to do things in Gig Performer.


Maybe I am wrong, but I think it uses up RAM to do that (in the case of a sample-heavy plugin/instrument) as compared to re-using that same rackspace. I didn’t think I was alone on this one.

Obviously, David knows much more about this than me.

The code gets shared so there’s little extra RAM used. The instance data will be different for each instance (e.g., the parameter settings) and that will use some RAM.

As for sample-based plugins, it really depends on how they work. Decent samplers will do direct-from-disk streaming so they don’t have to load many gigabytes of sample data into RAM; they’ll still use some RAM, but not as much as one might expect. And again, GP supports predictive loading to reduce the RAM footprint if necessary.


One thing I’ve noticed is that multiple rackspaces do, in fact, have a huge impact on my RAM usage. With the EastWest Play interface, loading different mic placements really sucks up memory; with the idea that unused rackspaces are bypassed, I kept duplicating one rackspace for each song – in my case, it was a time saver to have one rackspace with commonly-used instruments already laid out, as opposed to creating a new one each time. The problem is that, unless you turn on predictive loading, every instance – with every RAM-demanding mic placement – is loaded up, and I jumped from 7GB to ~20GB (about 70% RAM usage). I tried predictive loading, but it did not make patch loading fast enough; my computer froze for way too long to be usable on stage. My current solution is to have a single rackspace of commonly-used instruments with a ton of variations, and unique rackspaces only for the outliers. I wasn’t concerned with sounds being bypassed so much as with their RAM usage.

As for my original thread question: BlockNoteOn and Mute are useful for CPU usage. If I have a commonly-used-instruments rackspace firing everything at the same time, it’s a killer.


Depends completely on the plugin

I know nothing about that particular plugin but it must be using an absurd amount of memory to represent the mike placement. For why that is, you’d have to ask EastWest, who most likely won’t tell you. I hope not but I wonder if they are just loading completely different sets of samples depending on the mike placement.

Predictive loading should be instantaneous as long as you are following a setlist, or moving forwards or backwards in a small subset of rackspaces. If you jump arbitrarily to a rackspace that’s too far away (based on what value you chose), then it will certainly not be instantaneous.

First of all, CPU usage has nothing to do with RAM usage, and muting (as opposed to bypassing) will have no effect on CPU usage, except perhaps if some particular effects plugin “notices” that incoming audio is non-existent and so doesn’t bother to do any effects processing. (It would be interesting to know if any effects plugins do that particular optimization.)

I know East West a bit. They make (among other things) large orchestral sample libraries.

Yep, a lot of big sample libraries can load up huge numbers of different articulations. This could be viewed as an absurd amount of memory, but remember these libraries were designed for DAWs and creating orchestral (and other) soundtracks for use in television, etc.

The fact we can now use these in certain cases for live performance is amazing in and of itself (there was a time this was not really possible).

So, I think you do have to think about RAM efficiency if you are trying to use these (heavy sample-based) libraries in GP.

There are a few guides on various threads on this topic. Some are Kontakt oriented, but they may apply to the Play player.

Based on prior comments, I think David, smartly, avoids some of these issues by using more physically modeled plugins and fewer sample-heavy plugins. That makes sense as an option.

Jeff


I forgot to mention that it is streaming from disk, though I only know that based on the load times (it takes a bit) and from watching my RAM increase in Task Manager.

It was just the next rackspace in line, but it was the commonly-used-instruments one, so it had to load up all the memory-hog plugins, which took a while.

On the last part, I misspoke. I meant to say that BlockNoteOn as opposed to Mute made a difference for me. I saw it lower from a 60% spike to hovering below 20%.

Absolutely. And, like you said, it’s amazing that we can use them live, but I’m discovering the limitations. I’m also subscribed to Cinesamples’ Musio, so I think I’m going to use a separate .gig file for Act II and try their brass library to see what Musio’s optimization is like. I’m also only using single articulations in Play, but Musio has only one mic placement, which I think is a decent combo of close and main.

I’ll check them out, thank you.

Also, you have to consider whether many (most) of those articulations are basically wasted.

If you are playing live, it is difficult to get the benefit of all the different articulations in real time (if you even have a hand free to access key switches).

Especially for a big string library, for me that is a lot of wasted memory. I think you may be better off just grabbing a big orchestral string patch (for example, from Neo Soundstation, at $30 total) and you’re set.

Actually, I still usually use the Synth Strings patch on the Casio CTK-7200. For my ears, it does the job (along with the Choir Aah patch).

My thoughts for your consideration…


Agreed. That’s cropped up for me. I’ve found that playing staccato on a legato patch works well enough for the majority of the music. I’m using Spitfire Albion One for overall string samples.

That’s what I’m going to try for the outlier patches like accordion. I know the Kurzweil that the theatre uses has a good accordion patch – I just need to read up on how to incorporate on-board sounds into Gig Performer.

I appreciate the thoughts and suggestions!

That doesn’t make sense – if it was the next rackspace in line, then those plugins should have been getting reloaded in the background while you were still playing the previous rackspace, or even a few rackspaces back if you had the predictive loading value set high enough.
