I’m curious about how resource management works in the Global Rackspace. I mostly handle my instrument processing there, but I’m often only using one or two instruments at a time — even though more might be loaded.
Here’s my main question:
If there’s no audio flowing into a plugin chain, does it automatically idle, or do I need to manually bypass plugins to reduce CPU usage?
The reason I’m asking is that on the iPad side of my rig, I use Loopy Pro, which has an idle feature. It significantly reduces the processing load when no signal is passing through a plugin, even though a hard bypass still uses slightly less CPU than an idled plugin. That’s been a big help in keeping things efficient, especially in a live setting.
Does anything like that happen automatically in the Global Rackspace? Or should I be automating plugin bypass (or even unloading plugins) for better performance?
I just upgraded from a standard M4 with 16 GB of RAM to an M4 Pro with 48 GB because I was getting clicking and popping in Omnisphere and Keyscape (an expensive drag), and I’m still having some performance issues.
Nothing happens automatically on the GP end of things. Certain plugins are coded to reduce processor usage when idle, but there’s no standard for such things.
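For what it’s worth, an “idle” feature like Loopy Pro’s generally boils down to silence detection: the host watches a chain’s input level and stops calling that chain’s processing once the signal has been quiet for a while. Here’s a minimal Python sketch of the idea; the threshold, hold count, and function name are all hypothetical, not anything from GP or Loopy Pro:

```python
SILENCE_RMS = 1e-4   # hypothetical RMS level below which a buffer counts as silent
HOLD_BUFFERS = 200   # hypothetical count of consecutive silent buffers before idling

silent_buffers = 0

def chain_should_process(buffer: list[float]) -> bool:
    """Return False once the input has been silent long enough to idle the chain."""
    global silent_buffers
    rms = (sum(s * s for s in buffer) / len(buffer)) ** 0.5
    if rms < SILENCE_RMS:
        silent_buffers += 1
    else:
        silent_buffers = 0   # signal is back: wake the chain immediately
    return silent_buffers < HOLD_BUFFERS
```

In GP you’d have to approximate this manually, e.g., by tying plugin bypass to a widget you toggle.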
Incredible implementation. I’m constantly impressed by so many aspects of GP.
This brings me to my next question: is it possible to expand the channel count of the Relayer? I find it a supremely helpful module and wish it had double the channels!
Also, are there any reading resources y’all could point me toward to understand the relationship between instances and cores? I know that in Logic, all the plugins on a single channel strip have to fit on a single core, or something like that, and UAD has similar bottlenecks. I just want to understand how to best leverage my new, more powerful machine.
I basically got it because I was getting clicks and dropouts in my large sample-library patches in Omnisphere and Keyscape, but I’m still getting them on the more powerful machine (not as bad, though).
Each instance of Gig Performer uses a different core.
Some plugins, like Kontakt or Diva, support multicore processing; you have to enable that feature within the plugin itself.
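As for why a Logic channel strip (or any serial plugin chain) is bound to one core: each plugin needs the previous plugin’s output for the same audio buffer, so the chain is a strict data-dependency line. A toy Python sketch, with plugins stood in by plain functions (names made up for illustration):

```python
def process_chain(buffer, plugins):
    # Strictly sequential: plugin N can't start until plugin N-1 has produced
    # its output for this buffer. Only independent chains (e.g., separate
    # Gig Performer instances) can run in parallel on different cores.
    for plugin in plugins:
        buffer = plugin(buffer)
    return buffer

# Toy stand-ins for plugins: each is just a function of the buffer.
halve = lambda buf: [s * 0.5 for s in buf]
clip = lambda buf: [max(-1.0, min(1.0, s)) for s in buf]
print(process_chain([0.2, 1.8, -3.0], [halve, clip]))  # [0.1, 0.9, -1.0]
```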
By the way, what Sample Rate and Buffer size are you using?
What is your Audio Interface?
I do all my live sets at 44.1 kHz and have been experimenting with lower buffer sizes. I couldn’t really go lower than 64 on my old standard M4 mini. I’m generally at 32 on the new machine but have been trying out 16, which will definitely glitch out sometimes. I’m using an RME UCX II and doing a lot of the routing work in TotalMix using the software playback channels. I’m also using Room EQ from MathAudio for full-system tuning: building a “main mix” on an unused output channel, looping that back to an input in TotalMix FX, and bringing it into GP > RoomEQ from there. The main output carries only the corrected stereo mix coming from GP.
I’m trying to use only zero-latency plugins, but some of the Soundtoys and UADx ones add a little. I’m particularly sensitive to latency, so I’d rather decrease streaming and load than sacrifice feel with higher buffers.
My concern is about the feel. Maybe it’s my divergence, but anything more than 5 or 6 ms of latency just doesn’t feel the same. I understand that many people get used to the latency of their system. I’m not closed-minded about learning to live with more, but it’s always noticeable and it doesn’t feel good to me. Thanks for the link… I remember watching that video and him talking about it. I’ll revisit it…
The only issues I’m having are the Spectrasonics glitches, so I’ll look into reducing their streaming demands too. The weird thing is that it mostly happens in connection with the sustain pedal.
Most people around here are at 128 samples or higher. There’s really no point in going much lower than that; you’ll just stress your system out for (near) imperceptible latency gains.
Remember, being just 2 meters from your speaker cabinet adds nearly 6 ms of latency, and that pretty well accounts for every live performer’s experience. On larger stages, that distance is even greater. Use IEMs if latency is truly an issue and spare your machine.
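The arithmetic behind both numbers, as a quick sketch (this shows one buffer’s worth of latency; real round-trip latency adds converter and driver overhead on top):

```python
SAMPLE_RATE = 44_100    # Hz, per the rig described above
SPEED_OF_SOUND = 343.0  # m/s at room temperature

for buffer_size in (16, 32, 64, 128, 256):
    ms = buffer_size / SAMPLE_RATE * 1000
    print(f"{buffer_size:>4} samples -> {ms:.2f} ms per buffer")

for distance in (1.0, 2.0, 4.0):
    ms = distance / SPEED_OF_SOUND * 1000
    print(f"{distance:.0f} m from the speaker -> {ms:.1f} ms acoustic delay")
```

Dropping from 128 to 64 samples saves about 1.5 ms per buffer; standing one meter closer to your speaker saves nearly 3 ms.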
I understand that mute is not the same as bypass. But when I mute a strip at the last gain stage, I can build in a link that also bypasses all the plugins in that chain.
I hear what you’re saying, DHJ, and I’ve of course considered this. It’s weird because it will sometimes pop even when I’m not playing a huge cluster of notes or adding extra notes after depressing the sustain pedal. I can hold a full chord with two hands and it sounds great; then I depress the sustain pedal (no extra notes added) and I get a pop. I’ll keep playing with the buffer…
I don’t know what pressing the pedal down by itself does on that plugin. But, for example, if you play a single note in Pianoteq and then press the pedal, you enable sympathetic resonance, which means multiple “strings” can be producing harmonics even with just one note. So there can be more going on than just preventing the Note Off message from being processed.
Ahh, of course! Totally forgot about that. So a whole ton of samples are being accessed at that exact moment. Makes perfect sense now!! That seems like a heavy process… I’ve never run at 128; I was always at 64 or lower, so maybe it’s time to join the herd…
… or at least investigate the multi-instance feature for Omnisphere/Keyscape, and keep things like my guitar and vocals, which I’m most sensitive to latency on, at a lower buffer. So cool that you can do that. Great job, devs. My HEROES!!