So I have got my gig file together, running splendidly on my MacBook Pro (M1 Max). I'm a guitar player using Midi Guitar 2 (fabulous, by the way) at a 128-sample buffer setting (I could go to 256, but I think I can feel the difference).
I’ve taken that same gig file and loaded it onto my iMac Pro (a beast of a machine, or so I thought). After reconfiguring the audio wiring, it was also running great.
Then I loaded up a rackspace for Jungleland: 5 instances of Zenology (2 sax, 1 organ, 1 strings, and 1 violin), as well as Mercurial Audio Ampbox and a number of other effects in the Global Rackspace.
CPU went to almost 50%, and when I started playing it glitched unusably, hitting levels of 80–98%. (My MacBook Pro peaks at 35% on this same preset.)
I went through the following attempted fixes and thought I’d share my experience.
Reduced the buffer to 256. Still unusable, but CPU did drop a bit.
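For anyone wondering why the buffer size matters here: the audio thread has to finish each block of processing within one buffer's worth of time, so a bigger buffer buys CPU headroom at the cost of latency. A quick back-of-the-envelope sketch (the 48 kHz sample rate is my assumption; substitute whatever your interface runs at):

```python
# The audio callback deadline is buffer_size / sample_rate:
# a larger buffer gives the CPU more time per callback, at the cost of latency.

def callback_deadline_ms(buffer_size: int, sample_rate: int = 48_000) -> float:
    """Time (in ms) the audio thread has to fill one buffer."""
    return 1000 * buffer_size / sample_rate

for frames in (128, 256):
    print(f"{frames:>4} frames -> {callback_deadline_ms(frames):.2f} ms per callback")
    # 128 frames -> 2.67 ms; 256 frames -> 5.33 ms
```

Doubling the buffer doubles the deadline, which is why it lowered the CPU meter a bit, but it can't rescue a rackspace whose plugins simply need more DSP than the machine has.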
Turned ON Predictive Loading, with no real difference; I figure that is more a function of RAM than CPU.
Reconfigured my rackspace of 4 variations (each of which had 5 instances of Zenology) into 4 rackspaces with 1 variation each, where each rackspace has only the 1 or 2 instances of Zenology needed for that variation. Then used song parts to recreate the song. THIS WORKED.
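My working theory for why this helps (an assumption about the host's behavior, not a statement of GP internals): every plugin instance in a rackspace keeps processing across all of its variations, whereas when you split into separate rackspaces, only the active rackspace's plugins are doing DSP. A toy model, with a made-up per-instance cost:

```python
# Toy model, NOT Gig Performer internals. Assumption: all instances in a
# rackspace process audio regardless of variation, while only the active
# rackspace's instances process at all. The 10% per-instance cost is invented
# purely for illustration.

COST_PER_INSTANCE = 10  # hypothetical CPU % per Zenology instance

# One rackspace, 4 variations: all 5 instances always processing.
one_rackspace_load = 5 * COST_PER_INSTANCE

# Four rackspaces: only the active one (worst case 2 instances) processing.
split_rackspace_load = 2 * COST_PER_INSTANCE

print(one_rackspace_load, split_rackspace_load)  # 50 20
```

If that assumption holds, the split cuts the steady-state DSP load to whatever the currently active rackspace needs, which matches what I saw on the CPU meter.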
So my questions are: does this make sense? And if so, why does switching rackspaces reduce the load so significantly?
(Also a promotional advert for Apple Silicon for those on Intel. It’s worth the jump)
(And a second promo for MG2 for you guitar players. Here’s a quick clip.)
Thanks @edm11. So I assume that should be step 1 when dealing with CPU load? I was figuring the order was that first, then audio buffer size (unless you have a poorly designed plugin). Predictive Loading has little to do with it, I guess.
Definitely one of the big steps, yes.
In addition, you can introduce multiple instances of GP to balance the CPU load across more cores of the processor. It works well on Windows, at least; I’m not a Mac user, so I can’t attest to how load balancing works there.
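The principle behind the multiple-instance trick can be sketched in miniature (GP instances are separate applications, not Python processes, and the "load" numbers here are invented; this only illustrates why separate processes can use separate cores):

```python
# Illustration of the principle only: independent workloads in separate
# processes can be scheduled on different cores by the OS, instead of one
# process's audio thread saturating a single core.
from multiprocessing import Pool

def render_block(plugin_loads):
    # Stand-in for one instance's DSP work: sum its (made-up) plugin costs.
    return sum(plugin_loads)

if __name__ == "__main__":
    # Two hypothetical GP instances, each hosting part of the plugin load.
    instances = [[10, 10], [10, 10, 10]]
    with Pool(processes=2) as pool:
        loads = pool.map(render_block, instances)
    print(loads)  # per-instance load, computed in parallel
```

The catch, as noted below, is that both instances then need simultaneous access to the same audio interface, which depends on the driver.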
It was before I moved away from the one-song-per-rackspace approach. I now see the benefits of this, but it does make the audio file player and MIDI file player less useful. I would use them to practice along with the tunes, but that is a tradeoff worth the upside. I’m hoping we figure out a way to incorporate MIDI files and audio files into songs, like chord charts.
You need to ensure your audio interface drivers are written to allow more than one application to use them simultaneously. When I tried this on a Windows machine, I found that only one instance could access my Dante PCIe card at a time, which was not helpful.
Macs use Core Audio, which I believe handles this natively.
You can also look at using AudioGridder to spread out processing between cores. There’s a nice writeup by the GP guys if you do a search on ‘Audiogridder.’