I would like to decrease my audio buffer settings in GP5

Many people don’t have a good grasp of how RAM, CPU “power” (core count), and clock speed are interrelated.

In the simplest terms, the core count (CPU power) is like the number of workers you have, and clock speed is how fast each worker operates. RAM is the amount of storage space for the “work” that needs to be processed.

A core is a single processing unit of the CPU. Modern CPUs can have anywhere from 2 to 64 cores (or even more in server setups). More cores mean the CPU can handle multiple tasks simultaneously, increasing its multitasking capabilities.

Measured in gigahertz (GHz), clock speed indicates how many cycles a core can complete in a second. A higher clock speed means each core can process tasks faster.

RAM, or Random Access Memory, is a type of computer memory that stores data that is being actively used by the computer. It is a temporary storage space where the computer can quickly read and write data. Unlike permanent storage devices such as hard drives or SSDs, RAM provides much faster access to data, allowing the computer to retrieve and process information more rapidly.

As you can see from the above, any of these has the potential to act as a bottleneck for any of the others.

In general - and this is a vast oversimplification with numerous caveats - audio processing is largely serial and often cannot take full advantage of multiple cores. So in my experience, faster clock speeds make a huge difference in running GP, assuming you have a modern CPU. The amount of RAM becomes significant depending on how sample-heavy your “instruments” (VSTs/AUs) are, and some VSTs/AUs - like the Korg Triton VST - are well known to be CPU/RAM intensive compared to instruments from other manufacturers.

As you stated, a recent, powerful CPU - Apple silicon or Intel - with high clock speeds, plus lots of RAM, will usually play any complex arrangement/instruments that you might have. My two cents, for what it is worth: the most recent, powerful CPU and lots of RAM is a good investment and future-proofs your setup for a number of years. If you are trying to find something that is just good enough, you will have to find someone with a setup and use case approximating yours to determine what hardware will do the job. The tradeoff, as others have said, is to rearrange your setup into something that will run on the hardware you have or can afford. Good luck!

Bonus tidbits: I have an old (about 10 years old) dual-Xeon HP Z840 workstation with 128 GB of RAM that I can overload with GP and some large gig files, but my newer overclocked Intel i9 on a mini-ITX board with 64 GB of RAM doesn’t even get past 15% CPU usage. Modern hardware, speed, and lots of RAM rule the day in my world.

Maybe I have a wrong understanding of rackspaces, but suppose you play back a complete arrangement with, let’s say, what in Logic or Cubase would be about 30 instruments or tracks. In that case you cannot use separate rackspaces - or am I wrong? You are right that a separate instance would be an option, but I don’t want to manage two instances. In that case I prefer to use my external MIDI devices.

Thank you for these interesting insights.

In a typical DAW, all loaded plugins use CPU (unless they are archived, frozen, etc.).

In GP, plugins in inactive (local) rackspaces do not use CPU. I think that is a primary reason the developers designed GP the way they did.

Using plugins that require too much CPU (assuming of course that there isn’t an issue with a buggy or misconfigured audio device driver)

There’s no magic here — each plugin requires some portion of the CPU to perform its processing. It should be obvious, then, that in general you cannot have an unlimited number of plugins running simultaneously. That’s true for any plugin host, and in fact it’s the reason most DAWs offer a “freeze” function, which basically records a single run-through of a plugin and then turns the plugin off. Obviously that’s not useful for any real-time purpose.

So, for example, if you are running your machine at a sample rate of 44.1 kHz and a buffer size of 128 samples, then to avoid any glitching, the processing needed to fill a single buffer must complete within 128/44100 seconds, which is about 2.9 milliseconds. If the active plugins in your rackspace collectively require more than 2.9 ms to complete a single cycle, you will get glitching. This is basic physics.
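That per-buffer deadline is easy to compute yourself; here is a minimal Python sketch (the helper name is mine, just for illustration):

```python
# Time available to fill one audio buffer: buffer_size / sample_rate.
def buffer_deadline_ms(buffer_size: int, sample_rate: int) -> float:
    """Milliseconds available per buffer before glitching occurs."""
    return buffer_size / sample_rate * 1000.0

print(round(buffer_deadline_ms(128, 44100), 1))   # 2.9  (the example above)
print(round(buffer_deadline_ms(2048, 44100), 1))  # 46.4 (more headroom, but audible latency)
```

A bigger buffer buys processing time at the cost of latency, which is exactly the trade-off at stake here.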

You have already indicated that the Neural plugins are very CPU intensive. So if you have too many of them (or even just one of them but too many other plugins), the entire collection of active plugins might not be able to finish a single round of processing within the allotted window (2.9ms for the example above). Increasing the buffer size will help but of course that will increase the latency to a point where it will probably not be usable in real time.
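As a toy illustration of that budget, here is a hypothetical feasibility check in Python. The plugin names and per-plugin timings below are entirely invented, and a real host may overlap some of this work across cores:

```python
# Invented per-plugin processing times (ms) for one buffer cycle.
plugin_ms = {"neural_amp": 1.8, "reverb": 0.7, "eq": 0.2, "delay": 0.5}

deadline_ms = 128 / 44100 * 1000  # ~2.9 ms at 44.1 kHz with a 128-sample buffer

total = sum(plugin_ms.values())
print(f"total={total:.1f} ms vs deadline={deadline_ms:.1f} ms")
print("glitch risk" if total > deadline_ms else "fits in the window")
```

Note that in this made-up example the chain misses the deadline even though no single plugin is “too slow” on its own.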

Some plugins support multiple cores, which can help, though often not as much as people might think, and multi-core processing introduces other issues. Some plugin developers also offer different “quality” settings that let you trade CPU cycles against audio quality; in the live-performance world, slightly lower audio quality is often perfectly acceptable.

So frankly, you may just be trying to do something that your computer just can’t handle.

As for RAM, obviously there has to be enough RAM for all your plugins to “fit” in physical memory. For non-sampled plugins, that’s actually not as bad as you might think: if you load two instances of the same plugin, the computer will not require double the amount of RAM, because the code in the plugin is shared - only the data (e.g., parameter values and other internal state that contribute to the actual sound) will be different for each instance.
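A rough analogy in Python (native plugin binaries load differently, but the principle is the same): every instance of a class shares one copy of the method code while keeping its own data:

```python
class Plugin:
    """Stand-in for a plugin: shared code, per-instance parameters."""
    def __init__(self, gain: float):
        self.gain = gain               # per-instance data (parameter values)

    def process(self, sample: float) -> float:
        return sample * self.gain      # this code exists once, shared by all

a, b = Plugin(0.5), Plugin(2.0)
print(a.process.__func__ is b.process.__func__)  # True - one shared function
print(a.gain, b.gain)                            # 0.5 2.0 - separate data
```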

That said, if you still don’t have enough RAM, the OS will swap code in and out of RAM and that would certainly cause glitching.

That is a completely different scenario. The information needed (e.g., MIDI messages) already exists in tracks, so the DAW can precalculate the audio from each plugin and just cache it until it’s needed. You can’t do that in a real-time system, since you have no idea what will be played “next”.

Thank you. We live and learn 🙂

Tomorrow I will have a look at the CPU load in Activity Monitor. Today I only checked the RAM load. One always finds things that keep you from making music 😉

LoL! I can sympathize, but years ago I decided to embrace the hardware - computers, peripherals, and all the other doodads - as just one more part of the process of creating the sound the way I like it! To me it’s like practicing: I like to keep the hardware as up to date as possible, because when I hit that magic groove where it all consistently works, life is good, and that aspect of it remains rock solid for many years.

By the way, I forgot to add that I usually run with a buffer size of 64 samples. If I have a couple of badly behaved, hungry VSTs, up to 128 samples, but never more than that. When the Intel 15th-gen chips come out later this year, I’ll build a new dedicated audio-only mini-ITX machine right into my 6U electronics rack, and I have no doubt I’ll never see anything higher than a 64-sample buffer again.

What about the new GP Relayer feature in GP5? At the end of the day the following might be more complicated than just using a second instance of GP, but I am curious:
Let’s say I have an e-piano VST running in Logic with a very low buffer setting, and I play it in real time. Via GP Relayer I send the audio to Gig Performer, which in my case has a very high buffer setting. What will happen? Will the high buffer setting affect my realtime playing and increase latency, or would this theoretically be another possible approach?

You will get latency.

OK, so I think I just have to convince myself again that a second instance of GP is really the best solution 😉
In the meantime, is it possible with GP5 to have, on a Mac, aliases of two instances for instant recall without using the complicated method involving a script described here:

It’s really not bad, once you get your mind around it and commit to it. It actually gives you a good excuse to look at your setup and what you’re wanting to do, and find ways to group and organize your sounds in ways that make logical sense to you.
I’ve personally used 4 instances, and it became much less stressful to address each individual group.

It seems that my system is capable enough, because the CPU pressure is also not very high with my “almost 80 percent” gig file from Sheltering Sky.

CPU usage in Activity Monitor is not that important.
What matters is the CPU usage shown in Gig Performer, which reflects the load on the audio-processing thread.

No, that’s just an average that includes all the other threads, including GUI handling, timer threads, and other things an application does. They are not real-time.

For analogy, suppose I gave you one hundred numbers, all of which had the value 1

The average would be 1

Now suppose I change one of those numbers to 100

Now the average is still only 1.99, so everything still looks fine - but there is one thread that MUST run at that high load and finish each cycle in time to fill the buffer. That won’t be reflected in an average measurement.
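The arithmetic from that example, spelled out in a few lines of Python:

```python
# 99 idle-ish threads plus one heavily loaded real-time audio thread.
loads = [1] * 99 + [100]

average = sum(loads) / len(loads)
worst = max(loads)

print(average)  # 1.99 - the "looks fine" number an averaged monitor reports
print(worst)    # 100  - the one thread that must still meet every deadline
```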

Another way to think about it is to consider that none of those other tasks are time-constrained. For example, suppose your computer has to calculate payroll numbers for employees. As long as it produces the right salary values for each employee, it doesn’t really matter whether the computer takes 2 seconds or 10 minutes every morning to do the calculation. The employees will get their checks in the afternoon with the correct amounts. The time factor is not relevant.

But suppose your computer is tracking a missile that was fired from somewhere and will land in your back garden in exactly 18 seconds. If your computer cannot do the necessary calculations such that it can calculate when and where to fire a missile to knock down that incoming missile within that 18 seconds, it will fail.

Essentially that problem has “external” constraints on it that MUST be met and a system that can handle problems with some “average” time is simply not good enough.

Audio processing is the same — if all the calculations cannot be done within a given window (defined by the buffer size and sample rate), it will fail.

That’s the essence of what is called a hard real-time system, where “hard” doesn’t mean difficult - it means that missing a deadline counts as failure.

Does it make a difference in my case (using only one rackspace with a lot of instruments playing at the same time, and not using setlists) whether I use predictive loading or not?
After all those interesting insights, I got the impression that even if I bought a super fast new Mac with a lot of RAM, I still wouldn’t be able to decrease the buffer from 2048 to something like 96, which would be the desired value for smooth realtime playing. I still find it sometimes confusing to use two instances of GP, but we users have to appreciate that this option exists, instead of complaining about “the laws of physics”, which can’t be changed… ;-)

Unlikely — predictive loading is there to compensate for insufficient RAM - it can’t compensate for insufficient CPU cycles. I don’t know how fast your machine is but a significantly faster CPU will certainly let your plugins execute faster. Whether it will be enough for what you’re trying to do, I can’t answer that. Frankly, it may be that your expectation of what’s possible is just not actually viable.

The whole point of multiple rackspaces is so that you just have the instruments you NEED at any particular time - then GP takes care of making sure unneeded plugins are deactivated. You’re trying to break the paradigm. Not sure why you need so many instruments in a single rackspace but if they’re too CPU intensive, then try sampling the sounds and use a lightweight playback sampler instead.

Thank you for all your patience with me. I have explained this already a couple of times, but OK, here we go one more time. Imagine you have in Logic an arrangement for a complex song with, say, 30 tracks holding some 30 different instruments. Most of the time all the instruments are playing at the same time, because you have, for example, some layered instruments that each consist of more than just one preset. Simple example: piano with strings. The piano could come from Falcon and the strings from an Arturia plug-in. You know what I mean.
So I am using GP as a sound module that plays back complete, complex arrangements. Instead of keeping all my virtual instruments inside Logic, I have them in GP, and instead of a DAW I use the Guitar Pro software as a MIDI sequencer that triggers all the instruments inside GP. So in my case, unless I am overlooking something, it just makes no sense to use different rackspaces, because most of the time all the instruments are playing at the same time. I know this is an exotic approach, but for me it works fine, because this way I can read the guitar tab in Guitar Pro in real time - and I am getting old; I cannot remember everything 😉

From what I can see, this will be your only option if you’re trying to use GP as your sound module. It will allow you to use different buffer settings on the ‘playback/backing’ instance(s) versus your ‘live playing’ instance.